\begin{document} \title{\textbf{Multiplicity and Bifurcation Results for a Class of Quasilinear Elliptic Problems with Quadratic Growth on the Gradient} } \markboth{Multiplicity and Bifurcation Results for a Class of Quasilinear Elliptic Problems } {Multiplicity and Bifurcation Results for a Class of Quasilinear Elliptic Problems } \author{Fiorella Rend\'on\thanks{Department of Mathematics, PUC-RIO} \ and Mayra Soares\thanks{Department of Mathematics, University of Brasilia}} \maketitle \setcounter{tocdepth}{3} \begin{abstract} \textit{We analyse the structure of the set of solutions to the following class of boundary value problems \begin{align*}\tag{$P_\lambda$} -\text{div}(A(x)Du)=c_\lambda(x)u+( M(x)Du,Du)+h(x),\qquad u\in H_0^1(\Omega)\cap L^\infty(\Omega), \end{align*} where $\Omega\subset\mathbb{R}^n$, $n\geq 3$, is a bounded domain whose boundary $\partial\Omega$ has low regularity. We assume that $c,h \in L^p(\Omega)$ for some $p>n$, where $c^{\pm} \geq 0$ and $c_\lambda(x):=\lambda c^+(x)-c^-(x)$ for a parameter $\lambda\in\mathbb{R}$, $A(x)$ is a uniformly positive bounded measurable matrix and $M(x)$ is a positive bounded matrix. Under suitable assumptions, we describe the continuum of solutions to problem $(P_\lambda)$ as well as its bifurcation points, proving existence and uniqueness results in the coercive case $(\lambda \leq 0)$ and multiplicity results in the noncoercive case $(\lambda > 0)$.} \textbf{Keywords}: Quasilinear elliptic equations, quadratic growth in the gradient, sub- and supersolutions. 
\end{abstract} \section{Introduction} \quad \ We consider the following class of boundary value problems \begin{align*}\tag{$P_\lambda$}\label{$P_lambda$} \left\{ \begin{array}{rl} -\divi(A(x)Du)&=c_\lambda(x)u+( M(x)Du,Du)+h(x)\\ u&\in H_0^1(\Omega)\cap L^\infty(\Omega), \end{array} \right. \end{align*} where $\Omega\subset\mathbb{R}^n$, $n\geq 3$, is a bounded domain with boundary $\partial\Omega$ of class $C^{1,Dini}$ and $c,h \in L^p(\Omega)$ for some $p>n$, with $c^+$ and $c^-$ nonnegative functions such that $c_\lambda(x):=\lambda c^+(x)-c^-(x)$ for a parameter $\lambda\in\mathbb{R}$. Furthermore, $A(x)$ is a uniformly positive bounded measurable matrix, i.e. $\vartheta I_n \leq (a_{ij}(x))\leq \vartheta^{-1}I_n$, where $\vartheta$ is a positive constant and $I_n$ is the identity matrix; and $M(x)$ is a positive matrix such that \begin{align}\label{5.2} 0<\mu_1I_n \leq M(x) \leq \mu_2 I_n \quad \mbox{ in } \Omega, \end{align} for some positive constants $\mu_1$ and $\mu_2$. Following the ideas in \cite{MR4030257}, we impose the additional assumption \begin{align}\label{A}\tag{A} \left\{ \begin{array}{rl} |\Omega_{c^+}|>0, &\mbox{ where }\; \Omega_{c^+}:=supp(c^+),\\ \mbox{there exists } \varepsilon>0 \mbox{ such that } c^-=0 &\mbox{ in }\; \{x\in\Omega:d(x,\Omega_{c^+})<\varepsilon\}. \end{array} \right. \end{align} This hypothesis means that we are dealing with the ``hard'' noncoercive case, in which the zero-order coefficient is not nonpositive and uniqueness of solutions is expected to fail. For the definition of $supp(f)$ when $f\in L^p(\Omega)$ for some $p\geq 1$, we refer to \cite[Proposition 4.17]{MR2759829}. Since we do not have global sign conditions, the approaches used in \cite{ACJT,CJ} to obtain a priori bounds cannot be applied here, and hence another strategy is required. Depending on the parameter $\lambda\in\mathbb{R}$, we study the existence and multiplicity of solutions to \eqref{$P_lambda$}. 
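To fix ideas, here is a simple illustrative example of hypothesis \eqref{A} (not taken from the cited works): let $\Omega=B_1(0)$, $c^+=\chi_{B_{1/4}(0)}$ and $c^-=\chi_{\{3/4<|x|<1\}}$. Then $\Omega_{c^+}=\overline{B_{1/4}(0)}$ has positive measure and, for any $\varepsilon\leq 1/2$, the set $\{x\in\Omega:d(x,\Omega_{c^+})<\varepsilon\}$ is contained in $B_{3/4}(0)$, where $c^-$ vanishes, so \eqref{A} holds.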
We recall that $u$ is a weak Sobolev (super-, sub-) solution to \eqref{$P_lambda$} if $u$ satisfies \begin{align*} \int_\Omega A(x)DuD \varphi \, (\geq, \leq) = \int_\Omega c_\lambda(x) u \varphi + \int_\Omega\varphi(M(x)Du,Du)+\int_\Omega h(x)\varphi, \end{align*} for each nonnegative $ \varphi\in C_0^\infty(\Omega)$. These definitions will be essential for our arguments throughout this paper. The class of problems \eqref{$P_lambda$} is more challenging and delicate to study due to the quadratic dependence on the gradient, which gives the gradient term the same order as the Laplacian with respect to dilations. We refer to \cite{ CJ, MR4030257, multiplicidade} for a review of the large literature on this topic. The study of the coercive case, i.e. $c\le0$, was initiated by Boccardo, Murat and Puel in the 80's, and uniqueness of the solution in this case was proved in \cite{ACJTuni}. On the other hand, the noncoercive case remained unexplored until very recently. We refer to a particular case considered by Jeanjean and Sirakov, who studied a problem directly connected to \eqref{$P_lambda$}, see \cite{Jeanjean-2013}. In order to state our main results, we denote by $\gamma_1>0$ the first eigenvalue of the associated linear problem, which in our case means that the problem \begin{align}\label{eig1}\tag{$P_{\gamma_1}$} \left\{ \begin{array}{rll} -\divi(A(x)D\varphi_1)&=c_{\gamma_1}(x) \varphi_1 &\mbox{ in } \Omega\\ \varphi_1&> 0&\mbox{ in } \Omega\\ \varphi_1&= 0&\mbox{ on } \partial\Omega \end{array} \right. \end{align} has a solution. In this case, when $h(x) \gneqq 0$, problem \eqref{$P_lambda$} has no solution $u$ with $c^+(x)u\gneqq 0$ when $\lambda=\gamma_1$, nor does it have nonnegative solutions when $\lambda\geq \gamma_1$; see \cite[Lemma 6.1]{ACJT} for more details. In this paper, we aim to contribute to the literature by describing the set of solutions to \eqref{$P_lambda$}. 
In what follows, a continuum means a closed and connected set, and the above assumptions on the coefficients of the equation are assumed to hold. More precisely, defining \[ \Sigma:=\{(\lambda,u)\in \mathbb{R}\times C(\overline{\Omega}): u \mbox{ solves } \eqref{$P_lambda$}\}, \] we show that it is possible to obtain a description of $\Sigma$. The method introduced in \cite{MR4030257} allowed the authors to obtain more information about the qualitative behavior of the solutions. Following the strategy developed in \cite{MR4030257}, as expressed in the next two theorems, we show the existence of a continuum of solutions to problem (\ref{$P_lambda$}) in the case where the coercive problem $(P_0)$, corresponding to $\lambda=0$, has a solution. Suitable conditions on the coefficients ensuring the existence of such a solution can be found, for instance, in \cite{ACJT, MR4030257}. \begin{theorem}\label{teo5.2} Suppose that $(P_0)$ has a solution $u_0$ with $c^+(x)u_0\gneqq 0$. Then \begin{itemize} \item[(i)] For all $\lambda\leq 0$, (\ref{$P_lambda$}) has a unique solution $u_\lambda$, which satisfies $u_0-\|u_0\|_\infty\leq u_\lambda\leq u_0$; \item[(ii)] There exists a continuum $\mathcal{C}\subset\Sigma$ such that the projection of $\mathcal{C}$ on the $\lambda$-axis is an unbounded interval $(-\infty,\overline{\lambda}]$ for some $\overline{\lambda}\in(0,+\infty)$, and $\mathcal{C}$ bifurcates from infinity to the right of the axis $\lambda=0$; \item[(iii)] There exists $\lambda_0\in (0,\overline{\lambda}]$ such that, for all $\lambda\in(0, \lambda_0)$, (\ref{$P_lambda$}) has at least two solutions with $u_i\geq u_0$ for $i=1,2$. \end{itemize} \end{theorem} \begin{figure}\caption{Illustration of Theorem \ref{teo5.2}.}\label{fig: Illustration of Theorem 1.1} \end{figure} \begin{theorem}\label{teo5.3} Suppose that $(P_0)$ has a solution $u_0 \leq 0$ with $c^+(x)u_0\lneqq 0$. 
Then \begin{itemize} \item[(i)] For $\lambda\leq 0$, (\ref{$P_lambda$}) has a unique nonpositive solution $u_\lambda$, and this solution satisfies $ u_0+\|u_0\|_\infty \geq u_\lambda\geq u_0$; \item[(ii)] There exists a continuum $\mathcal{C}\subset\Sigma$ such that its nonnegative projection $\mathcal{C}^+$ on the $\lambda$-axis is $[0,+\infty)$; \item[(iii)] For $\lambda>0$, every nonpositive solution to (\ref{$P_lambda$}) satisfies $u_\lambda\ll u_0$. Furthermore, (\ref{$P_lambda$}) has at least two nontrivial solutions $u_{\lambda,i}$, $i=1,2$, with $ u_{\lambda,1}\ll u_0\leq u_{\lambda,2}$ and $\displaystyle\max_{\overline{\Omega}} u_{\lambda,2}>0.$ Moreover, if $0<\lambda_1<\lambda_2$, we have $ u_{\lambda_2,1}\leq u_{\lambda_1,1}\leq u_0.$ \end{itemize} \end{theorem} Note that Theorems \ref{teo5.2} and \ref{teo5.3} require problem $(P_0)$ to have a solution, so that a branch of solutions starts from $(0, u_0)$. Our next results consider the alternative situation in which problem $(P_0)$ does not have a solution, but there exists a nonpositive supersolution to problem (\ref{$P_lambda$}) for some $\lambda_0 > 0$. \begin{theorem}\label{solucoesnegativas} Assume that $(P_0)$ does not have a solution $u_0 \leq 0$ and that there exist $\lambda_0>0$ and a nonpositive supersolution $\beta_0\leq 0$ to $(P_{\lambda_0})$. Then, there exists $0 <\underline{\lambda} \leq \lambda_0$ such that \begin{itemize} \item[(i)] for every $\lambda\in (\underline{\lambda},\infty)$, (\ref{$P_lambda$}) has at least two solutions with $u_{\lambda,1}\leq 0$ and $u_{\lambda,1}\leq u_{\lambda,2}$. Moreover, if $\lambda_1 < \lambda_2$, we have $u_{\lambda_1,1}\gg u_{\lambda_2,1}$; \item[(ii)] $(P_{\underline{\lambda}})$ has a unique solution $u_{\underline{\lambda}} \leq 0$; \item[(iii)] for $\lambda < \underline{\lambda}$, $(P_{\lambda})$ has no solution $u\leq 0$. 
\end{itemize} Furthermore, for every $\lambda<0$, problem \eqref{$P_lambda$} has at most one nonpositive solution $u_\lambda$; moreover, there exists an unbounded continuum $\mathcal{C} \subset \Sigma$ and $\lambda=0$ is a bifurcation point from infinity. \end{theorem} \begin{figure}\caption{Illustration of Theorems \ref{teo5.3} and \ref{solucoesnegativas}.}\label{fig: Illustration of Theorem 1.2} \label{fig: Illustration of Theorem 1.3} \end{figure} In the proof of Theorem \ref{teo5.3} we define the auxiliary problem \eqref{P3}, whose solutions are supersolutions of (\ref{$P_lambda$}). In particular, from Theorem \ref{teo5.2} and Lemma \ref{c2} we are able to deduce the following corollary, which concerns the case $h\lneqq 0$. In this result, the conclusions of the two theorems above can be seen simultaneously. \begin{coro}\label{coro} Assume that $h\lneqq 0$. For all $\widetilde{\lambda}>\gamma_1$, where $\gamma_1>0$ is the first eigenvalue of \eqref{eig1}, there exists $\widetilde{k}>0$ such that, for all $k \in (0,\widetilde{k}]$, \begin{enumerate} \item[(i)] there exists $\lambda_1\in (0,\gamma_1)$ such that \begin{enumerate} \item for all $\lambda\in (0,\lambda_1)$, \eqref{P3} has at least two positive solutions; \item for $\lambda=\lambda_1$, \eqref{P3} has exactly one positive solution; \item for $\lambda> \lambda_1$, \eqref{P3} has no nonnegative solution; \end{enumerate} \item[(ii)] for $\lambda=\gamma_1$, \eqref{P3} has no solution; \item[(iii)] there exists $\lambda_2\in (\gamma_1,\widetilde{\lambda}]$ such that \begin{enumerate} \item for $\lambda>\lambda_2$, \eqref{P3} has at least two solutions with $u_{\lambda,1}\ll 0$ and $\min u_{\lambda,2}<0$; \item for $\lambda=\lambda_2$, \eqref{P3} has a unique nonpositive solution; \item for $\lambda<\lambda_2$, \eqref{P3} has no nonpositive solution. \end{enumerate} \end{enumerate} \end{coro} In the particular, but important, case $h(x)\equiv 0$, we have the following result. Further considerations about the cases when $h(x)$ has a sign are given in Section 4. 
\begin{theorem}\label{hequiv0} Assume that $h(x)\equiv 0$ and $\gamma_1>0$ is the first eigenvalue of \eqref{eig1}. Then \begin{itemize} \item[(i)] for all $\lambda\in (0,\gamma_1)$, the problem \begin{align}\label{h=0}\tag{$P_{h\equiv 0}$} -\divi(A(x)Du)=c_\lambda(x)u+(M(x)Du,Du) \end{align} has at least two solutions $u_{\lambda,1}\equiv 0$ and $u_{\lambda,2}\gneqq 0$; \item[(ii)] for $\lambda=\gamma_1$, (\ref{h=0}) has only the trivial solution; \item[(iii)] for $\lambda >\gamma_1$, (\ref{h=0}) has at least two solutions $u_{\lambda,1}\equiv 0$ and $u_{\lambda,2}\leq 0$; \item[(iv)] for all $\lambda\leq 0$, (\ref{h=0}) has a unique solution $u_\lambda\equiv 0$; \item[(v)] there exists a continuum $\mathcal{C}\subset\Sigma$ that bifurcates from infinity to the right of the axis $\lambda=0$ and whose projection on the $\lambda$-axis is an unbounded interval $(0,+\infty)$. \end{itemize} \end{theorem} \begin{figure}\caption{Illustration of Corollary \ref{coro} and Theorem \ref{hequiv0}.}\label{fig:corollary} \label{fig: Illustration of Theorem 1.5} \end{figure} The key to proving our results is the Boundary Weak Harnack inequality established in \cite{RSS} for uniformly elliptic equations in divergence form. In fact, in Lemma \ref{lbound}, we show that it suffices to control the behavior of the solutions on $\Omega_{c^+}$. Then, taking $\bar x \in \Omega_{c^+}$, we perform a local analysis in a ball, if $\bar x \in \Omega_{c^+}\cap \Omega$, or in a half-ball, if $\bar x \in \Omega_{c^+}\cap \partial \Omega$. We believe that similar analyses, based on the use of Harnack type inequalities, have not been performed previously for the case $\bar x \in \partial \Omega$. It is also important to mention that, in order to obtain the existence statements in Theorems 1.1-1.5, it is fundamental to introduce and study an auxiliary fixed point problem via degree theory. More specifically, in \cite{MR4030257}, the authors construct an auxiliary fixed point problem for the case $c^+(x)u_0\gneqq 0$ for the $p$-Laplacian with a zero-order term. 
Then, based on this approach, we prove Theorem 1.1 by generalizing the arguments in \cite{MR4030257}. On the other hand, since in Theorem 1.2 we have $c^+(x)u_0\lneqq 0$, we introduce a new fixed point problem for this case. Furthermore, we show that it is possible to obtain a more precise description of $\Sigma$ when the sign of $u_0$ is known. This paper is organized as follows. Section 2 presents auxiliary results, which are fundamental for the construction of our arguments. In Section 3 we derive a priori bounds for the solutions of problem \eqref{$P_lambda$}. Finally, in Section 4 we prove our main results. \section{Auxiliary Results}\label{preresults} \quad \ The Strong Maximum Principle is extremely important in our approach. As stated below, it guarantees that a nonnegative supersolution to an elliptic equation in a domain cannot vanish inside the domain unless it vanishes identically. \begin{theorem}[{\bf{Strong Maximum Principle - SMP}}]\label{SMP} Let $\Omega\subset \mathbb{R}^n$ be a domain. If $u$ satisfies \begin{align*} \left\{ \begin{array}{rll} -\divi(A(x)Du)-\mu_1 |Du|^2-c_\lambda(x)u-h(x)&\geq 0 &\mbox{ in }\Omega\\ u &\geq 0 &\mbox{ in }\Omega\\ \end{array} \right. \end{align*} then either $u>0$ in $\Omega$ or $u\equiv 0$ in $\Omega$. \end{theorem} We observe that the SMP is an immediate consequence of the well-known Interior Weak Harnack Inequality - IWHI; for more details we refer to \cite{MR4030257} and also \cite[Theorem 3.5, Theorem 8.18]{GT}. On the other hand, the next theorem is a generalization of the IWHI up to the boundary. Such a result is the core of our arguments to describe the solutions of the class of problems \eqref{$P_lambda$}. Its proof can be found in \cite[Theorem 4.7]{fio}; see also \cite[Theorem 1.1]{RSS} for a more general version. 
\begin{theorem}[{\bf{Boundary Weak Harnack Inequality, BWHI}}]\label{teoBWHI} Assume that $\Omega\subset\mathbb{R}^n$, $n\geq 3$, is a bounded domain with boundary $\partial\Omega$ of class $C^{1,Dini}$, that $\|a_{ij}\|_{C^{0,Dini}(\Omega)}\le \vartheta^{-1}$, $i, j = 1,\ldots, n$, and that the coefficients $b,c\in L^p(\Omega)$ for some $p>n$. Then, there exist constants $\varepsilon = \varepsilon(n,p,\vartheta)>0$ and $C = C(n,p, \vartheta)>0$, depending also on $R>0$ and the $C^{1,Dini}$-representation of the boundary, such that each nonnegative solution to \begin{align*}\label{1.5} -\divi(A(x)Du)+b(x)Du+c(x)u\ge f \quad \text{in} \ \Omega \end{align*} satisfies \begin{align*} \inf_{B_{R}^\prime} \frac{u}{d}\geq C\left(\displaystyle\int_{B_{R}^\prime}\left( \frac{u}{d}\right)^\varepsilon \right)^{1/\varepsilon}-C\|f\|_{L^p(B_{2 R}^\prime)}. \end{align*} \end{theorem} Before stating the next auxiliary result, we denote by $C(\overline{\Omega})$ the real Banach space of continuous functions defined on $\overline{\Omega}$, and we let ${T:\mathbb{R}\times C(\overline{\Omega})\rightarrow C(\overline{\Omega})}$ be a completely continuous map, i.e. a continuous map sending bounded sets to relatively compact sets. For the purposes of this paper, we consider the problem \begin{equation}\label{Q} u \in C(\overline{\Omega});\quad \Phi(\lambda,u):=u-T(\lambda,u)=0, \end{equation} of finding the zeroes of $\Phi(\lambda,u):=u-T(\lambda,u)$ for each fixed $\lambda\in\mathbb{R}$. Let $\lambda_0\in\mathbb{R}$ be arbitrary but fixed and assume that $u_{\lambda_0}$ is an isolated solution to $\Phi(\lambda_0,u)=0$; then the degree $\deg(\Phi(\lambda_0,.),B(u_{\lambda_0},r),0)$ is well defined and constant for $ r>0$ small enough. Thus, it is possible to define the index \[ i(\Phi(\lambda_0,.),u_{\lambda_0}):=\lim_{r\rightarrow 0}\deg(\Phi(\lambda_0,.),B(u_{\lambda_0},r),0). 
\] Now we are able to state the following theorem, which was proved in \cite[Theorem 2.2]{ACJT}. \begin{theorem}\label{continuum} If (\ref{Q}) has a unique solution $u_{\lambda_0}$ and $i(\Phi(\lambda_0,.),u_{\lambda_0})\ne 0$, then $\Sigma$ possesses two unbounded components $\mathcal{C}^+$ and $\mathcal{C}^-$ in $[\lambda_0,+\infty)\times C(\overline{\Omega})$ and $(-\infty,\lambda_0]\times C(\overline{\Omega})$, respectively, which meet at $(\lambda_0,u_{\lambda_0})$. \end{theorem} We recall that a strict subsolution to \eqref{$P_lambda$} is a subsolution $\alpha$ such that every solution $u$ to \eqref{$P_lambda$} satisfying $\alpha \leq u$ in fact satisfies $\alpha\ll u$. Similarly, a strict supersolution to \eqref{$P_lambda$} is a supersolution $\beta$ such that $u\ll \beta$ for every solution $u$ to \eqref{$P_lambda$} satisfying $u\leq\beta$. In order to consider the situation where ($P_{\lambda_0}$) has a supersolution, we need the following formulation of the Anti-maximum Principle. This result was established in \cite{MR633824} under slightly smoother data, but its proof extends directly to our assumptions. \begin{lemma}\label{antimax} Let $\overline{c}, \overline{h}, \overline{d}\in L^p(\Omega)$ with $p>n$ and assume $\overline{h}\gneqq 0$. We denote by $\overline{\gamma}_1>0$ the first eigenvalue of \[ -\divi(A(x)Du)+\overline{d}(x)u=\overline{c}_{\overline{\gamma}_1}(x)u, \mbox{ } u\in H^1_0(\Omega). \] Then there exists $\varepsilon_0>0$ such that, for all $\lambda\in(\overline{\gamma}_1,\overline{\gamma}_1+\varepsilon_0)$, the solution $v\in H^1_0(\Omega)$ of \begin{align*} -\divi(A(x)Dv)+\overline{d}(x)v=\overline{c}_{\lambda}(x)v+\overline{h}(x) \end{align*} satisfies $v\ll 0$. \end{lemma} \section{A priori Bound}\label{aprioribound} \quad \ This section is devoted to the derivation of a priori bounds for the solutions to \eqref{$P_lambda$}. 
Most of our results hold true under more general assumptions than \eqref{A}. Firstly, we obtain the following essential upper bound on the supersolutions to \eqref{$P_lambda$}, which shows that any unbounded continuum of solutions to \eqref{$P_lambda$}, for $\lambda>0$ in a bounded interval, can only bifurcate to the right of $\lambda=0$. \begin{theorem}[{\bf{A priori Upper Bound}}]\label{6.3} Under the stated assumptions on problem \eqref{$P_lambda$}, including hypothesis \eqref{A}, for any $\Lambda_2>\Lambda_1>0$ there exists a constant $\widetilde{M}>0$ such that, for each $\lambda\in[\Lambda_1,\Lambda_2]$, any solution to \eqref{$P_lambda$} satisfies $\displaystyle\sup_{\Omega} u \leq\widetilde{M}$. \end{theorem} Let us point out that if $\lambda=0$ or $c^+\equiv 0$, i.e. $|\Omega_{c^+}|=0$, then problem \eqref{$P_lambda$} reduces to problem $(P_0)$, which is independent of $\lambda$ and has a solution. In fact, in \cite{ACJT,CF} the authors give sufficient conditions ensuring the existence of a solution to $(P_0)$. Such a solution is unique, and so we automatically have an a priori bound in this particular case. For the general case, in order to prove Theorem \ref{6.3}, we first show that an a priori bound for a solution to \eqref{$P_lambda$} depends only on controlling the solution on $\Omega_{c^+}$. By compactness, this is equivalent to studying what happens around any fixed point $\overline{x}\in\overline{\Omega}_{c^+}$. \begin{lemma}\label{lbound} Under the hypotheses on \eqref{$P_lambda$}, there exists a constant $M>0$ such that, for any $\lambda\in \mathbb{R}$, any solution $u$ to problem \eqref{$P_lambda$} satisfies \begin{align*} -\sup_{\Omega_{c^+}}u^--M\leq u\leq\sup_{\Omega_{c^+}}u^++M. \end{align*} \end{lemma} \begin{proof} If problem \eqref{$P_lambda$} has no solution for any $\lambda\in \mathbb{R}$, there is nothing to prove. 
Hence, we assume the existence of $\widetilde{\lambda}\in \mathbb{R}$ such that $( P_{\scriptscriptstyle{\widetilde{\lambda}}})$ has a solution $\widetilde{u}$. We shall prove the result for $M:=2\|\widetilde{u}\|_{\infty}$. Let $u$ be an arbitrary solution to \eqref{$P_lambda$}. Setting $\mathcal{D}:=\Omega \setminus \overline{\Omega}_{c^+}$ and $v=u-\sup\limits_{\partial \mathcal{D}} u^+$, we have \begin{eqnarray*} -\divi(A(x)Dv)&=&-c^-(x)v+(M(x)Dv,Dv)+h(x)-c^-(x)\sup\limits_{\partial \mathcal{D}} u^+\\ &\leq& -c^-(x)v+(M(x)Dv,Dv)+h(x)\mbox{ in } \mathcal{D}. \end{eqnarray*} Since $v\leq 0$ on $\partial \mathcal{D}$, it follows that $v$ is a subsolution to $(P_0)$. On the other hand, setting $\widetilde{v}=\widetilde{u}+\|\widetilde{u}\|_\infty$ we obtain \begin{eqnarray*} -\divi(A(x)D\widetilde{v})&=&-c^-(x)\widetilde{v}+(M(x)D\widetilde{v},D\widetilde{v})+h(x)+c^-(x)\|\widetilde{u}\|_\infty\\ &\geq&-c^-(x)\widetilde{v}+(M(x)D\widetilde{v},D\widetilde{v})+h(x)\mbox{ in }\mathcal{D}, \end{eqnarray*} and thus, as $\widetilde{v}\geq 0$ on $\partial \mathcal{D}$, it means that $\widetilde{v}$ is a supersolution to $(P_0)$. By standard regularity results, see for instance \cite[Lemma 2.1]{ACJTuni}, we get $u, \widetilde{u} \in H^1(\Omega)\cap W^{1,n}_{loc}(\Omega)\cap C(\overline{\Omega})$ and hence, $v, \widetilde{v} \in H_0^1(\mathcal{D})\cap W^{1,n}_{loc}(\mathcal{D})\cap C(\overline{\mathcal{D}})$ and the right-hand sides of the above inequalities are $L^n$ functions. Therefore, we are able to apply the Comparison Principle \cite[Lemma 2.11]{fio}, and conclude that $v\leq \widetilde{v}$ in $\mathcal{D}$, namely, $ u \leq \widetilde{u}+\|\widetilde{u}\|_\infty +\sup_{\partial \mathcal{D}} u^+ \mbox{ in } \mathcal{D} $ and then, $u \leq M +\displaystyle\sup_{\Omega_{c^+}} u^+ \mbox{ in } \Omega$. 
For the other inequality, we now define $v:=u+\sup\limits_{\partial\mathcal{D}}u^-$ and hence obtain that $v\geq 0$ on $\partial \mathcal{D}$ and that $v$ is a supersolution to $(P_0)$. Furthermore, defining $\widetilde{v}=\widetilde{u}-\|\widetilde{u}\|_{\infty}$, we have that $\widetilde{v}\leq 0$ on $\partial\mathcal{D}$ and that $\widetilde{v}$ is a subsolution to $(P_0)$. As previously, $v, \widetilde{v} \in H_0^1(\mathcal{D})\cap W^{1,n}_{loc}(\mathcal{D})\cap C(\overline{\mathcal{D}})$, and applying the Comparison Principle again we get $\widetilde{v}\leq v$ in $\mathcal{D}$, namely, $ u \geq \widetilde{u}-\|\widetilde{u}\|_{\infty}-\sup_{\partial\mathcal{D}}u^- \mbox{ in } \mathcal{D}. $ Therefore, it yields $u\geq -\displaystyle\sup_{\Omega_{c^+}}u^--M \mbox{ in } \Omega$, ending the proof. \end{proof} Now, let $u\in H^1_0(\Omega)\cap L^\infty(\Omega)$ be a solution to \eqref{$P_lambda$}. We introduce the exponential change of variables \begin{align} w_i(x):=\frac{1}{\nu_i}(e^{\nu_i u(x)}-1) \quad\mbox{and}\quad g_i(s):=\frac{1}{\nu_i}\ln(1+\nu_i s), \quad i=1,2, \end{align} where $\nu_1:=\mu_1\vartheta \ \mbox { and }\ \nu_2:=\mu_2\vartheta^{-1}$, for $\mu_1$, $\mu_2$ given in \eqref{5.2} and $\vartheta$ given in the definition of the matrix $A(x)$. The following change of variables lemma follows straightforwardly from an algebraic computation and will be useful for proving our results. 
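Before stating it, let us record the elementary computation behind it. Since $1+mv=e^{mu}>0$, the chain rule gives $Dv=e^{mu}Du=(1+mv)Du$, and hence, in the weak sense, \begin{align*} -\divi(A(x)Dv)=-\divi\big((1+mv)A(x)Du\big)=(1+mv)\big[-\divi(A(x)Du)-m(A(x)Du,Du)\big]. \end{align*} Dividing by $1+mv>0$ and using the ellipticity bounds $\vartheta|Du|^2\leq (A(x)Du,Du)\leq \vartheta^{-1}|Du|^2$ yields the first chain of inequalities in the lemma; the computation for $w$ is analogous, using $1-mw=e^{-mu}$.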
\begin{lemma}[{\bf Exponential change}]\label{exponentialchange} Let $u$ be a weak solution to the problem \[-\divi(A(x)Du)=f(x), \quad f\in L^p(\Omega).\] For $m>0$ we define $ v:=\dfrac{e^{mu}-1}{m}$ and $w:=\dfrac{1-e^{-mu}}{m}.$ Then $Dv=(1+mv)Du$, ${Dw=(1-mw)Du}$, and for each $\vartheta>0$ we have \begin{align*} -\divi(A(x)Du)-\vartheta^{-1} m |Du|^2&\leq\frac{-\divi(A(x)Dv)}{1+mv}\leq -\divi(A(x)Du)-\vartheta m |Du|^2,\\ -\divi(A(x)Du)+\vartheta m |Du|^2&\leq\frac{-\divi(A(x)Dw)}{1-mw}\leq -\divi(A(x)Du)+\vartheta^{-1} m |Du|^2. \end{align*} Moreover, $\{u=0\}=\{v=0\}$ and $\{u>0\}=\{v>0\}$. Therefore, if $u$ is a weak supersolution to \begin{align}\label{expo} -\divi(A(x)Du)\geq \mu_1 |Du|^2+c_\lambda(x)u+h(x), \end{align} then, for $m=\mu_1\vartheta$, $v$ is a weak supersolution to \begin{align*} -\divi(A(x)Dv)\geq h(x)(1+mv) +\frac{c_\lambda(x)}{m}(1+mv)\ln(1+mv). \end{align*} \end{lemma} By Lemma \ref{exponentialchange} we have \begin{align}\label{w_i} -\divi(A(x)Dw_i)&=(1+\nu_iw_i)\left[c_\lambda(x)g_i(w_i)+h(x)+\big([M(x)-\nu_iA(x)]Du,Du\big)\right]. \end{align} Note that the last term is nonnegative for $i=1$ and nonpositive for $i=2$. Using \eqref{w_i} we shall obtain a uniform a priori upper bound on $u$ in a neighborhood of any fixed point $\overline{x}\in\overline{\Omega}_{c^+}$. We consider the two cases $\overline{x}\in\overline{\Omega}_{c^+}\cap\Omega$ and $\overline{x}\in\overline{\Omega}_{c^+}\cap\partial\Omega$ separately. \begin{lemma}\label{interior} Assume that \eqref{A} holds and that $\overline{x}\in\overline{\Omega}_{c^+}\cap\Omega$. For each $\Lambda_2>\Lambda_1>0$, there exist $M_1>0$ and $R>0$ such that, for any $\lambda\in[\Lambda_1,\Lambda_2]$, any solution $u$ to \eqref{$P_lambda$} satisfies $\sup\limits_{B_R(\overline{x})}u\leq M_1$. \end{lemma} \begin{proof} Under assumption \eqref{A} we can find an $R>0$ such that $M(x)\geq \mu_1I_n>0$ and $c^-\equiv 0$ in $B_{4R}(\overline{x})$, and $c^+\gneqq 0$ in $B_{R}(\overline{x})$. 
Observe that from \eqref{w_i} for $i=1$, we get \begin{align*} -\divi(A(x)Dw_1) &\geq(1+\nu_1w_1)[\lambda c^+(x)g_1(w_1)+h^+(x)]-h^-(x)-\nu_1h^-(x)w_1\\ &+(1+\nu_1w_1)(\mu_1-\vartheta^{-1}\nu_1)|Du|^2. \end{align*} Therefore, in $B_{4R}(\overline{x})$ it yields \begin{align}\label{6.5} -\divi(A(x)Dw_1)+\nu_1h^-(x)w_1&\geq(1+\nu_1w_1)[\lambda c^+(x)g_1(w_1)+h^+(x)]-h^-(x). \end{align} Let $z_0$ be the solution to \begin{align}\label{6.6} -\divi(A(x)Dz_0)+\nu_1h^-(x)z_0&=-\Lambda_2c^+(x)\frac{e^{-1}}{\nu_1},\mbox{ } z_0\in H_0^1(B_{4R}(\overline{x})). \end{align} By classical regularity theory, see \cite[Theorem III-14.1]{MR0244627}, $z_0\in C(\overline{B_{4R}(\overline{x})})$ and there exists a positive constant $\overline{C}= \overline{C}(\overline{x},\nu_1, \Lambda_2, p, R, \|h^-\|_{L^p(B_{4R})}, \|c^+\|_{L^p(B_{4R})})$ such that $ \displaystyle z_0\geq-\overline{C} \mbox{ in } B_{4R}$. Further, by the Weak Maximum Principle we know that $z_0\leq 0$. Since $\displaystyle\min_{(-\frac{1}{\nu_i},\infty)}(1+\nu_is)g_i(s)=-\frac{e^{-1}}{\nu_i}$, setting $v_1:=w_1-z_0+\dfrac{1}{\nu_1}$ and using that $\lambda\in[\Lambda_1,\Lambda_2]$ together with $(1+\nu_1w_1)g_1^-(w_1)\leq e^{-1}/\nu_1$, we obtain \begin{equation}\label{v_1} \begin{array}{rcl} \displaystyle-\divi(A(x)Dv_1)+\nu_1h^-(x)v_1 &\geq&(1+\nu_1w_1)[\lambda c^+(x)g_1(w_1)+h^+(x)]+\Lambda_2c^+(x)\displaystyle\frac{e^{-1}}{\nu_1}\\ \displaystyle&\geq& (1+\nu_1w_1)\Lambda_1 c^+(x)g_1^+(w_1)\\ \displaystyle&\geq&\displaystyle\frac{\Lambda_1 c^+(x)}{\nu_1}(1+\nu_1w_1)\ln(1+\nu_1w_1)\\ &=&f(x,v_1) \quad \text{in} \quad B_{4R}(\overline{x}), \end{array} \end{equation} where \begin{equation}\label{f} f:\Omega\times\mathbb{R}\rightarrow\mathbb{R},\qquad f(x,s):=\Lambda_1c^+(x)\,[s+z_0(x)]\big[\ln(\nu_1)+\ln(s+z_0(x))\big] \end{equation} is a superlinear function in the 
variable $s$. Since $w_1>-1/\nu_1$, we have $v_1>0$ in $\overline{B_{4R}(\overline{x})}$. On the other hand, for $i=2$, in view of \eqref{5.2} and $w_2>-1/\nu_2$, from \eqref{w_i} we conclude in a similar way that $w_2$ satisfies \begin{equation}\label{w_2} \begin{array}{rcl} \displaystyle-\divi(A(x)Dw_2)&\leq&[1+\nu_2w_2](\lambda c^+(x)g_2(w_2)+h^+(x))\\ &&+\,(\nu_1-\nu_2)h^-(x)w_2-h^-(x)-\nu_1h^-(x)w_2,\\[4pt] \displaystyle-\divi(A(x)Dw_2)+\nu_1h^-(x)w_2&\leq&[1+\nu_2w_2]\left(\displaystyle\frac{\Lambda_2 c^+(x)}{\nu_2}\ln(1+\nu_2w_2)+h^+(x)\right)\\ \displaystyle&=:& g(x,w_2) \quad \text{in} \quad B_{4R}(\overline{x}), \end{array} \end{equation} where $g:\Omega\times\mathbb{R}\rightarrow\mathbb{R}$ satisfies \begin{align}\label{g} g(x,s)\leq a_0(x)[1+(\nu_2s)^{\alpha+1}] \quad\mbox{for each}\quad \alpha>0, \quad \mbox{where \ } a_0\in L^p(\Omega). \end{align} In fact, in order to obtain \eqref{g}, let $c_\alpha>0$ be a constant such that $\ln(1+x)\leq (1+x)^\alpha+c_\alpha$ for all $x\geq 0$. Then \begin{align*} g(x,w_2) &\leq[1+\nu_2w_2]\left(\frac{\Lambda_2}{\nu_2}c^+(x)(1+\nu_2w_2)^\alpha+c_\alpha\frac{\Lambda_2}{\nu_2}c^+(x)+h^+(x)\right)\\ &\leq [1+\nu_2w_2]^{\alpha+1}\left(\frac{\Lambda_2}{\nu_2}c^+(x)(1+c_\alpha)+h^+(x)\right)\leq[1+(\nu_2w_2)^{\alpha+1}]a_0(x). \end{align*} In addition, we note that $[1+\nu_2w_2]^{\frac{\nu_1}{\nu_2}}=(e^{\nu_2u})^{\frac{\nu_1}{\nu_2}}=e^{\nu_1u}=1+\nu_1w_1=\nu_1[v_1+z_0],$ which means that $w_2=\xi(v_1+z_0)$, where $\xi(s):=[(\nu_1s)^{\frac{\nu_2}{\nu_1}}-1]\nu_2^{-1}$ is an increasing function satisfying \begin{align}\label{xi} \lim_{s\rightarrow\infty}\frac{\xi(s)}{s^\beta}=\lim_{s\rightarrow\infty}\frac{(\nu_1s)^{\nu_2/\nu_1}-1}{\nu_2s^{\nu_2/\nu_1}}=\lim_{s\rightarrow\infty}\frac{\nu_1^{\nu_2/\nu_1}-\frac{1}{s^{\nu_2/\nu_1}}}{\nu_2}=\frac{\nu_1^{\nu_2/\nu_1}}{\nu_2}<\infty, \ \mbox{for \ } \beta=\nu_2/\nu_1. 
\end{align} Thus, we are in a position to apply the following theorem, which under our assumptions is a straightforward generalization of \cite[Theorem 2]{Anew}. In fact, as a consequence of Theorem \ref{teoBWHI}, Theorems 3-6 of \cite{Anew} remain valid under our assumptions on the domain and on the coefficients of \eqref{$P_lambda$}. Hence, it remains to observe that the other generalizations in the hypotheses of Theorem \ref{newmethod}, in comparison with \cite[Theorem 2]{Anew}, are natural in view of \cite[Remark 4]{Anew}. For completeness, we state our adapted version here. \begin{theorem}\label{newmethod} Let $\Omega\subset\mathbb{R}^n$, $n\geq 2$, be a bounded domain with boundary $\partial\Omega$ satisfying the interior $C^{1,Dini}$-paraboloid condition, and consider problem $(P_\lambda)$ with a uniformly elliptic operator under our assumptions. Assume that $z_0$ is a bounded function and that $v\geq 0$ and $\xi(v+z_0)$, where $\xi$ satisfies \eqref{xi}, are functions in $H^{1}(\Omega)$ satisfying the following inequalities in the weak sense \begin{align*} -\divi(A(x)Dv)+\nu_1h^{-}(x)v&\geq f(x,v)\\ -\divi(A(x)D\xi(v+z_0))+\nu_1h^{-}(x)\xi(v+z_0)&\leq g(x,\xi(v+z_0)), \end{align*} where $f$ satisfies \eqref{f} and $g$ satisfies \eqref{g} for some $r=\alpha+1$ with \begin{align*} r<\frac{n+1}{n-1}+\left(\frac{1}{\beta}-1\right)\frac{2}{n-1}. \end{align*} Then, for some $C$ depending on the quantities concerned, we have \begin{align*} \xi(v(x)+z_0)\leq Cd(x) \mbox{ \ in \ } \Omega \quad \mbox{ and \ hence }\quad v(x)\leq C. \end{align*} \end{theorem} In view of \eqref{v_1} and \eqref{w_2}, we are able to apply Theorem \ref{newmethod} with $v=v_1$ and $w_2=\xi(v_1+z_0)$ and conclude that $v_1$ and $w_2$ are bounded from above in $B_{4R}(\overline{x})$. As a consequence, the same holds for $w_1$ and also for $u$, as desired. \end{proof} \begin{lemma}\label{exterior} Assume that \eqref{A} holds and that $\overline{x}\in\overline{\Omega}_{c^+}\cap\partial\Omega$. 
For each $\Lambda_2>\Lambda_1>0$, there exist $R>0$ and $M_2>0$ such that, for any $\lambda\in[\Lambda_1,\Lambda_2]$, any solution $u$ to \eqref{$P_lambda$} satisfies $\sup\limits_{B_R(\overline{x})\cap\Omega}u\leq M_2$. \end{lemma} \begin{proof} This proof is very similar to the previous one; we only need to observe that our assumptions allow us to find $\Omega_1\subset\Omega$ with $\partial\Omega_1$ of class $C^{1,Dini}$ such that $B_{2R}(\overline{x})\cap \Omega\subset\Omega_1$ and $M(x)\geq \mu_1 I_n>0$, $c^-(x)\equiv 0$ and $c^+(x)\gneqq 0$ in $\Omega_1$. Hence, for $i=1$, note that \eqref{w_i} turns into \eqref{6.5} in $\Omega_1$ instead of $B_{4R}(\overline{x})$. Then, if $z_0$ is the solution to \eqref{6.6} in $H^1_0(\Omega_1)$ instead of $H^1_0(B_{4R}(\overline{x}))$, as in Lemma \ref{interior} we get $z_0\in C(\overline{\Omega}_1)$ and $\overline{C}>0$ depending on the usual quantities such that $-\overline{C}\leq z_0\leq 0$ in $\Omega_1$. In addition, defining $v_1$ as before, we observe that $v_1$ satisfies equation \eqref{v_1} in $\Omega_1$ and $v_1>0$ on $\overline{\Omega}_1$. Therefore, arguing exactly as in Lemma \ref{interior}, we deduce \eqref{v_1} and \eqref{w_2}, and then we are able to apply Theorem \ref{newmethod}, obtaining an upper bound for $u$ in $\Omega_1$. \end{proof} \begin{proof}[Proof of Theorem \ref{6.3}] In view of Lemmas \ref{interior} and \ref{exterior}, we have the existence of a uniform a priori upper bound on $u$ in a neighborhood of any fixed point $\overline{x} \in\overline{\Omega}_{c^+}$. Then, by applying a topological approach relying on the derivation of a priori bounds, this proof follows the same lines as the proof of the Interior Weak Harnack Inequality (IWHI); for details, see \cite{MR4030257}. \end{proof} We will now see that the solutions of problem \eqref{$P_lambda$} are bounded from below, uniformly even as $\lambda\to 0^+$.
\begin{theorem}[A priori lower bound]\label{lowerbound} Under the standing assumptions on problem \eqref{$P_lambda$}, including hypothesis \eqref{A}, let $\Lambda_2>0$. Then, every supersolution $u$ to \eqref{$P_lambda$} satisfies \begin{align*} \|u^-\|_{L^\infty}\leq C \mbox{ for all } \lambda\in [0,\Lambda_2], \quad \mbox{where\ } C= C(n,p,\nu_1, \Omega, \Lambda_2,\|c\|_{L^p(\Omega)},\|h^-\|_{L^p(\Omega)}). \end{align*} \end{theorem} \begin{proof} First observe that both $U_1=-u$ and $U_2=0$ are subsolutions of \begin{align*} -\divi(A(x)DU)\leq c_\lambda U -(M(x)DU,DU)+h^-(x) \mbox{ in }\Omega. \end{align*} Then, these functions are also subsolutions of \begin{eqnarray*} \left\{ \begin{array}{rll} -\divi(A(x)DU)+ \mu_1|DU|^2&\leq c_\lambda U +h^-(x) &\mbox{ in } \Omega\\ U&\leq 0 &\mbox{ on } \partial\Omega \end{array} \right. \end{eqnarray*} and so is $U:=u^-=\max\{U_1,U_2\}$, as the maximum of subsolutions. Moreover, $U\geq 0$ in $\Omega$ and $U=0$ on $\partial\Omega$. We make the following exponential change of variables $ w:=\dfrac{1-e^{-\nu_1U}}{\nu_1}. $ From Lemma \ref{exponentialchange}, \[-\divi(A(x)Dw)\leq (1-\nu_1 w) \left[c_\lambda(x) U +h^-(x) \right],\] hence, we know that $w$ is a weak solution to \begin{align}\tag{$Q_\lambda$}\label{w} \left\{ \begin{array}{rll} -\divi(A(x)Dw)+\nu_1h^-(x)w&\leq h^-(x)+\dfrac{c_\lambda(x)}{\nu_1}\ln (1-\nu_1w)(1-\nu_1w) &\mbox{ in } \Omega\\ w&= 0 &\mbox{ on } \partial\Omega. \end{array} \right. \end{align} Now set $w_1:=\textstyle\dfrac{1-e^{-\nu_1u_1^-}}{\nu_1}$, where $u_1$ is some fixed supersolution to \eqref{$P_lambda$} with $\lambda\geq 0$; note that if no such supersolution existed, there would be nothing to prove. Then, from the above conclusions, $w_1\in[0,1/\nu_1)$ is a solution to \eqref{w}.
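For the reader's convenience, we sketch the computation behind Lemma \ref{exponentialchange}, under the assumption that $\nu_1$ is chosen so that $\nu_1(A(x)\xi,\xi)\leq\mu_1|\xi|^2$ for all $\xi\in\mathbb{R}^n$, which holds, for instance, whenever $\nu_1\leq\mu_1\vartheta$. Since $1-\nu_1w=e^{-\nu_1U}$ and $Dw=e^{-\nu_1U}DU$, we have, in the weak sense, \begin{align*} -\divi(A(x)Dw)&=e^{-\nu_1U}\big[-\divi(A(x)DU)+\nu_1(A(x)DU,DU)\big]\\ &\leq e^{-\nu_1U}\big[-\divi(A(x)DU)+\mu_1|DU|^2\big]\leq (1-\nu_1 w)\left[c_\lambda(x) U +h^-(x) \right]. \end{align*}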
Defining \begin{align*} \overline{w}:=\sup \mathcal{A}, \mbox{ where } \mathcal{A}:=\{w: w \mbox{ is a solution to \eqref{w}; } 0\leq w <1/\nu_1 \mbox{ in } \Omega \}, \end{align*} we observe that $\mathcal{A}\ne\emptyset$, since $w_1\in \mathcal{A}$, and also that $w_1\leq \overline{w}\leq 1/\nu_1$ in $\Omega$. Further, as a supremum of solutions, $\overline{w}$ is a weak solution to \eqref{w}, with $\overline{w}=0$ on $\partial \Omega$. Then, \begin{align*} f(x)&:=h^-(x)+\frac{c_\lambda(x)}{\nu_1}|\ln (1-\nu_1\overline{w})|(1-\nu_1\overline{w})\in L^p_+(\Omega)\\ \mbox{with } \|f^+\|_{L^p(\Omega)}&\leq\|h^-\|_{L^p(\Omega)}+\frac{1}{\nu_1}\left(\Lambda_2\|c^+\|_{L^p(\Omega)}+\|c^-\|_{L^p(\Omega)}\right)C_0, \end{align*} since $|\ln (1-\nu_1\overline{w})|(1-\nu_1\overline{w})\leq C_0$ for some constant $C_0>0$. Therefore, by applying the Boundary Lipschitz Bound \cite[Lemma 2.14]{fio}, we conclude that \begin{align*} \overline{w}\leq C\|f^+\|_{L^p(\Omega)}d(x)\rightarrow 0 \mbox{ as }x\rightarrow\partial\Omega, \end{align*} where $d(x)$ denotes the distance between $x$ and $\partial \Omega$. Hence, $\overline{w}\not\equiv 1/\nu_1$; however, $\overline{w}$ may equal $1/\nu_1$ at some interior points. In order to complete the proof, we argue by contradiction. Assume that there is a sequence of supersolutions $u_k$ to \eqref{$P_lambda$} in $\Omega$ with unbounded negative parts; then there exists a subsequence such that \begin{align*} u_k^-(x_k)=\|u_k^-\|_{L^\infty}\rightarrow +\infty, \quad x_k\in \overline{\Omega}, \quad x_k\rightarrow x_0\in\overline{\Omega} \mbox{\ as \ }k\rightarrow\infty, \end{align*} with $x_k\in \Omega$ for large $k$, since $u_k\geq 0$ on $\partial\Omega$. It implies that the respective sequence $(w_k(x_k))$ satisfies \begin{align*} w_k(x_k)=\frac{1-e^{-\nu_1 u_k^-(x_k)}}{\nu_1}\rightarrow \frac{1}{\nu_1}, \qquad w_k\in \mathcal{A}.
\end{align*} Hence, for every $\varepsilon>0$, there exists some $k_0\in \mathbb{N}$ such that \begin{align*} \frac{1}{\nu_1}-\varepsilon \leq w_k(x_k)\leq \overline{w}(x_k)\leq \frac{1}{\nu_1},\mbox{ for all } k\geq k_0. \end{align*} Thus, $ \overline{w}(x_0)\geq \displaystyle\lim_{x_k\rightarrow x_0}\overline{w}(x_k)=\displaystyle\lim_{k\rightarrow \infty} \overline{w}(x_k) ={1}/{\nu_1}, $ and $x_0 \in\Omega$, since $\overline{w}=0$ on $\partial\Omega$. Moreover, since $\overline{w}\leq 1/\nu_1$, we get $\overline{w}(x_0)={1}/{\nu_1}$. Finally, set $z:=1-\nu_1\overline{w}$ and observe that \begin{align*} \divi(A(x)Dz)&=-\nu_1 \divi(A(x)D\overline{w})\leq \nu_1(1-\nu_1\overline{w}) \left[\frac{c_\lambda(x)}{\nu_1} |\ln(1-\nu_1\overline{w})| +h^-(x) \right] \\ &= c_\lambda(x)|\ln z|z +\nu_1h^-(x)z. \end{align*} Then $z$ is a supersolution to \begin{align*} \left\{ \begin{array}{rll} -\divi(A(x)Dz)+\nu_1h^-(x)z&\geq -c_\lambda(x)|\ln z|z &\mbox{ in } \Omega\\ z(x_0)=0 \quad \text{and} \quad z&\gneqq 0 &\mbox{ in } \Omega.\\ \end{array} \right. \end{align*} But this contradicts the nonlinear version of the SMP, see for instance \cite[Lemma 5.3]{multiplicidade} and its extension in \cite{MR4274882}, which says that either $z\equiv 0$ or $z>0$ in $\Omega$. \end{proof} \section{Main Results}\label{results} \quad \ This section is devoted to proving our main results. We start by proving a lemma, which is going to be useful in order to deal with degree arguments. \begin{lemma}\label{existssub} Under assumption \eqref{A}, for every $\lambda>0$ there exists a strict subsolution $v_\lambda$ to \eqref{$P_lambda$} such that every supersolution $\beta$ to \eqref{$P_lambda$} satisfies $v_\lambda\leq\beta$.
\end{lemma} \begin{proof} Let $C>0$ be given by Theorem \ref{lowerbound} and $\overline{M}$ be given by Theorem \ref{6.3}, such that every supersolution $\beta$ of \begin{align*} \left\{ \begin{array}{rll} -\divi(A(x)Du)& = c_\lambda(x)u+(M(x)Du,Du)-h^-(x)-1 &\mbox{ in } \Omega\\ u&=0 &\mbox{ on } \partial\Omega, \end{array} \right. \end{align*} satisfies $\beta\geq-C$. Let $k>C$ and consider $\alpha_k$ the solution to \begin{align*} \left\{ \begin{array}{rll} -\divi(A(x)Dv)+c^-(x)v& = -\lambda k c^+(x)-h^-(x)-1 &\mbox{ in } \Omega\\ v&=0 &\mbox{ on } \partial\Omega. \end{array} \right. \end{align*} As $-\lambda k c^+(x)-h^-(x)-1<0$, we have $\alpha_k\ll 0 $ by the SMP and the Hopf lemma. We claim that every supersolution $\beta$ to \eqref{$P_lambda$} satisfies $\beta\geq \alpha_k$. In fact, taking regular supersolutions $\beta_1,\dots,\beta_l$ to \eqref{$P_lambda$} such that $\beta=\min\{\beta_j: 1\leq j\leq l\}$ and setting $w=\beta_j-\alpha_k$ for some $1\leq j\leq l$, we have \begin{align*} \left\{ \begin{array}{rll} -\divi(A(x)Dw)+c^-(x)w&\geq \lambda c^+(x)(\beta_j+k)+\mu_1|D\beta_j|^2\geq 0 &\mbox{ in } \Omega\\ w&=0 &\mbox{ on } \partial\Omega. \end{array} \right. \end{align*} Hence, by the Maximum Principle, $w\geq 0$, i.e., $\beta_j\geq \alpha_k$, and this proves the claim. Now, consider the problem \begin{align}\label{t_k} -\divi(A(x)Dv)= c_\lambda(x)T_k(v)+(M(x)Dv,Dv)-h^-(x)-1, \quad \text{where} \quad T_k(v)= \left\{ \begin{array}{rll} -k, & \mbox{ if } v\leq -k,\\ v,& \mbox{ if } v>-k. \end{array} \right. \end{align} Observe that $\beta=T_k(\beta)$ is a supersolution to \eqref{t_k} and $\alpha_k$ is a subsolution to \eqref{t_k}. Note that $-\lambda k c^+(x)=\lambda c^+(x) T_k(\alpha_k)$, $c^-(x)k=-c^-(x)T_k(\alpha_k)$ and hence, by the standard method of sub and supersolutions, \eqref{t_k} has a minimal solution $v_k$ with $\alpha_k\leq v_k\leq \beta$.
Furthermore, we also observe that every supersolution $\widetilde \beta$ to \eqref{$P_lambda$} satisfies $\widetilde \beta\geq v_k$. In fact, since $\widetilde \beta$ is a supersolution to \eqref{$P_lambda$}, we have $\widetilde \beta \geq \alpha_k$ and, by the construction of \eqref{t_k}, every supersolution $\widetilde \beta$ to \eqref{$P_lambda$} is also a supersolution to \eqref{t_k}; then the minimality of $v_k$ implies that $v_k\leq \widetilde \beta$. Now we observe that $v_k$ is a subsolution to \eqref{$P_lambda$}, since $v_k\geq -C>-k$ and it satisfies \begin{align*} -\divi(A(x)Dv_k)&=c_\lambda(x)T_k(v_k)+(M(x)Dv_k,Dv_k)-h^-(x)-1\\ &\leq c_\lambda(x)v_k+(M(x)Dv_k,Dv_k)+h(x). \end{align*} Furthermore, we claim that $v_k$ is a strict subsolution to \eqref{$P_lambda$}. In order to see this, let $u$ be a solution to \eqref{$P_lambda$} with $u\geq v_k$. Then, $w=u-v_k$ satisfies \begin{align*} -\divi(A(x)Dw)&\geq c_\lambda(x)u+(M(x)Du,Du)+h(x)-c_\lambda(x)v_k-(M(x)Dv_k,Dv_k)+h^-(x)+1\\ &=c_\lambda(x)w+(M(x)[Du+Dv_k],Dw)+h^+(x)+1, \end{align*} which means that \begin{align*} \left\{ \begin{array}{rll} -\divi(A(x)Dw) -(M(x)[Du+Dv_k],Dw) &\geq c_\lambda(x)w+h^+(x)+1 &\mbox{ in } \Omega\\ w&=0 &\mbox{ on } \partial\Omega. \end{array} \right. \end{align*} Therefore, by the Maximum Principle, we deduce that $w\gg 0$, namely, $u\gg v_k$. \end{proof} By adapting \cite[Lemma 5.1]{MR4030257} to our setting, we obtain the following auxiliary result, which is going to be useful for proving Theorem \ref{teo5.2}. \begin{lemma}\label{nosolution} Under the assumptions of Theorem \ref{teo5.2}, assume that ($P_0$) has a solution $u_0$ such that $c^+(x)u_0\gneqq 0$ and $c^-(x)\equiv 0 $. Then, there exists $\overline{\Lambda}\in(0,\infty)$ such that, for $\lambda\geq \overline{\Lambda}$, the problem \eqref{$P_lambda$} has no solution $u$ with $u\geq u_0$ in $\Omega$. \end{lemma} \begin{proof} Let $\varphi_1>0$ be the first eigenfunction of \eqref{eig1}.
If \eqref{$P_lambda$} has a solution $u$ with $u\geq u_0$, multiplying \eqref{$P_lambda$} by $\varphi_1$ and integrating, we obtain \begin{align*} \displaystyle\int_{\Omega}c_{\gamma_1}(x)u\varphi_1\, dx= \displaystyle\int_{\Omega}(A(x)D\varphi_1, Du)\, dx=\displaystyle\int_{\Omega} c_\lambda(x)u\varphi_1\, dx+\int_{\Omega}\varphi_1 (M(x)Du,Du)\, dx +\int_{\Omega} h(x)\varphi_1\, dx. \end{align*} Hence, for $\lambda>\gamma_1$, as $u\geq u_0$ we have \begin{align*} 0&\geq (\lambda-\gamma_1)\displaystyle\int_{\Omega} c^+(x)u\varphi_1\, dx + \mu_1\int_{\Omega}\varphi_1 |Du|^2\, dx+\int_{\Omega} h(x)\varphi_1\, dx\\ &\geq (\lambda-\gamma_1)\displaystyle\int_{\Omega} c^+(x)u_0\varphi_1\, dx + \mu_1\int_{\Omega}\varphi_1 |Du|^2\, dx+\int_{\Omega} h(x)\varphi_1\, dx, \end{align*} which gives a contradiction for $\lambda$ large enough. \end{proof} In view of the previous results, we are now able to prove Theorem \ref{teo5.2}. \begin{proof}[Proof of Theorem \ref{teo5.2}] Applying all previous results and adopting the strategies presented in \cite{ACJT, ACJTuni, MR4030257}, we give the proof of Theorem \ref{teo5.2}, treating separately the cases $\lambda\leq 0$ and $\lambda>0$. \newline {\bf Case (i): $\lambda\leq 0$.} This case has been studied in previous works. We briefly recall the following argument. If ($P_0$) has a solution $u_0$, then $u_0$ is a supersolution to \eqref{$P_lambda$}. By applying Lemma \ref{lbound} and the arguments found in \cite{ACJT}, we obtain the existence of a solution $u_\lambda$ of (\ref{$P_lambda$}) for any $\lambda<0$. We observe that the uniqueness of solutions for $\lambda\leq 0$ is ensured by \cite[Proposition 4.1]{ACJT}. On the other hand, for $\lambda \leq 0$, we have $c_\lambda(x)=\lambda c^+(x)-c^-(x)\leq -c^-(x)$, so by applying the Comparison Principle we get $u_\lambda\leq u_0$.
Moreover, setting $v_0=u_0-\|u_0\|_{\infty}$, by Lemma \ref{lbound} we see that $v_0$ is a subsolution to \eqref{$P_lambda$} for $\lambda<0$, so again by the Comparison Principle we get $u_0-\|u_0\|_{\infty}\leq u_\lambda.$ \newline{\bf Case (ii): $\lambda>0$}. With the aim of showing the existence of a continuum of solutions to \eqref{$P_lambda$}, for $\lambda\geq 0$ we introduce the auxiliary problem \begin{equation}\label{P2}\tag{$\overline{P_\lambda}$} -\divi(A(x)Du)+u=[c_\lambda(x)+1][(u-u_0)^++u_0]+\big(M(x)Du,Du\big)+h(x). \end{equation} As in the case of \eqref{$P_lambda$}, any solution to \eqref{P2} belongs to $C^{0,\tau}(\overline{\Omega})$ for some $\tau>0$. Moreover, observe that $u$ is a solution to \eqref{P2} if and only if it is a fixed point of the operator $\overline{T}_\lambda:C(\overline{\Omega})\rightarrow C(\overline{\Omega})$, $v\mapsto u$, with $u$ the solution to \begin{align*} -\divi(A(x)Du)+u-\big(M(x)Du,Du\big)=[c_\lambda(x)+1][(v-u_0)^++u_0]+h(x). \end{align*} Applying \cite[Lemma 5.2]{ACJT}, we see that $\overline{T}_\lambda$ is completely continuous. Now, we define \begin{align*} \overline{\Sigma}:=\{(\lambda,u)\in\mathbb{R}\times \mathcal{C}(\overline{\Omega}), u \mbox{ solves \eqref{P2}} \} \ \mbox{and split the rest of the proof into three steps.} \end{align*} {\bf Step 1:} {\it If $u$ is a solution to \eqref{P2}, then $u\geq u_0$ and hence, it is a solution to \eqref{$P_lambda$}.} Rearranging \eqref{P2}, we see that $u$ satisfies \begin{align*} -\divi(A(x)Du)=c_\lambda(x)[(u-u_0)^++u_0]+\big[(u-u_0)^++u_0-u\big]+\big(M(x)Du,Du\big)+h(x). \end{align*} Observe that $(u-u_0)^++u_0-u\geq 0$ and $\lambda c^+(x)[(u-u_0)^++u_0]\geq \lambda c^+(x)u_0\geq 0$. Then, we deduce that a solution $u$ to \eqref{P2} is a supersolution to \begin{equation}\label{7.1} -\divi(A(x)Du)=c_\lambda(x)[(u-u_0)^++u_0]+\big(M(x)Du,Du\big)+h(x). \end{equation} Since $u_0$ is a solution to ($P_0$), it is a subsolution to \eqref{7.1}. Thus, applying again the Comparison Principle, we get $u\geq u_0$; once $u\geq u_0$, we have $(u-u_0)^++u_0=u$, so that \eqref{P2} reduces to \eqref{$P_lambda$}.
\newline {\bf Step 2:} {\it $u_0$ is the unique solution to ($\overline{P_0}$) as well as to the problem ($P_0$), and $i(I-\overline{T}_0,u_0)=1$.} For $\lambda=0$, if $u$ is a solution to (\ref{P2}), then by Step 1, $u\geq u_0$ and $u$ solves (\ref{$P_lambda$}). From Case (i), we conclude that $u=u_0$. In order to prove that $i(I-\overline{T}_0,u_0)=1$, we consider the operator $ S_t:C(\overline{\Omega})\rightarrow C(\overline{\Omega})$ given by $S_t(v)=t\overline{T}_0v=u $, where $u$ is the solution to {\small \begin{align*} \textstyle-\divi(A(x)Du)+u&=(M(x)Du,Du)+th(x) +t\big([-c^-(x)+1][u_0+(v-u_0)^+-(v-u_0-1)^+]\big). \end{align*}} Note that $S_t$ is completely continuous and, since every solution $u$ to \eqref{P2} is $C^{\alpha}$ up to the boundary, there exists $R>0$ such that, for all $t \in [0,1]$ and all $v \in C(\overline{\Omega})$, we have $ \|S_tv\|_{L^\infty}<R.$ Then, $I-S_t$ does not vanish on $\partial B_R(0)$ and $ { \deg(I-\overline{T}_0,B_R(0))= \deg(I-S_1,B_R(0))=\deg(I-S_0,B_R(0))=\deg(I,B_R(0))=1.} $ Therefore, $\overline{T}_0$ has a fixed point, which must be $u_0$, the unique solution to $(\overline{P}_0)$. Applying the excision property of the degree, for all $\varepsilon>0$ small enough, it follows that $ \displaystyle \deg(I-\overline{T}_0,B_\varepsilon(u_0))=\deg(I-\overline{T}_0,B_R(0))=1. $ Thus, for $\varepsilon \ll 1$, we conclude that $ \displaystyle i(I-\overline{T}_0,u_0)=\lim_{\varepsilon\rightarrow 0}\deg(I-\overline{T}_0,B_\varepsilon(u_0))=1.
$ \newline {\bf Step 3:} {\it Existence and behavior of the continuum.} Proceeding as in \cite[Theorem 1.2]{CJ}, we are able to apply Theorem \ref{continuum}, which gives us a continuum $\mathcal{C}=\mathcal{C}^+\cup \mathcal{C}^-\subset \overline{\Sigma}$ such that \begin{align*} \mathcal{C}^+=\mathcal{C}\cap ([0,\infty)\times C(\overline{\Omega}))\mbox{ and } \mathcal{C}^-=\mathcal{C}\cap ((-\infty,0]\times C(\overline{\Omega})) \ \mbox{ are unbounded in} \ \mathbb{R}^{\pm}\times C(\overline{\Omega}). \end{align*} By Step 1, if $u \in \mathcal{C}^+$, then $u\geq u_0$ and it is a solution to \eqref{$P_lambda$}. Thus, applying Lemma \ref{nosolution}, we infer that the projection of $\mathcal{C}^+$ on the $\lambda$-axis is $[0,\overline{\Lambda}]$, a bounded interval. A consequence of Case (i) is that no $\lambda\in (-\infty,0] $ is a bifurcation point from infinity for problem (\ref{$P_lambda$}), and then we deduce that the projection of $\mathcal{C}^-$ on the $\lambda$-axis is $(-\infty,0]$. Hence, \begin{align*} \mbox{Proj}_{\mathbb{R}}\mathcal{C}=\mbox{Proj}_{\mathbb{R}}\mathcal{C}^-\cup\mbox{Proj}_{\mathbb{R}}\mathcal{C}^+=(-\infty,\overline{\Lambda}], \quad \text{for \ some} \quad \overline{\Lambda}>0. \end{align*} Finally, by Theorem \ref{6.3}, for any $0<\Lambda_1<\Lambda_2$ there exists an a priori bound for the solutions to \eqref{$P_lambda$} for all $\lambda\in[\Lambda_1,\Lambda_2]$, so the projection of $\mathcal{C}\cap ([\Lambda_1,\Lambda_2]\times C(\overline{\Omega}))$ on $C(\overline{\Omega})$ is bounded. Since the component $\mathcal{C}^+$ is unbounded in $\mathbb{R}^+\times C(\overline{\Omega})$, its projection on $C(\overline{\Omega})$ must be unbounded. In view of Case (i), the projection of $\mathcal{C}^-$ on $C(\overline{\Omega})$ is bounded. Hence, \begin{align*} \mbox{Proj}_{C(\overline{\Omega})}\mathcal{C}=\mbox{Proj}_{C(\overline{\Omega})}\mathcal{C}^-\cup\mbox{Proj}_{C(\overline{\Omega})}\mathcal{C}^+=[0,+\infty).
\end{align*} Therefore, we deduce that $\mathcal{C}$ must emanate from infinity on the right of the axis $\lambda=0$. Now, we prove our multiplicity results in (iii). Since $\mathcal{C}$ contains $(0,u_0)$, with $u_0$ being the unique solution to ($P_0$), from Case (ii) we know that $\mathcal{C}$ also emanates from infinity on the right of the axis $\lambda=0$, and then we conclude that there exists $\lambda_0\in (0,\overline{\Lambda})$ such that the problems (\ref{P2}) and \eqref{$P_lambda$} have at least two solutions satisfying $u\geq u_0$ for $\lambda\in(0,\lambda_0)$. Hence, the quantity \begin{align*} \overline{\lambda}:=\sup \{\mu>0: \forall \ \lambda\in(0, \mu), (P_\lambda) \mbox{ has at least two solutions} \} \quad \mbox{is well defined.} \end{align*} We claim that for all $\lambda\in (0, \overline{\lambda})$, the problem (\ref{$P_lambda$}) has at least two solutions with $u_{\lambda,1}\ll u_{\lambda,2}$. Let us consider the strict subsolution $\alpha_\lambda:=v_\lambda$ given by Lemma \ref{existssub}. As $\alpha_\lambda\leq u$ for every solution $u$ to \eqref{$P_lambda$}, we can choose $u_{\lambda,1}$ as the minimal solution with $u_{\lambda,1}\geq \alpha_\lambda$. Hence, we have $u_{\lambda,1}\lneqq u_{\lambda,2}$; otherwise there would exist a solution $u$ with $\alpha_\lambda\leq u \leq \min\{u_{\lambda,1}, u_{\lambda,2}\}$, which contradicts the minimality of $u_{\lambda,1}$. Observe that the function $\beta=(u_{\lambda,1}+u_{\lambda,2})/2$ is a supersolution to \eqref{$P_lambda$} which is not a solution. Now, for each $\xi\in \mathbb{R}^n$, we can define the function $\varphi(\xi):=(M(x)\xi,\xi)$ and observe that, by \eqref{5.2}, we have $D^2\varphi=M(x)+M(x)^T\geq 2\mu_1I_n>0$, and thus $\varphi$ is convex.
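For completeness, the convexity of $\varphi$ can also be checked directly. Writing $M_s(x):=\frac{1}{2}\big(M(x)+M(x)^T\big)$ for the symmetric part of $M(x)$, so that $\varphi(\xi)=(M_s(x)\xi,\xi)$, a direct computation gives, for all $\xi,\eta\in\mathbb{R}^n$ and $t\in[0,1]$, \begin{align*} t\varphi(\xi)+(1-t)\varphi(\eta)-\varphi(t\xi+(1-t)\eta)=t(1-t)\big(M_s(x)(\xi-\eta),\xi-\eta\big)\geq t(1-t)\mu_1|\xi-\eta|^2\geq 0, \end{align*} by \eqref{5.2}, with strict inequality whenever $t\in(0,1)$ and $\xi\neq\eta$.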
With this, we obtain \begin{align*} -\divi(A(x)D\beta) &=-\frac{1}{2}\divi(A(x)Du_{\lambda,1})-\frac{1}{2}\divi(A(x)Du_{\lambda,2})\\ &=c_\lambda(x)\beta+ \frac{1}{2}(M(x)Du_{\lambda,1},Du_{\lambda,1})+\frac{1}{2}(M(x)Du_{\lambda,2},Du_{\lambda,2})+h(x)\\ &\gneqq c_\lambda(x)\beta+\varphi\big(\frac{Du_{\lambda,1}}{2}+\frac{Du_{\lambda,2}}{2}\big)+h(x) = c_\lambda(x)\beta+(M(x)D\beta,D\beta)+h(x). \end{align*} Finally, let us prove that $\beta$ is a strict supersolution to (\ref{$P_lambda$}). Consider a solution $u$ of (\ref{$P_lambda$}) with $u\leq \beta$. Then $v:=\beta-u$ satisfies \begin{align*} -\divi(A(x)Dv)\gneqq c_\lambda(x)\beta&+(M(x)D\beta,D\beta)+h(x)-(M(x)Du,Du)-c_\lambda u -h(x)\\ =(M(x)&[D\beta+Du],Dv)+c_\lambda v,\\ \text{and \ hence} \qquad \quad -\divi(A(x)Dv) &-(M(x)[D\beta+Du],Dv)+c^-(x)v\gneqq \lambda c^+(x) v\geq 0. \quad \quad \end{align*} By Theorem \ref{SMP}, we deduce that either $v\gg 0$ or $v\equiv 0$. If $v \equiv 0$, then $\beta=u$ is a solution, which contradicts the definition of $\beta$. Then, we have $\beta\gg u$. As $u_{\lambda,1}\lneqq \beta\lneqq u_{\lambda,2}$, we deduce that $u_{\lambda,1}\ll \beta\lneqq u_{\lambda,2}$, and hence $u_{\lambda,1}\ll u_{\lambda,2}$. We finish the proof by claiming that if $\overline{\lambda}<\infty$, then the solution $u_{\overline{\lambda}}$ of ($P_{\overline{\lambda}}$) is unique. In order to prove that ($P_{\overline{\lambda}}$) has at least one solution, take $\{\lambda_n\}\subset (0,\overline{\lambda})$ such that $\lambda_n\rightarrow \overline{\lambda}$ and, by the regularity result \cite[Lemma 2.1]{ACJTuni}, let $\{u_n \}\subset H^1_0(\Omega)\cap W^{1,n}_{loc}(\Omega)\cap C(\overline{\Omega})$ be a sequence of corresponding solutions. By Theorem \ref{6.3}, there exists $M>0$ such that $\|u_n\|_{L^\infty}<M$ for all $n\in \mathbb{N}$, and hence, in view of the $C^{1,\alpha}$ global estimates, we get $\|u_n\|_{C^{1,\alpha}(\overline{\Omega})}\leq C$.
Then, up to a subsequence, $u_n\rightarrow u$ in $C^{1}_0(\overline{\Omega})$. From this strong convergence we easily observe that $u$ is a solution to ($P_{\overline{\lambda}}$). Now we prove the uniqueness of the solution to ($P_{\overline{\lambda}}$). Assume by contradiction that there exist two distinct solutions $u_1$ and $u_2$ to problem ($P_{\overline{\lambda}}$); arguing as above, $\beta=(u_1+u_2)/2$ is then a strict supersolution to ($P_{\overline{\lambda}}$). Let us consider the strict subsolution $\alpha:=\alpha_{\overline{\lambda}}\ll \beta$ to problem ($P_{\overline{\lambda}}$) given by Lemma \ref{existssub} and look at the set \begin{align*} \overline{\mathcal{S}}=\{u\in C_0^1(\overline{\Omega}):\ \alpha\ll u\ll \beta,\ \|u\|_{C^1_0}<R\} \end{align*} for some $R>C>0$. Again, by the $C^{1,\alpha}$ estimates, \begin{align}\label{overlambda} \|u\|_{C^{1,\alpha}}\leq C \mbox{ for every solution } u \mbox{ to } \eqref{$P_lambda$}, \quad \lambda\in[\overline{\lambda}, \overline{\lambda}+1], \end{align} and hence $\deg(I-T_{\overline{\lambda}},\overline{\mathcal{S}})=1$. Now, we prove the existence of $\varepsilon>0$ such that \begin{align}\label{existsvarepsilonover} \deg(I-T_\lambda,\overline{\mathcal{S}})=1\mbox{, for all } \lambda\in[\overline{\lambda} ,\overline{\lambda}+\varepsilon]. \end{align} We will verify that there exists some $\varepsilon\in(0,1)$ such that $T_\lambda$ has no fixed points on the boundary of $\overline{\mathcal{S}}$ for all $\lambda$ in the preceding interval. Indeed, if this were not the case, there would exist a sequence $\lambda_k\rightarrow\overline{\lambda}$ with respective solutions $u_k$ to problem ($P_{\lambda_k}$) belonging to $\partial\overline{\mathcal{S}}$. Say $\lambda_k \in[\overline{\lambda} ,\overline{\lambda}+1]$ for $k \geq k_0$.
Then, by \eqref{overlambda}, $\|u_k\|_{C^{1}_0}\leq C<R$ for $k\geq k_0$, so the norm constraint in $\overline{\mathcal{S}}$ is not active; since $u_k\in \partial \overline{\mathcal{S}}$, this means that for each such $k$, \begin{align}\label{touchover} \max_{\overline{\Omega}}(\alpha-u_k)=0\mbox{ \ or \ } \max_{\overline{\Omega}}(u_k-\beta)=0. \end{align} By \eqref{overlambda} and the compact inclusion $C^{1,\alpha}(\overline{\Omega})\subset \subset C^{1}(\overline{\Omega})$, we know that, up to a subsequence, $u_k\rightarrow u$ in $C^{1}(\overline{\Omega})$ for some $u\in C^{1}(\overline{\Omega})$. Then, $u$ is a solution to ($P_{\overline{\lambda}}$) and, by taking the limit as $k\rightarrow+\infty$ in the corresponding inequalities for $u_k$, it follows that $\alpha \leq u\leq \beta$ in $\Omega$. Thus, $\alpha\ll u \ll \beta$ in $\Omega$, since $\alpha$ and $\beta$ are strict. Passing \eqref{touchover} to the limit, we obtain that $u(x)=\alpha(x)$ or $u(x)=\beta(x)$ for some $x\in \overline{\Omega}$, which contradicts $\alpha\ll u\ll \beta$. Hence, to obtain \eqref{existsvarepsilonover} it suffices to apply the homotopy invariance in $\lambda$ on the interval $[\overline{\lambda} ,\overline{\lambda}+\varepsilon]$. With \eqref{existsvarepsilonover} in hand, we repeat exactly the same argument done in (iii) to obtain the existence of a second solution $u_{\lambda,2}$ to problem \eqref{$P_lambda$} for all $\lambda\in [\overline{\lambda} ,\overline{\lambda}+\varepsilon]$, which contradicts the definition of $\overline{\lambda}$. \end{proof} In order to prove Theorem \ref{teo5.3}, we start by constructing an auxiliary problem $(P_{\lambda,k})$, which has no solution for $k$ large. This is a typical but essential argument that allows us to find a second solution via degree theory, by homotopy invariance in $k$.
Fix $\Lambda_2>0$ and recall that Theorem \ref{lowerbound} gives us an a priori uniform lower bound $C_0$ such that $u\geq -C_0$ for every weak supersolution $u$ of (\ref{$P_lambda$}), for all $\lambda\in[0,\Lambda_2]$. Consider the problem \begin{align*}\label{P3}\tag{$P_{\lambda,{k}}$} \left\{ \begin{array}{rll} -\divi(A(x)Du)&=c_\lambda(x)u+(M(x)Du,Du)+h(x)+k\widetilde{c}(x) &\mbox{ in } \Omega\\ u&= 0 &\mbox{ on } \partial\Omega \end{array} \right. \end{align*} for $k\geq 0$, $\lambda\in [0,\Lambda_2]$ and $\widetilde{c}$ defined as \begin{align}\label{5.5} \widetilde{c}(x):=\widetilde{c}_{\Lambda_2}(x)=h^-(x)+\Lambda_2C_0c^+(x)+\widetilde{M}c^-(x)+Bc^+(x) \end{align} with $B=\gamma_1/ \nu_1$, where $\gamma_1=\gamma_1^+>0$ is the first eigenvalue, with weight $c$, associated to the eigenfunction $\varphi_1\in W^{2,p}(\Omega)$, given by \eqref{eig1}. Note that every solution to \eqref{P3} is also a supersolution to \eqref{$P_lambda$}, since $k\widetilde{c}(x)\geq 0$. Then, by virtue of (\ref{5.5}), for all $k \geq 1$ we have \begin{align*} c_\lambda(x)u+h(x)+k\widetilde{c}(x)\geq -\Lambda_2C_0c^+(x)-\widetilde{M}c^-(x)-h^-(x)+\widetilde{c}(x)=Bc^+(x)\gneqq 0. \end{align*} We now derive some results about the solutions of problem \eqref{P3}. \begin{lemma}\label{c2} Under assumption \eqref{A}, assume that ($P_0$) has a solution $u_0\leq 0$ with ${c^+(x)u_0\lneqq 0}$. Then, for each fixed $\Lambda_2>0$ and $\lambda\in [0,\Lambda_2]$, the following holds: \begin{itemize} \item[(i)] for all $k> 1$, the problem \eqref{P3} has no solution; \item[(ii)] for all $k\in(0,1) $, \eqref{P3} has at least two solutions $u_{\lambda,1}\ll u_{\lambda,2}$; \item[(iii)] for $k=1$ and $h\leq 0$, the problem \eqref{P3} has exactly one solution. \end{itemize} \end{lemma} \begin{proof} We proceed in several steps.
\newline {\bf Step 1:} {\it For $k>0$ small, \eqref{P3} admits a solution.} Let $\lambda>\gamma_1$, let $\varepsilon_0>0$ be given by Lemma \ref{antimax} corresponding to $\overline{c}=c(x)$, $\overline{d}=\nu_2h^-(x)$, $\overline{h}=\nu_2\widetilde{c}(x)+{k}^{-1}\nu_2h^+(x)$, and choose $\displaystyle\lambda_0\in\big(\gamma_1,\min\big\{\gamma_1+\varepsilon_0,\gamma_1+{(\lambda-\gamma_1)}/{2} \big\}\big]$. Then, the following problem \begin{align*} -\divi(A(x)Du)+\nu_2h^-(x)u=c_{\lambda_0}u+\nu_2\widetilde{c}(x)+\frac{1}{k}\nu_2h^+(x) \end{align*} has a solution $u\ll 0$. Taking $\delta>0$ small enough, we obtain \begin{align*} \lambda_0 s\geq (1+\lambda s)\ln(1+\lambda s) \quad \text{for \ all} \quad s\in[-\delta,0]. \end{align*} Defining $\widetilde{\beta}_k:={k}{\lambda}^{-1}u$ for $k>0$ small enough, it follows that $\widetilde{\beta}_k\in[-\delta,0]$ and it satisfies \begin{align*} -\divi(A(x)D\widetilde{\beta}_k)&=c_{\lambda_0}\widetilde{\beta}_k+\nu_2\frac{k}{\lambda}\widetilde{c}(x)-\nu_2\frac{k}{\lambda}h^-(x)u+\frac{1}{\lambda}\nu_2h^+(x),\mbox{ and hence}\\ -\divi(A(x)D\widetilde{\beta}_k)&+\nu_2h^-(x)\widetilde{\beta}_k=c_{\lambda_0}(x)\widetilde{\beta}_k+\nu_2\frac{k}{\lambda}\widetilde{c}(x)+\frac{1}{\lambda}\nu_2h^+(x).
\end{align*} Thus, for $\beta_k$ defined by $\beta_k={\nu_2}^{-1}\ln(1+\lambda \widetilde{\beta}_k)$, we have \begin{align*} -\divi(A(x)D\beta_k)&=-\frac{\lambda}{\nu_2}\frac{\divi(A(x)D\widetilde{\beta}_k)}{(1+\lambda\widetilde{\beta}_k)}-\frac{\lambda}{\nu_2}\left(A(x)D\widetilde{\beta}_k,D\left[{(1+\lambda\widetilde{\beta}_k)^{-1}}\right]\right)\\ &\gneqq c_\lambda(x)\beta_k+\frac{k\widetilde{c}(x)+h^+(x)-\lambda h^-(x)\widetilde{\beta}_k}{1+\lambda \widetilde{\beta}_k}+\frac{\lambda^2}{\nu_2(1+\lambda\widetilde{\beta}_k)^2}(A(x)D\widetilde{\beta}_k,D\widetilde{\beta}_k)\\ &\geq c_\lambda(x)\beta_k+k\widetilde{c}(x)+h^+(x)-h^-(x)+\nu_2\vartheta|D\beta_k|^2\\ &\geq c_\lambda(x)\beta_k+k\widetilde{c}(x)+h(x)+(M(x)D\beta_k,D\beta_k) . \end{align*} Therefore, we conclude that \begin{eqnarray*} \left\{ \begin{array}{rll} -\divi(A(x)D\beta_k)&\geq c_\lambda(x)\beta_k+k\widetilde{c}(x)+h(x)+(M(x)D\beta_k,D\beta_k) &\mbox{ in } \Omega\\ \beta_k&=0 &\mbox{ on } \partial\Omega \end{array} \right. \end{eqnarray*} has a supersolution $\beta_k$ with $\beta_k\ll 0$, and that \eqref{P3} has at least one solution, by following the proof of Theorem \ref{teo5.2}. \newline {\bf Step 2:} {\it For $k>1$ the problem \eqref{P3} has no solution.} First we observe that every solution to \eqref{P3} for $\lambda\in [0,\Lambda_2]$ is positive in $\Omega$. In fact, \begin{eqnarray*} \left\{ \begin{array}{rll} -\divi(A(x)Du)&\geq \left(M(x)Du,Du\right)+Bc^+(x) \geq 0&\mbox{ in } \Omega\\ u&=0 &\mbox{ on } \partial\Omega \end{array} \right. \end{eqnarray*} and, in view of the SMP, this implies that $u>0$ in $\Omega$. In order to obtain a contradiction, assume that $u$ is a solution to \eqref{P3} in $\Omega$. Let $\varphi\in C_0^\infty(\Omega)$ be such that $\varphi\gneqq 0$.
Using $\varphi^2$ as a test function, by Theorem \ref{lowerbound} we obtain \begin{align*} \frac{1}{\mu_1\vartheta^2}\int|D\varphi|^2&\geq 2\int\varphi(A(x)Du,D \varphi)-\mu_1\int|Du|^2\varphi^2\geq 2\int\varphi(A(x)Du,D \varphi)-\int\varphi^2(M(x)Du,Du)\\ &\geq -\Lambda_2C_0\int c^+(x)\varphi^2-\widetilde{M}\int c^-(x)\varphi^2-\int h^-(x)\varphi^2+k\int \widetilde{c}(x)\varphi^2, \end{align*} which is a contradiction for $k>1$ large enough. \newline {\bf Step 3:} {\it For $k=1$ \eqref{P3} has a unique solution and, for $k\in(0,1)$, problem \eqref{P3} has a strict supersolution.} By Steps 1 and 2, we have $ 1=\sup\{k>0; \eqref{P3}\mbox{ has at least one solution}\}. $ Let $k\in(0,1)$ and $\widetilde{k}\in(k,1)$ be such that ($P_{\lambda,\widetilde{k}}$) has a solution $\widetilde{\beta}$. Then, $\beta={k}{\widetilde{k}}^{-1}\widetilde{\beta}$ is a supersolution to \eqref{P3}. Now, as in (iii) of the proof of Theorem \ref{teo5.2}, we can prove that $\beta$ is a strict supersolution to \eqref{P3} and also derive the existence of a second solution $u_{\lambda,2}$ with $u_{\lambda,1}\ll u_{\lambda,2}$. \end{proof} \begin{lemma}\label{u_2>0} Under assumption \eqref{A}, assume that ($P_0$) has a solution $u_0\leq 0$ with $c^+(x)u_0\lneqq 0$. Then, for all $\lambda\geq 0$, problem \eqref{$P_lambda$} has at most one solution $u\leq 0$. \end{lemma} \begin{proof} The proof is divided into several steps. \newline {\bf Step 1:} {\it If $u$ is a subsolution to \eqref{$P_lambda$} with $u\leq 0$, then $u\ll 0$.} In fact, $u$ is a subsolution to ($P_0$) and, by the Comparison Principle, we have $u\leq u_0$. In addition, for $w=u_0-u$ we have \begin{align*} -\divi(A(x)Dw) &\geq -c^-(x)u_0+(M(x)Du_0,Du_0)-c_\lambda(x)u-(M(x)Du,Du) \\ &=(M(x)[Du_0+Du],Dw)-c^-(x)w-\lambda c^+(x)u, \end{align*} and hence, we get \begin{align*} \left\{ \begin{array}{rll} -\divi(A(x)Dw)-(M(x)[Du_0+Du],Dw)+c^-(x)w &\gneqq 0 &\mbox{ in } \Omega\\ w&= 0 &\mbox{ on } \partial\Omega. \end{array} \right. \end{align*} This implies that $w \gg 0$, i.e.,
$u\ll u_0\leq 0$. \newline {\bf Step 2:} {\it If \eqref{$P_lambda$} has two solutions $u_1, u_2\leq 0$, then it has two ordered solutions $\tilde{u}_1\lneqq \tilde{u}_2\leq u_0$.} By Step 1, we have $u_1, u_2 \ll u_0$. In case $u_1$ and $u_2$ are not ordered, as $u_0$ is a supersolution to \eqref{$P_lambda$}, applying \cite[Theorem 2.1]{CJ} there exists a solution $u_3$ of (\ref{$P_lambda$}) with $\max\{u_1,u_2\}\leq u_3\leq u_0$. This proves Step 2 by choosing $\tilde{u}_1= u_1$ and $\tilde{u}_2 = u_3$. \newline {\bf Step 3:} {\it There exists at most one nonpositive solution to ($P_{\lambda}$).} Assume by contradiction that we have two ordered nonpositive solutions; we can suppose $u_1\ll u_2\ll 0$. As $|u_2|\gg 0$, the set $\{\varepsilon>0 ;\ u_2-u_1 \leq \varepsilon|u_2|\}$ is nonempty. Defining \begin{align*} \tilde{\varepsilon}:=\min \{\varepsilon>0;\ u_2-u_1 \leq \varepsilon |u_2|\} \quad \text{ and \ setting} \quad w_{\tilde{\varepsilon}}:=\frac{(1+\tilde{\varepsilon})u_2-u_1}{\tilde{\varepsilon}}, \end{align*} we can use the convexity of the function $\varphi(\xi):=(M(x)\xi,\xi)$ for each $\xi\in \mathbb{R}^n$ and write $u_2={\tilde{\varepsilon}}{(1+\tilde{\varepsilon})^{-1}}w_{\tilde{\varepsilon}}+{(1+\tilde{\varepsilon})^{-1}}u_1,$ then we obtain \begin{align*} (M(x)Du_2,Du_2)&=\varphi\left(\frac{\tilde{\varepsilon}}{1+\tilde{\varepsilon}}Dw_{\tilde{\varepsilon}}+\frac{1}{1+\tilde{\varepsilon}}Du_1\right)\leq \frac{\tilde{\varepsilon}}{1+\tilde{\varepsilon}}\varphi(Dw_{\tilde{\varepsilon}})+\frac{1}{1+\tilde{\varepsilon}}\varphi\left(Du_1\right)\\ &=\frac{1}{1+\tilde{\varepsilon}}\big[\tilde{\varepsilon}(M(x)Dw_{\tilde{\varepsilon}},Dw_{\tilde{\varepsilon}})+(M(x)Du_1,Du_1)\big], \quad \mbox{and hence}\\ \frac{1+\tilde{\varepsilon}}{\tilde{\varepsilon}}&(M(x)Du_2,Du_2)\leq (M(x)Dw_{\tilde{\varepsilon}},Dw_{\tilde{\varepsilon}})+\frac{1}{\tilde{\varepsilon}}(M(x)Du_1,Du_1). 
\end{align*} \begin{align*} \mbox{ Thus, it yields \ } \qquad -\divi(A(x)Dw_{\tilde{\varepsilon}}) &\leq \frac{1+\tilde{\varepsilon}}{\tilde{\varepsilon}}\big(c_\lambda(x)u_2+(M(x)Du_2,Du_2)+h(x)\big)\\ &\quad-\frac{1}{\tilde{\varepsilon}}\big(c_\lambda(x)u_1+(M(x)Du_1,Du_1)+h(x)\big)\\ &\leq c_{\lambda}(x) w_{\tilde{\varepsilon}}+(M(x)Dw_{\tilde{\varepsilon}},Dw_{\tilde{\varepsilon}})+h(x). \qquad \qquad \qquad \quad \qquad \end{align*} Applying again the Comparison Principle, we get $w_{\tilde{\varepsilon}}\lneqq u_2\leq 0$, which is a contradiction due to the definition of $\tilde{\varepsilon}$. \end{proof} Finally, we have all the necessary tools to prove Theorem \ref{teo5.3}. \begin{proof}[Proof of Theorem \ref{teo5.3}] We treat separately the cases $\lambda\leq 0$ and $\lambda>0$. \newline{\bf Case {(i)}:} $\lambda\leq 0$. As in the proof of Theorem \ref{teo5.2}, we can apply \cite[Theorem 1.2]{MR4030257}. Moreover, observe that $u_0$ is a subsolution to (\ref{$P_lambda$}). Hence, applying the Comparison Principle, we conclude that $u_\lambda\geq u_0$. By \cite[Proposition 4.1]{ACJT} the problem \eqref{$P_lambda$} has at most one solution and by Lemma \ref{lbound} the function $v=u_0+\|u_0\|_{\infty}$ is a supersolution to \eqref{$P_lambda$} when $\lambda<0$. Then, the Comparison Principle implies that $u_0+\|u_0\|_{\infty}\geq u_\lambda$. \newline{\bf Case {(ii)}:} $\lambda>0$. With the aim of showing the existence of a continuum of solutions to problem (\ref{$P_lambda$}), for $\lambda\geq 0$ we introduce the auxiliary problem \begin{equation}\tag{$\underline{P_\lambda}$}\label{p4} -\divi(A(x)Du)+u=[c_\lambda(x)+1][u_0-(u-u_0)^-]+(M(x)Du,Du)+h(x). \end{equation} As in the case of problem (\ref{$P_lambda$}), any solution to problem (\ref{p4}) belongs to $\mathcal{C}^{0,\tau}(\overline{\Omega})$ for some $\tau>0$. 
Moreover, observe that $u$ is a solution to \eqref{p4} if and only if it is a fixed point of the operator $\widehat{T}_\lambda$ defined by $\widehat{T}_\lambda:C(\overline{\Omega})\rightarrow C(\overline{\Omega}):v\mapsto u$, where $u$ is the solution to \begin{align*} -\divi(A(x)Du)+u-\big(M(x)Du,Du\big)=[c_\lambda(x)+1][u_0-(v-u_0)^-]+h(x). \end{align*} Applying to $\widehat{T}_\lambda$ the same argument used for $\overline{T}_\lambda$ in the proof of Theorem \ref{teo5.2}, we see that $\widehat{T}_\lambda$ is completely continuous, and we split the rest of the proof into three steps. \newline {\bf Step 1:} {\it If $u$ is a solution to (\ref{p4}) then $u\leq u_0$ and it is a solution to (\ref{$P_lambda$}).} Observe that $u_0-u-(u-u_0)^-\leq 0$. Moreover, we also have ${\lambda c^+(x)[u_0-(u-u_0)^-]\leq \lambda c^+(x)u_0\leq 0.}$ Hence, we deduce that a solution $u$ of (\ref{p4}) is a subsolution to \begin{align}\label{5.4} -\divi(A(x)Du)&=-c^-(x)[u_0-(u-u_0)^-]+(M(x)Du,Du)+h(x). \end{align} Since $u_0$ is a solution to ($P_0$), it also solves (\ref{5.4}). Then, applying again the Comparison Principle we get $u\leq u_0$. \newline {\bf Step 2:} {\it$u_0$ is the unique solution to ($\underline{{P}_0}$) as well as to the problem ($P_0$) and $i(I-\widehat{T}_0,u_0)=1$.} For $\lambda=0$, if $u$ is a solution to (\ref{p4}), then by Step 1, $u\leq u_0$ and $u$ solves ($P_0$). From (i) we conclude that $u=u_0$. In order to prove that $i(I-\widehat{T}_0,u_0)=1$, we consider the operator $ \widehat S_t:C(\overline{\Omega})\rightarrow C(\overline{\Omega})$ given by $ \widehat S_t(v):=t\widehat{T}_0v=u $, where $u$ is the solution to \begin{align*} -\divi(A(x)Du)+u&=(M(x)Du,Du)+th(x) +t[-c^-(x)+1][u_0-(v-u_0)^--(v-u_0+1)^-]. 
\end{align*} By the complete continuity of $\widehat{T}_0$ and also by the fact that every solution $u$ to \eqref{P2} is $C^{\alpha}$ up to the boundary, there exists $R>0$ such that for all $t \in [0,1]$ and all $v \in C(\overline{\Omega})$, it follows that $ \|\widehat S_tv\|_{C^\alpha}<R. $ Then, $I-\widehat S_t$ does not vanish on $\partial B_R(0)$ and \begin{align*} \deg(I-\widehat{T}_0,B_R(0))&= \deg(I-\widehat S_1,B_R(0))=\deg(I-\widehat S_0,B_R(0))=\deg(I,B_R(0))=1. \end{align*} Hence, $\widehat{T}_0$ has $u_0$ as its unique fixed point, which is a solution to $(\underline{{P}_0})$. Therefore, arguing as in Step 2 of Theorem \ref{teo5.2} we conclude this step. \newline {\bf Step 3:} {\it Existence and behavior of the continuum.} It follows the same lines as Step 3 of Theorem \ref{teo5.2}. For the multiplicity results in (iii), we observe that by Step 1, we get the existence of a first solution $u_{\lambda,1}\leq u_0$. To prove that $u_0$ is a strict supersolution to \eqref{$P_lambda$}, we argue as in the proof of Theorem \ref{teo5.2}, and, by Lemma \ref{existssub}, \eqref{$P_lambda$} has a strict subsolution $\alpha$ with $\alpha\leq u_0$. Then, by \cite[Theorem 2.1]{CJ}, there exists $R>0$ such that $u_{\lambda,1}\in \mathcal{S}$, where \[\mathcal{S}=\{u\in C_0^1(\overline{\Omega});\alpha\ll u\ll u_0 \mbox{ in }\Omega, \|u\|_{C^1_0}<R\}.\] Now, fixing $\lambda>0$ and setting $\Lambda_2=2\lambda$, we replace $h$ by $h +k\widetilde{c} $ in the problem \eqref{P3}, and then Theorem \ref{6.3} gives us an $L^\infty$ a priori bound for solutions to \eqref{P3} for every $k\in [0,1]$. This provides, by the $C^{1,\alpha}$ global estimates, an a priori bound for solutions in $C^1_0(\overline{\Omega})$, i.e. $\|u\|_{C^1_0(\overline{\Omega})}<R_0$ for every solution $u$ to \eqref{P3}, for all $k\in [0,1]$, where $R_0>R$ also depends on $\lambda$. 
Hence, by the homotopy invariance of the degree and the fact that, for $k > 1$, \eqref{P3} has no solution, we have \begin{eqnarray*} \deg(I-\widehat{T}_\lambda,B_{R_0}(0))=\deg(I-\widehat{T}_{\lambda,0},B_{R_0}(0))=\deg(I-\widehat{T}_{\lambda,k},B_{R_0}(0))=0, \end{eqnarray*} where $\widehat{T}_{\lambda,k}$ is the operator $\widehat{T}_\lambda$ in which we replace $h(x)$ by $h(x)+k\widetilde{c}(x)$; note that $\widehat{T}_{\lambda,k}$ is clearly still completely continuous. But then, by the excision property of the degree, \begin{eqnarray*} \deg(I-\widehat{T}_\lambda,B_{R_0}(0)\setminus \overline{\mathcal{S}})=\deg(I-\widehat{T}_\lambda,B_{R_0}(0))-\deg(I-\widehat{T}_\lambda,\mathcal{S})=-1 \end{eqnarray*} and the existence of a second solution $u_{\lambda,2}\in B_{R_0}(0)\setminus \overline{\mathcal{S}}$ is derived. By Lemma \ref{u_2>0} we have $ u_{\lambda,2}>0$. To finish, we claim that for fixed $\lambda_1< \lambda_2$ we have $u_{\lambda_2,1}\ll u_{\lambda_1,1}$. In fact, note that \[c_{\lambda_1}(x)u_{\lambda_1,1}=\lambda_1 c^+(x)u_{\lambda_1,1}-c^-(x)u_{\lambda_1,1}\gneqq \lambda_2 c^+(x)u_{\lambda_1,1}-c^-(x)u_{\lambda_1,1}=c_{\lambda_2}(x)u_{ \lambda_1,1},\] since $u_{\lambda_1,1}<0$. Then, $u_{\lambda_1,1}$ is a strict supersolution to $(P_{\lambda_2})$, which is not a solution and, in particular, $u_{\lambda_1,1}\ne u_{\lambda_2,1}$. As in the proof of \cite[Claim 6.16]{multiplicidade}, we observe that $u_{\lambda_2,1}$ is the minimal solution to ($P_{\lambda_2}$). In fact, recall that $\xi=\xi_{\lambda_2}$, given by Lemma \ref{existssub}, is such that $\xi\leq u$ for every strict supersolution $u$ to ($P_{\lambda_2}$) and in particular $\xi\leq u_{\lambda_1,1}$. Recall also that $u_{\lambda_2,1}$ is the minimal solution such that $u_{\lambda_2,1}\geq \xi$ in $\Omega$. 
Now, suppose there were some $x_0\in \Omega$ such that $u_{\lambda_2,1}(x_0)>u_{\lambda_1,1}(x_0)$, and define $\eta:=\min \{u_{\lambda_1,1},u_{\lambda_2,1}\}$; as the minimum of two supersolutions of ($P_{\lambda_2}$) not less than $\xi$, it satisfies $\xi\leq\eta$ in $\Omega$. Thus, applying again \cite[Theorem 2.1]{CJ} we get a solution $u$ of ($P_{\lambda_2}$) such that $\xi\leq u \leq\eta\lneqq u_{\lambda_2,1}$ in $\Omega$, which contradicts the minimality of $u_{\lambda_2,1}$. This completes the proof of Theorem \ref{teo5.3}. \end{proof} In what follows, we prove Theorem \ref{solucoesnegativas}, considering the alternative situation when there exists a supersolution to ($P_{\lambda_0}$) for some $\lambda_0 > 0$. \begin{proof}[Proof of Theorem \ref{solucoesnegativas}] To prove (i), we first observe that if \eqref{$P_lambda$} has a supersolution $\beta_\lambda\leq 0$, then $\beta_\lambda$ also satisfies $c^+(x)\beta_\lambda\lneqq 0$; otherwise, it would also be a supersolution to ($P_0$), which contradicts the assumption (A). Let us define \begin{equation*} \underline{\lambda}=\inf \{\lambda\geq 0; (\mbox{\ref{$P_lambda$}}) \mbox{ has a supersolution } \beta_\lambda\leq 0 \mbox{ with } c^+(x)\beta_\lambda\lneqq 0 \}. \end{equation*} Given $\lambda> \underline{\lambda}$, by the definition of $\underline{\lambda}$ there exists $\widetilde{\lambda}\in [\underline{\lambda},\lambda)$ such that ($P_{\widetilde{\lambda}}$) has a supersolution $\beta_{\widetilde{\lambda}}\leq 0$ with $c^+(x)\beta_{\widetilde{\lambda}}\lneqq 0$. 
Note that $$c_{\widetilde{\lambda}}(x)\beta_{\widetilde{\lambda}}=\widetilde{\lambda}c^+(x)\beta_{\widetilde{\lambda}}-c^-(x)\beta_{\widetilde{\lambda}}\gneqq \lambda c^+(x)\beta_{\widetilde{\lambda}}-c^-(x)\beta_{\widetilde{\lambda}}=c_{\lambda}(x)\beta_{\widetilde{\lambda}}.$$ Then, $\beta_{\widetilde{\lambda}}$ is a supersolution to \eqref{$P_lambda$}, which is not a solution and hence, as in the proof of Theorem \ref{teo5.3}(iii), it is a strict supersolution to (\ref{$P_lambda$}). By Lemma \ref{existssub}, \eqref{$P_lambda$} has a strict subsolution $\alpha_\lambda\leq \beta_{\widetilde{\lambda}}$ and $\alpha_\lambda\leq u$ for all solutions $u$ to \eqref{$P_lambda$}. As in Step 2 of the proof of Theorem \ref{teo5.3}, there exists $R>0$ such that $\deg(I-\widehat{T}_\lambda,S)=1$ with \begin{align*} S=\{u\in C^1_0(\overline{\Omega})\mbox{; } \alpha_\lambda\ll u\ll \beta_{\widetilde{\lambda}}\mbox{, } \|u\|_{C^1}\leq R\}, \end{align*} and hence the existence of the first solution $u_{\lambda,1}\ll 0$ is derived. To obtain a second solution $u_{\lambda,2}$ satisfying $u_{\lambda,1}\ll u_{\lambda,2}$ and $u_{\lambda,2}>\beta_{\widetilde{\lambda}}$ we repeat the argument in the proof of Theorem \ref{teo5.3}(iii). By Lemma \ref{u_2>0} we have $u_{\lambda,2}>u_{\overline{\lambda}}$. Finally, arguing as at the end of the proof of Theorem \ref{teo5.3}, we prove that if $\lambda_1 < \lambda_2$, then $u_{\lambda_1,1}\gg u_{\lambda_2,1}$. To prove that ($P_{\underline{\lambda}}$) has a nonpositive solution, let $\{\lambda_n\}\subset (\underline{\lambda}, \infty)$ be a decreasing sequence such that $\lambda_n\rightarrow \underline{\lambda}$. By the regularity result proved in \cite[Lemma 2.1]{ACJTuni}, we know that there exists a sequence of corresponding solutions $\{u_n \}\subset H^1(\Omega)\cap W^{1,n}_{loc}(\Omega)\cap C(\overline{\Omega})$ with $u_n\leq u_{n+1}\leq 0$. 
As $\{u_n\}$ is increasing and bounded above, by Theorem \ref{6.3}, there exists $M>0$ such that $\|u_n\|_{L^\infty}<M$ for all $n\in \mathbb{N}$, and hence by the $C^{1,\alpha}$ global estimates, we get $\|u_n\|_{C^{1,\alpha}(\overline{\Omega})}\leq C$. Then, up to a subsequence, $u_n\rightarrow u$ in $C^{1}_0(\overline{\Omega})$. From this strong convergence we easily conclude that $u$ is a solution to ($P_{\underline{\lambda}}$) with $u\leq 0$. Now we prove the uniqueness of the nonpositive solution to ($P_{\underline{\lambda}}$). Let us assume by contradiction that we have two distinct solutions $u_1$ and $u_2$ of ($P_{\underline{\lambda}}$); then, as in Step 3 of the proof of Theorem \ref{teo5.3}, we prove that $\beta=(u_1+u_2)/2$ is a strict supersolution to ($P_{\underline{\lambda}}$). Let us consider the strict subsolution $\alpha\ll \beta$ of ($P_{\underline{\lambda}}$) given by Lemma \ref{existssub}, and define the set $ \overline{\mathcal{S}}=\{u\in C_0^1(\overline{\Omega})\mbox{; } \alpha\ll u\ll \beta,\|u\|_{C^1_0}<R\} $ for some $R>C>0$. Again, by the $C^{1,\alpha}$ estimates, we have that \begin{align}\label{underlambda} \|u\|_{C^{1,\alpha}}\leq C \mbox{ for every solution } u \mbox{ of } (P_\lambda)\mbox{, } \lambda\in[\underline{\lambda}-1, \underline{\lambda}], \end{align} and hence $\deg(I-\widehat{T}_{\underline{\lambda}},\overline{\mathcal{S}})=1$. Now we prove the existence of $\varepsilon>0$ such that \begin{align}\label{existsvarepsilon} \deg(I-\widehat{T}_\lambda,\overline{\mathcal{S}})=1\mbox{, for all } \lambda\in[\underline{\lambda} -\varepsilon,\underline{\lambda}]. \end{align} We argue as at the end of the proof of Theorem \ref{teo5.2}, in order to verify that there exists some $\varepsilon\in(0,1)$ such that there are no fixed points of $\widehat{T}_\lambda$ on the boundary of $\overline{\mathcal{S}}$ for all $\lambda$ in the preceding interval. 
Hence, to obtain \eqref{existsvarepsilon}, it suffices to apply the homotopy invariance for $\lambda \in [\underline{\lambda}-\varepsilon,\underline{\lambda}]$. Next, with \eqref{existsvarepsilon} in hand, we repeat the argument of the proof of (i) to obtain the existence of a second solution $u_{\lambda,2}$ to \eqref{$P_lambda$}, for all $\lambda\in [\underline{\lambda}-\varepsilon,\underline{\lambda}]$. But this, finally, contradicts the definition of $\underline{\lambda}$, completing the proof of (ii). By the definition of $\underline{\lambda}$ and since $\beta$ is a strict supersolution to \eqref{$P_lambda$}, we infer that (iii) holds. Finally, in order to describe the behavior of the solutions as $\lambda\rightarrow 0^-$, observe that in Lemma \ref{lbound} we proved that $\|u_\lambda\|_\infty< 2\|u_{\widehat{\lambda}}\|_\infty$ for all $\lambda\leq \widehat{\lambda}<0$. In particular, if $C_0:= \displaystyle\limsup_{\lambda\rightarrow 0^-}\|u_\lambda\|_\infty<\infty$, there exists a sequence $\widehat{\lambda}_n\rightarrow 0^-$ such that ${C_0=\displaystyle\lim_{n \rightarrow \infty} \|u_{\widehat{\lambda}_n}\|_\infty <\infty}$. Then, for every sequence $\lambda_n\rightarrow 0^-$ we deduce by the above inequality that $\displaystyle\limsup_{n \rightarrow \infty} \|u_{\lambda_n}\|_{\infty}\leq2C_0<\infty$. Therefore, we have either $\displaystyle\lim_{\lambda\rightarrow 0^-} \|u_\lambda\|_\infty= \infty$ or $\displaystyle\limsup_{\lambda\rightarrow 0^-} \|u_\lambda\|_\infty< \infty$. By assumption, ($P_0$) does not have a solution $u_0\leq 0$; hence the second case cannot occur, since otherwise the uniform bound would allow us to pass to the limit and obtain such a solution. Thus the first case holds, finishing the proof. \end{proof} \begin{proof}[Proof of Corollary \ref{coro}] First observe that ($P_{\gamma_1,k}$) has no solution. 
In fact, if we assume by contradiction that $u$ is a solution to \eqref{P3}, using $\varphi_1>0$, the first eigenfunction of \eqref{eig1}, as a test function in \eqref{P3}, we have \begin{align*} \int c_{\gamma_1}(x)u\varphi_1 =\int (A(x)Du,D \varphi_1)=\int c_\lambda(x)u&\varphi_1 +\int \varphi_1(M(x)Du,Du)+\int (h(x)+k\widetilde{c}(x))\varphi_1 \\ \mbox{and} \quad (\gamma_1-\lambda)\int c^+(x)u\varphi_1 \leq -\int|h(x)|&\varphi_1<0, \quad \mbox{which is a contradiction for \ } \lambda=\gamma_1. \qquad \quad \end{align*} Hence, for all $\lambda>0$, problem \eqref{$P_lambda$} has no solution with $c^+(x)u \equiv 0$; otherwise $u$ would be a solution to \eqref{$P_lambda$} for every $\lambda\in \mathbb{R}$, which contradicts the nonexistence of a solution for $\lambda=\gamma_1$. By Step 3 of the proof of Lemma \ref{c2}, there exists $\widetilde{k}>0$ such that for all $k\in (0,\widetilde{k}]$ problem \eqref{P3} has a strict supersolution $\beta_0$ with $\beta_0\ll 0$. The existence of $\lambda_2>\gamma_1$ as in (iii) can then be deduced from Theorem \ref{solucoesnegativas}. By \cite[Theorem 1.1]{ACJT}, decreasing $\widetilde{k}$ if necessary, we know that for all $k\in(0,\widetilde{k}]$ problem ($P_{0,k}$) has a solution $u_0\gg 0$. Therefore, the existence of $\lambda_1$ as in (i) can be deduced from Theorem \ref{teo5.2}. \end{proof} Before proving Theorem \ref{hequiv0}, we observe that particular cases of Theorems \ref{teo5.2} and \ref{teo5.3} are given when $h(x) \gneqq 0$ and $h(x) \lneqq 0$, respectively. Indeed, if $h\gneqq 0$ holds, then $u_0$ satisfies \begin{align*} \left\{ \begin{array}{rll} -\divi(A(x)Du_0)&\geq c_\lambda(x)u_0+(M(x)Du_0,Du_0)+h(x)\gneqq 0&\mbox{ in } \Omega\\ u_0&=0 &\mbox{ on } \partial\Omega. \end{array} \right. \end{align*} Then, applying the SMP we obtain $u_0>0$ in $\Omega$. Furthermore, by the Hopf Lemma we conclude that $u_0\gg 0$ in $\Omega$. 
On the other hand, if $h \lneqq 0$, then $u_0$ is a subsolution to \begin{align*} -\divi(A(x)Du_0)\leq c_\lambda(x)u_0+(M(x)Du_0,Du_0)+h(x) \lneqq (M(x)Du_0,Du_0) \end{align*} and so $v_0={\nu_2}^{-1}(e^{\nu_2u_0}-1)$ is a subsolution to \begin{align*} -\divi(A(x)Dv_0)&\leq [1+\nu_2 v_0][-\divi(A(x)Du_0)-\mu_2|Du_0|^2]\\ &\lneqq [1+ \nu_2v_0][(M(x)Du_0,Du_0)-\mu_2|Du_0|^2]\leq 0 \mbox{ in }\Omega. \end{align*} Again, by the SMP and the Hopf Lemma we get $v_0\ll 0$ and therefore $u_0\ll 0$. \begin{proof}[Proof of Theorem \ref{hequiv0}] To prove (i), we first note that for all $\lambda\in \mathbb{R}$, $u\equiv 0$ is a solution to (\ref{h=0}). In order to prove that for all $\lambda\in(0,\gamma_1)$ problem (\ref{h=0}) has a second solution $u_{\lambda,2}\gneqq 0$, we claim that problem (\ref{h=0}) has a supersolution $\beta\gg 0$. In fact, taking $\lambda< \gamma_1$ and $\varepsilon>0$ such that $ \lambda{\nu_2}^{-1}{(1+\nu_2v)\ln(1+\nu_2v)}\leq \gamma_1v $ for all $v\in[0,\varepsilon]$, we consider the function $\widetilde{\beta}=\varepsilon\varphi_1,$ where $\varphi_1$ denotes the first eigenfunction of \eqref{eig1} with $\|\varphi_1\|_{L^{\infty}}=1$ and \begin{align*} \left\{ \begin{array}{rll} -\divi(A(x)D\widetilde{\beta})&=c_{\gamma_1}(x)\widetilde{\beta} \gneqq c_\lambda(x) \displaystyle\frac{(1+\nu_2\widetilde{\beta})\ln(1+\nu_2\widetilde{\beta})}{\nu_2}, &\mbox{ in } \Omega\\ \widetilde{\beta}&=0 &\mbox{ on } \partial\Omega. \end{array} \right. 
\end{align*} Hence, for $\beta$ being defined by $\beta:={\nu_2}^{-1}{\ln(1+\nu_2\widetilde{\beta})}$, we have \begin{align*} -\divi(A(x)D\beta)&=-\frac{\divi(A(x)D\widetilde{\beta})}{(1+\nu_2\widetilde{\beta})}-\left(A(x)D\widetilde{\beta},D\left[(1+\nu_2\widetilde{\beta})^{-1}\right]\right)\\ &\gneqq c_\lambda(x)\beta+\frac{\nu_2}{(1+\nu_2\widetilde{\beta})^2}(A(x)D\widetilde{\beta},D\widetilde{\beta})\geq c_\lambda(x)\beta+\nu_2\vartheta\frac{|D\widetilde{\beta}|^2}{(1+\nu_2\widetilde{\beta})^2}\\ &= c_\lambda(x)\beta+\mu_2|D\beta|^2\geq c_\lambda(x)\beta+(M(x)D\beta,D\beta) \end{align*} \begin{align*} \mbox{and thus,} \quad \qquad \qquad \left\{ \begin{array}{rll} -\divi(A(x)D\beta)& \gneqq c_\lambda(x)\beta+(M(x)D\beta,D\beta) &\mbox{ in } \Omega\\ \beta&=0 &\mbox{ on } \partial\Omega. \end{array} \right. \qquad \qquad \qquad \end{align*} By the Comparison Principle, we have that $\beta\geq 0$ is a strict supersolution to (\ref{h=0}). Since we know that every solution $u$ of problem (\ref{h=0}) satisfies $u\geq 0$, by Lemma \ref{existssub} problem (\ref{h=0}) has a strict subsolution $\alpha\lneqq 0$. Therefore, we conclude that problem (\ref{h=0}) has at least two solutions, by following the proof of Theorem \ref{teo5.2} with the solution $u_{\lambda,1}\equiv 0$. To prove (ii), suppose by contradiction that $u\not\equiv 0$ is another solution to problem (\ref{h=0}) and use $\varphi_1>0$, the first eigenfunction of \eqref{eig1}, as a test function in \eqref{h=0}. Then, \begin{align*} \int c_{\gamma_1}(x)u\varphi_1&=\int (A(x)Du,D \varphi_1) =\int c_\lambda(x)u\varphi_1 +\int (M(x)Du,Du)\varphi_1 \\ (\gamma_1-\lambda)\int c^+(x)u\varphi_1 &=\int (M(x)Du,Du)\varphi_1\geq\mu_1\int|Du|^2\varphi_1>0, \end{align*} which provides a contradiction for $\lambda=\gamma_1$. 
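For the reader's convenience, the elementary computation behind the change of variables $\beta={\nu_2}^{-1}\ln(1+\nu_2\widetilde{\beta})$ used above can be recorded as follows; here we assume the normalization $\nu_2\vartheta=\mu_2$, which is consistent with the chain of inequalities above, although the precise definition of $\nu_2$ is given in an earlier section:

```latex
D\beta=\frac{D\widetilde{\beta}}{1+\nu_2\widetilde{\beta}},
\qquad
\nu_2\vartheta\,\frac{|D\widetilde{\beta}|^2}{(1+\nu_2\widetilde{\beta})^2}
=\nu_2\vartheta\,|D\beta|^2
=\mu_2|D\beta|^2
\geq (M(x)D\beta,D\beta),
```

where the last inequality follows from the upper bound $M(x)\leq \mu_2 I_n$ in \eqref{5.2}.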
In the case $\lambda>\gamma_1$, to show that problem \eqref{h=0} has a second solution $u_{\lambda,2}\ll 0$, let $\lambda_0\in(\gamma_1,\lambda]$ be such that, by Lemma \ref{antimax}, the problem \begin{align*} -\divi(A(x)Du)=c_{\lambda_0}(x)u+1 \end{align*} has a solution $u\ll 0$. Then, for $\varepsilon>0$ small enough, the function $\beta_0=\varepsilon u$ satisfies \begin{align*} -\divi(A(x)D\beta_0)&=c_{\lambda_0}(x)\varepsilon u+\varepsilon\geq c_{\lambda_0}(x)\beta_0+\varepsilon^2\mu_2|Du|^2\geq c_{\lambda_0}(x)\beta_0+(M(x)D\beta_0,D\beta_0) \end{align*} and the problem (\ref{h=0}) has a supersolution $\beta_0$ with $\beta_0\leq 0$ and $c^+(x)\beta_0\lneqq 0$. Therefore, (iii) follows by Theorem \ref{solucoesnegativas} with $u_{\lambda,2}\equiv 0$. With the aim of showing the existence of a continuum of solutions to problem \eqref{h=0}, we define the operator $ T_{\lambda}:= \left\{ \begin{array}{rll} \overline{T}_\lambda, & \mbox{ if } \lambda\leq \gamma_1,\\ \widehat{T}_\lambda,& \mbox{ if } \lambda> \gamma_1, \end{array} \right. $ where $\overline{T}_\lambda$ for $\lambda\leq \gamma_1$ is defined in (ii) of the proof of Theorem \ref{teo5.2} and the operator $\widehat{T}_\lambda$ for $\lambda> \gamma_1$ is defined in (ii) of the proof of Theorem \ref{teo5.3}, in both cases with $h\equiv 0$. Observe that the case $\lambda\in (-\infty,\gamma_1]$ can be proved as in the proof of Theorem \ref{teo5.2}(ii). Note that, if $u$ is a solution to (\ref{P2}), then $u\geq u_{\gamma_1}$ and hence it is a solution to (\ref{h=0}). On the other hand, the case $\lambda\in [\gamma_1,+\infty)$ follows the same lines of the proof of Theorem \ref{teo5.3}(ii). If $u$ is a solution to (\ref{p4}), then $u\leq u_{\gamma_1}$ and hence it is a solution to (\ref{h=0}). Furthermore, since $u_{\gamma_1}\equiv 0$ is the unique solution to the problem (\ref{h=0}) for $\lambda=\gamma_1$, we have $i(I-T_{\gamma_1},u_{\gamma_1})=1$. 
Applying Theorem \ref{continuum} with $\gamma_1>0$, we obtain a continuum ${\mathcal{C}=\mathcal{C}^+\cup \mathcal{C}^-\subset \overline{\Sigma}}$ such that $ \mathcal{C}^+=\mathcal{C}\cap ([{\gamma_1},+\infty)\times C(\overline{\Omega}))\mbox{ and } \mathcal{C}^-=\mathcal{C}\cap ((-\infty,{\gamma_1}]\times C(\overline{\Omega})) \ \mbox{are unbounded in \ } \mathbb{R}^{\pm}\times C(\overline{\Omega}). $ By Step 1, we get that if $u \in \mathcal{C}^-$, then $u\geq u_{\gamma_1}$ and it is a solution to \eqref{h=0}. Thus, from (iv) we infer that the projection of $\mathcal{C}^-$ on the $\lambda$-axis is $(0,\gamma_1]$, a bounded interval, and then we deduce that the projection of $\mathcal{C}^+$ on the $\lambda$-axis is $[\gamma_1,+\infty)$. Hence, \begin{align*} \mbox{Proj}_{\mathbb{R}}\mathcal{C}=\mbox{Proj}_{\mathbb{R}}\mathcal{C}^-\cup\mbox{Proj}_{\mathbb{R}}\mathcal{C}^+=(0,+\infty). \end{align*} Finally, by Theorem \ref{6.3}, for any $0<\Lambda_1<\Lambda_2<\gamma_1$ there is an a priori bound for the solutions to problem \eqref{h=0}, for all $\lambda\in[\Lambda_1,\Lambda_2]$. Then, we also have a $C^{\alpha}$ a priori bound for these solutions, i.e. the projection of $\mathcal{C}\cap ([\Lambda_1,\Lambda_2]\times C(\overline{\Omega}))$ on $C(\overline{\Omega})$ is bounded. Since the component $\mathcal{C}^-$ is unbounded in $\mathbb{R}^-\times C(\overline{\Omega})$, its projection on the $C(\overline{\Omega})$ axis must be unbounded. Therefore, we deduce that $\mathcal{C}$ must emanate from infinity to the right of the axis $\lambda=0$. \end{proof} \noindent{\bf Acknowledgment:} The author Mayra Soares would like to thank the financial support received through a postdoctoral fellowship from DGAPA-UNAM. 
\noindent\textsc{Fiorella Rend\'on}\\ Departamento de Matem\'atica,\\ Pontif\'icia Universidade Cat\'olica do Rio de Janeiro -- PUC-Rio\\ 22451-900, G\'avea, Rio de Janeiro-RJ, Brazil\\ \noindent\texttt{[email protected]} \noindent\textsc{Mayra Soares}\\ Departamento de Matem\'atica,\\ Universidade de Bras\'ilia - UnB\\ Instituto Central de Ci\^encias, Campus Darci Ribeiro,\\ 70910-900, Asa Norte, Bras\'ilia, Distrito Federal, Brasil\\ \noindent\texttt{[email protected]} \end{document}
\begin{document} \title{Effect of the Choice of Connectives \\ on the Relation between \\ the Logic of Constant Domains \\ and Classical Predicate Logic} \titlerunning{Effect of the Choice of Connectives} \author{Naosuke Matsuda\inst{1} \and Kento Takagi\inst{2}\orcidID{0000-0003-3810-9610}} \authorrunning{N. Matsuda and K. Takagi} \institute{Department of Engineering, Niigata Institute of Technology, \\ Fujihashi, Kashiwazaki City, Niigata 945-1195, Japan \\ \email{[email protected]}\\ \and Department of Computer Science, Tokyo Institute of Technology, \\ Ookayama, Meguro-ku, Tokyo 152-8522, Japan \\ \email{[email protected]}} \maketitle \begin{abstract} It is known that not only classical semantics but also intuitionistic Kripke semantics can be generalized so that it can treat arbitrary propositional connectives characterized by truth tables, or truth functions. In our previous work, it has been shown that the set of Kripke-valid propositional sequents and that of classically valid propositional sequents coincide if and only if all available propositional connectives are monotonic. The present paper extends this result to first-order logic, showing that, in the case of predicate logic, the condition that all available propositional connectives are monotonic is a necessary and sufficient condition for the set of classically valid sequents to coincide with the set of sequents valid in all constant domain Kripke models, rather than with the set of Kripke-valid sequents. \keywords{Kripke semantics \and Propositional connective \and Intuitionistic predicate logic \and The logic of constant domains \and Classical predicate logic.} \end{abstract} \section{Introduction}\label{Section 1} \subsection{Generalized propositional logic}\label{Subsection 1.1} In \cite{kripke1965semantical}, Kripke provided the intuitionistic interpretation for formulas built out of the usual propositional connectives $\lnot$, $\to$, $\land$ and $\lor$. 
The notion of validity in intuitionistic logic can be defined with this interpretation. Rousseau~\cite{rousseau1970sequents} and Geuvers and Hurkens~\cite{geuvers2017deriving} extended the intuitionistic interpretation so that it can treat arbitrary propositional connectives characterized by truth tables, or truth functions. Their idea is very simple: when $c$ is a propositional connective and $\ttfunc{c}$ is the truth function associated with $c$, then the interpretation $\| c(\alpha_1, \ldots, \alpha_n) \|_w$ of the formula $c(\alpha_1, \ldots, \alpha_n)$ at world $w$ is defined as follows: \[ \| c(\alpha_1, \ldots, \alpha_n )\|_w = 1 \ \text{ if and only if } \ \text{$\ttfunc{c}(\| \alpha_1 \|_v, \ldots, \| \alpha_n \|_v) = 1$ for all $v \succeq w$}. \] It is well known that the relation between intuitionistic logic and classical logic changes with the choice of propositional connectives. In particular, the relation between the sets of valid sequents changes. For example, $\mathop{\mathrm{ILS}}(\{\lnot\}) \subsetneq \mathop{\mathrm{CLS}}(\{ \lnot \})$ and $\mathop{\mathrm{ILS}}(\{\land, \lor \}) = \mathop{\mathrm{CLS}}(\{ \land, \lor \})$, where, for a set of propositional connectives $\mathscr{C}$, $\mathop{\mathrm{ILS}}(\mathscr{C})$ denotes the set of Kripke-valid propositional sequents built out of the connectives in $\mathscr{C}$ and $\mathop{\mathrm{CLS}}(\mathscr{C})$ denotes the set of classically valid propositional sequents built out of the connectives in $\mathscr{C}$. Then, there arises a natural question: for what $\mathscr{C}$ does $\mathop{\mathrm{ILS}}(\mathscr{C}) = \mathop{\mathrm{CLS}}(\mathscr{C})$ hold? We answered this question in \cite{kawano2021effect}. But, before describing the answer, we briefly review some necessary notions. For each connective $c$, $\mathop{\mathrm{ar}}(c)$ denotes the arity of $c$. Let $\mathscr{C}$ be a set of propositional connectives. 
We denote by $\mathop{\mathrm{ILS}}(\mathscr{C})$ the set of Kripke-valid sequents built out of the propositional connectives in $\mathscr{C}$ and by $\mathop{\mathrm{CLS}}(\mathscr{C})$ the set of classically valid sequents built out of the propositional connectives in $\mathscr{C}$. For a sequence of truth values $\mathbf{a} \in \{ 0, 1 \}^n$, $\overline{\mathbf{a}} \in \{0,1\}^n$ denotes the sequence of truth values obtained from $\mathbf{a}$ by inverting $0$ and $1$. $\sqsubseteq_n$ is the natural order on $\{ 0, 1 \}^n$, that is, for $\mathbf{a} = \langle a_1, \ldots, a_n \rangle \in \{0,1\}^n$ and $\mathbf{b} = \langle b_1, \ldots, b_n \rangle \in \{0, 1 \}^{n}$, $\mathbf{a} \sqsubseteq_n \mathbf{b}$ if and only if $a_i \leq b_i$ for all $i = 1, \ldots, n$. For $\mathbf{a} \in \{0,1\}^n$ and $\mathbf{b} \in \{0, 1\}^n$, $\mathbf{a} \sqcap \mathbf{b}$ denotes the infimum of the set $\{ \mathbf{a}, \mathbf{b} \}$ with respect to $\sqsubseteq_n$. $\langle 1, \ldots, 1 \rangle \in \{ 0, 1 \}^n$ and $\langle 0, \ldots, 0 \rangle \in \{ 0, 1 \}^n$ are denoted by $\mathbf{1}_n$ and $\mathbf{0}_n$, respectively. We shall omit the subscript $n$ of $\sqsubseteq_n$, $\mathbf{1}_n$ and $\mathbf{0}_n$ if it is clear from the context. For details, see \S \ref{Section 2}. Then, the necessary and sufficient condition for $\mathop{\mathrm{ILS}}(\mathscr{C})$ and $\mathop{\mathrm{CLS}}(\mathscr{C})$ to coincide is described as follows: \begin{theorem*}[\cite{kawano2021effect}] $\mathop{\mathrm{ILS}}(\mathscr{C}) = \mathop{\mathrm{CLS}}(\mathscr{C})$ if and only if all connectives in $\mathscr{C}$ are monotonic, that is, all $c \in \mathscr{C}$ satisfy the following condition: for any $\mathbf{a}, \mathbf{b} \in \{ 0, 1\}^{\mathop{\mathrm{ar}}(c)}$, if $\mathbf{a} \sqsubseteq \mathbf{b}$ then $\ttfunc{c}(\mathbf{a}) \leq \ttfunc{c}(\mathbf{b})$. \end{theorem*} \subsection{Results}\label{Subsection 1.2} The present paper extends the preceding theorem to first-order logic. 
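The monotonicity condition in the preceding theorem is decidable by exhaustive search over $\{0,1\}^{\mathop{\mathrm{ar}}(c)}$. The following sketch (ours, in Python; the function names are not part of the paper's formalism) checks it for the truth functions of the standard connectives: conjunction and disjunction pass, while negation and implication fail, matching the examples $\mathop{\mathrm{ILS}}(\{\lnot\}) \subsetneq \mathop{\mathrm{CLS}}(\{\lnot\})$ and $\mathop{\mathrm{ILS}}(\{\land, \lor\}) = \mathop{\mathrm{CLS}}(\{\land, \lor\})$ mentioned above.

```python
from itertools import product

def is_monotonic(f, arity):
    # f is monotonic iff a ⊑ b (componentwise) implies f(a) <= f(b)
    points = list(product((0, 1), repeat=arity))
    for a in points:
        for b in points:
            if all(x <= y for x, y in zip(a, b)) and f(a) > f(b):
                return False
    return True

# Truth functions of the usual connectives, as maps {0,1}^n -> {0,1}.
land = lambda v: v[0] and v[1]                          # conjunction
lor  = lambda v: v[0] or v[1]                           # disjunction
lnot = lambda v: 1 - v[0]                               # negation
limp = lambda v: 1 if (v[0] == 0 or v[1] == 1) else 0   # implication

print(is_monotonic(land, 2), is_monotonic(lor, 2))   # True True
print(is_monotonic(lnot, 1), is_monotonic(limp, 2))  # False False
```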
Generalized Kripke semantics can be extended to first-order logic by adding $\forall$ and $\exists$ with the usual interpretations. Let $\mathop{\mathrm{FOILS}}(\mathscr{C})$ denote the set of Kripke-valid sequents built out of the quantifiers $\forall$ and $\exists$ and the propositional connectives in $\mathscr{C}$ and let $\mathop{\mathrm{FOCLS}}(\mathscr{C})$ denote the set of classically valid sequents built out of the quantifiers $\forall$ and $\exists$ and the propositional connectives in $\mathscr{C}$. Then, the following claim might seem a straightforward extension of the preceding theorem to first-order logic: $\mathop{\mathrm{FOILS}}(\mathscr{C}) = \mathop{\mathrm{FOCLS}}(\mathscr{C})$ if and only if all connectives in $\mathscr{C}$ are monotonic. However, this claim fails. Instead, if we extend the proof of the preceding theorem, we obtain a necessary and sufficient condition for the set of sequents that are valid with respect to \emph{constant domain} Kripke semantics and that of classically valid sequents to coincide: \begin{theorem*} Let $\mathop{\mathrm{FOCDS}}(\mathscr{C})$ denote the set of sequents built out of the quantifiers $\forall$ and $\exists$ and the propositional connectives in $\mathscr{C}$ which are valid in all constant domain Kripke models. Then, $\mathop{\mathrm{FOCDS}}(\mathscr{C}) = \mathop{\mathrm{FOCLS}}(\mathscr{C})$ if and only if all connectives in $\mathscr{C}$ are monotonic. \end{theorem*} We give a proof of this main theorem by extending the proof of the theorem that gives the necessary and sufficient condition for $\mathop{\mathrm{ILS}}(\mathscr{C})$ and $\mathop{\mathrm{CLS}}(\mathscr{C})$ to coincide. \subsection{Overview}\label{Subsection 1.3} In \S \ref{Section 2}, we introduce basic concepts and extend the generalized propositional logic to first-order logic. In \S \ref{Section 3}, we show the main theorem. 
\section{Preliminaries} \label{Section 2} \subsection{Connectives and truth functions} \label{Subsection 2.1} The elements of the set $\{ 0, 1\}$ are called the \emph{truth values}. $\{ 0, 1 \}^n$ denotes the set of sequences of truth values of length $n$. We shall use letters $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$ to denote arbitrary finite sequences of truth values. We denote by $\mathbf{0}_n$ and $\mathbf{1}_n$ the sequence $\langle 0, \ldots, 0 \rangle \in \{0, 1\}^n$ and $\langle 1, \ldots, 1 \rangle \in \{0, 1\}^n$, respectively. For $\mathbf{a} \in \{0,1\}^n$, we denote by $\mathbf{a}[i]$ the $i$-th value of $\mathbf{a}$. For example, $\langle 0, 1, 0 \rangle [1] = \langle 0, 1, 0 \rangle [3] = 0$ and $\langle 0, 1, 0 \rangle [2] = 1$. For $\mathbf{a} \in \{ 0, 1\}^n$, $\overline{\mathbf{a}}$ denotes the sequence obtained from $\mathbf{a}$ by inverting $0$ and $1$. For example, $\overline{\langle 0, 1, 0 \rangle} = \langle 1, 0, 1 \rangle$. An $n$-ary \emph{truth function} is a function from $\{ 0, 1\}^n$ to $\{ 0, 1\}$. The natural order $\sqsubseteq_n$ on $\{ 0, 1\}^n$ is defined as follows: for $\mathbf{a} \in \{ 0, 1 \}^n$ and $\mathbf{b} \in \{ 0, 1 \}^n$, $\mathbf{a} \sqsubseteq_n \mathbf{b}$ if and only if $\mathbf{a}[i] \leq \mathbf{b}[i]$ for all $i = 1, \ldots, n$. Here, $\leq$ denotes the usual order on $\{0,1\}$ defined by $0 \leq 0$, $1 \leq 1$, $0 \leq 1$ and $1 \not \leq 0$. In what follows, we shall omit the subscript $n$ of $\mathbf{0}_n$, $\mathbf{1}_n$ and $\sqsubseteq_n$, since it is clear from the context. For $\mathbf{a}, \mathbf{b} \in \{0,1\}^n$, $\mathbf{a} \sqcap \mathbf{b}$ denotes the infimum of $\{ \mathbf{a}, \mathbf{b} \}$. It is obvious that $\mathbf{a} \sqcap \mathbf{b}$ can be calculated as follows: \[ (\mathbf{a} \sqcap \mathbf{b}) [i] = \begin{cases} 1 & \text{if $\mathbf{a}[i] = 1$ and $\mathbf{b}[i] = 1$} \\ 0 & \text{if $\mathbf{a}[i] = 0$ or $\mathbf{b}[i] = 0$}.
\end{cases} \] An $n$-ary truth function $f$ is said to be \emph{monotonic} if for all $\mathbf{a}, \mathbf{b} \in \{ 0, 1 \}^n$, $\mathbf{a} \sqsubseteq \mathbf{b}$ implies $f(\mathbf{a}) \leq f(\mathbf{b})$. \subsection{Propositional connectives and formulas} \label{Subsection 2.2} A \emph{propositional connective} is a symbol with an associated truth function. For a propositional connective $c$, we denote by $\ttfunc{c}$ the truth function associated with $c$ and by $\mathop{\mathrm{ar}}(c)$ the arity of $\ttfunc{c}$. We shall use letters $c$ and $d$ as metavariables for propositional connectives. Assume a set $\mathscr{C}$ of propositional connectives is given. We define the first-order language with propositional connectives in $\mathscr{C}$. It consists of the following symbols: countably infinitely many individual variables; countably infinitely many \footnote{As the proofs in this paper show, a small number of predicate symbols actually suffices.} $n$-ary predicate symbols for each $n \in \mathbb{N}$; propositional connectives in $\mathscr{C}$; quantifiers $\forall$ and $\exists$. $0$-ary predicate symbols are also called \emph{propositional symbols}. Although all arguments in this paper work with trivial modifications if the language has function symbols and constant symbols, we assume for simplicity that the language has no function symbols and no constant symbols. We shall use $x$, $y$ and $z$ as metavariables for individual variables; $p$, $q$, $r$ and $s$ for predicate symbols; $c$ and $d$ for propositional connectives. An \emph{atomic formula} is an expression of the form $p(x_1, \ldots, x_n)$, where $p$ is an $n$-ary predicate symbol.
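Since an $n$-ary truth function has a finite table, the monotonicity condition above can be decided by exhaustive search. The following Python sketch is our illustration, not part of the paper's formal development; the encoding of truth functions as Python functions and all names are ours:

```python
from itertools import product

def monotonic(tt, n):
    """Decide whether the n-ary truth function tt is monotonic,
    i.e. whether a [= b (componentwise) implies tt(a) <= tt(b),
    by checking all pairs of tuples in {0,1}^n."""
    return all(
        tt(*a) <= tt(*b)
        for a in product((0, 1), repeat=n)
        for b in product((0, 1), repeat=n)
        if all(x <= y for x, y in zip(a, b))
    )

# Truth functions of conjunction, disjunction, implication, negation.
land = lambda x, y: min(x, y)
lor = lambda x, y: max(x, y)
limp = lambda x, y: max(1 - x, y)
lnot = lambda x: 1 - x

print([monotonic(f, 2) for f in (land, lor, limp)], monotonic(lnot, 1))
# [True, True, False] False
```

As expected, the truth functions of conjunction and disjunction are monotonic, whereas those of implication and negation are not.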
The set $\mathop{\mathrm{FOFml}}(\mathscr{C})$ of \emph{(first-order) formulas} is defined inductively as follows: \begin{itemize} \item if $\alpha$ is an atomic formula, then $\alpha \in \mathop{\mathrm{FOFml}}(\mathscr{C})$; \item if $c \in \mathscr{C}$ and $\alpha_1, \ldots, \alpha_{\mathop{\mathrm{ar}}(c)} \in \mathop{\mathrm{FOFml}}(\mathscr{C})$, then $c(\alpha_1, \ldots, \alpha_{\mathop{\mathrm{ar}}(c)}) \in \mathop{\mathrm{FOFml}}(\mathscr{C})$; \item if $\alpha \in \mathop{\mathrm{FOFml}}(\mathscr{C})$ and $x$ is an individual variable, then $\forall x \alpha \in \mathop{\mathrm{FOFml}}(\mathscr{C})$ and $\exists x \alpha \in \mathop{\mathrm{FOFml}}(\mathscr{C})$. \end{itemize} We shall use $\alpha$, $\beta$, $\gamma$, $\varphi$, $\psi$, $\sigma$, $\tau$ and $\chi$ as metavariables for formulas. The set $\FV(\alpha)$ of free variables of $\alpha$ is defined inductively as follows: \begin{align*} \FV(p(x_1, \ldots, x_n)) & = \{ x_1,\ldots, x_n \}; \\ \FV(c(\alpha_1, \ldots, \alpha_{\mathop{\mathrm{ar}}(c)})) & = \FV(\alpha_1) \cup \cdots \cup \FV(\alpha_{\mathop{\mathrm{ar}}(c)}); \\ \FV(\forall x \alpha) = \FV(\exists x \alpha) & = \FV(\alpha) \setminus \{ x \}. \end{align*} A \emph{sequent} is an expression $\Gamma \Rightarrow \Delta$, where $\Gamma$ and $\Delta$ are sets of formulas. We denote by $\mathop{\mathrm{FOSqt}}(\mathscr{C})$ the set $\{ \Gamma \Rightarrow \Delta \mid \Gamma, \Delta \subseteq \mathop{\mathrm{FOFml}}(\mathscr{C}) \}$. If $\Gamma = \{ \alpha_1, \ldots, \alpha_n \}$ and $\Delta = \{ \beta_1, \ldots, \beta_m \}$, we often omit the braces and simply write $\alpha_1, \ldots, \alpha_n \Rightarrow \beta_1, \ldots, \beta_m$ for $\{ \alpha_1, \ldots, \alpha_n \} \Rightarrow \{ \beta_1, \ldots, \beta_m \}$. $\FV(\Gamma \Rightarrow \Delta)$ denotes the set of free variables of formulas in $\Gamma \cup \Delta$. Formulas which contain no predicate symbols except propositional symbols are said to be \emph{propositional}. 
We denote by $\mathop{\mathrm{Fml}}(\mathscr{C})$ the set $\{ \alpha \in \mathop{\mathrm{FOFml}}(\mathscr{C}) \mid \text{$\alpha$ is propositional} \}$ and by $\mathop{\mathrm{Sqt}}(\mathscr{C})$ the set \[ \{ \Gamma \Rightarrow \Delta \in \mathop{\mathrm{FOSqt}}(\mathscr{C}) \mid \text{all formulas in $\Gamma \cup \Delta$ are propositional} \}. \] \subsection{Classical semantics} \label{Subsection 2.3} A \emph{(classical) model} $\mathscr{M}$ is a tuple $\langle D, I \rangle$ in which \begin{itemize} \item $D$ is a non-empty set, called the \emph{individual domain}; \item $I$ is a function, called the \emph{interpretation function}, which assigns to each $n$-ary predicate symbol a function from $D^n$ to $\{ 0,1 \}$. \end{itemize} An \emph{assignment} in $D$ is a function which assigns to each individual variable an element of $D$. For an assignment $\rho$ in $D$, an individual variable $x$ and an element $a \in D$, we write $\rho[x \mapsto a]$ for the assignment in $D$ which maps $x$ to $a$ and is equal to $\rho$ everywhere else. 
For a model $\mathscr{M} = \langle D, I \rangle$, a formula $\alpha \in \mathop{\mathrm{FOFml}}(\mathscr{C})$ and an assignment $\rho$ in $D$, we define the \emph{interpretation} $\llbracket \alpha \rrbracket_{\mathscr{M}}^{\rho}$ of $\alpha$ with respect to $\rho$ inductively as follows: \begin{itemize} \item $\llbracket p(x_1, \ldots, x_n) \rrbracket_{\mathscr{M}}^{\rho} = I(p)(\rho(x_1), \ldots, \rho(x_n))$; \item $\llbracket c(\alpha_1, \ldots, \alpha_{\mathop{\mathrm{ar}}(c)}) \rrbracket_\mathscr{M}^\rho = \ttfunc{c}(\llbracket \alpha_1 \rrbracket_{\mathscr{M}}^\rho, \ldots, \llbracket \alpha_{\mathop{\mathrm{ar}}(c)} \rrbracket_\mathscr{M}^\rho)$; \item $\llbracket \forall x \alpha \rrbracket_{\mathscr{M}}^\rho = 1$ if and only if $\llbracket \alpha \rrbracket_{\mathscr{M}}^{\rho[x \mapsto a]} = 1$ for all $a \in D$; \item $\llbracket \exists x \alpha \rrbracket_{\mathscr{M}}^\rho = 1$ if and only if $\llbracket \alpha \rrbracket_{\mathscr{M}}^{\rho[x \mapsto a]} = 1$ for some $a \in D$. \end{itemize} The value of $\llbracket \alpha \rrbracket_{\mathscr{M}}^\rho$ only depends on the values of $\rho$ on $\FV(\alpha)$. Hence, even for a partial function $\rho$ from the set of individual variables to $D$ whose domain includes $\FV(\alpha)$, $\llbracket \alpha \rrbracket_\mathscr{M}^\rho$ can be defined to be the value $\llbracket \alpha \rrbracket_\mathscr{M}^{\rho'}$ for any total function $\rho'$ from the set of individual variables to $D$ which is an extension of $\rho$. We call a partial function from the set of individual variables to an individual domain a \emph{partial assignment}. Even for a partial assignment $\rho$, we define $\rho[x \mapsto a]$ to be the function which maps $x$ to $a$ and is equal to $\rho$ on $\dom(\rho)\setminus \{ x\}$. We use $\varnothing$ to denote the empty assignment $\emptyset \to D$.
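For finite models, the interpretation clauses above can be read off directly as a recursive evaluator. The following Python sketch is ours; the tuple encoding of formulas and all names are illustrative assumptions, not notation from the paper:

```python
# Formulas as nested tuples:
#   ('pred', p, x1, ..., xn)   atomic formula p(x1, ..., xn)
#   ('conn', tt, a1, ..., ak)  application of a connective with truth function tt
#   ('forall', x, a) and ('exists', x, a)  quantified formulas
def ceval(phi, D, I, rho):
    """[[phi]]^rho in the classical model (D, I); rho is a (partial)
    assignment given as a dict, I maps predicate names to 0/1-valued functions."""
    tag = phi[0]
    if tag == 'pred':
        return I[phi[1]](*(rho[x] for x in phi[2:]))
    if tag == 'conn':
        return phi[1](*(ceval(a, D, I, rho) for a in phi[2:]))
    if tag == 'forall':
        return min(ceval(phi[2], D, I, {**rho, phi[1]: d}) for d in D)
    return max(ceval(phi[2], D, I, {**rho, phi[1]: d}) for d in D)  # 'exists'

# A two-element model in which p holds of exactly one individual:
D = [0, 1]
I = {'p': lambda d: 1 if d == 0 else 0}
print(ceval(('exists', 'x', ('pred', 'p', 'x')), D, I, {}),
      ceval(('forall', 'x', ('pred', 'p', 'x')), D, I, {}))  # 1 0
```

The `min`/`max` over the domain implement the $\forall$/$\exists$ clauses, matching the fact that $\leq$ on $\{0,1\}$ orders falsity below truth.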
For example, for a model $\langle D, I \rangle$ with $a, b \in D$, we have $\llbracket \bot \rrbracket_{\langle D, I \rangle}^{\varnothing} = 0$ (provided $\mathscr{C}$ contains the $0$-ary connective $\bot$ with $\ttfunc{\bot} = 0$) and $\llbracket p(x,y) \rrbracket_{\langle D, I \rangle}^{\varnothing [x \mapsto a] [y \mapsto b]} = I(p)(a, b)$. If $\vec{\alpha}$ denotes a sequence of formulas $\alpha_1, \ldots, \alpha_n$, then we denote by $\llbracket \vec{\alpha} \rrbracket_{\mathscr{M}}^{\rho}$ the sequence of interpretations of $\alpha_1, \ldots, \alpha_n$, $\langle \llbracket \alpha_1 \rrbracket_\mathscr{M}^\rho, \ldots, \llbracket \alpha_n \rrbracket_{\mathscr{M}}^{\rho} \rangle$. For example, if $\alpha \equiv c(\beta_1, \ldots, \beta_{\mathop{\mathrm{ar}}(c)})$ and $\vec{\beta} = \beta_1, \ldots, \beta_{\mathop{\mathrm{ar}}(c)}$, then $\llbracket \alpha \rrbracket_\mathscr{M}^\rho = 1$ if and only if $\ttfunc{c}(\llbracket \vec{\beta} \rrbracket_\mathscr{M}^\rho) = 1$. A formula $\alpha \in \mathop{\mathrm{FOFml}}(\mathscr{C})$ is \emph{valid} in a classical model $\mathscr{M} = \langle D, I \rangle$ if $\llbracket \alpha \rrbracket_\mathscr{M}^\rho = 1$ holds for all assignments $\rho$ in $D$. A formula $\alpha \in \mathop{\mathrm{FOFml}}(\mathscr{C})$ is \emph{(classically) valid} if it is valid in all classical models. We denote by $\mathop{\mathrm{FOCL}}(\mathscr{C})$ the set $\{ \alpha \in \mathop{\mathrm{FOFml}}(\mathscr{C}) \mid \text{$\alpha$ is classically valid} \}$. For a sequent $\Gamma \Rightarrow \Delta \in \mathop{\mathrm{FOSqt}}(\mathscr{C})$, the \emph{interpretation} $\llbracket \Gamma \Rightarrow \Delta \rrbracket_{\mathscr{M}}^{\rho} \in \{0,1\}$ of $\Gamma \Rightarrow \Delta$ with respect to $\rho$ is defined by \[ \llbracket \Gamma \Rightarrow \Delta \rrbracket_\mathscr{M}^\rho = \begin{cases} 0 & \text{if $\llbracket \alpha \rrbracket_\mathscr{M}^\rho = 1$ for all $\alpha \in \Gamma$ and $\llbracket \beta \rrbracket_\mathscr{M}^\rho = 0$ for all $\beta \in \Delta$} \\ 1 & \text{otherwise}.
\end{cases} \] A sequent $\Gamma \Rightarrow \Delta \in \mathop{\mathrm{FOSqt}}(\mathscr{C})$ is \emph{valid} in a classical model $\mathscr{M} = \langle D, I \rangle$ if $\llbracket \Gamma \Rightarrow \Delta \rrbracket_\mathscr{M}^\rho = 1$ holds for all assignments $\rho$ in $D$. A sequent $\Gamma \Rightarrow \Delta \in \mathop{\mathrm{FOSqt}}(\mathscr{C})$ is \emph{(classically) valid} if it is valid in all classical models. We denote by $\mathop{\mathrm{FOCLS}}(\mathscr{C})$ the set $\{ \Gamma \Rightarrow \Delta \in \mathop{\mathrm{FOSqt}}(\mathscr{C}) \mid \text{$\Gamma \Rightarrow \Delta$ is classically valid} \}$. \subsection{Kripke semantics} \label{Subsection 2.4} A \emph{Kripke model} is a tuple $\langle W, \preceq, D, I \rangle$ in which \begin{itemize} \item $W$ is a non-empty set, called a set of \emph{possible worlds}; \item $\preceq$ is a pre-order on $W$; \item $D$ is a function that assigns to each $w \in W$ a non-empty set $D(w)$, which is called the \emph{individual domain} at $w$. Furthermore, $D$ satisfies the following monotonicity condition: for all $w, v \in W$, if $w \preceq v$ then $D(w) \subseteq D(v)$. \item $I$ is a function, called an \emph{interpretation function}, that assigns to each pair $\langle w, p \rangle$ of a possible world and an $n$-ary predicate symbol a function $I(w,p)$ from $D(w)^n$ to $\{0,1\}$. Furthermore, $I$ satisfies the \emph{hereditary condition}: for all $n$-ary predicate symbols $p$ and all $w, v \in W$, if $w \preceq v$ then $I(w,p) (a_1, \ldots, a_n) \leq I(v, p) (a_1, \ldots, a_n)$ holds for all $a_1, \ldots, a_n \in D(w)$. \end{itemize} An \emph{assignment} in $D(w)$ is a function which assigns to each individual variable an element of $D(w)$. For an assignment $\rho$ in $D(w)$, an individual variable $x$ and an element $a \in D(w)$, we write $\rho[x \mapsto a]$ for the assignment in $D(w)$ which maps $x$ to $a$ and is equal to $\rho$ everywhere else.
For a Kripke model $\mathscr{K} = \langle W, \preceq, D, I \rangle$, a possible world $w \in W$, an assignment $\rho$ in $D(w)$ and a formula $\alpha \in \mathop{\mathrm{FOFml}}(\mathscr{C})$, we define the \emph{interpretation} $\| \alpha \|_{\mathscr{K}, w}^{\rho} \in \{ 0, 1 \}$ of $\alpha$ at $w$ with respect to $\rho$ as follows: \begin{itemize} \item $\| p (x_1, \ldots, x_n) \|_{\mathscr{K}, w}^\rho = I(w, p)(\rho(x_1), \ldots, \rho(x_n))$; \item $\| c(\alpha_1, \ldots, \alpha_{\mathop{\mathrm{ar}}(c)}) \|_{\mathscr{K}, w}^\rho = 1$ if and only if $\ttfunc{c}(\| \alpha_1 \|_{\mathscr{K}, v}^\rho, \ldots, \| \alpha_{\mathop{\mathrm{ar}}(c)} \|_{\mathscr{K}, v}^\rho) = 1$ for all $v \succeq w$; \item $\| \forall x \alpha \|_{\mathscr{K}, w}^\rho = 1$ if and only if $\| \alpha \|_{\mathscr{K}, v}^{\rho[x \mapsto a]} = 1$ for all $v \succeq w$ and all $a \in D(v)$; \item $\| \exists x \alpha \|_{\mathscr{K}, w}^\rho = 1$ if and only if $\| \alpha \|_{\mathscr{K}, w}^{\rho [x \mapsto a]} = 1$ for some $a \in D(w)$. \end{itemize} Note that, in case $c = \land$ or $c = \lor$, the statement of the definition of $\| c(\alpha_1, \alpha_2) \|_{\mathscr{K}, w}^\rho$ differs from the usual one, in which the interpretation is defined by the interpretations of $\alpha_1$ and $\alpha_2$ only at $w$, but we can easily verify that this definition is equivalent to the usual one. The value of $\| \alpha \|_{\mathscr{K}, w}^\rho$ only depends on the values of $\rho$ on $\FV(\alpha)$. Hence, even for a partial function $\rho$ from the set of individual variables to $D(w)$ whose domain includes $\FV(\alpha)$, $\| \alpha \|_{\mathscr{K}, w}^\rho$ can be defined to be the value $\| \alpha \|_{\mathscr{K}, w}^{\rho'}$ for any total function $\rho'$ from the set of individual variables to $D(w)$ which is an extension of $\rho$. We call a partial function from the set of individual variables to an individual domain a \emph{partial assignment}.
Even for a partial assignment $\rho$, we define $\rho[x \mapsto a]$ to be the function which maps $x$ to $a$ and is equal to $\rho$ on $\dom(\rho)\setminus \{ x\}$. We use $\varnothing$ to denote the empty assignment $\emptyset \to D(w)$. For example, for a Kripke model $\mathscr{K} = \langle W, \preceq, D, I \rangle$, a possible world $w \in W$ and individuals $a, b \in D(w)$, we have $\| \bot \|_{\mathscr{K}, w}^{\varnothing} = 0$ and $\| p(x,y) \|_{\mathscr{K}, w}^{\varnothing [x \mapsto a] [y \mapsto b]} = I(w, p)(a, b)$. If $\vec{\alpha}$ denotes a sequence of formulas $\alpha_1, \ldots, \alpha_n$, then we denote by $\| \vec{\alpha} \|_{\mathscr{K}, w}^{\rho}$ the sequence of interpretations of $\alpha_1, \ldots, \alpha_n$, $\langle \| \alpha_1 \|_{\mathscr{K}, w}^\rho, \ldots, \| \alpha_n \|_{\mathscr{K}, w}^{\rho} \rangle$. For example, if $\alpha \equiv c(\beta_1, \ldots, \beta_{\mathop{\mathrm{ar}}(c)})$ and $\vec{\beta} = \beta_1, \ldots, \beta_{\mathop{\mathrm{ar}}(c)}$, then $\| \alpha \|_{\mathscr{K}, w}^\rho = 1$ if and only if $\ttfunc{c}(\| \vec{\beta} \|_{\mathscr{K}, v}^\rho) = 1$ for any $v \succeq w$. A formula $\alpha \in \mathop{\mathrm{FOFml}}(\mathscr{C})$ is \emph{valid} in a Kripke model $\mathscr{K} = \langle W, \preceq, D, I \rangle$ if $\| \alpha \|_{\mathscr{K}, w}^\rho = 1$ for any $w \in W$ and any assignment $\rho$ in $D(w)$. A formula $\alpha \in \mathop{\mathrm{FOFml}}(\mathscr{C})$ is \emph{Kripke-valid} if it is valid in all Kripke models. We denote by $\mathop{\mathrm{FOIL}}(\mathscr{C})$ the set $\{ \alpha \in \mathop{\mathrm{FOFml}}(\mathscr{C}) \mid \text{$\alpha$ is Kripke-valid} \}$.
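The Kripke clauses admit the same kind of executable reading for finite models. In the Python sketch below (ours; the tuple encoding and all names are illustrative assumptions), a connective is evaluated by quantifying over all successor worlds, exactly as in the definition above. With the negation truth function $x \mapsto 1 - x$ this exhibits, on a two-world chain, the familiar failure of double negation elimination:

```python
def keval(phi, K, w, rho):
    """||phi||_{K,w}^rho for a finite Kripke model K = (succ, D, I):
    succ(w) lists every v with w <= v (including w itself), D(w) is the
    domain at w, and I(w, p) interprets the predicate p at w."""
    succ, D, I = K
    tag = phi[0]
    if tag == 'pred':
        return I(w, phi[1])(*(rho[x] for x in phi[2:]))
    if tag == 'conn':
        tt, args = phi[1], phi[2:]
        # a connective holds at w iff its truth function holds at all v >= w
        return min(tt(*(keval(a, K, v, rho) for a in args)) for v in succ(w))
    if tag == 'forall':
        x, a = phi[1], phi[2]
        return min(keval(a, K, v, {**rho, x: d}) for v in succ(w) for d in D(v))
    x, a = phi[1], phi[2]  # 'exists'
    return max(keval(a, K, w, {**rho, x: d}) for d in D(w))

# Two-world chain w0 <= w1 with constant domain {0}; the 0-ary predicate p
# is false at w0 and true at w1 (so the hereditary condition holds).
K = (lambda w: ['w0', 'w1'] if w == 'w0' else ['w1'],
     lambda w: [0],
     lambda w, p: lambda: 1 if w == 'w1' else 0)
lnot = lambda x: 1 - x
nnp = ('conn', lnot, ('conn', lnot, ('pred', 'p')))  # the formula ¬¬p
print(keval(nnp, K, 'w0', {}), keval(('pred', 'p'), K, 'w0', {}))  # 1 0
```

Here $\|\lnot\lnot p\|$ is $1$ at $w_0$ while $\|p\|$ is $0$ there, so the sequent $\lnot\lnot p \Rightarrow p$ fails in this model even though it is classically valid.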
As in the case of the usual connectives, the hereditary condition easily extends to any formula: \begin{lemma}\label{Lemma 1} For any formula $\alpha \in \mathop{\mathrm{FOFml}}(\mathscr{C})$, any Kripke model $\mathscr{K} = \langle W, \preceq, D, I \rangle$, any $w,v \in W$ and any assignment $\rho$ in $D(w)$, if $w \preceq v$ then $\| \alpha \|_{\mathscr{K}, w}^\rho \leq \| \alpha \|_{\mathscr{K}, v}^\rho$. \end{lemma} We shall use this lemma without explicit reference. For a Kripke model $\mathscr{K} = \langle W, \preceq, D, I \rangle$, a possible world $w \in W$, an assignment $\rho$ in $D(w)$ and a sequent $\Gamma \Rightarrow \Delta \in \mathop{\mathrm{FOSqt}}(\mathscr{C})$, the \emph{interpretation} $\| \Gamma \Rightarrow \Delta \|_{\mathscr{K}, w}^{\rho} \in \{0,1\}$ of $\Gamma \Rightarrow \Delta$ at $w$ with respect to $\rho$ is defined by \[ \| \Gamma \Rightarrow \Delta \|_{\mathscr{K}, w}^\rho = \begin{cases} 0 & \text{if $\| \alpha \|_{\mathscr{K}, w}^\rho = 1$ for all $\alpha \in \Gamma$ and $\| \beta \|_{\mathscr{K}, w}^\rho = 0$ for all $\beta \in \Delta$} \\ 1 & \text{otherwise}. \end{cases} \] For a Kripke model $\mathscr{K} = \langle W, \preceq, D, I \rangle$, a sequent $\Gamma \Rightarrow \Delta \in \mathop{\mathrm{FOSqt}}(\mathscr{C})$ is \emph{valid} in $\mathscr{K}$ if $\| \Gamma \Rightarrow \Delta \|_{\mathscr{K},w}^\rho = 1$ for all $w \in W$ and all assignments $\rho$ in $D(w)$. A sequent $\Gamma \Rightarrow \Delta \in \mathop{\mathrm{FOSqt}}(\mathscr{C})$ is \emph{Kripke-valid} if it is valid in all Kripke models. We denote by $\mathop{\mathrm{FOILS}}(\mathscr{C})$ the set $\{ \Gamma \Rightarrow \Delta \in \mathop{\mathrm{FOSqt}}(\mathscr{C}) \mid \text{$\Gamma \Rightarrow \Delta$ is Kripke-valid} \}$. A Kripke model $\mathscr{K} = \langle W, \preceq, D, I \rangle$ is said to be \emph{constant domain} if $D(w) = D(v)$ for all $w, v \in W$. In this case, we simply write $D$ for $D(w)$ for any $w \in W$.
Note that, for a constant domain Kripke model $\mathscr{K} = \langle W, \preceq, D, I \rangle$, by Lemma \ref{Lemma 1} the interpretation of a universal formula may be computed at the present world only, that is: $\| \forall x \alpha \|_{\mathscr{K}, w}^\rho = 1$ if and only if $\| \alpha \|_{\mathscr{K}, w}^{\rho[x \mapsto a]} = 1$ for all $a \in D$. A formula $\alpha \in \mathop{\mathrm{FOFml}}(\mathscr{C})$ is \emph{CD-valid} if it is valid in all constant domain Kripke models. We denote by $\mathop{\mathrm{FOCD}}(\mathscr{C})$ the set $\{ \alpha \in \mathop{\mathrm{FOFml}}(\mathscr{C}) \mid \text{$\alpha$ is CD-valid} \}$. A sequent $\Gamma \Rightarrow \Delta \in \mathop{\mathrm{FOSqt}}(\mathscr{C})$ is \emph{CD-valid} if it is valid in all constant domain Kripke models. We denote by $\mathop{\mathrm{FOCDS}}(\mathscr{C})$ the set $\{ \Gamma \Rightarrow \Delta \in \mathop{\mathrm{FOSqt}}(\mathscr{C}) \mid \text{$\Gamma \Rightarrow \Delta$ is CD-valid}\}$. The following lemma is immediate from the definitions of $\mathop{\mathrm{FOCDS}}(\mathscr{C})$ and $\mathop{\mathrm{FOCLS}}(\mathscr{C})$: \begin{lemma}\label{Lemma 2} $\mathop{\mathrm{FOCDS}}(\mathscr{C}) \subseteq \mathop{\mathrm{FOCLS}}(\mathscr{C})$ for any set $\mathscr{C}$ of connectives. \end{lemma} \section{Condition for $\mathop{\mathrm{FOCDS}}(\mathscr{C})$ and $\mathop{\mathrm{FOCLS}}(\mathscr{C})$ to coincide} \label{Section 3} In this section, we show the following theorem: \begin{theorem}\label{Theorem 1} $\mathop{\mathrm{FOCDS}}(\mathscr{C}) = \mathop{\mathrm{FOCLS}}(\mathscr{C})$ if and only if all connectives in $\mathscr{C}$ are monotonic. \end{theorem} We show the ``if'' part in \S \ref{Subsection 3.1} and the ``only if'' part in \S \ref{Subsection 3.2}. \subsection{The ``if'' part} \label{Subsection 3.1} Here, we show the ``if'' part of Theorem \ref{Theorem 1}: \begin{proposition}\label{Proposition 1} If all connectives in $\mathscr{C}$ are monotonic, then $\mathop{\mathrm{FOCDS}}(\mathscr{C}) = \mathop{\mathrm{FOCLS}}(\mathscr{C})$.
\end{proposition} The following lemma is essential for the proof of this proposition. \begin{lemma}\label{Lemma 3} Suppose all connectives in $\mathscr{C}$ are monotonic. Let $\mathscr{K} = \langle W, \preceq, D, I \rangle$ be a constant domain Kripke model and $w \in W$. Let $\mathscr{M}_{\mathscr{K}, w} = \langle D, J_{\mathscr{K}, w} \rangle$ be the classical model defined by $J_{\mathscr{K}, w}(p) = I(w, p)$. Then, for any formula $\alpha \in \mathop{\mathrm{FOFml}}(\mathscr{C})$ and any assignment $\rho$ in $D$, $\| \alpha \|_{\mathscr{K}, w}^\rho = \llbracket \alpha \rrbracket_{\mathscr{M}_{\mathscr{K}, w}}^\rho$ holds. \end{lemma} \begin{proof} The proof proceeds by induction on $\alpha$. The base case, in which $\alpha$ is atomic, follows immediately from the definition of $J_{\mathscr{K}, w}$. Now, we show the inductive step by cases on the form of $\alpha$. \textsc{Case} 1: $\alpha$ is of the form $c(\beta_1, \ldots, \beta_{\mathop{\mathrm{ar}}(c)})$. Put $\vec{\beta} = \beta_1, \ldots, \beta_{\mathop{\mathrm{ar}}(c)}$. By the hereditary condition (Lemma \ref{Lemma 1}), we have $\| \vec{\beta} \|_{\mathscr{K}, w}^\rho \sqsubseteq \| \vec{\beta} \|_{\mathscr{K}, v}^\rho$ for all $v \succeq w$. Hence, since $c$ is monotonic, we have $\ttfunc{c}(\| \vec{\beta} \|_{\mathscr{K}, w}^{\rho}) \leq \ttfunc{c}(\| \vec{\beta} \|_{\mathscr{K}, v}^{\rho})$ for all $v \succeq w$, so that $\| \alpha \|_{\mathscr{K}, w}^\rho = \ttfunc{c}(\| \vec{\beta} \|_{\mathscr{K}, w}^\rho)$ holds. On the other hand, by the induction hypothesis, we have $\ttfunc{c}(\| \vec{\beta} \|_{\mathscr{K}, w}^\rho) = \ttfunc{c}(\llbracket \vec{\beta} \rrbracket_{\mathscr{M}_{\mathscr{K}, w}}^\rho) = \llbracket \alpha \rrbracket_{\mathscr{M}_{\mathscr{K}, w}}^\rho$. \textsc{Case} 2: $\alpha$ is of the form $\forall x \beta$.
In this case, we have \begin{alignat*}{2} \| \alpha \|_{\mathscr{K}, w}^\rho & = \min_{a \in D} \| \beta \|_{\mathscr{K}, w}^{\rho [x \mapsto a]} & \quad & \text{(by the remark on constant domain models in \S \ref{Subsection 2.4})} \\ & = \min_{a \in D} \llbracket \beta \rrbracket_{\mathscr{M}_{\mathscr{K}, w}}^{\rho [x \mapsto a]} & & \text{(by the induction hypothesis)} \\ & = \llbracket \alpha \rrbracket_{\mathscr{M}_{\mathscr{K}, w}}^\rho. \end{alignat*} \textsc{Case} 3: $\alpha$ is of the form $\exists x \beta$. In this case, we have \begin{alignat*}{2} \| \alpha \|_{\mathscr{K}, w}^\rho & = \max_{a \in D} \| \beta \|_{\mathscr{K}, w}^{\rho [x \mapsto a]} \\ & = \max_{a \in D} \llbracket \beta \rrbracket_{\mathscr{M}_{\mathscr{K}, w}}^{\rho [x \mapsto a]} & \quad & \text{(by the induction hypothesis)} \\ & = \llbracket \alpha \rrbracket_{\mathscr{M}_{\mathscr{K}, w}}^\rho. \end{alignat*} \qed \end{proof} Using this lemma, we prove Proposition \ref{Proposition 1}. \begin{proof}[of Proposition \ref{Proposition 1}] Suppose all connectives in $\mathscr{C}$ are monotonic. By Lemma \ref{Lemma 2}, it suffices to show $\mathop{\mathrm{FOCLS}}(\mathscr{C}) \subseteq \mathop{\mathrm{FOCDS}}(\mathscr{C})$. In order to show this inclusion, we suppose $\Gamma \Rightarrow \Delta \in \mathop{\mathrm{FOCLS}}(\mathscr{C})$, and show that $\| \Gamma \Rightarrow \Delta \|_{\mathscr{K}, w}^\rho = 1$ holds for any constant domain Kripke model $\mathscr{K} = \langle W, \preceq, D, I \rangle$, any possible world $w \in W$ and any assignment $\rho$ in $D$. By Lemma \ref{Lemma 3}, it holds that $\| \Gamma \Rightarrow \Delta \|_{\mathscr{K}, w}^\rho = \llbracket \Gamma \Rightarrow \Delta \rrbracket_{\mathscr{M}_{\mathscr{K}, w}}^\rho$ for any such $\mathscr{K}$, $w$ and $\rho$. For any such $\mathscr{K}$, $w$ and $\rho$, since $\Gamma \Rightarrow \Delta \in \mathop{\mathrm{FOCLS}}(\mathscr{C})$, we have $\llbracket \Gamma \Rightarrow \Delta \rrbracket_{\mathscr{M}_{\mathscr{K}, w}}^\rho = 1$, and hence, we have $\| \Gamma \Rightarrow \Delta \|_{\mathscr{K}, w}^\rho = 1$.
\qed \end{proof} \subsection{The ``only if'' part} \label{Subsection 3.2} Here, we show the ``only if'' part of Theorem \ref{Theorem 1} by showing its contrapositive: \begin{proposition}\label{Proposition 2} If $\mathscr{C}$ has a non-monotonic connective, then $\mathop{\mathrm{FOCLS}}(\mathscr{C}) \setminus \mathop{\mathrm{FOCDS}}(\mathscr{C}) \neq \emptyset$. \end{proposition} In \cite{kawano2021effect}, the following corresponding claim was shown in the case of propositional logic: \begin{proposition}\label{Proposition 3} If $\mathscr{C}$ has a non-monotonic connective, then $\mathop{\mathrm{CLS}}(\mathscr{C}) \setminus \mathop{\mathrm{ILS}}(\mathscr{C}) \neq \emptyset$. \end{proposition} Here, $\mathop{\mathrm{ILS}}(\mathscr{C})$ denotes the set of propositional sequents $\Gamma \Rightarrow \Delta \in \mathop{\mathrm{Sqt}}(\mathscr{C})$ which are valid in all Kripke models for intuitionistic propositional logic and $\mathop{\mathrm{CLS}}(\mathscr{C})$ denotes the set of propositional sequents $\Gamma \Rightarrow \Delta \in \mathop{\mathrm{Sqt}}(\mathscr{C})$ which are valid in all models for classical propositional logic. Actually, Proposition \ref{Proposition 2} follows from Proposition \ref{Proposition 3}, because the following hold: \begin{itemize} \item For any $\Gamma \Rightarrow \Delta \in \mathop{\mathrm{Sqt}}(\mathscr{C})$, $\Gamma \Rightarrow \Delta \in \mathop{\mathrm{ILS}}(\mathscr{C})$ if and only if $\Gamma \Rightarrow \Delta \in \mathop{\mathrm{FOCDS}}(\mathscr{C})$. \item For any $\Gamma \Rightarrow \Delta \in \mathop{\mathrm{Sqt}}(\mathscr{C})$, $\Gamma \Rightarrow \Delta \in \mathop{\mathrm{CLS}}(\mathscr{C})$ if and only if $\Gamma \Rightarrow \Delta \in \mathop{\mathrm{FOCLS}}(\mathscr{C})$. \end{itemize} However, to keep the paper self-contained, we give a direct proof here.
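Before turning to the proof, a concrete instance may be helpful (the instance and the code are our illustration, not part of the formal development). Taking $c$ to be the non-monotonic implication truth function, the construction of Case $(\mathrm{d})$, Subcase 1 below, instantiated at $\mathbf{a} = \langle 0, 0 \rangle$ and $\mathbf{b} = \langle 1, 0 \rangle$, yields Peirce's law $((p \to q) \to p) \to p$. The following Python sketch checks that it is classically valid but fails at the root of a two-world constant domain model:

```python
from itertools import product

limp = lambda x, y: max(1 - x, y)  # truth function of ->
# Propositional formulas: atoms are strings, (tt, a1, ..., ak) otherwise.
peirce = (limp, (limp, (limp, 'p', 'q'), 'p'), 'p')  # ((p -> q) -> p) -> p

def cv(phi, v):  # classical value under the valuation v (a dict)
    return v[phi] if isinstance(phi, str) else phi[0](*(cv(a, v) for a in phi[1:]))

def kv(phi, w, val):
    """Kripke value of phi at world w of the two-world chain w0 <= w1;
    val[w] gives the atoms' values at w (chosen hereditary below)."""
    if isinstance(phi, str):
        return val[w][phi]
    worlds = ['w0', 'w1'] if w == 'w0' else ['w1']  # all v >= w
    return min(phi[0](*(kv(a, v, val) for a in phi[1:])) for v in worlds)

# Classically valid under every valuation of p and q ...
assert all(cv(peirce, {'p': a, 'q': b}) for a, b in product((0, 1), repeat=2))
# ... but refuted at w0 when p is false at w0, true at w1, and q is false:
val = {'w0': {'p': 0, 'q': 0}, 'w1': {'p': 1, 'q': 0}}
print(kv(peirce, 'w0', val))  # 0
```

This is exactly the shape of countermodel used for $\mathscr{K}^*$ in the proof below.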
\begin{proof}[of Proposition \ref{Proposition 2}] We show that if $\mathscr{C}$ includes a non-monotonic connective, then $\mathop{\mathrm{FOCLS}}(\mathscr{C}) \setminus \mathop{\mathrm{FOCDS}}(\mathscr{C}) \neq \emptyset$. We fix distinct propositional symbols $p$, $q$, $r$ and $s$. Let $c$ be a non-monotonic connective in $\mathscr{C}$. We divide into four cases: $(\mathrm{a})$ $\ttfunc{c}(\mathbf{0}) = \ttfunc{c}(\mathbf{1}) = 0$; $(\mathrm{b})$ $\ttfunc{c}(\mathbf{0}) = 0$ and $\ttfunc{c}(\mathbf{1}) = 1$; $(\mathrm{c})$ $\ttfunc{c}(\mathbf{0}) = 1$ and $\ttfunc{c}(\mathbf{1}) = 0$; and $(\mathrm{d})$ $\ttfunc{c}(\mathbf{0}) = \ttfunc{c}(\mathbf{1}) = 1$. We show them in the order $(\mathrm{d})$, $(\mathrm{c})$, $(\mathrm{b})$, $(\mathrm{a})$. \textsc{Case} $(\mathrm{d})$: $\ttfunc{c}(\mathbf{0}) = \ttfunc{c}(\mathbf{1}) = 1$. First, we construct a formula $\tau$ in $\mathop{\mathrm{FOCD}}(\mathscr{C})$. We define $\tau \in \mathop{\mathrm{Fml}}(\mathscr{C})$ by $\tau \equiv c(s, \ldots, s)$. Then, $\tau \in \mathop{\mathrm{FOIL}}(\mathscr{C}) \subseteq \mathop{\mathrm{FOCD}}(\mathscr{C})$ can be easily verified. Now, we construct a formula $\varphi \in \mathop{\mathrm{FOCL}}(\mathscr{C}) \setminus \mathop{\mathrm{FOCD}}(\mathscr{C})$. Note that if such a $\varphi$ exists, then ${} \Rightarrow \varphi \in \mathop{\mathrm{FOCLS}}(\mathscr{C}) \setminus \mathop{\mathrm{FOCDS}}(\mathscr{C})$ holds. Since $c$ is non-monotonic, there exist $\mathbf{a}, \mathbf{b} \in \{ 0, 1 \}^{\mathop{\mathrm{ar}}(c)}$ such that $\mathbf{a} \sqsubseteq \mathbf{b}$, $\ttfunc{c}(\mathbf{a}) = 1$ and $\ttfunc{c}(\mathbf{b}) = 0$. Let $\overline{\mathbf{b}}^{\mathbf{a}}$ be the sequence in $\{ 0, 1 \}^{\mathop{\mathrm{ar}}(c)}$ defined by \[ \overline{\mathbf{b}}^{\mathbf{a}}[i] = \begin{cases} 0 & \text{if $\mathbf{a}[i] = 0$ and $\mathbf{b}[i] = 1$} \\ 1 & \text{if $\mathbf{a}[i] = 1$ or $\mathbf{b}[i] = 0$}.
\end{cases} \] We divide into two subcases: (\textsc{Subcase} 1) $\ttfunc{c}(\overline{\mathbf{b}}^\mathbf{a}) = 1$; and (\textsc{Subcase} 2) $\ttfunc{c}(\overline{\mathbf{b}}^\mathbf{a}) = 0$. \textsc{Subcase} 1: $\ttfunc{c}(\overline{\mathbf{b}}^\mathbf{a}) = 1$. We define formulas $\sigma^{\mathrm{P}}_1, \ldots, \sigma^{\mathrm{P}}_{\mathop{\mathrm{ar}}(c)}, \sigma^{\mathrm{P}}\in \mathop{\mathrm{Fml}}(\mathscr{C})$ as follows: \begin{align*} \sigma^{\mathrm{P}}_i & \equiv \begin{cases} q & \text{if $\mathbf{a}[i] = 0$ and $\mathbf{b}[i] = 0$} \\ p & \text{if $\mathbf{a}[i] = 0$ and $\mathbf{b}[i] = 1$} \\ \tau & \text{if $\mathbf{a}[i] = 1$} \end{cases} \\ \sigma^{\mathrm{P}} & \equiv c(\sigma^{\mathrm{P}}_1, \ldots, \sigma^{\mathrm{P}}_{\mathop{\mathrm{ar}}(c)}) \end{align*} Then, we define formulas $\psi^{\mathrm{P}}_1, \ldots, \psi^{\mathrm{P}}_{\mathop{\mathrm{ar}}(c)}, \psi^{\mathrm{P}} \in \mathop{\mathrm{Fml}}(\mathscr{C})$ as follows: \begin{align*} \psi^{\mathrm{P}}_i & \equiv \begin{cases} p & \text{if $\mathbf{a}[i] = 0$ and $\mathbf{b}[i] = 0$} \\ \sigma^{\mathrm{P}} & \text{if $\mathbf{a}[i] = 0$ and $\mathbf{b}[i] = 1$} \\ \tau & \text{if $\mathbf{a}[i] = 1$} \end{cases} \\ \psi^{\mathrm{P}} & \equiv c(\psi^{\mathrm{P}}_1, \ldots, \psi^{\mathrm{P}}_{\mathop{\mathrm{ar}}(c)}) \end{align*} Furthermore, we define formulas $\varphi^{\mathrm{P}}_1, \ldots, \varphi^{\mathrm{P}}_{\mathop{\mathrm{ar}}(c)}, \varphi^{\mathrm{P}} \in \mathop{\mathrm{Fml}}(\mathscr{C})$ as follows: \begin{align*} \varphi^{\mathrm{P}}_i & \equiv \begin{cases} p & \text{if $\mathbf{a}[i] = 0$ and $\mathbf{b}[i] = 0$} \\ \psi^{\mathrm{P}} & \text{if $\mathbf{a}[i] = 0$ and $\mathbf{b}[i] = 1$} \\ \tau & \text{if $\mathbf{a}[i] = 1$} \end{cases} \\ \varphi^{\mathrm{P}} & \equiv c(\varphi^{\mathrm{P}}_1, \ldots, \varphi^{\mathrm{P}}_{\mathop{\mathrm{ar}}(c)}) \end{align*} Then, we obtain $\varphi^{\mathrm{P}} \in \mathop{\mathrm{FOCL}}(\mathscr{C})$ from the following table.
\begin{center} \begin{tabular}{|C{5mm}|C{5mm}||C{26mm}|C{6mm}|C{26mm}|C{6mm}|C{26mm}|C{6mm}|} \hline $p$ & $q$ & $\langle \sigma^{\mathrm{P}}_1, \ldots, \sigma^{\mathrm{P}}_{\mathop{\mathrm{ar}}(c)} \rangle$ & $\sigma^{\mathrm{P}}$ & $\langle \psi^{\mathrm{P}}_1, \ldots, \psi^{\mathrm{P}}_{\mathop{\mathrm{ar}}(c)} \rangle$ & $\psi^{\mathrm{P}}$ & $\langle \varphi^{\mathrm{P}}_1, \ldots, \varphi^{\mathrm{P}}_{\mathop{\mathrm{ar}}(c)} \rangle$ & $\varphi^{\mathrm{P}}$ \\ \hline $0$ & $0$ & $\mathbf{a}$ & $1$ & $\mathbf{b}$ & $0$ & $\mathbf{a}$ & $1$ \\ \hline $0$ & $1$ & $\overline{\mathbf{b}}^\mathbf{a}$ & $1$ & $\mathbf{b}$ & $0$ & $\mathbf{a}$ & $1$ \\ \hline $1$ & $0$ & $\mathbf{b}$ & $0$ & $\overline{\mathbf{b}}^\mathbf{a}$ & $1$ & $\mathbf{1}$ & $1$ \\ \hline $1$ & $1$ & $\mathbf{1}$ & $1$ & $\mathbf{1}$ & $1$ & $\mathbf{1}$ & $1$ \\ \hline \end{tabular} \end{center} Now, consider the constant domain Kripke model $\mathscr{K}^* = \langle \{w_0, w_1 \}, \preceq, \{ a_1 \}, I \rangle$ in which \begin{itemize} \item $w_i \preceq w_j$ if and only if $i \leq j$; \item $I(w_0, p) = 0$, $I(w_0, q) = 0$, $I(w_1, p) = 1$, and $I(w_1, q) = 0$. (The interpretations for the other pairs of possible worlds and predicate symbols may be arbitrary.) \end{itemize} Then, we obtain $\| \varphi^{\mathrm{P}} \|_{\mathscr{K}^*, w_0}^\varnothing = 0$ from the following table. For example, the entry $\overline{\mathbf{b}}^\mathbf{a}$ in the second row and fourth column means that \[ \langle \| \psi^{\mathrm{P}}_1 \|_{\mathscr{K}^*, w_1}^\varnothing, \ldots, \| \psi^{\mathrm{P}}_{\mathop{\mathrm{ar}}(c)} \|_{\mathscr{K}^*, w_1}^\varnothing \rangle = \overline{\mathbf{b}}^\mathbf{a}.
\] \begin{center} \begin{tabular}{|c||C{26mm}|C{6mm}|C{26mm}|C{6mm}|C{26mm}|C{6mm}|} \hline & $\langle \sigma^{\mathrm{P}}_1, \ldots, \sigma^{\mathrm{P}}_{\mathop{\mathrm{ar}}(c)} \rangle$ & $\sigma^{\mathrm{P}}$ & $\langle \psi^{\mathrm{P}}_1, \ldots, \psi^{\mathrm{P}}_{\mathop{\mathrm{ar}}(c)} \rangle$ & $\psi^{\mathrm{P}}$ & $\langle \varphi^{\mathrm{P}}_1, \ldots, \varphi^{\mathrm{P}}_{\mathop{\mathrm{ar}}(c)} \rangle$ & $\varphi^{\mathrm{P}}$ \\ \hline $\| \cdot \|_{\mathscr{K}^*, w_1}^\varnothing$ & $\mathbf{b}$ & $0$ & $\overline{\mathbf{b}}^\mathbf{a}$ & $1$ & $\mathbf{1}$ & $1$ \\ \hline $\| \cdot \|_{\mathscr{K}^*, w_0}^\varnothing$ & $\mathbf{a}$ & $0$ & $\mathbf{a}$ & $1$ & $\mathbf{b}$ & $0$ \\ \hline \end{tabular} \end{center} Hence, $\varphi^{\mathrm{P}} \in \mathop{\mathrm{FOCL}}(\mathscr{C}) \setminus \mathop{\mathrm{FOCD}}(\mathscr{C})$. \textsc{Subcase} 2: $\ttfunc{c}(\overline{\mathbf{b}}^\mathbf{a}) = 0$. We define formulas $\sigma^{\mathrm{Q}}_1, \ldots, \sigma^{\mathrm{Q}}_{\mathop{\mathrm{ar}}(c)}, \sigma^{\mathrm{Q}}\in \mathop{\mathrm{Fml}}(\mathscr{C})$ as follows: \begin{align*} \sigma^{\mathrm{Q}}_i & \equiv \begin{cases} q & \text{if $\mathbf{a}[i] = 0$ and $\mathbf{b}[i] = 0$} \\ p & \text{if $\mathbf{a}[i] = 0$ and $\mathbf{b}[i] = 1$} \\ \tau & \text{if $\mathbf{a}[i] = 1$} \end{cases} \\ \sigma^{\mathrm{Q}} & \equiv c(\sigma^{\mathrm{Q}}_1, \ldots, \sigma^{\mathrm{Q}}_{\mathop{\mathrm{ar}}(c)}) \end{align*} Then, we define formulas $\psi^{\mathrm{Q}}_1, \ldots, \psi^{\mathrm{Q}}_{\mathop{\mathrm{ar}}(c)}, \psi^{\mathrm{Q}} \in \mathop{\mathrm{Fml}}(\mathscr{C})$ as follows: \begin{align*} \psi^{\mathrm{Q}}_i & \equiv \begin{cases} \sigma^{\mathrm{Q}} & \text{if $\mathbf{a}[i] = 0$ and $\mathbf{b}[i] = 0$} \\ q & \text{if $\mathbf{a}[i] = 0$ and $\mathbf{b}[i] = 1$} \\ \tau & \text{if $\mathbf{a}[i] = 1$} \end{cases} \\ \psi^{\mathrm{Q}} & \equiv c(\psi^{\mathrm{Q}}_1, \ldots, \psi^{\mathrm{Q}}_{\mathop{\mathrm{ar}}(c)}) 
\end{align*} Furthermore, we define formulas $\varphi^{\mathrm{Q}}_1, \ldots, \varphi^{\mathrm{Q}}_{\mathop{\mathrm{ar}}(c)}, \varphi^{\mathrm{Q}} \in \mathop{\mathrm{Fml}}(\mathscr{C})$ as follows: \begin{align*} \varphi^{\mathrm{Q}}_i & \equiv \begin{cases} \psi^{\mathrm{Q}} & \text{if $\mathbf{a}[i] = 0$ and $\mathbf{b}[i] = 0$} \\ p & \text{if $\mathbf{a}[i] = 0$ and $\mathbf{b}[i] = 1$} \\ \tau & \text{if $\mathbf{a}[i] = 1$} \end{cases} \\ \varphi^{\mathrm{Q}} & \equiv c(\varphi^{\mathrm{Q}}_1, \ldots, \varphi^{\mathrm{Q}}_{\mathop{\mathrm{ar}}(c)}) \end{align*} Then, we obtain $\varphi^{\mathrm{Q}} \in \mathop{\mathrm{FOCL}}(\mathscr{C})$ from the following table. \begin{center} \begin{tabular}{|C{5mm}|C{5mm}||C{26mm}|C{6mm}|C{26mm}|C{6mm}|C{26mm}|C{6mm}|} \hline $p$ & $q$ & $\langle \sigma^{\mathrm{Q}}_1, \ldots, \sigma^{\mathrm{Q}}_{\mathop{\mathrm{ar}}(c)} \rangle$ & $\sigma^{\mathrm{Q}}$ & $\langle \psi^{\mathrm{Q}}_1, \ldots, \psi^{\mathrm{Q}}_{\mathop{\mathrm{ar}}(c)} \rangle$ & $\psi^{\mathrm{Q}}$ & $\langle \varphi^{\mathrm{Q}}_1, \ldots, \varphi^{\mathrm{Q}}_{\mathop{\mathrm{ar}}(c)} \rangle$ & $\varphi^{\mathrm{Q}}$ \\ \hline $0$ & $0$ & $\mathbf{a}$ & $1$ & $\overline{\mathbf{b}}^\mathbf{a}$ & $0$ & $\mathbf{a}$ & $1$ \\ \hline $0$ & $1$ & $\overline{\mathbf{b}}^\mathbf{a}$ & $0$ & $\mathbf{b}$ & $0$ & $\mathbf{a}$ & $1$ \\ \hline $1$ & $0$ & $\mathbf{b}$ & $0$ & $\mathbf{a}$ & $1$ & $\mathbf{1}$ & $1$ \\ \hline $1$ & $1$ & $\mathbf{1}$ & $1$ & $\mathbf{1}$ & $1$ & $\mathbf{1}$ & $1$ \\ \hline \end{tabular} \end{center} On the other hand, we obtain $\| \varphi^{\mathrm{Q}} \|_{\mathscr{K}^*, w_0}^\varnothing = 0$ from the following table.
\begin{center} \begin{tabular}{|c||C{26mm}|C{6mm}|C{26mm}|C{6mm}|C{26mm}|C{6mm}|} \hline & $\langle \sigma^{\mathrm{Q}}_1, \ldots, \sigma^{\mathrm{Q}}_{\mathop{\mathrm{ar}}(c)} \rangle$ & $\sigma^{\mathrm{Q}}$ & $\langle \psi^{\mathrm{Q}}_1, \ldots, \psi^{\mathrm{Q}}_{\mathop{\mathrm{ar}}(c)} \rangle$ & $\psi^{\mathrm{Q}}$ & $\langle \varphi^{\mathrm{Q}}_1, \ldots, \varphi^{\mathrm{Q}}_{\mathop{\mathrm{ar}}(c)} \rangle$ & $\varphi^{\mathrm{Q}}$ \\ \hline $\| \cdot \|_{\mathscr{K}^*, w_1}^\varnothing$ & $\mathbf{b}$ & $0$ & $\mathbf{a}$ & $1$ & $\mathbf{1}$ & $1$ \\ \hline $\| \cdot \|_{\mathscr{K}^*, w_0}^\varnothing$ & $\mathbf{a}$ & $0$ & $\mathbf{a}$ & $1$ & $\overline{\mathbf{b}}^\mathbf{a}$ & $0$ \\ \hline \end{tabular} \end{center} Hence, $\varphi^{\mathrm{Q}} \in \mathop{\mathrm{FOCL}}(\mathscr{C}) \setminus \mathop{\mathrm{FOCD}}(\mathscr{C})$. \textsc{Case} $(\mathrm{c})$: $\ttfunc{c}(\mathbf{0}) = 1$ and $\ttfunc{c}(\mathbf{1}) = 0$. First, we define formula $\lnot_c \alpha$ for each formula $\alpha \in \mathop{\mathrm{FOFml}}(\mathscr{C})$ by $\lnot_c \alpha \equiv c(\alpha, \ldots, \alpha)$. Then, $\lnot_c \alpha$ plays the same role as $\lnot \alpha$, that is, for any Kripke model $\mathscr{K} = \langle W, \preceq, D, I \rangle$, any $w \in W$ and any assignment $\rho$ in $D(w)$, $\| \lnot_c \alpha \|_{\mathscr{K}, w}^\rho = 1$ if and only if $\| \alpha \|_{\mathscr{K}, v}^\rho = 0$ for all $v \succeq w$. Fix a predicate symbol $p$. Then, it is easy to verify that $\lnot_c \lnot_c p \Rightarrow p \in \mathop{\mathrm{FOCLS}}(\mathscr{C}) \setminus \mathop{\mathrm{FOCDS}}(\mathscr{C})$. \textsc{Case} $(\mathrm{b})$: $\ttfunc{c}(\mathbf{0}) = 0$ and $\ttfunc{c}(\mathbf{1}) = 1$. Since $c$ is non-monotonic, there exist $\mathbf{a}, \mathbf{b} \in \{ 0, 1 \}^{\mathop{\mathrm{ar}}(c)}$ such that $\mathbf{a} \sqsubseteq \mathbf{b}$, $\ttfunc{c}(\mathbf{a}) = 1$ and $\ttfunc{c}(\mathbf{b}) = 0$. 
We divide into two subcases: (\textsc{Subcase} 1) $\ttfunc{c}(\overline{\mathbf{a}}) = 1$; and (\textsc{Subcase} 2) $\ttfunc{c}(\overline{\mathbf{a}}) = 0$. \textsc{Subcase} 1: $\ttfunc{c}(\overline{\mathbf{a}}) = 1$. We define formulas $\chi_1, \ldots, \chi_{\mathop{\mathrm{ar}}(c)}, \chi \in \mathop{\mathrm{Fml}}(\mathscr{C})$ as follows: \begin{align*} \chi_i & \equiv \begin{cases} q & \text{if $\mathbf{a}[i] = 0$} \\ p & \text{if $\mathbf{a}[i] = 1$} \end{cases} \\ \chi & \equiv c(\chi_1, \ldots, \chi_{\mathop{\mathrm{ar}}(c)}) \end{align*} Then, we can easily verify that, for any model $\mathscr{M} = \langle D, I \rangle$, if $I(p) = 1$ or $I(q) = 1$, then $\llbracket \chi \rrbracket_\mathscr{M}^\varnothing = 1$. Now, we define formulas $\psi_1, \ldots, \psi_{\mathop{\mathrm{ar}}(c)}, \psi \in \mathop{\mathrm{Fml}}(\mathscr{C})$ as follows: \begin{align*} \psi_i & \equiv \begin{cases} q & \text{if $\mathbf{a}[i] = 0$ and $\mathbf{b}[i] = 0$} \\ p & \text{if $\mathbf{a}[i] = 0$ and $\mathbf{b}[i] = 1$} \\ r & \text{if $\mathbf{a}[i] = 1$ and $\mathbf{b}[i] = 1$} \end{cases} \\ \psi & \equiv c(\psi_1, \ldots, \psi_{\mathop{\mathrm{ar}}(c)}) \end{align*} Then, we can easily verify that, for any model $\mathscr{M} = \langle D, I \rangle$, $I(p) = I(q) = 0$ implies $\llbracket \psi \rrbracket_\mathscr{M}^\varnothing = I(r)$. Next, we define formulas $\varphi_1, \ldots, \varphi_{\mathop{\mathrm{ar}}(c)}, \varphi \in \mathop{\mathrm{Fml}}(\mathscr{C})$ as follows: \begin{align*} \varphi_i & \equiv \begin{cases} q & \text{if $\mathbf{a}[i] = 0$ and $\mathbf{b}[i] = 0$} \\ \psi & \text{if $\mathbf{a}[i] = 0$ and $\mathbf{b}[i] = 1$} \\ r & \text{if $\mathbf{a}[i] = 1$ and $\mathbf{b}[i] = 1$} \end{cases} \\ \varphi & \equiv c(\varphi_1, \ldots, \varphi_{\mathop{\mathrm{ar}}(c)}) \end{align*} Then, we can see that, for any model $\mathscr{M} = \langle D, I \rangle$, if $I(p) = I(q) = 0$ then $\llbracket \varphi \rrbracket_\mathscr{M}^\varnothing = 0$. 
From the above observation, we obtain $\varphi \Rightarrow \chi \in \mathop{\mathrm{FOCLS}}(\mathscr{C})$. Now, let $\mathscr{K}^+ = \langle \{ w_0, w_1 \}, \preceq, \{ a_1 \}, I \rangle$ be the constant domain Kripke model defined as follows: \begin{itemize} \item $w_i \preceq w_j$ if and only if $i \leq j$; \item $I(w_0, p) = 0$, $I(w_0, q) = 0$, $I(w_0, r) = 1$, $I(w_1, p) = 1$, $I(w_1, q) = 0$, $I(w_1, r) = 1$. \end{itemize} Then, from the following table, we obtain $\| \varphi \|_{\mathscr{K}^+, w_0}^\varnothing = 1$ and $\| \chi \|_{\mathscr{K}^+, w_0}^\varnothing = 0$. Hence, $\varphi \Rightarrow \chi \notin \mathop{\mathrm{FOCDS}}(\mathscr{C})$. \begin{center} \begin{tabular}{|c||C{26mm}|C{6mm}|C{26mm}|C{6mm}|C{26mm}|C{6mm}|} \hline & $\langle \chi_1, \ldots, \chi_{\mathop{\mathrm{ar}}(c)} \rangle$ & $\chi$ & $\langle \psi_1, \ldots, \psi_{\mathop{\mathrm{ar}}(c)} \rangle$ & $\psi$ & $\langle \varphi_1, \ldots, \varphi_{\mathop{\mathrm{ar}}(c)} \rangle$ & $\varphi$ \\ \hline $\| \cdot \|_{\mathscr{K}^+, w_1}^\varnothing$ & $\mathbf{a}$ & $1$ & $\mathbf{b}$ & $0$ & $\mathbf{a}$ & $1$ \\ \hline $\| \cdot \|_{\mathscr{K}^+, w_0}^\varnothing$ & $\mathbf{0}$ & $0$ & $\mathbf{a}$ & $0$ & $\mathbf{a}$ & $1$ \\ \hline \end{tabular} \end{center} \textsc{Subcase} 2: $\ttfunc{c}(\overline{\mathbf{a}}) = 0$. We define $\psi_1, \ldots, \psi_{\mathop{\mathrm{ar}}(c)}, \psi \in \mathop{\mathrm{Fml}}(\mathscr{C})$ as follows: \begin{align*} \psi_i & \equiv \begin{cases} q & \text{if $\mathbf{a}[i] = 0$} \\ r & \text{if $\mathbf{a}[i] = 1$} \end{cases} \\ \psi & \equiv c(\psi_1, \ldots, \psi_{\mathop{\mathrm{ar}}(c)}) \end{align*} Then, we can easily verify that, for any model $\mathscr{M} = \langle D, I \rangle$, $I(r) = 0$ implies $\llbracket \psi \rrbracket_\mathscr{M}^\varnothing = 0$. Now, let $\varphi^{\mathrm{PP}}$ be the formula obtained from $\varphi^{\mathrm{P}}$ in subcase 1 of case $(\mathrm{d})$ by replacing every occurrence of $\tau$ with $r$. 
Let $\varphi^{\mathrm{QQ}}$ be the formula obtained from $\varphi^{\mathrm{Q}}$ in subcase 2 of case $(\mathrm{d})$ by replacing every occurrence of $\tau$ with $r$. Then, similarly to case $(\mathrm{d})$, we obtain either $\psi \Rightarrow \varphi^{\mathrm{PP}} \in \mathop{\mathrm{FOCLS}}(\mathscr{C}) \setminus \mathop{\mathrm{FOCDS}}(\mathscr{C})$ or $\psi \Rightarrow \varphi^{\mathrm{QQ}} \in \mathop{\mathrm{FOCLS}}(\mathscr{C}) \setminus \mathop{\mathrm{FOCDS}}(\mathscr{C})$. Hence, $\mathop{\mathrm{FOCLS}}(\mathscr{C}) \setminus \mathop{\mathrm{FOCDS}}(\mathscr{C}) \neq \emptyset$. \textsc{Case} $(\mathrm{a})$: $\ttfunc{c}(\mathbf{0}) = \ttfunc{c}(\mathbf{1}) = 0$. Since $c$ is non-monotonic, there exists some $\mathbf{a} \in \{ 0, 1 \}^{\mathop{\mathrm{ar}}(c)}$ such that $\ttfunc{c}(\mathbf{a}) = 1$. We define formulas $\psi_1, \ldots, \psi_{\mathop{\mathrm{ar}}(c)}, \psi \in \mathop{\mathrm{Fml}}(\mathscr{C})$ as follows: \begin{align*} \psi_i & \equiv \begin{cases} p & \text{if $\mathbf{a}[i] = 0$} \\ r & \text{if $\mathbf{a}[i] = 1$} \end{cases} \\ \psi & \equiv c(\psi_1, \ldots, \psi_{\mathop{\mathrm{ar}}(c)}) \end{align*} Then, we can easily verify that, for any model $\mathscr{M} = \langle D, I \rangle$, if $I(p) = 0$ then $\llbracket \psi \rrbracket_\mathscr{M}^\varnothing = I(r)$. Now, we define formulas $\varphi_1, \ldots, \varphi_{\mathop{\mathrm{ar}}(c)}, \varphi \in \mathop{\mathrm{Fml}}(\mathscr{C})$ as follows: \begin{align*} \varphi_i & \equiv \begin{cases} \psi & \text{if $\mathbf{a}[i] = 0$} \\ r & \text{if $\mathbf{a}[i] = 1$} \end{cases} \\ \varphi & \equiv c(\varphi_1, \ldots, \varphi_{\mathop{\mathrm{ar}}(c)}) \end{align*} Then, we can easily verify that, for any model $\mathscr{M} = \langle D, I \rangle$, if $I(p) = 0$ then $\llbracket \varphi \rrbracket_\mathscr{M}^\varnothing = 0$. Hence, we obtain $\varphi \Rightarrow p \in \mathop{\mathrm{FOCLS}}(\mathscr{C})$.
On the other hand, for the constant domain Kripke model $\mathscr{K}^+$ given in case $(\mathrm{b})$, we have $\| p \|_{\mathscr{K}^+, w_0}^\varnothing = 0$, and we obtain $\| \varphi \|_{\mathscr{K}^+, w_0}^\varnothing = 1$ from the following table. Hence, we have $\varphi \Rightarrow p \notin \mathop{\mathrm{FOCDS}}(\mathscr{C})$. \begin{center} \begin{tabular}{|c||C{26mm}|C{6mm}|C{26mm}|C{6mm}|} \hline & $\langle \psi_1, \ldots, \psi_{\mathop{\mathrm{ar}}(c)} \rangle$ & $\psi$ & $\langle \varphi_1, \ldots, \varphi_{\mathop{\mathrm{ar}}(c)} \rangle$ & $\varphi$ \\ \hline $\| \cdot \|_{\mathscr{K}^+, w_1}^\varnothing$ & $\mathbf{1}$ & $0$ & $\mathbf{a}$ & $1$ \\ \hline $\| \cdot \|_{\mathscr{K}^+, w_0}^\varnothing$ & $\mathbf{a}$ & $0$ & $\mathbf{a}$ & $1$ \\ \hline \end{tabular} \end{center} \qed \end{proof} \section{Conclusion} \label{Section 4} We have seen that generalized Kripke semantics can be extended to first-order logic. Furthermore, if we only admit as models Kripke models with constant domains, then we obtain constant domain Kripke semantics that admits general propositional connectives. Then, extending the theorem that gives the necessary and sufficient condition for $\mathop{\mathrm{ILS}}(\mathscr{C})$ and $\mathop{\mathrm{CLS}}(\mathscr{C})$ to coincide, we have obtained the following theorem: \begin{theorem*} $\mathop{\mathrm{FOCDS}}(\mathscr{C}) = \mathop{\mathrm{FOCLS}}(\mathscr{C})$ if and only if all connectives in $\mathscr{C}$ are monotonic. \end{theorem*} \end{document}
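The two-world countermodel pattern used in Case (c) above — where double negation of $p$ is forced at $w_0$ while $p$ itself is not — can be checked mechanically. The sketch below is an illustration added here (not part of the paper); it hard-codes the model $w_0 \preceq w_1$ with $p$ false at $w_0$ and true at $w_1$, and the standard Kripke clauses for negation and implication, matching the semantics given for $\lnot_c$:

```python
# Two-world Kripke model (w0 <= w1), p false at w0 and true at w1,
# illustrating why ~~p => p is classically valid but fails at w0
# under Kripke semantics.  Formulas are represented extensionally
# as world -> truth-value maps.
succ = {0: (0, 1), 1: (1,)}   # worlds accessible from each world (v >= w)
p = {0: 0, 1: 1}              # monotone valuation of the predicate p

def neg(phi):
    # ||~phi||_w = 1 iff phi is refuted at every v >= w
    return {w: int(all(phi[v] == 0 for v in vs)) for w, vs in succ.items()}

def implies(phi, chi):
    # ||phi => chi||_w = 1 iff phi forces chi at every v >= w
    return {w: int(all(phi[v] <= chi[v] for v in vs)) for w, vs in succ.items()}

dne = implies(neg(neg(p)), p)
# ~~p is forced at w0 (p becomes true at w1), but p fails at w0,
# so double-negation elimination fails at w0: dne == {0: 0, 1: 1}.
```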
Are the integers complete? For the rationals, I would appreciate any info here as well. Only those equalities of expressions are true in ℤ for all values of variables, which are true in any unital commutative ring. This is readily demonstrated by the construction of a bijection, that is, a function that is injective and surjective from ℤ to ℕ. [19] These constructions differ in several ways: the number of basic operations used for the construction, the number (usually, between 0 and 2) and the types of arguments accepted by these operations; the presence or absence of natural numbers as arguments of some of these operations, and the fact that these operations are free constructors or not, i.e., that the same integer can be represented using only one or many algebraic terms. [18] To confirm our expectation that 1 − 2 and 4 − 5 denote the same number, we define an equivalence relation ~ on these pairs with the following rule: $(a,b) \sim (c,d)$ precisely when $a + d = b + c$. Addition and multiplication of integers can be defined in terms of the equivalent operations on the natural numbers;[18] by using [(a,b)] to denote the equivalence class having (a,b) as a member, one has: $[(a,b)] + [(c,d)] := [(a+c,\, b+d)]$ and $[(a,b)] \cdot [(c,d)] := [(ac+bd,\, ad+bc)]$. The negation (or additive inverse) of an integer is obtained by reversing the order of the pair: $-[(a,b)] := [(b,a)]$. Hence subtraction can be defined as the addition of the additive inverse: $[(a,b)] - [(c,d)] := [(a,b)] + (-[(c,d)])$. The standard ordering on the integers is given by: $[(a,b)] < [(c,d)]$ if and only if $a + d < b + c$. It is easily verified that these definitions are independent of the choice of representatives of the equivalence classes. The rationals, on the other hand, do not have the property because it is possible to find a bounded subset of $\mathbb{Q}$ which has an irrational supremum. The LUBP says that every BOUNDED set has a least upper bound.
It is called Euclidean division, and possesses the following important property: given two integers a and b with b ≠ 0, there exist unique integers q and r such that a = q × b + r and 0 ≤ r < | b |, where | b | denotes the absolute value of b. This notation recovers the familiar representation of the integers as {…, −2, −1, 0, 1, 2, …}. It is also a cyclic group, since every non-zero integer can be written as a finite sum 1 + 1 + … + 1 or (−1) + (−1) + … + (−1). An integer is often a primitive data type in computer languages. You are misunderstanding the definition. Everybody is saying "set," which might be confusing. The following table gives examples and explains what this means in plain English. The set of integers is often denoted by a boldface letter 'Z' ("Z") or blackboard bold $\mathbb{Z}$. Prove that every nonempty set of real numbers that is bounded from below has an infimum. An integer (from the Latin integer meaning "whole")[a] is colloquially defined as a number that can be written without a fractional component.
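The Euclidean division property stated above is easy to exercise in code. Python's built-in `divmod` rounds toward negative infinity, so the remainder takes the sign of the divisor; the small sketch below (the function name is illustrative) normalizes it to the convention 0 ≤ r < |b|:

```python
def euclidean_division(a, b):
    """Return (q, r) with a == q * b + r and 0 <= r < abs(b)."""
    if b == 0:
        raise ValueError("b must be nonzero")
    q, r = divmod(a, b)   # floor division: r has the sign of b
    if r < 0:             # shift into the Euclidean range [0, |b|)
        q, r = q + 1, r - b
    return q, r

# Examples: the remainder is non-negative regardless of the signs.
assert euclidean_division(7, -2) == (-3, 1)    # 7 == (-3) * (-2) + 1
assert euclidean_division(-7, 2) == (-4, 1)    # -7 == (-4) * 2 + 1
```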
Fixed length integer approximation data types (or subsets) are denoted int or Integer in several programming languages (such as Algol68, C, Java, Delphi, etc.). [17] The integers can thus be formally constructed as the equivalence classes of ordered pairs of natural numbers (a,b).[18] Some authors use ℤ* for non-zero integers, while others use it for non-negative integers, or for {–1, 1}. I understand the inf of the naturals is 1 and has no sup. In fact, ℤ under addition is the only infinite cyclic group—in the sense that any infinite cyclic group is isomorphic to ℤ. The symbol ℤ can be annotated to denote various sets, with varying usage amongst different authors: ℤ+,[4] ℤ+ or ℤ> for the positive integers, ℤ0+ or ℤ≥ for non-negative integers, and ℤ≠ for non-zero integers. In fact, (rational) integers are algebraic integers that are also rational numbers. Residue classes of integers mod n. The cardinality of the set of integers is equal to ℵ0 (aleph-null). The set of integers consists of zero (0), the positive natural numbers (1, 2, 3, ...), also called whole numbers or counting numbers,[2][3] and their additive inverses (the negative integers, i.e., −1, −2, −3, ...). However, not every integer has a multiplicative inverse (as is the case of the number 2), which means that ℤ under multiplication is not a group. Every equivalence class has a unique member that is of the form (n,0) or (0,n) (or both at once). Bounded above implies there exists a $\sup B$? The first four properties listed above for multiplication say that ℤ under multiplication is a commutative monoid. (It is, however, certainly possible for a computer to determine whether an integer value is truly positive.) Since any number with a terminating decimal representation is rational, $X\subset\mathbb{Q}$.
Integers are represented as algebraic terms built using a few basic operations (e.g., zero, succ, pred) and, possibly, using natural numbers, which are assumed to be already constructed (using, say, the Peano approach). Integers are positive and negative whole numbers. An ordered set $A$ has the LUBP if every nonempty subset of $A$ that is bounded above has a least upper bound. Good point @ThomasAndrews. As an example you can take the set obtained by writing the first $n$ decimal places of $\pi$ for each $n\in\mathbb{N}$, $$X= \{3.1, 3.14, 3.141,3.1415,\ldots\} $$ The ordering of ℤ is given by: … < −2 < −1 < 0 < 1 < 2 < …. Like the natural numbers, ℤ is countably infinite. The ordering of integers is compatible with the algebraic operations in the following way: if a < b and c < d, then a + c < b + d; and if a < b and 0 < c, then ac < bc. Thus it follows that ℤ together with the above ordering is an ordered ring. If ℕ₀ ≡ {0, 1, 2, ...} then consider the function: {… (−4,8) (−3,6) (−2,4) (−1,2) (0,0) (1,1) (2,3) (3,5) ...}. Since the set of $x$ such that $-5<x<5$ is $\{-4,-3,...,3,4\}$. Given definition: An ordered set $A$ is complete if it has the "least upper bound property" (completeness). We are given that $\mathbb{N}, \mathbb{Z}$ are complete, and $\mathbb{Q}$ is not complete. The intuition is that (a,b) stands for the result of subtracting b from a. [12] The integer q is called the quotient and r is called the remainder of the division of a by b. It is the prototype of all objects of such algebraic structure. For example, 21, 4, 0, and −2048 are integers, while 9.75, 5+1/2, and √2 are not.
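The truncation set $X$ above can be generated exactly with rationals: every element is rational, the sequence increases, and it is bounded above in $\mathbb{Q}$, yet its supremum is the irrational $\pi$ — which is why $X$ witnesses the failure of the least upper bound property in $\mathbb{Q}$. A small illustrative sketch (the function name is an assumption):

```python
from fractions import Fraction
import math

def pi_truncation(n):
    # First n decimal places of pi as an exact rational (floor truncation).
    return Fraction(int(math.pi * 10 ** n), 10 ** n)

# X = {3.1, 3.14, 3.141, ...}: rational, strictly increasing, and every
# element stays strictly below the (irrational) supremum pi.
X = [pi_truncation(n) for n in range(1, 7)]
```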
Range of a function In mathematics, the range of a function may refer to either of two closely related concepts: • The codomain of the function • The image of the function Given two sets X and Y, a binary relation f between X and Y is a (total) function (from X to Y) if for every x in X there is exactly one y in Y such that f relates x to y. The sets X and Y are called domain and codomain of f, respectively. The image of f is then the subset of Y consisting of only those elements y of Y such that there is at least one x in X with f(x) = y. Terminology As the term "range" can have different meanings, it is considered a good practice to define it the first time it is used in a textbook or article. Older books, when they use the word "range", tend to use it to mean what is now called the codomain.[1] More modern books, if they use the word "range" at all, generally use it to mean what is now called the image.[2] To avoid any confusion, a number of modern books don't use the word "range" at all.[3] Elaboration and example Given a function $f\colon X\to Y$ with domain $X$, the range of $f$, sometimes denoted $\operatorname {ran} (f)$ or $\operatorname {Range} (f)$,[4] may refer to the codomain or target set $Y$ (i.e., the set into which all of the output of $f$ is constrained to fall), or to $f(X)$, the image of the domain of $f$ under $f$ (i.e., the subset of $Y$ consisting of all actual outputs of $f$). The image of a function is always a subset of the codomain of the function.[5] As an example of the two different usages, consider the function $f(x)=x^{2}$ as it is used in real analysis (that is, as a function that inputs a real number and outputs its square). In this case, its codomain is the set of real numbers $\mathbb {R} $, but its image is the set of non-negative real numbers $\mathbb {R} ^{+}$, since $x^{2}$ is never negative if $x$ is real. 
For this function, if we use "range" to mean codomain, it refers to $\mathbb {R} $; if we use "range" to mean image, it refers to $\mathbb {R} ^{+}$. In many cases, the image and the codomain can coincide. For example, consider the function $f(x)=2x$, which inputs a real number and outputs its double. For this function, the codomain and the image are the same (both being the set of real numbers), so the word range is unambiguous. See also • Bijection, injection and surjection • Essential range Notes and references 1. Hungerford 1974, p. 3; Childs 2009, p. 140. 2. Dummit & Foote 2004, p. 2. 3. Rudin 1991, p. 99. 4. Weisstein, Eric W. "Range". mathworld.wolfram.com. Retrieved 2020-08-28. 5. Nykamp, Duane. "Range definition". Math Insight. Retrieved August 28, 2020. Bibliography • Childs, Lindsay N. (2009). A Concrete Introduction to Higher Algebra. Undergraduate Texts in Mathematics (3rd ed.). Springer. doi:10.1007/978-0-387-74725-5. ISBN 978-0-387-74527-5. OCLC 173498962. • Dummit, David S.; Foote, Richard M. (2004). Abstract Algebra (3rd ed.). Wiley. ISBN 978-0-471-43334-7. OCLC 52559229. • Hungerford, Thomas W. (1974). Algebra. Graduate Texts in Mathematics. Vol. 73. Springer. doi:10.1007/978-1-4612-6101-8. ISBN 0-387-90518-9. OCLC 703268. • Rudin, Walter (1991). Functional Analysis (2nd ed.). McGraw Hill. ISBN 0-07-054236-8.
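The codomain/image distinction discussed above can be made concrete in code: a type annotation plays the role of the declared codomain, while the image is the set of outputs actually produced. A small illustrative sketch for $f(x)=x^{2}$:

```python
def f(x: float) -> float:
    # Declared codomain (by the annotation): all real numbers.
    return x * x

# Image over a sample of inputs: only non-negative values ever occur,
# a proper subset of the declared codomain.
image = {f(x) for x in (-2.0, -1.0, 0.0, 1.0, 2.0)}
```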
Reducing the impact of location errors for target tracking in wireless sensor networks Éfren L. Souza, Eduardo F. Nakamura, Horácio A. B. F. de Oliveira & Carlos M. S. Figueiredo Journal of the Brazilian Computer Society volume 19, pages 89–104 (2013) In wireless sensor networks (WSNs), target tracking algorithms usually depend on geographical information provided by localization algorithms. However, errors introduced by such algorithms affect the performance of tasks that rely on that information. A major source of errors in localization algorithms is the distance estimation procedure, which is often based on received signal strength indicator measurements. In this work, we use a Kalman filter to improve the distance estimation within localization algorithms to reduce distance estimation errors, ultimately improving the target tracking accuracy. As a proof-of-concept, we chose the recursive position estimation and directed position estimation as the localization algorithms, while Kalman and particle filters are used for tracking a moving target. We provide a deep performance assessment of these combined algorithms (localization and tracking) for WSNs. Our results show that by filtering multiple distance estimates in the localization algorithms we can improve the tracking accuracy, but the associated communication cost must not be neglected. A wireless sensor network (WSN) [1] is a special type of ad-hoc network composed of resource-constrained devices, called sensor nodes. These sensors are able to perceive the environment, collect, process and disseminate environmental data. Tracking the location of a moving entity (event) represents an important class of applications for WSNs. For instance, animal tracking for long-term assessment of species to improve our knowledge about the biodiversity and support preserving and conserving the wildlife [7, 21, 28].
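The fusion idea summarized above — filtering repeated noisy distance estimates before they enter the localization algorithm — reduces, for a single static node-to-reference distance, to a one-dimensional Kalman filter. The sketch below is illustrative only; the variances `q` and `r` are assumed values, not taken from the paper:

```python
def kalman_fuse(measurements, q=0.01, r=4.0):
    """Fuse noisy RSSI-based estimates of one (static) distance.

    q: process variance (the distance is assumed nearly constant),
    r: measurement variance of a single RSSI-derived estimate.
    """
    x, p = measurements[0], r      # initialize from the first estimate
    for z in measurements[1:]:
        p += q                     # predict step
        k = p / (p + r)            # Kalman gain
        x += k * (z - x)           # correct with the new measurement
        p *= 1.0 - k
    return x

d = kalman_fuse([10.8, 9.5, 10.2, 9.9, 10.1])  # fused distance estimate
```

The fused value is a convex combination of the raw estimates, so it always lies within their range while damping individual outliers.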
Target tracking is particularly dependent on location information, and current localization algorithms [2, 24] cannot perfectly estimate every node location [25, 30]. Various approaches have been proposed for target tracking in WSNs, considering diverse metrics like accuracy, scalability, and density [6, 18, 31, 36, 38]. However, there is little research assessing the impact that localization algorithms have on target tracking performance. Current approaches either assume that every sensor node knows its position perfectly [20], or simulate localization errors by adding a random noise variable to the correct node position [12, 19]. In this work, we assess the performance of target tracking algorithms when position information is based on actual localization algorithms. Then, we demonstrate how an information fusion technique [20] can be used to mitigate errors of localization algorithms, improving the target tracking accuracy. To do that, multiple distance estimates are fused by a Kalman filter during the localization process. Such an evaluation is a step towards the understanding of the relationship between localization and target tracking algorithms, and the design of integrated solutions that exploit features and requirements shared by these tasks. As a proof-of-concept, we evaluate two localization algorithms on two tracking algorithms. The first localization algorithm is the recursive position estimation (RPE) algorithm [2]—a pioneer iterative solution—while the second is the directed position estimation (DPE) algorithm [24]—a solution that evolved from the original RPE. The tracking algorithms we evaluate are the Kalman filter (KF) [15] and the particle filter (PF) [3]. These filters can be considered canonical solutions for the target tracking problem. The remainder of the work is organized as follows. In Sect. 2, we present the related work and background knowledge required for the localization and target tracking problems. In Sect.
3, we present a simple information-fusion approach for reducing localization/tracking errors. Section 4 presents our experimental methodology and quantitative evaluation. Finally, in Sect. 5, we present our conclusions and future work. Background and related work In this section, we describe the state-of-the-art regarding localization and tracking algorithms, with emphasis on the algorithms evaluated in this work. A localization system in sensor networks basically consists of determining the physical location of the sensor nodes [25]. These systems are usually divided into three phases: distance estimation, position computation, and the localization algorithm [5]. In current localization solutions, a limited number of nodes, called beacon or anchor nodes, are aware of their positions. Then, distributed algorithms share beacon information, so that the remainder of the nodes can estimate their positions. The ad hoc positioning system (APS) [22] works as an extension of both distance vector routing and GPS positioning in order to provide a localization system in which a limited fraction of nodes have self-location capability (e.g., GPS-equipped nodes). An approach that uses a mobile beacon to provide the node location in sensor networks is proposed by Sichitiu and Ramadurai [27]. In this algorithm, one or more beacon nodes move through the sensor field broadcasting their positions to all nodes within the beacon range. When a node receives three or more positions, it computes its own position. Tatham and Kunz [30] show that the position of the beacon nodes can impact the localization error; furthermore, they propose a set of guidelines to improve the positions of the nodes using the smallest number of beacon nodes possible. The recursive position estimation (RPE) [2] iteratively computes the node location information without the need for strategic beacon placement.
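The position-computation phase of RPE-style algorithms is multilateration over noisy range estimates. A common linearization subtracts one circle equation from the others and solves the resulting linear system; the sketch below (illustrative code, not the authors' implementation) handles the minimum case of three references:

```python
def multilaterate(refs, dists):
    """Estimate (x, y) from three reference positions and measured ranges.

    Subtracting the last circle equation from the first two yields a 2x2
    linear system, solved here by Cramer's rule (det == 0 means the
    references are collinear and the position is not identifiable).
    """
    (x3, y3), d3 = refs[2], dists[2]
    rows = []
    for (xi, yi), di in zip(refs[:2], dists[:2]):
        a = 2.0 * (x3 - xi)
        b = 2.0 * (y3 - yi)
        c = di ** 2 - d3 ** 2 - xi ** 2 + x3 ** 2 - yi ** 2 + y3 ** 2
        rows.append((a, b, c))
    (a1, b1, c1), (a2, b2, c2) = rows
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Node at (1, 1) with exact ranges to three references:
est = multilaterate([(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)],
                    [2 ** 0.5, 10 ** 0.5, 5 ** 0.5])  # ~ (1.0, 1.0)
```

With more than three references the same linearization gives an overdetermined system, typically solved by least squares.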
The directed position estimation (DPE) [24] is a similar algorithm that uses the direction of the recursion to improve the localization accuracy. Both the RPE and DPE propagate position errors throughout the network. However, in the DPE this error is reduced by selecting the best reference neighbors. These two algorithms are evaluated in this work, so they are treated in more detail in the next subsections. Recursive position estimation The RPE [2] is a positioning system that requires at least 5 % of the nodes to be beacon nodes, randomly distributed in the sensor field. However, depending on the network density and on the beacon arrangement, we may need a larger number of beacons to start the recursion. In this algorithm, every free node needs a minimum of three references to estimate its position. Estimated positions are broadcast to help other nodes estimate their positions recursively. The number of estimated positions increases iteratively as new estimated nodes assist others in estimating their positions. The RPE algorithm can be divided into four phases (see Fig. 1). In the first phase, beacon nodes broadcast their positions so they can be used as reference nodes. In the second phase, a node estimates its distance to the reference nodes by using, for example, the received signal strength indicator (RSSI) [5]. In the third phase, the node computes its position by using multilateration [5], and becomes a settled node. In the final phase, the node becomes a reference, and broadcasts its estimated position to assist its neighbors. Example and phases of the recursive position estimation (RPE) By using settled nodes as reference nodes, location errors are propagated. The reason is that the distance estimation process introduces errors in the estimated positions. As a consequence, the nodes most distant from the beacons are likely to have larger errors than the closer ones. In Fig.
1, the location error for node 5 is probably greater than the location error for node 7. The algorithm attempts to mitigate propagated errors by ignoring the worst references. The quality of the references is given by the residual value defined as $$\begin{aligned} residual(x, y) = \sum _{i=1}^R \left( \sqrt{ (x_i - x)^2 + (y_i - y)^2 } - d_i \right)^2 \end{aligned}$$ where \(R\) is the number of references, \((x, y)\) is the estimated position, \((x_i, y_i)\) is the \(i\)th reference position and \(d_i\) is its measured range. The RPE is an algorithm that uses multiple hops to determine the node positions. Hence, the network topology does not have to follow a special organization, making it suitable for outdoor scenarios.

Directed position estimation

The DPE [24] algorithm is similar to the RPE algorithm. The main idea of the DPE is to start the recursion at a single location and make it follow a known direction. Then, a node can estimate its position by using only two reference neighbors and the recursion direction. This controlled recursion leads to smaller errors compared to the RPE. To ensure that the recursion starts at a single point, the algorithm uses a fixed beacon structure. The recursion direction and the beacon structure are depicted in Fig. 2a. This structure generally has four beacons that know their distance from the recursion origin and the angle between each pair of beacons. Then, to start the recursion, these beacons inform their positions to their neighbors. When a node receives the position from two reference neighbors (see Fig. 2b), a pair of possible points results from the system: one is the correct position and the other is incorrect. Because the direction of the recursion is known, the node can choose between the two possible solutions: the most distant point from the recursion origin is the correct choice. The algorithm is divided into four phases. In the first phase, beacon nodes start the recursion from a single location.
In the second phase, a node chooses two reference points: the pair of nodes with the largest distance between them and closest to the recursion origin. In the third phase, the node estimates its position. This position is estimated by intersecting the two circles and choosing the most distant point from the recursion origin. In the last phase, the node becomes a reference by sending its information to its neighbors. The recursion direction can occasionally become wrong. For a correct estimation it is necessary to avoid two possible situations: (a) when the unknown node is closer to the recursion origin than one of the two reference nodes; and (b) when both reference nodes are aligned with the recursion origin. These two scenarios can be detected by comparing the distances from the possible solutions to the recursion origin with the distances from the reference nodes to the recursion origin. The DPE also propagates localization errors due to distance estimation errors. However, the propagated errors are considerably smaller. Oliveira et al. [24] compare the performance of the DPE with the RPE in several aspects. Their results show that the DPE outperforms the RPE in many cases: the DPE works with sparse networks, needs fewer beacons, and has smaller errors.

Target tracking

Target tracking algorithms aim at estimating the current and future (next) location of a target. These algorithms are exposed to different sources of noise, introduced by the measurement process and also by errors in the locations of the nodes that are used to estimate the target coordinates. Therefore, information fusion [20] is commonly used for filtering such noise sources. Two popular algorithms for this problem are the Kalman and Particle filters. Several tracking solutions are based on Kalman filters (KF). The reason is that Kalman filters have been used in algorithms for source localization and tracking, especially in robotics [20]. Li et al.
[16] propose a source localization algorithm for a system equipped with asynchronous sensors, and evaluate the performance of the extended Kalman filter (EKF) [35] and the unscented Kalman filter (UKF) [14] for source tracking in non-linear systems. Olfati-Saber [23] proposes distributed Kalman filtering (DKF), in which a centralized KF is decomposed into micro-KFs, so that the distributed approach has a performance equivalent to the centralized KF. Particle filters are popular for modeling non-linear systems subject to non-Gaussian noise. Vercauteren et al. [32] propose a collaborative Particle Filter for jointly tracking several targets and classifying them according to their motion pattern. Arulampalam et al. [3] assess the use of Particle filters and the EKF for tracking applications. Considering sensor networks, Rosencrantz et al. [26] developed a Particle Filter for distributed information fusion applied to decentralized tracking. Jiang and Ravindran [13] propose a completely distributed Particle Filter for target tracking in sensor networks, in which the communication cost to maintain the particles on different nodes and propagate them along the target trajectory is reduced. Souza et al. [29] assess the performance of target tracking algorithms when position information is provided by localization algorithms. The authors combine the KF and PF with the RPE and DPE. In this work, we combine the same algorithms, but we use data fusion of multiple distance estimates during the localization process to improve the target-tracking accuracy. There are also other distributed approaches for target tracking that are based on cluster [6, 33, 38] and tree [17, 31, 37] organizations for in-network data processing.

Kalman filter

The Kalman filter is a popular fusion method used to fuse low-level redundant data [20]. If a linear model can describe the system and the error can be modeled as Gaussian noise, then the Kalman Filter recursively retrieves statistically optimal estimates.
This method, as depicted in Fig. 3a, applies a linear operator to the current state at each discrete-time increment to generate the new state. The filter considers measurement noise and, optionally, information about the controls on the system. Then, another linear operator, also subject to noise, generates the observed outputs from the true state. The Kalman filter estimates the state \(\mathbf{x}\) of a discrete-time \(k\) controlled process that is ruled by the state-space model $$\begin{aligned} \mathbf{x}_{k+1} = \mathbf{A} \mathbf{x}_k + \mathbf{B} \mathbf{u}_k + \mathbf{w}_k \end{aligned}$$ with measurements \(\mathbf{y}\) represented by $$\begin{aligned} \mathbf{y}_k = \mathbf{C} \mathbf{x}_k + \mathbf{v}_k, \end{aligned}$$ in which \(\mathbf{A}\) is the state transition matrix, \(\mathbf{B}\) is the input control matrix that is applied to the control vector \(\mathbf{u}\), and \(\mathbf{C}\) is the measurement matrix; \(\mathbf{w}\) represents the process noise and \(\mathbf{v}\) the measurement noise, where these noise sources are represented by random zero-mean Gaussian variables with covariance matrices \(\mathbf{Q}\) and \(\mathbf{R}\), respectively. Based on the measurement \(\mathbf{y}\) and the knowledge of the system parameters, the estimate of \(\mathbf{x}\), represented by \(\hat{\mathbf{x}}\), is given by $$\begin{aligned} \hat{\mathbf{x}}_{k+1} = (\mathbf{A} \hat{\mathbf{x}}_k + \mathbf{B} \mathbf{u}_k) + \mathbf{K}_k (\mathbf{y}_k - \mathbf{C} \hat{\mathbf{x}}_k), \end{aligned}$$ in which \(\mathbf{K}\) is the Kalman gain determined by $$\begin{aligned} \mathbf{K}_k = \mathbf{P}_k \mathbf{C}^T {(\mathbf{C} \mathbf{P}_k \mathbf{C}^T + \mathbf{R})}^{-1}, \end{aligned}$$ while \(\mathbf{P}\) is the prediction covariance matrix that can be determined by $$\begin{aligned} \mathbf{P}_{k+1} = \mathbf{A} (\mathbf{I} - \mathbf{K}_k \mathbf{C}) \mathbf{P}_k \mathbf{A}^T + \mathbf{Q}. \end{aligned}$$ The Kalman filter has two phases (see Fig.
3b): time-update (predict) and measurement-update (correct). The time-update obtains the a priori estimates for the next time step and consists of Eqs. (2) and (3). The measurement-update incorporates a new measurement into the a priori estimate to obtain an improved a posteriori estimate and consists of Eqs. (4), (5), and (6) [20]. These phases form a cycle that is maintained while the filter is fed by measurements.
Kalman filter representation and phases
Since many problems cannot be represented by linear models, algorithms based on the original Kalman Filter formulation have emerged to allow such problems to be treated. The major variations of the Kalman filter for non-linear problems are the extended Kalman filter (EKF) [10] and the unscented Kalman filter (UKF) [14]. The EKF is the most popular alternative for non-linear problems. This method linearizes the process model using Taylor series, which makes it a sub-optimal estimator. The UKF performs estimations on non-linear systems without the need to linearize them, because it uses the principle that a set of discrete sampling points can be used to parameterize the mean and covariance. The quality of UKF estimates is close to the standard KF for linear systems. Finally, the Kalman filter model allows the elaboration of an algorithm to estimate the optimal state vector values. Thus, it is possible to generate a sequence of state values at each time unit, predicting future states using the current state and allowing the creation of systems with real-time updates.

Particle filter

Particle Filters are recursive implementations of sequential Monte Carlo (SMC) methods [3]. Although the Kalman filter is a classical solution, Particle Filters represent an alternative for applications with non-Gaussian noise, especially when computational power is rather cheap and the sampling rate is slow.
Unlike linear/Gaussian problems, the calculation of the posterior distribution of non-linear/non-Gaussian problems is extremely complex. To overcome this difficulty, the Particle Filter adopts an approach called importance sampling. The goal is to estimate the posterior probability density, representing it as a set of particles. This method attempts to build the posterior probability density function (PDF) based on a large number of random samples, called particles. These particles are propagated over time, sequentially combining sampling and resampling steps. At each time step, resampling is used to discard some particles, increasing the relevance of regions with high posterior probability. Each particle has an associated weight that indicates the particle quality. Then, the estimate is the result of the weighted sum of all particles. The resampling step is the solution adopted to avoid the degeneration problem, in which the particles have negligible weights after several iterations. The particles of greater weight are selected and serve as the basis for the creation of the new particle set, while the particles with small weights disappear and do not originate descendants. Like the Kalman filter, the Particle filter algorithm has two phases: prediction and correction. In the prediction phase, each particle is modified according to the existing model, including the addition of random noise in order to simulate its effect. Then, in the correction phase, the weight of each particle is reevaluated based on the latest sensory information available, so that particles with small weights are eliminated (resampling process).

Proposed approach

In the evaluation presented later in this work, we show that errors introduced by the localization algorithms are not successfully filtered by the tracking algorithms (Kalman and Particle filters), because the node position errors are not perceived as noise by the filters.
An alternative to reduce the tracking error is to reduce localization errors. By reducing the localization error, we make tracking algorithms operate closer to their ideal conditions. Thus, we use a Kalman Filter to reduce distance estimation errors and, consequently, improve localization and tracking accuracy. In this approach, during the localization process, several distance estimates are performed; that is, each reference node reports its position \(k\) times to its neighbors. Nodes receiving these packets create a Kalman Filter instance for each reference. Then, all distance estimates are refined by the corresponding Kalman filters. Thus, the filter obtains a more accurate distance estimate, improving the localization result (Fig. 4). This improved estimate is then used by the target tracking algorithm.
Fusion of \(k\) distance estimations to improve the target tracking performance. For this task, the unknown node creates a unique Kalman filter instance for each reference
In this task, the Kalman filter goal is to obtain a constant (distance) estimate. The linear system of the filter is very simple and can be configured as $$\begin{aligned} {\left\{ \begin{array}{ll} x_{k+1} = d_{k+1} = d_k + w_k \\ y_k = d_k + v_k \end{array}\right. } \end{aligned}$$ in which \(x\) and \(d\) represent the state (the distance, in this case) at discrete-time \(k\); \(y\) is a measurement value; \(w\) and \(v\) represent the process and measurement noise, respectively. Filtering the distance estimates during localization is a simple process that ensures good results. Distance estimation errors are the largest contributors to the overall error of the localization system, and only a small fraction of this error is generated by the position computation and the localization algorithm [4, 24].
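The constant-state filter defined by the system above can be sketched as a scalar Kalman filter. The following is a minimal illustration, not the authors' implementation; the class name and the noise covariances `q` and `r` are illustrative assumptions.

```python
import random

class DistanceKF:
    """Scalar Kalman filter for a constant state (the node-to-reference distance)."""

    def __init__(self, q=1e-4, r=1.0):
        self.d = None   # state estimate (distance)
        self.p = 1.0    # estimate covariance
        self.q = q      # process noise covariance Q
        self.r = r      # measurement noise covariance R

    def update(self, y):
        if self.d is None:              # initialize with the first measurement
            self.d = y
            return self.d
        self.p += self.q                # time update: d_{k+1} = d_k, P grows by Q
        k = self.p / (self.p + self.r)  # Kalman gain
        self.d += k * (y - self.d)      # measurement update
        self.p *= (1.0 - k)
        return self.d

# Fuse 200 noisy range samples from one reference (5 % Gaussian noise,
# matching the simulation setup); the estimate settles near the true range.
random.seed(1)
kf = DistanceKF()
true_d = 10.0
for _ in range(200):
    est = kf.update(random.gauss(true_d, 0.05 * true_d))
print(round(est, 2))
```

With many samples per reference, the gain shrinks and the filter effectively averages the measurements, which is why the benefit of additional estimates tapers off in the experiments below.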
Some algorithms try to isolate distance estimation errors by selecting the best references based on a residual value [2]; however, this technique is not very efficient, because all references can introduce distance estimation errors. Since the node position is calculated and then used by an application, such as target tracking, it is difficult to determine the error magnitude and direction, so the best option is to act on the error source.

In this section, we evaluate the performance of the KF and PF using the position information provided by the RPE and DPE localization algorithms. We apply the proposed approach in the localization process, where several distance estimates are used, to verify its performance. The evaluation methodology is divided into five phases, as shown in Fig. 5. First, there is a newly deployed sensor network with some beacon nodes, where most of the nodes do not know their position (unknown nodes), so this network must be prepared to track the target. In the second phase, a localization algorithm is applied (RPE or DPE); during this step, several distance estimates can be used to reduce the localization errors and improve the tracking accuracy, following the proposed approach (see Sect. 3). In the third phase, nodes know their positions, so when three or more nodes detect the target, they compute its position with multilateration. In the fourth phase, the nodes send the target position to the sink node. In the final phase, the sink node predicts the next target position and reduces the measurement noise by performing the tracking algorithm (Kalman filter or Particle filter). While there are measurements, the target tracking continues (back to phase three).
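The multilateration step used in phase three can be sketched via linearized least squares: subtracting the first circle equation from the others yields a linear system in the unknown position. The helper below is an illustrative assumption, not the simulator's code.

```python
def multilaterate(refs, dists):
    """Estimate a 2D position from reference positions and measured ranges.

    refs:  list of (x, y) reference positions (three or more)
    dists: corresponding measured distances
    """
    (x1, y1), d1 = refs[0], dists[0]
    # Linearize by subtracting the first circle equation from the others:
    # 2(x1-xi)x + 2(y1-yi)y = di^2 - d1^2 + x1^2 - xi^2 + y1^2 - yi^2
    rows, rhs = [], []
    for (xi, yi), di in zip(refs[1:], dists[1:]):
        rows.append((2 * (x1 - xi), 2 * (y1 - yi)))
        rhs.append(di**2 - d1**2 + x1**2 - xi**2 + y1**2 - yi**2)
    # Solve the 2x2 normal equations A^T A p = A^T b in closed form.
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * v for r, v in zip(rows, rhs))
    b2 = sum(r[1] * v for r, v in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# With exact ranges from three references, the true position is recovered.
refs = [(0.0, 0.0), (15.0, 0.0), (0.0, 15.0)]
dists = [((10 - x)**2 + (5 - y)**2) ** 0.5 for x, y in refs]
print(multilaterate(refs, dists))   # close to (10.0, 5.0)
```

In practice the measured ranges carry the noise discussed above, so the least-squares solution, not an exact intersection, is what the observing nodes would compute.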
Methodology phases: (1) a newly deployed sensor network with beacon and unknown nodes; (2) a localization algorithm is applied, and several distance estimates are used to improve the tracking accuracy; (3) three or more nodes detect the target and compute its position; (4) the target position is sent to the sink node; (5) the sink node performs the tracking algorithm to predict the future target position and reduce the measurement noise, then back to phase three
The experiments were performed by simulation (implemented in Java), where the sensor field is composed of \(n\) sensor nodes, with a communication range of \(r_c\), distributed in a two-dimensional squared sensor field \(Q = [0, s] \times [0, s]\). As a proof of concept, we consider symmetric communication links, i.e., for any two nodes \(u\) and \(v\), \(u\) reaches \(v\) if and only if \(v\) reaches \(u\). Thus, we represent the network by the Euclidean graph \(G = (V, E)\) with the following properties: \(V = \{v_1, v_2, \ldots , v_n\}\) is the set of sensor nodes; \(\langle i,j \rangle \in E\) iff \(v_i\) reaches \(v_j\), i.e., the distance between \(v_i\) and \(v_j\) is less than \(r_c\). To detect the target, we use the binary detection model [11, 34]. In this model, for a given event \(e\) (target presence), every sensor \(v\) whose distance \(d\) to the target is smaller than a detection radius \(r_d\) assuredly detects the event. Then, the probability of a sensor node detecting an event is defined as $$\begin{aligned} P(v,e) = {\left\{ \begin{array}{ll} 1,&\text{ if } d \le r_d \\ 0,&\text{ otherwise} \end{array}\right. }. \end{aligned}$$ The default network configuration is composed of \(n = 150\) sensor nodes randomly distributed on a \(Q = [0,70] \times [0,70]\) m\(^2\) sensor field. The communication and detection ranges are \(r_c = r_d = 15\) m for every node.
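The network model above (Euclidean graph plus binary detection) can be sketched as follows. This is a minimal Python illustration under the default parameter values; the original simulator was written in Java, and the node layout and seed here are arbitrary assumptions.

```python
import random

def build_network(n=150, s=70.0, rc=15.0, seed=42):
    """Random node placement on [0, s] x [0, s] and the Euclidean graph G = (V, E)."""
    rng = random.Random(seed)
    nodes = [(rng.uniform(0, s), rng.uniform(0, s)) for _ in range(n)]
    # Symmetric links: store each undirected edge once as (i, j) with i < j.
    edges = {(i, j)
             for i in range(n) for j in range(i + 1, n)
             if ((nodes[i][0] - nodes[j][0])**2 +
                 (nodes[i][1] - nodes[j][1])**2) ** 0.5 < rc}
    return nodes, edges

def detects(node, target, rd=15.0):
    """Binary detection model: P(v, e) = 1 iff the distance d <= rd."""
    d = ((node[0] - target[0])**2 + (node[1] - target[1])**2) ** 0.5
    return d <= rd

nodes, edges = build_network()
target = (35.0, 35.0)
observers = [v for v in nodes if detects(v, target)]
# With density 0.03 nodes/m^2 and rd = 15 m, several nodes typically see the
# target, so multilateration (three or more observers) is usually possible.
print(len(observers) >= 3)
```

This is the precondition for phase three of the methodology: whenever at least three observers detect the target, its position can be computed.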
This configuration defines a network density of 0.03 nodes/m\(^2\), which is sufficient for the majority of nodes to have their locations estimated by both the RPE and DPE algorithms. Oliveira et al. [24] use this same configuration in their experiments to evaluate the localization systems. Therefore, we adopted this configuration to estimate the node positions and track the target. Node locations are estimated by RPE or DPE. In the RPE algorithm, 5 % of the nodes are beacons, while the DPE always uses four beacons. To simulate the inaccuracies of the distance estimations, usually obtained by RSSI, time of arrival (TOA) and time difference of arrival (TDoA) [8, 9], each range sample is disturbed by a zero-mean Gaussian variable with standard deviation equal to 5 % of the distance. This assumption is reasonable and leads to non-Gaussian errors in the localization algorithms [25]. During the localization algorithm, we vary the number of distance estimates used by the Kalman Filter as 1, 10, 20, 50, 100, and 200 measurements for each reference. The target tracking is performed with Kalman or Particle filters. The Kalman filter has its linear system equations represented by $$\begin{aligned} {\left\{ \begin{array}{ll} x_{k+1} = \left[ \begin{array}{c} px_{k+1} \\ py_{k+1} \\ vx_{k+1} \\ vy_{k+1} \end{array} \right] = \begin{bmatrix} 1 & 0 & T & 0 \\ 0 & 1 & 0 & T \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \times \left[ \begin{array}{c} px_k \\ py_k \\ vx_k \\ vy_k \end{array} \right] + w_k \\ y_k = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} \times \left[ \begin{array}{c} px_k \\ py_k \\ vx_k \\ vy_k \end{array} \right] + v_k \end{array}\right. } \end{aligned}$$ in which \(x\) represents the state at discrete-time \(k\), composed of the position (\(px\), \(py\)) and velocity (\(vx\), \(vy\)); \(y\) is a measurement value; \(w\) and \(v\) represent the process and measurement noise, respectively. The Particle filter uses 1,000 particles. This value was set based on previous empirical tests showing that more than 1,000 particles do not improve the tracking significantly. The Particle filter used in the experiments is represented by Algorithm 1. For illustration purposes, the Particle Filter algorithm presented considers only one dimension, in which \(x\) is the position, \(v\) is the velocity and \(w\) is the weight of each of the \(N\) particles at discrete-time \(k\); \(y\) is the input measurement value. First, the algorithm randomly distributes the particles (line 2). The particle propagation and the computation of their importance consider the distance from each particle to the measured position (lines 4–10). The normalization process (line 12) prepares the particle weights for the resampling process (lines 14–21). Finally, the prediction of the position is calculated (line 23). For the sake of simplicity, we consider a uniform movement, so that the motion is modeled by a linear system, suitable for both Kalman and Particle filters. The target trajectory is composed of 1,000 points to provide a significant sample. The distance between the points of the trajectory is 0.1 m (uniform motion), to keep the target within the monitored area. The interval between each measurement is \(T=1\) s. The maneuvers of the target are determined by an angle randomly generated between \(-25^\circ \) and \(25^\circ \) every 25 steps. In Figs.
6–12, each point is plotted as an average of 100 random topologies to ensure a lower variance in the results. The error bars represent a confidence interval of 99 %.

Simulation results

Target tracking behavior

To illustrate the behavior of the tracking algorithms, we show some snapshots in this section. In these snapshots, the RPE algorithm could not find the location of two nodes (out of 150 nodes), and the average error of the node locations is 3.49 m. Adopting the same instance, the DPE algorithm managed to estimate the location of every node, and the average location error is 2.56 m. These two scenarios are compared with the ideal setting, in which the localization system is perfect. For all cases, the performance of the Kalman and Particle filters is presented. The results are summarized in Table 1.
Performance of Kalman and Particle filters using the RPE and DPE localization algorithms
Figure 6a–c shows a target moving through the sensor field (red line). Orange points represent measurements and blue points are the results of the Kalman Filter tracking algorithm. Figure 6d–f shows the error calculated from the real target position and the Kalman Filter estimation for each measured point. Figure 6g–l represents the same case using the Particle Filter tracking algorithm instead. These figures illustrate the influence of the localization errors caused by each algorithm. In general, the greater the localization error, the greater the tracking error, independent of the tracking algorithm. The influence of localization errors is clearly visible in the region around point (20, 15) in Fig. 6c, in which localization errors lead to a wrong track estimation. Tracking with the Kalman Filter has better results when the node location information is ideal, or when it is estimated by the DPE. However, when the node locations are estimated by the RPE, the Particle Filter presents the best results.
The reason is that the Particle Filter is less affected by measurement errors; this fact becomes clear in the following sections. As a general conclusion, Fig. 6 shows that both filters successfully reduce the errors resulting from the estimation of the target location, but the errors resulting from the localization algorithms are not significantly filtered.

Costs and benefits of multiple distance estimates

More distance estimates during the localization process can reduce the localization and tracking errors. However, it is necessary to send additional packets, i.e., more resources will be consumed to get this benefit. Therefore, in this section we evaluate the costs and benefits of using several distance estimates. This analysis is important, since it helps define how much one should spend for a given performance in the target tracking. Both the RPE and DPE have a communication complexity of \(O(n)\), where \(n\) is the number of nodes. Figure 7a shows the number of packets sent when the number of distance estimates increases. Using \(k\) distance estimates causes each beacon and settled node to broadcast its position \(k\) times, increasing the communication complexity to \(O(kn)\). This figure also shows that the RPE sends fewer packets than the DPE. This occurs because the network density used in the experiment leads the RPE, in some topologies, to estimate fewer node positions than the DPE, so these nodes do not broadcast their location information, reducing the number of packets sent. Figure 7b shows the improvement in tracking accuracy when the number of distance estimates increases. Using 10 distance estimates already provides a significant improvement when compared to the results obtained with only 1 estimate. In the target tracking using the RPE, the Particle Filter has better results, because this algorithm reduces a small fraction of the non-Gaussian noise introduced by the localization algorithm.
In target tracking with the DPE, the Kalman Filter becomes more accurate with more than 10 distance estimates, because the localization error is quite low, so that the Kalman Filter starts to operate under ideal conditions. It is not feasible to use a very high number of distance estimates, because the benefit achieved becomes low in comparison with the required cost. Using 10 distance estimates is enough to achieve improvements of 50 % with a reasonable cost. Up to 50 distance estimates can still significantly reduce the tracking error, but higher numbers of estimates (100 and 200) lead to little improvement at a high cost.

Impact of distance estimation inaccuracy

Distances estimated by sensor nodes are not perfect. Depending on the monitored environment, the associated errors can be greater, which affects the tracking performance. In general, these errors can be modeled by a zero-mean Gaussian variable, in which the standard deviation is a percentage of the actual distance [4]. Thus, to evaluate different situations, we vary the standard deviation from 0 to 15 % of the distance (for the RPE and DPE estimation processes). A standard deviation of 0 % corresponds to a perfect distance estimate. Deviations between 0 and 8 % can represent the estimates obtained by techniques that use the time of arrival of the signal, such as TOA and TDoA, which have errors smaller than 1 m, while larger deviations represent errors obtained by more imprecise methods, like RSSI. Figure 8a–d presents the error performance by varying the distance estimation inaccuracy and the number of estimates, showing 3D graphs with the combinations of RPE and DPE with KF and PF. Figure 8e, f shows the cases of 1 and 50 distance estimates, respectively. When the distance estimation inaccuracy is low (between 5 and 10 % of the distance), the accuracy improvement of the target tracking using the DPE is negligible, regardless of the number of distance estimates used.
When this imprecision is high (between 15 and 30 %), with 50 distance estimates or more, note that the average error converges to 1 m (Fig. 8b, d). However, with the RPE, the improvement is noticeable even when the distance estimation inaccuracy is low (Fig. 8a, c). It is also interesting to note in Fig. 8e, f that the Particle filter outperforms the Kalman Filter, especially when the RPE is chosen as the localization algorithm. The reason is that the non-linear and non-Gaussian nature of the Particle Filter allows it to reduce a small fraction of the non-Gaussian noise introduced by the localization algorithm.

Impact of the network density

The impact of the network density is evaluated by increasing the number of nodes in the same sensor field, so that the network density varies from 0.03 to 0.07 nodes/m\(^2\). The smallest density used in this experiment allows both the RPE and DPE algorithms to estimate the location of most of the sensor nodes. In this case, Fig. 9 shows that, for the DPE algorithm, the target tracking error remains constant independent of the network density. The reason is that the same beacon structure is used regardless of the network density. However, for the RPE algorithm, the number of beacons increases with the network density, because we ensure that 5 % of the nodes are beacons. As a result of the increasing number of beacons, the tracking error reduces accordingly. Multiple distance estimates are important for improving the target tracking accuracy when the RPE is used, especially in sparse networks, as shown in Fig. 9a and c. With the DPE instead, the network density does not interfere in the target tracking error; therefore, for 50 distance estimates or more the average error converges to 0.7 m (Fig. 9b, d). Figure 9e, f shows that with a single distance estimate during the localization process, the Particle Filter is slightly better with both the RPE and DPE.
The reason is that it filters a small fraction of the non-Gaussian localization errors. When we use 50 distance estimates, the performance of the Kalman Filter becomes equivalent to the Particle Filter in the case of the RPE and better in the case of the DPE, since the Kalman Filter operates under ideal conditions with low localization errors.

The impact of the network scale

In this section, we evaluate how the network scale affects the combinations of localization and tracking algorithms. In this context, we vary the number of nodes from 100 to 350, while keeping a constant density of 0.03 nodes/m\(^2\). Therefore, the monitored area is resized according to the number of sensor nodes. As the percentage of beacons used by the RPE is 5 %, the number of beacons also increases according to the number of nodes in this case. The DPE keeps using only a single structure of four beacons. Figure 10 shows that, when increasing the network scale, the tracking errors with the DPE increase accordingly. The reason is that a higher number of nodes generates a higher propagation of position errors, since the same number of beacons is maintained regardless of the number of nodes. However, with the RPE, the tracking errors remain almost constant, because the number of beacons increases with the network scale (cf. [24]). When there are many nodes in the network (between 250 and 350), using multiple distance estimates in the DPE significantly improves the target tracking accuracy; however, they have little influence when there are few nodes (Fig. 10b, d). With the RPE instead, the usage of multiple distance estimates is important for any number of nodes (Fig. 10a, c).

The impact of the number of beacons

The numbers of beacons used by the DPE and RPE lead to different localization errors, therefore affecting the tracking solutions. For the DPE, Oliveira et al. [24] show that increasing the number of beacon nodes in the structure does not significantly improve the result of the localization.
Therefore, this evaluation considers only the RPE algorithm. In this experiment, the number of beacons is increased from 5 to 35 % of the total number of nodes. A greater number of beacons means that the localization algorithm has more references for estimating the location of the remaining nodes, which leads to smaller errors. Hence, the tracking error is also inversely proportional to the number of beacons. This behavior is depicted in Fig. 11.
Impact of the number of beacons
When the number of beacon nodes is significantly large, the localization error becomes so small that the Kalman Filter tends to have better results than the Particle Filter: it is up to 20 % better with 1 estimate and up to 10 % better with 50 estimates (Fig. 11c, d). The reason is that the non-Gaussian errors resulting from the localization system are reduced in such a way that the Kalman Filter starts to operate under ideal conditions, which means it converges to the optimal solution for target tracking. When there are few beacons (10 %), multiple distance estimates significantly improve the target tracking accuracy. However, when more than 10 % of the nodes are beacons, note that the average error converges to 0.6 m, both for target tracking with Kalman and Particle filters (Fig. 11a, b).

The impact of the beacon structure

As stated earlier, increasing the number of beacons per structure in the DPE does not improve the localization results. However, the DPE may benefit from multiple beacon structures [24]. Thus, to evaluate the performance of the target tracking algorithms with multiple beacon structures, we vary the number of such structures from 1 to 5. This experiment represents a situation for the DPE algorithm that is analogous to the previous experiment for the RPE algorithm. By performing a single distance estimate in the localization process (Fig. 12c) and using three beacon structures, the tracking results with the Kalman and Particle filters are very close.
However, when more structures are used, the Kalman Filter tracking is favored. The opposite occurs when fewer than three structures are used. With 50 distance estimates, the Kalman Filter outperforms the Particle Filter with any number of beacon structures, because the non-Gaussian noise introduced by the localization is very low (Fig. 12(d)). Besides, when we use 50 distance estimates, the average tracking error converges to 0.6 m regardless of the number of beacon structures available (Fig. 12a, b).

Impact of the beacon structure

Conclusions and future work

In this work, we demonstrated how information fusion can reduce errors during the localization process, while assessing the impact of actual localization algorithms on target tracking algorithms. For these evaluations, we chose the RPE and DPE algorithms to compute node positions, since RPE is a pioneer solution and DPE is more accurate and cheaper than the RPE. The target tracking techniques we have chosen were the Kalman and Particle filters. These filters are very popular and can be considered canonical solutions for the target tracking problem.

As a general conclusion, using up to 50 distance estimates ensures better results in the target tracking with a moderate cost. Above this value, the errors converge, so the reduction of errors is small compared to the associated cost. Furthermore, the reduction in localization error using information fusion enhances the performance of the Kalman Filter over the Particle Filter, especially when the DPE is used. Kalman and Particle filters successfully filter the errors associated with the target location estimation. However, the errors introduced by the localization algorithms are not successfully filtered by the tracking algorithms. The reason is that the Kalman Filter is not designed to filter non-Gaussian noise. On the other hand, the Particle Filter is designed to filter non-Gaussian noise.
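To make the role of the Gaussian assumption concrete, the following is a minimal scalar Kalman filter sketch in Python. It is an illustration only, not the trackers evaluated here (which estimate two-dimensional position and velocity); the static state model and the noise variances are assumptions chosen for the example. The update is a purely variance-weighted linear blend of prediction and measurement, which is optimal precisely when the noises are Gaussian:

```python
# Scalar (1-D) Kalman filter: predict-update cycle with a static state model.
def kalman_1d(zs, q=0.01, r=1.0, x0=0.0, p0=1.0):
    """Filter the measurement sequence zs and return the estimates.

    q  -- process-noise variance (how much the true state may drift per step)
    r  -- measurement-noise variance (assumed Gaussian: the key assumption)
    """
    x, p = x0, p0
    estimates = []
    for z in zs:
        # Predict: the state model is "unchanged", so only uncertainty grows.
        p = p + q
        # Update: the Kalman gain weights measurement vs. prediction
        # according to their variances -- a linear, Gaussian-optimal blend.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

With Gaussian measurement noise this recursion converges to the minimum-variance estimate; when the localization errors are non-Gaussian, the same linear blend is no longer optimal, which is why the Particle Filter, representing arbitrary densities by weighted samples, can do better in that regime.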
Consequently, the Particle Filter tends to outperform the Kalman Filter as the localization errors increase. However, even the Particle Filter cannot significantly filter the non-Gaussian localization errors. The results show that, for tracking applications with severe accuracy constraints, the localization algorithms need to improve their estimations to guarantee the performance of target tracking algorithms.

Table 1 Target tracking errors

This work leads to some particularly interesting directions. The first is to properly characterize the localization errors, so that we can understand the expected magnitude, direction, and orientation of the error resulting from localization algorithms. Such knowledge allows us to design new tracking algorithms that use this information to compensate for and reduce the impact of localization errors, depending on the localization algorithm used. Another future direction includes reducing the inaccuracy of the localization algorithms by using all the location information reported to the nodes. These algorithms usually use only the minimum number of references (three for the RPE and two for the DPE) required for calculating the node position, ignoring the additional information received after the calculation. This approach can lead to accuracy improvements, and it does not require extra communication. Finally, the cross-layer design of localization and tracking algorithms, not explored yet, may lead to improved solutions for both problems.

References

Akyildiz IF, Su W, Sankarasubramaniam Y, Cayirci E (2002) Wireless sensor networks: a survey. Comput Netw 38:393–422
Albowicz J, Chen A, Zhang L (2001) Recursive position estimation in sensor networks. In: Proceedings of the 9th international conference on network protocols (ICNP'01), pp 35–41
Arulampalam MS, Maskell S, Gordon N, Clapp T (2002) A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking.
IEEE Trans Signal Process 50:174–188
Bachrach J, Eames AM (2005) Localization in sensor networks, chapter 9, pp 277–310. Wiley, New York
Boukerche A, de Oliveira HABF, Nakamura EF, Loureiro AAF (2007) Localization systems for wireless sensor networks. IEEE Wireless Commun 14:6–12
Chong CY, Zhao F, Mori S, Kumar S (2003) Distributed tracking in wireless ad hoc sensor networks. In: Proceedings of the 6th international conference of information fusion (Fusion'03), pp 431–438
Ehsan S, Bradford K, Brugger M, Hamdaoui B, Kovchegov Y, Johnson D, Louhaichi M (2012) Design and analysis of delay-tolerant sensor networks for monitoring and tracking free-roaming animals. Trans Wireless Commun 11(3):1220–1227
Fukuda K, Okamoto E (2012) Performance improvement of TOA localization using IMR-based NLOS detection in sensor networks. In: Proceedings of the 26th international conference on information networking (ICOIN'12), pp 13–18
Gibson JD (1999) The mobile communication handbook. IEEE Press, New York
Grewal MS, Andrews AP (2001) Kalman filtering: theory and practice using MATLAB. Wiley, New York
He T, Bisdikian C, Kaplan L, Wei W, Towsley D (2010) Multi-target tracking using proximity sensors. In: Proceedings of the military communications conference (MILCOM'10), San Jose, California, USA, pp 1777–1782
He T, Huang C, Blum BM, Stankovic JA, Abdelzaher T (2003) Range-free localization schemes for large scale sensor networks. In: Proceedings of the 9th ACM international conference on mobile computing and networking (MobiCom'03), pp 81–95
Jiang B, Ravindran B (2011) Completely distributed particle filters for target tracking in sensor networks. In: Proceedings of the 25th parallel distributed processing symposium (IPDPS'11)
Julier SJ, Uhlmann JK (1997) A new extension of the Kalman filter to nonlinear systems. In: Proceedings of the international AeroSense symposium (SPIE'97), pp 182–193
Kalman RE (1960) A new approach to linear filtering and prediction problems.
J Basic Eng 82:35–45
Li T, Ekpenyong A, Huang YF (2006) Source localization and tracking using distributed asynchronous sensors. IEEE Trans Signal Process 54:3991–4003
Lin CY, Peng WC, Tseng YC (2006) Efficient in-network moving object tracking in wireless sensor networks. IEEE Trans Mobile Comput 5:1044–1056
Lin KW, Hsieh MH, Tseng VS (2010) A novel prediction-based strategy for object tracking in sensor networks by mining seamless temporal movement patterns. Expert Syst Appl 37:2799–2807
Mazomenos EB, Reeve JS, White NM (2009) A range-only tracking algorithm for wireless sensor networks. In: International conference on advanced information networking and applications workshops (AINAW'07), pp 775–780
Nakamura EF, Loureiro AAF, Orgambide ACF (2007) Information fusion for wireless sensor networks: methods, models, and classifications. ACM Comput Surv 39:1–55
Neto JMRS, Silva JJC, Cavalcanti TCM, Rodrigues DP, da Rocha Neto JS, Glover IA (2010) Propagation measurements and modeling for monitoring and tracking in animal husbandry applications. In: Proceedings of the instrumentation and measurement technology conference (I2MTC'10), Austin, Texas, USA, pp 1181–1185
Niculescu D, Nath B (2001) Ad hoc positioning system (APS). In: Proceedings of the global telecommunications conference (GLOBECOM'01), pp 2926–2931
Olfati-Saber, R.: Distributed Kalman filter with embedded consensus filters. In: Proceedings of the 44th Conference on Decision and Control—European Control Conference (CDC-ECC'05), pp. 8179–8184 (2005)
Oliveira HABF, Boukerche A, Nakamura EF, Loureiro AAF (2009) An efficient directed localization recursion protocol for wireless sensor networks. IEEE Trans Comput 58:677–691
Oliveira, H.A.B.F., Nakamura, E.F., Loureiro, A.A.F., Boukerche, A.: Error analysis of localization systems in sensor networks. In: Proceedings of the 13th International Symposium on Geographic Information Systems (GIS'05), pp.
71–78 (2005)
Rosencrantz M, Gordon G, Thrun S (2003) Decentralized sensor fusion with distributed particle filters. In: Proceedings of the conference on uncertainty in AI (UAI)
Sichitiu, M.L., Ramadurai, V.: Localization of wireless sensor networks with a mobile beacon. In: Proceedings of the International Conference on Mobile Ad-hoc and Sensor Systems (MASS'04), pp. 174–183 (2004)
Souza, E.L., Campos, A., Nakamura, E.F.: Tracking targets in quantized areas with wireless sensor networks. In: Proceedings of the 36th Local Computer Networks (LCN'11), pp. 235–238. Bonn, Germany (2011)
Souza, E.L., Nakamura, E.F., de Oliveira, H.A.: On the performance of target tracking algorithms using actual localization systems for wireless sensor networks. In: Proceedings of the 12th ACM international conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM'09), pp. 418–423. Tenerife, Canary Islands, Spain (2009)
Tatham, B., Kunz, T.: Anchor node placement for localization in wireless sensor networks. In: Proceedings of the 7th Conference on Wireless and Mobile Computing, Networking and Communications (WiMob'11), pp. 180–187 (2011)
Tsai HW, Chu CP, Chen TS (2007) Mobile object tracking in wireless sensor networks. Comput Commun 30:1811–1825
Vercauteren T, Guo D, Wang X (2005) Joint multiple target tracking and classification in collaborative sensor networks. IEEE J Sel Areas Commun 23:714–723
Walchli, M., Skoczylas, P., Meer, M., Braun, T.: Distributed event localization and tracking with wireless sensors. In: Proceedings of the 5th International Conference on Wired/Wireless Internet Communications (WWIC'07), pp. 247–258 (2007)
Wang Z, Bulut E, Szymanski BK (2010) Distributed energy-efficient target tracking with binary sensor networks. ACM Trans Sens Netw 6(4):1–32
Welch, G., Bishop, G.: An introduction to the Kalman filter.
In: The 28th International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH'01) (2006)
Yang, H., Sikdar, B.: A protocol for tracking mobile targets using sensor networks. In: Proceedings of the 1st International Workshop on Sensor Network Protocols and Applications (SNPA'03), pp. 71–81 (2003)
Zhang W, Cao G (2004) DCTC: dynamic convoy tree-based collaboration for target tracking in sensor networks. IEEE Trans Wireless Commun 3:1689–1701
Zhao F, Shin J, Reich J (2002) Information-driven dynamic sensor collaboration for tracking applications. IEEE Signal Process Mag 19:61–72

Acknowledgements

This work is supported by the Brazilian National Council for Scientific and Technological Development (CNPq), under grant numbers 474194/2007-8 (RastroAM), 55.4087/2006-5 (SAUIM) and 575808/2008-0 (Revelar), and also by the Amazon State Research Foundation (FAPEAM), through grant 2210.UNI175.3532.03022011 (Projeto Anura—PRONEX 023/2009).

Federal University of Amazonas, UFAM, Manaus, Brazil: Éfren L. Souza & Horácio A. B. F. de Oliveira
Analysis, Research and Technological Innovation Center, FUCAPI, Manaus, Brazil: Eduardo F. Nakamura & Carlos M. S. Figueiredo

Correspondence to Éfren L. Souza.

This work extends the previous evaluation made in Souza et al. [29] by introducing the usage of data fusion to reduce errors in the localization of sensor nodes. The results presented here show the benefits and costs of this new approach.

Souza, É.L., Nakamura, E.F., de Oliveira, H.A.B.F. et al. Reducing the impact of location errors for target tracking in wireless sensor networks. J Braz Comput Soc 19, 89–104 (2013). https://doi.org/10.1007/s13173-012-0084-4

Keywords: Target tracking algorithms, Localization systems
# Introduction to MATLAB and Simulink

MATLAB and Simulink are powerful tools for modeling, simulating, and analyzing dynamic systems. MATLAB is a programming environment that allows you to perform numerical computations, create visualizations, and solve complex mathematical problems. Simulink is an add-on to MATLAB that enables you to create, simulate, and analyze dynamic systems.

In this section, we will cover the basics of MATLAB and Simulink, including their capabilities, advantages, and limitations. We will also discuss the importance of real-time simulation in various fields, such as engineering, physics, and biology.

Here is an example of a simple MATLAB code that calculates the area of a rectangle:

```matlab
width = 5;
height = 10;
area = width * height;
disp(['The area of the rectangle is: ', num2str(area)]);
```

## Exercise

Write a MATLAB script that calculates the volume of a cylinder given its radius and height.

```matlab
% Your code here
```

To calculate the volume of a cylinder, you can use the following formula:

$$V = \pi r^2 h$$

Here's a MATLAB script that calculates the volume of a cylinder:

```matlab
radius = 3;
height = 7;
volume = pi * radius^2 * height;
disp(['The volume of the cylinder is: ', num2str(volume)]);
```

# Overview of the Simscape add-on

Simscape is an add-on to MATLAB that allows you to model and simulate physical systems, such as mechanical, electrical, and fluid systems. It provides a graphical interface for creating and simulating dynamic systems, making it easier for engineers and researchers to visualize and analyze complex systems.

In this section, we will discuss the key features of Simscape, including its library of pre-built components, the ability to create custom components, and the integration with other MATLAB and Simulink tools.
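As a quick reminder of the physics behind the first mechanical example below (standard textbook material, independent of any Simscape details): an ideal pendulum of length $L$ under gravitational acceleration $g$ obeys the nonlinear equation of motion

$$\ddot{\theta} + \frac{g}{L}\sin\theta = 0$$

and for small angles ($\sin\theta \approx \theta$) it reduces to a simple harmonic oscillator with period

$$T = 2\pi\sqrt{\frac{L}{g}}$$

Comparing a simulated trajectory against this small-angle period is a simple sanity check for any pendulum model.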
Here's an example of a simple Simscape model that models a pendulum:

```matlab
% Create a Simulink model
model = simulink.create('model');

% Add a pendulum component to the model
pendulum = simulink.add('simscape.components.mechanical.pendulum', model);

% Set the mass and length of the pendulum
simulink.set('pendulum.mass', 1);
simulink.set('pendulum.length', 1);

% Add a force input to the model
force = simulink.add('simscape.components.mechanical.force', model);

% Connect the force input to the pendulum
simulink.connect('force.output', 'pendulum.force');

% Simulate the model
simulink.simulate(model);
```

## Exercise

Create a Simscape model that models a simple DC motor. The motor should have a resistance, an inductance, and a back EMF. Simulate the model and plot the current and voltage waveforms.

```matlab
% Your code here
```

# Creating a basic dynamic system in MATLAB and Simulink

Here's an example of a Simulink model that models a mass-spring-damper system:

```matlab
% Create a Simulink model
model = simulink.create('model');

% Add a mass, spring, and damper component to the model
mass = simulink.add('simscape.components.mechanical.mass', model);
spring = simulink.add('simscape.components.mechanical.spring', model);
damper = simulink.add('simscape.components.mechanical.damper', model);

% Set the mass and stiffness of the spring
simulink.set('spring.mass', 1);
simulink.set('spring.stiffness', 100);

% Set the damping coefficient of the damper
simulink.set('damper.damping', 50);

% Connect the components to the model
simulink.connect('mass.output', 'spring.input');
simulink.connect('spring.output', 'damper.input');

% Simulate the model
simulink.simulate(model);
```

## Exercise

Create a Simulink model that models a simple harmonic oscillator. The oscillator should have a mass, spring constant, and damping coefficient. Simulate the model and plot the displacement and velocity waveforms.
```matlab
% Your code here
```

# Understanding the real-time simulation process

Here's an example of a Simulink model that simulates the behavior of a simple harmonic oscillator using a fixed-step solver:

```matlab
% Create a Simulink model
model = simulink.create('model');

% Add a mass, spring, and damper component to the model
mass = simulink.add('simscape.components.mechanical.mass', model);
spring = simulink.add('simscape.components.mechanical.spring', model);
damper = simulink.add('simscape.components.mechanical.damper', model);

% Set the mass and stiffness of the spring
simulink.set('spring.mass', 1);
simulink.set('spring.stiffness', 100);

% Set the damping coefficient of the damper
simulink.set('damper.damping', 50);

% Connect the components to the model
simulink.connect('mass.output', 'spring.input');
simulink.connect('spring.output', 'damper.input');

% Configure the solver settings
simulink.set('model.solver', 'ode45');
simulink.set('model.solver.FixedStep', 0.01);

% Simulate the model
simulink.simulate(model);
```

## Exercise

Create a Simulink model that simulates the behavior of a simple harmonic oscillator using a variable-step solver. Compare the results of the fixed-step and variable-step simulations.
```matlab
% Your code here
```

# Setting up the simulation environment in Simulink

Here's an example of a Simulink model that simulates the behavior of a simple harmonic oscillator using a variable-step solver:

```matlab
% Create a Simulink model
model = simulink.create('model');

% Add a mass, spring, and damper component to the model
mass = simulink.add('simscape.components.mechanical.mass', model);
spring = simulink.add('simscape.components.mechanical.spring', model);
damper = simulink.add('simscape.components.mechanical.damper', model);

% Set the mass and stiffness of the spring
simulink.set('spring.mass', 1);
simulink.set('spring.stiffness', 100);

% Set the damping coefficient of the damper
simulink.set('damper.damping', 50);

% Connect the components to the model
simulink.connect('mass.output', 'spring.input');
simulink.connect('spring.output', 'damper.input');

% Configure the solver settings
simulink.set('model.solver', 'ode45');
simulink.set('model.solver.MaxStep', 0.1);

% Set the simulation time
simulink.set('model.stopTime', 10);

% Simulate the model
simulink.simulate(model);
```

## Exercise

Create a Simulink model that simulates the behavior of a simple harmonic oscillator using a fixed-step solver. Compare the results of the fixed-step and variable-step simulations.
```matlab
% Your code here
```

# Creating simulation models for different types of dynamic systems

Here's an example of a Simulink model that models a simple DC motor:

```matlab
% Create a Simulink model
model = simulink.create('model');

% Add a motor, resistance, and inductance component to the model
motor = simulink.add('simscape.components.mechanical.motor', model);
resistance = simulink.add('simscape.components.mechanical.resistance', model);
inductance = simulink.add('simscape.components.mechanical.inductance', model);

% Set the motor parameters
simulink.set('motor.kV', 100);
simulink.set('motor.resistance', 1);
simulink.set('motor.inductance', 0.1);

% Connect the components to the model
simulink.connect('motor.output', 'resistance.input');
simulink.connect('resistance.output', 'inductance.input');

% Simulate the model
simulink.simulate(model);
```

## Exercise

Create a Simulink model that models a simple electrical circuit with a resistor, an inductor, and a capacitor. Simulate the model and plot the current and voltage waveforms.
```matlab
% Your code here
```

# Simulating and analyzing the results

Here's an example of a Simulink model that simulates the behavior of a simple electrical circuit with a resistor, an inductor, and a capacitor:

```matlab
% Create a Simulink model
model = simulink.create('model');

% Add a resistor, inductor, and capacitor component to the model
resistor = simulink.add('simscape.components.electrical.resistor', model);
inductor = simulink.add('simscape.components.electrical.inductor', model);
capacitor = simulink.add('simscape.components.electrical.capacitor', model);

% Set the component parameters
simulink.set('resistor.resistance', 1);
simulink.set('inductor.inductance', 0.1);
simulink.set('capacitor.capacitance', 1e-6);

% Connect the components to the model
simulink.connect('resistor.output', 'inductor.input');
simulink.connect('inductor.output', 'capacitor.input');

% Simulate the model
simulink.simulate(model);
```

## Exercise

Simulate the model and plot the current and voltage waveforms. Analyze the system stability by calculating the natural frequency and damping ratio.
```matlab
% Your code here
```

# Advanced simulation techniques: event-driven simulation, state-space representation, and optimization

Here's an example of a Simulink model that simulates the behavior of a simple electrical circuit with an event-driven solver:

```matlab
% Create a Simulink model
model = simulink.create('model');

% Add a resistor, inductor, and capacitor component to the model
resistor = simulink.add('simscape.components.electrical.resistor', model);
inductor = simulink.add('simscape.components.electrical.inductor', model);
capacitor = simulink.add('simscape.components.electrical.capacitor', model);

% Set the component parameters
simulink.set('resistor.resistance', 1);
simulink.set('inductor.inductance', 0.1);
simulink.set('capacitor.capacitance', 1e-6);

% Connect the components to the model
simulink.connect('resistor.output', 'inductor.input');
simulink.connect('inductor.output', 'capacitor.input');

% Configure the solver settings
simulink.set('model.solver', 'ode45');
simulink.set('model.solver.MaxStep', 0.1);
simulink.set('model.solver.EventDriven', 'on');

% Set the simulation time
simulink.set('model.stopTime', 10);

% Simulate the model
simulink.simulate(model);
```

## Exercise

Create a Simulink model that simulates the behavior of a simple electrical circuit with a state-space representation. Analyze the system stability by calculating the natural frequency and damping ratio.
```matlab
% Your code here
```

# Real-world applications of real-time simulation

Here's an example of a Simulink model that simulates the behavior of a simple electrical circuit with a state-space representation:

```matlab
% Create a Simulink model
model = simulink.create('model');

% Add a resistor, inductor, and capacitor component to the model
resistor = simulink.add('simscape.components.electrical.resistor', model);
inductor = simulink.add('simscape.components.electrical.inductor', model);
capacitor = simulink.add('simscape.components.electrical.capacitor', model);

% Set the component parameters
simulink.set('resistor.resistance', 1);
simulink.set('inductor.inductance', 0.1);
simulink.set('capacitor.capacitance', 1e-6);

% Connect the components to the model
simulink.connect('resistor.output', 'inductor.input');
simulink.connect('inductor.output', 'capacitor.input');

% Simulate the model
simulink.simulate(model);
```

## Exercise

Create a Simulink model that simulates the behavior of a simple electrical circuit with a state-space representation. Analyze the system stability by calculating the natural frequency and damping ratio.

```matlab
% Your code here
```

# Conclusion and summary of the material covered

In this textbook, we have covered the basics of MATLAB and Simulink, the Simscape add-on, creating and simulating dynamic systems, advanced simulation techniques, and real-world applications of real-time simulation. We have also discussed the ethical considerations of using simulation in research and engineering.

By the end of this textbook, you should have a solid understanding of how to model and simulate dynamic systems using MATLAB and Simulink, and you should be able to apply these skills to various fields, including engineering, physics, and biology.

We hope that you have enjoyed this textbook and found it helpful in learning the fundamentals of real-time simulation.
If you have any questions or suggestions for future topics, please feel free to contact us. Thank you for your time and attention, and we hope you have a great day!
\begin{document}

\title[First-passage times for bounded increments]
{First-passage times for random walks in the triangular array setting}

\author[Denisov]{Denis Denisov}
\address{Department of Mathematics, University of Manchester, Oxford Road, Manchester M13 9PL, UK}
\email{[email protected]}

\author[Sakhanenko]{Alexander Sakhanenko}
\address{Sobolev Institute of Mathematics, 630090 Novosibirsk, Russia}
\email{[email protected]}

\author[Wachtel]{Vitali Wachtel}
\address{Institut f\"ur Mathematik, Universit\"at Augsburg, 86135 Augsburg, Germany}
\email{[email protected]}

\begin{abstract}
In this paper we continue our study of exit times for random walks with independent but not necessarily identically distributed increments. Our paper ``First-passage times for random walks with non-identically distributed increments'' (2018) was devoted to the case when the random walk is constructed from a fixed sequence of independent random variables which satisfies the classical Lindeberg condition. Now we consider a more general situation when we have a triangular array of independent random variables. Our main assumption is that the entries of every row are uniformly bounded by a deterministic sequence, which tends to zero as the number of the row increases.
\end{abstract}

\keywords{Random walk, triangular array, first-passage time, central limit theorem, moving boundary, transition phenomena}
\subjclass{Primary 60G50; Secondary 60G40, 60F17}
\thanks{\rm A.S. and V.W. were supported by RFBR and DFG according to the research project № 20-51-12007}
\maketitle

\section{Introduction and the main result.}

\subsection{Introduction}
Suppose that for each $n=1,2,\dots$ we are given independent random variables $X_{1,n},\dots,X_{n,n}$ such that
\begin{gather}
\label{i1}
\mathbf E X_{i,n}=0 \quad\text{for all }i\le n
\qquad\text{and}\qquad
\sum_{i=1}^{n}\mathbf E X_{i,n}^2=1.
\end{gather} For each $n$ we consider a random walk \begin{gather} \label{i2} S_{k,n}:=X_{1,n}+\dots+X_{k,n},\quad k=1,2,\dots,n. \end{gather} Let $\{g_{k,n}\}_{k=1}^n$ be deterministic real numbers, and let \begin{gather} \label{i3} T_n:=\inf\{k\geq1:S_{k,n}\leq g_{k,n}\} \end{gather} be the first crossing over the moving boundary $\{g_{k,n}\}$ by the random walk $\{S_{k,n}\}$. The main purpose of the present paper is to study the asymptotic behaviour, as $n\to\infty$, of the probability \begin{gather} \label{i4} \mathbf P(T_n>n)=\mathbf P\left(\min_{1\le k\le n}(S_{k,n}-g_{k,n})>0\right). \end{gather} We shall always assume that the boundary $\{g_{k,n}\}$ is of a small magnitude, that is, \begin{gather} \label{i6} g_n^*:=\max_{1\le k\le n}|g_{k,n}|\to0. \end{gather} Here and in what follows, all unspecified limits are taken with respect to $n\to\infty$. Furthermore, to avoid trivialities, we shall assume that \begin{gather} \label{i7} \mathbf{P}(T_n>n)>0\qquad\text{for all }\quad n\ge 1. \end{gather} An important particular case of the triangular array scheme is given by the following construction. Let $X_1,X_2,\dots $ be independent random variables with finite variances such that \begin{gather} \label{i11} \mathbf E X_{i}=0\quad \text{ for all }i\ge1 \qquad\text{and}\qquad B_n^2:=\sum_{i=1}^{n}\mathbf E X_{i}^2\to\infty. \end{gather} For a real deterministic sequence $\{g_1,g_2,\dots\}$ set \begin{gather} \label{i13} T:=\inf\{k\geq1:S_{k}\leq g_{k}\}, \quad\text{where}\quad S_{k}:=X_{1}+\dots+X_{k}. \end{gather} Stopping time $T$ is the first crossing over the moving boundary $\{g_{k}\}$ by the random walk~$\{S_{k}\}$. Clearly, \eqref{i11} -- \eqref{i13} is a particular case of \eqref{i1} -- \eqref{i3}. Indeed to obtain~\mbox{ \eqref{i1} -- \eqref{i3}} it is sufficient to set \begin{gather} \label{i15} X_{k,n}=\frac{X_k}{B_n},\quad S_{k,n}=\frac{S_k}{B_n},\quad g_{k,n}=\frac{g_k}{B_n}. 
\end{gather} However, the triangular array scheme is much more general than \eqref{i11} -- \eqref{i15}. If the classical Lindeberg condition holds for the sequence $\{X_k\}$ and $g_n=o(B_n)$ then, according to Theorem 1 in \cite{DSW16}, \begin{equation} \label{i17} \mathbf P(T>n)\sim \sqrt{\frac{2}{\pi}}\frac{U(B_n^2)}{B_n}, \end{equation} where $U$ is a positive slowly varying function with the values $$ U(B_n^2)=\mathbf E[S_n-g_n;T>n],\quad n\ge1. $$ The constant \(\sqrt{\frac{2}{\pi}}\) in front of the asymptotics has been inherited from the tail asymptotics of exit time of standard Brownian motion. Indeed, let $W(t)$ be the standard Brownian motion and set $$ \tau_x^{bm}:=\inf\{t>0:x+W(t)\le0\},\quad x>0. $$ Then, $$ \mathbf P(\tau_x^{bm}>t)= \mathbf P(|W(t)|\le x) =\mathbf P\left(|W(1)|\le \frac x{\sqrt t}\right) \sim \sqrt{\frac{2}{\pi}}\frac{x}{\sqrt{t}}, \quad \text{as}\quad \frac{x}{\sqrt{t}}\to 0. $$ The continuity of paths of $W(t)$ implies that $x+W(\tau_x^{bm})=0$. Combining this with the optional stopping theorem, we obtain \begin{align*} x=\mathbf E[x+W(\tau_x^{bm}\wedge t)]&=\mathbf E[x+W(t);\tau_x^{bm}>t)]+ \mathbf E[x+W(\tau_x^{bm});\tau_x^{bm}\le t)]\\ &=\mathbf E[x+W(t);\tau_x^{bm}>t)]. \end{align*} Therefore, for any fixed $x>0$, $$ \mathbf P(\tau_x^{bm}>t) \sim\sqrt{\frac{2}{\pi}}\frac{x}{\sqrt{t}}= \sqrt{\frac{2}{\pi}}\frac{\mathbf E[x+W(t);\tau_x^{bm}>t)]}{\sqrt{t}},\quad \text{as}\quad t\to\infty. $$ Thus, the right hand sides here and in \eqref{i17} are of the same type. \subsection{Main result} The purpose of the present note is to generalise the asymptotic relation \eqref{i17} to the triangular array setting. More precisely, we are going to show that the following relation holds \begin{gather} \label{i20} \mathbf P(T_n>n)\sim\sqrt{\frac{2}{\pi}}E_n, \end{gather} where \begin{gather} \label{i20+} E_n:=\mathbf E[S_{n,n}-g_{n,n};T_n>n] =\mathbf E[-S_{T_n,n};T_n\le n]-g_{n,n}\mathbf P(T_n>n). 
\end{gather}
Unexpectedly for the authors, in contrast to the case of a single sequence described above, the Lindeberg condition is not sufficient for the validity of \eqref{i20}, see Example~\ref{Lind}. Thus, one has to find a more restrictive condition for \eqref{i20} to hold. In this paper we show that \eqref{i20} holds under the following assumption: there exists a sequence $r_n$ such that
\begin{gather}
\label{i5}
\max_{1\le i\le n}|X_{i,n}|\le r_n\to 0.
\end{gather}
It is clear that under this assumption the triangular array satisfies the Lindeberg condition and, hence, the Central Limit Theorem holds. At first glance, \eqref{i5} might look too restrictive. However, we shall construct a triangular array, see Example~\ref{Lind2}, in which the assumption \eqref{i5} becomes necessary for \eqref{i20} to hold.

Now we state our main result.
\begin{theorem}
\label{thm:main}
Assume that \eqref{i6} and \eqref{i5} are valid. Then there exists an absolute constant $C_1$ such that
\begin{gather}
\label{i31}
\mathbf P(T_n>n)\ge\sqrt{\frac{2}{\pi}}E_n\big(1-C_1(r_n+g_n^*)^{2/3}\big).
\end{gather}
On the other hand, there exists an absolute constant $C_2$ such that
\begin{gather}
\label{i32}
\mathbf P(T_n>n)\le\sqrt{\frac{2}{\pi}}E_n\big(1+C_2(r_n+g_n^*)^{2/3}\big),
\qquad\text{if}\quad r_n+g_n^*\le1/24.
\end{gather}
In addition, for $m\le n$,
\begin{gather}
\label{i33}
\mathbf P(T_n>m)\le\frac{4E_n}{B_m^{(n)}}
\end{gather}
provided that
$$
B_m^{(n)}:=\left(\sum_{k=1}^m\mathbf E X_{k,n}^2\right)^{1/2}\ge24(r_n+g_n^*).
$$
\end{theorem}

\begin{corollary}
\label{cor:asymp}
Under conditions \eqref{i6}, \eqref{i7} and \eqref{i5} relation \eqref{i20} takes place.
\end{corollary}

Estimates \eqref{i31} and \eqref{i32} can be seen as an improved version of \eqref{i20}, with a rate of convergence.
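Indeed, Corollary~\ref{cor:asymp} is immediate from the bounds \eqref{i31} and \eqref{i32}; we sketch the routine verification. Condition \eqref{i7} together with definition \eqref{i20+} gives $E_n>0$, because the integrand $S_{n,n}-g_{n,n}$ is strictly positive on the event $\{T_n>n\}$. Dividing \eqref{i31} and \eqref{i32} by $E_n$, for all sufficiently large $n$ (so that $r_n+g_n^*\le1/24$) we obtain
\begin{gather*}
\sqrt{\frac{2}{\pi}}\big(1-C_1(r_n+g_n^*)^{2/3}\big)
\le\frac{\mathbf P(T_n>n)}{E_n}
\le\sqrt{\frac{2}{\pi}}\big(1+C_2(r_n+g_n^*)^{2/3}\big),
\end{gather*}
and both bounds tend to $\sqrt{2/\pi}$, since $r_n\to0$ by \eqref{i5} and $g_n^*\to0$ by \eqref{i6}; this is exactly \eqref{i20}.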
Moreover, the fact that the dependence on $r_n$ and $g_n^*$ is expressed in a quite explicit way is very important for our work \cite{DSW_prog} in progress, where we analyse unbounded random variables. In that paper we consider first-passage times of walks $S_n=X_1+X_2+\ldots+X_n$ for which the central limit theorem is valid but the Lindeberg condition may fail, and we use Theorem~\ref{thm:main} to analyse the behaviour of triangular arrays obtained from $\{X_n\}$ by truncation.

\subsection{Triangular arrays of weighted random variables.}
Theorem~\ref{thm:main} and Corollary~\ref{cor:asymp} can be used in studying first-passage times of weighted sums of independent random variables. Suppose that we are given independent random variables $X_{1},X_2,\dots$ such that
\begin{gather}
\label{ex1}
\mathbf E X_{i}=0 \quad\text{and}\quad \mathbf P(|X_{i}|\le M_i)=1 \quad\text{for all }i\ge1,
\end{gather}
where $M_1,M_2,\dots$ are deterministic. For each $n$ we consider a random walk
\begin{gather}
\label{ex2}
U_{k,n}:=u_{1,n}X_1+\dots+u_{k,n}X_k,\quad k=1,2,\dots,n,
\end{gather}
and let
\begin{gather}
\label{ex3}
\tau_n:=\inf\{k\geq1:U_{k,n}\leq G_{k,n}\}
\end{gather}
be the first crossing over the moving boundary $\{G_{k,n}\}$ by the random walk $\{U_{k,n}\}$. The main purpose of the present example is to study the asymptotic behaviour, as $n\to\infty$, of the probability
\begin{gather}
\label{ex4}
\mathbf P(\tau_n>n)=\mathbf P\left(\min_{1\le k\le n}(U_{k,n}-G_{k,n})>0\right).
\end{gather}
We suppose that $\{u_{k,n}, G_{k,n}\}_{k=1}^n$ are deterministic real numbers such that
\begin{gather}
\label{ex6}
M:=\sup_{k,n\ge1}\left(|u_{k,n}|M_k+|G_{k,n}|\right)<\infty
\end{gather}
and
\begin{gather}
\label{ex7}
\sigma_n^2:=\sum_{k=1}^n u_{k,n}^2\mathbf E X_k^2\to\infty.
\end{gather}
Moreover, we assume that
\begin{gather}
\label{ex5}
u_{k,n}\to u_k\quad\text{and}\quad G_{k,n}\to g_k
\quad\text{for every }k\ge1.
\end{gather} \begin{corollary} \label{cor:ex0} Assume that the distribution functions of all $X_k$ are continuous. Then, under assumptions \eqref{ex1}, \eqref{ex6}, \eqref{ex7} and \eqref{ex5}, \begin{equation} \label{ex8} \sigma_n\mathbf P(\tau_n>n)\to\sqrt{\frac{2}{\pi}} \mathbf E[-U_\tau]\in[0,\infty), \end{equation} where \begin{gather} \label{ex9} U_{k}:=u_{1}X_1+\dots+u_{k}X_k \quad\text{and}\quad \tau:=\inf\{k\geq1:U_{k}\leq g_k\}. \end{gather} \end{corollary} It follows from condition \eqref{ex5} that the random walks $\{U_{k,n}\}$ introduced in \eqref{ex2} may be considered as perturbations of the walk $\{U_{k}\}$ defined in \eqref{ex9}. Thus, we see from \eqref{ex8} that the influence of the perturbations on the behavior of the probability $\mathbf P(\tau_n>n)$ is concentrated in the normalizing factor $\sigma_n$. \begin{example} \label{ex:Gaposhkin} As an example we consider the following method of summation, which has been suggested by Gaposhkin~\cite{Gaposhkin}. Let $f:[0,1]\to\mathbb{R}^+$ be a non-degenerate continuous function. For random variables $\{X_k\}$ define $$ U_k(n,f):=\sum_{j=1}^k f\left(\frac{j}{n}\right)X_j, \quad k=1,2,\ldots,n. $$ This sequence can be seen as a stochastic integral of $f$ with respect to the random walk $S_k=X_1+X_2+\ldots+X_k$ normalized by $n$. We assume that the random variables $\{X_k\}$ are independent and identically distributed. Furthermore, we assume that $X_1$ satisfies \eqref{ex1} and that its distribution function is continuous. In this case $$ \sigma^2_n(f):=\frac1{n}\mathbf E X_1^2 \sum_{j=1}^n f^2\left(\frac{j}{n}\right)\to \sigma^2(f):=\mathbf E X_1^2\int_0^1f^2(t)dt>0. 
$$ From Corollary~\ref{cor:ex0} with $u_{k,n}:=f\left(\frac{k}{n}\right)\to f(0)=:u_k$, $G_{k,n}\equiv0$ and $\sigma_n:=\sqrt{n}\sigma_n(f)$ we immediately obtain \begin{equation} \label{ex2.2} \sqrt{n}\mathbf P\left(\min_{k\le n}U_k(n,f)>0\right) \to \sqrt{\frac{2}{\pi}}\frac{f(0)}{\sigma(f)} \mathbf E[-S_\tau]\in[0,\infty), \end{equation} where \begin{gather} \label{ex16} S_{k}:=X_1+\dots+X_k \quad\text{and}\quad \tau:=\inf\{k\geq1:S_{k}\leq 0\}. \end{gather} $\diamond$ \end{example} Clearly, \eqref{ex2.2} gives exact asymptotics only when $f(0)>0$. The case $f(0)=0$ seems to be much more delicate: there one needs information on the behaviour of $f$ near zero. If, for example, $f(t)=t^\alpha$ with some $\alpha>0$ then, according to Example 12 in \cite{DSW16}, $$ \mathbf P\left(\min_{k\le n}U_k(n,f)>0\right) =\mathbf P\left(\min_{k\le n}\sum_{j=1}^k j^\alpha X_j>0\right) \sim\frac{Const}{n^{\alpha+1/2}}. $$ Now we give an example of the application of our results to the study of transition phenomena. \begin{example} \label{ex:regression} Consider an autoregressive sequence \begin{align} \label{ex11} &U_{n}(\gamma)=\gamma U_{n-1}(\gamma)+X_n, \quad n=1,2,\dots, \quad\text{where}\quad U_0(\gamma)=0, \end{align} with a non-random $\gamma=\gamma_n\in(0,1)$ and with independent, identically distributed innovations $X_1,X_2,\dots$. As in the previous example, we assume that $X_1$ satisfies \eqref{ex1} and that its distribution function is continuous. Consider the exit time $$ T(\gamma):=\inf\{n\ge1: U_n(\gamma)\le0\}. $$ We want to understand the behavior of the probability $\mathbf P(T(\gamma)>n)$ in the case when $\gamma=\gamma_n$ depends on $n$ and \begin{gather} \label{ex13} \gamma_n\in(0,1)\quad\text{and}\quad \sup_n n(1-\gamma_n)<\infty. \end{gather} We now show that the autoregressive sequence $U_n(\gamma)$ can be transformed to a random walk which satisfies the conditions of Corollary~\ref{cor:ex0}. 
First, multiplying \eqref{ex11} by $\gamma^{-n}$ and iterating, we get $$ U_n(\gamma)\gamma^{-n}=U_{n-1}(\gamma)\gamma^{-(n-1)}+X_n\gamma^{-n}= \sum_{k=1}^n\gamma^{-k}X_k,\quad n\ge1. $$ Thus, for each $n\ge1$, \begin{align} \label{ex18} \{T(\gamma_n)>n\} =\left\{\sum_{j=1}^k \gamma_n^{-j}X_j>0\ \ \text{ for all }\ k\le n\right\}. \end{align} Comparing \eqref{ex18} with \eqref{ex2} and \eqref{ex4}, we see that we have a particular case of the model in Corollary~\ref{cor:ex0} with $u_{k,n}=\gamma_n^{-k}$ and $G_{k,n}=0$. Clearly, $u_{k,n}\to1$ for every fixed $k$. Furthermore, we infer from \eqref{ex13} that $$ \gamma_n^{-n}=e^{-n\log \gamma_n}=e^{O(n|\gamma_n-1|)}=e^{O(1)} $$ and $$ \sigma_n^2(\gamma_n):=\frac{\gamma_n^{-2n}-1}{1-\gamma_n^2} =\gamma_n^{-2}+\gamma_n^{-4}+\dots+\gamma_n^{-2n} =ne^{O(1)}. $$ These relations imply that \eqref{ex6} and \eqref{ex7} are fulfilled. Applying Corollary \ref{cor:ex0}, we arrive at \begin{equation} \label{ex15} \sigma_n(\gamma_n)\mathbf P(T(\gamma_n)>n)\to \sqrt{\frac{2}{\pi\mathbf E X_1^2}} \mathbf E[-S_\tau]\in(0,\infty), \end{equation} where $\tau$ is defined in \eqref{ex16}. $\diamond$ \end{example} \subsection{Discussion of the assumption \eqref{i5}} Based on the validity of the CLT and the considerations in~\cite{DSW16}, one can expect that the Lindeberg condition will again be sufficient. However, the following example shows that this is not the case and that the situation is more complicated. \begin{example} \label{Lind} Let $X_2,X_3,\ldots$ and $Y_2,Y_3,\ldots$ be mutually independent random variables such that \begin{gather} \label{i21-} \mathbf E X_k=\mathbf E Y_k=0,\ \mathbf E X_k^2=\mathbf E Y_k^2=1\quad\text{and}\quad \mathbf P(|X_k|\le M)=1\ \text{for all }k\ge2 \end{gather} for some finite constant $M$. It is easy to see that the triangular array \begin{gather} \label{i24+} X_{1,n}:=\frac{Y_n}{\sqrt{n}},\ X_{k,n}:=\frac{X_k}{\sqrt{n}},\ k=2,3,\ldots,n;\ n>1 \end{gather} satisfies the Lindeberg condition. 
Indeed, $\sum_{i=1}^n\mathbf E X_{i,n}^2=1$ and for every $\varepsilon>\frac M{\sqrt{n}}$ one has \begin{gather} \label{i26} \sum_{i=1}^n\mathbf E [X_{i,n}^2;|X_{i,n}|>\varepsilon]=\mathbf E [X_{1,n}^2;|X_{1,n}|>\varepsilon] \le \mathbf E X_{1,n}^2=\frac {\mathbf E Y_{n}^2}n=\frac1n\to0 \end{gather} due to the fact that $|X_{k,n}|\le\frac M{\sqrt{n}}$ for all $k\ge2$. We shall also assume that $g_{k,n}\equiv0$. For each $n>1$ let the random variable $Y_n$ be defined as follows \begin{gather} \label{i25} Y_n:= \begin{cases} N_n,\! \!&\text{with probability}\ p_n:=\frac1{2N_n^2},\\ 0,\!\! & \text{with probability}\ 1-2p_n,\\ - N_n ,\!\! & \text{with probability}\ p_n, \end{cases} \end{gather} where $N_n\ge1$. Note that $\mathbf E Y_n=0$ and $\mathbf E Y_n^2=1$. For every $n>1$ we set \begin{gather} \label{i21} U_{n}:=X_2+X_3+\ldots+X_{n} \quad\text{and}\quad \underline U_{n}:=\min_{2\le i\le n}U_i. \end{gather} It is easy to see that $$ \{T_n>n\}=\left\{Y_n=N_n\right\}\cap \left\{\underline U_{n}>-N_n\right\}. $$ Noting now that $\underline U_{n}\ge -(n-1)M$, we infer that \begin{gather} \label{i27-} \{T_n>n\}=\{Y_n=N_n\},\quad\text{for any }N_n>(n-1)M. \end{gather} In this case we have \begin{align} \label{i28-} \nonumber E_n&=\mathbf E[S_{n,n};T_n>n] =\mathbf E\left[\frac{Y_n+U_{n}}{\sqrt{n}};Y_n=N_n\right]\\ \nonumber &=\mathbf P(Y_n=N_n)\mathbf E\left[\frac{N_n+U_{n}}{\sqrt{n}}\right] =\mathbf P(Y_n=N_n)\frac{N_n+\mathbf E U_{n}}{\sqrt{n}}\\ &=\mathbf P(Y_n=N_n)\frac{N_n}{\sqrt{n}}. \end{align} In particular, from \eqref{i27-} and \eqref{i28-} we conclude that \begin{align*} \mathbf P(T_n>n)=\mathbf P(Y_n=N_n)=\frac{E_n\sqrt{n}}{N_n}<\frac{E_n\sqrt{n}}{M(n-1)}=o(E_n) \end{align*} provided that $N_n>(n-1)M$. This example shows that \eqref{i20} cannot hold for all triangular arrays satisfying the Lindeberg condition. $\diamond$ \end{example} We now construct an array for which the assumption \eqref{i5} becomes necessary for the validity of \eqref{i20}. 
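Before turning to that construction, note that the closed forms \eqref{i27-} and \eqref{i28-} from Example~\ref{Lind} can be evaluated directly. The following Python sketch (an illustration only; the choices $M=1$ and $N_n=nM>(n-1)M$ are ours) computes the exact ratio $\mathbf P(T_n>n)/E_n$:

```python
import numpy as np

def survival_to_En_ratio(n, M=1.0):
    """Exact P(T_n > n) / E_n in Example Lind, with N_n = nM > (n-1)M."""
    N = n * M
    p = 1.0 / (2.0 * N**2)          # p_n = P(Y_n = N_n), see (i25)
    P_Tn = p                        # {T_n > n} = {Y_n = N_n} by (i27-)
    E_n = p * N / np.sqrt(n)        # E_n = P(Y_n = N_n) N_n / sqrt(n) by (i28-)
    return P_Tn / E_n               # = sqrt(n) / N_n = 1 / (M * sqrt(n))

for n in (10**2, 10**4, 10**6):
    print(n, survival_to_En_ratio(n))
```

The ratio decays like $n^{-1/2}$, in agreement with $\mathbf P(T_n>n)=o(E_n)$, whereas \eqref{i20} would predict the constant value $\sqrt{2/\pi}\approx0.8$.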
\begin{example} \label{Lind2} We consider again the model from the previous example and assume additionally that the variables $X_2,X_3,\dots$ have the Rademacher distribution, that is, $$ \mathbf P(X_k=\pm1)=\frac{1}{2}. $$ Finally, in order to have random walks on lattices, we shall assume that $N_n$ is a natural number. It is then clear that $r_n:=\frac{N_n}{\sqrt{n}}$ is the minimal deterministic number such that $$ \max_{k\le n}|X_{k,n}|\le r_n. $$ As in Example~\ref{Lind}, we shall assume that $g_{k,n}\equiv0$. In order to calculate $E_n$ we note that \begin{align*} E_n=\mathbf E[S_{n,n};T_n>n] &=\mathbf P\left(X_{1,n}=r_n\right) \mathbf E\left[r_n+\frac{U_{n}}{\sqrt{n}};r_n+\frac{\underline U_{n}}{\sqrt{n}}>0\right]\\ &=\mathbf P\left(X_{1,n}=r_n\right)\frac{1}{\sqrt{n}} \mathbf E\left[N_n+U_{n};N_n+\underline U_{n}>0\right]. \end{align*} It is well known that for $m\ge1$ the sequence $(N+U_m){\rm 1}_{\{N+\underline{U}_m>0\}}$ is a martingale with $U_1=\underline{U}_1=0$. This implies that $$ \mathbf E[N+U_m;N+\underline{U}_m>0]=N\quad\text{for all}\quad m,N\ge1. $$ Consequently, \begin{equation} \label{i23} E_n=p_n\frac{N_n}{\sqrt{n}}=p_n r_n. \end{equation} Furthermore, \begin{align*} \mathbf P(T_n>n)=\mathbf P\left(X_{1,n}=r_n\right) \mathbf P\left(\frac{N_n}{\sqrt{n}}+\frac{\underline U_{n}}{\sqrt{n}}>0\right) =p_n\mathbf P(N_n+\underline U_{n}>0). \end{align*} Using the reflection principle for the symmetric simple random walk, one can show that \begin{equation} \label{i24-} \mathbf P\left(N+\underline U_m>0\right)=\mathbf P(-N<U_m\le N) \quad\text{for all}\quad m,N\ge1. \end{equation} Consequently, $\mathbf P(T_n>n)=p_n\mathbf P(-N_n<U_{n}\le N_n)$. Combining this equality with \eqref{i23}, we obtain \begin{equation} \label{i27} \frac{\mathbf P(T_n>n)}{E_n}= \frac{1}{r_n}\mathbf P\left(-r_n<\frac{U_{n}}{\sqrt{n}}\le r_n\right). 
\end{equation} Using the central limit theorem, one obtains \begin{align} \label{i24} \mathbf P\left(-r_n<\frac{U_{n}}{\sqrt{n}}\le r_n\right) \sim\Psi\left(r_n\right), \end{align} where \begin{gather} \label{a3} \varphi(u):=\frac{1}{\sqrt{2\pi}}e^{-u^2/2} \qquad\text{and}\qquad \Psi(x):=2\int_0^{x^+}\varphi(u)du. \end{gather} We will postpone the proof of \eqref{i24-} and \eqref{i24} till the end of the paper. Assuming that \eqref{i24-} and \eqref{i24} are true, as a result we have $$ \frac{\mathbf P(T_n>n)}{E_n}\sim\frac{\Psi\left(r_n\right)}{r_n}. $$ Noting now that $\frac{\Psi(a)}{a}<2\varphi(0)=\sqrt{\frac{2}{\pi}}$ for every $a>0$, we conclude that the assumption $r_n\to0$ is necessary and sufficient for the validity of \eqref{i20}. More precisely, \begin{itemize} \item $\mathbf P(T_n>n)\sim\sqrt{\frac{2}{\pi}}E_n$ iff $r_n\to0$; \item $\mathbf P(T_n>n)\sim\frac{\Psi(a)}{a}E_n$ iff $r_n\to a>0$; \item $\mathbf P(T_n>n)=o(E_n)$ iff $r_n\to\infty$. \end{itemize} $\diamond$ \end{example} \section{Proofs.} In this section we are going to obtain estimates, which are valid for each fixed $n$. For that reason we will sometimes omit the subscript $n$ and introduce the following simplified notation: \begin{gather} \label{a0} T:=T_n,\quad X_k:=X_{k,n},\quad S_k:=S_{k,n},\quad g_k:=g_{k,n},\ \ 1\le k< n \end{gather} and \begin{gather} \label{a1} \rho:=r_n+g_n^*,\quad B_k^2:= \sum_{i=1}^k\mathbf E X_{i}^2,\quad B_{k,n}^2:=B_n^2-B_k^2=1-B_k^2,\ \ 1\le k< n. \end{gather} \subsection{Some estimates in the central limit theorem} For every integer~\mbox{$1\le k\le n$} and every real $y$ define \begin{gather} \label{a2} Z_k:=S_k-g_k,\ \widehat{Z}_k:=Z_k{\bf 1}\{T>k\} \ \text{and}\ Q_{k,n}(y):= \mathbf{P}\Big(y+\min_{k \le j\leq n}(Z_j-Z_k)>0\Big). 
\end{gather} \begin{lemma} \label{Arak} For all $y\in\mathbb{R}$ and for all $0\leq k<n$ with $B_{k,n}>0$ \begin{equation} \label{a4} \left|Q_{k,n}(y)-\Psi\Big(\frac{y}{B_{k,n}}\Big)\right| \le \frac{C_0\rho}{B_{k,n}}\ind{y>0}, \end{equation} where $C_0$ is an absolute constant. \end{lemma} \begin{proof} For non-random real $y$ define \begin{gather} \label{a5} q_{k,n}(y):= \mathbf{P}\Big(y+\min_{k \le j\leq n}(S_j-S_k)>0\Big),\quad n>k\ge1. \end{gather} It follows from Corollary 1 in Arak~\cite{Arak75} that there exists an absolute constant $C_A$ such that \begin{equation} \label{a6} \left|q_{k,n}(y)-\Psi\Big(\frac{y}{B_{k,n}}\Big)\right| \le\frac{C_A}{B_{k,n}}\max_{k<j\le n}\frac{\mathbf{E}|X_j|^3}{\mathbf{E}X_j^2}\le \frac{C_A r_n}{B_{k,n}}, \end{equation} where the maximum is taken over all $j$ satisfying $\mathbf{E}X_j^2>0$. In the second step we have used the inequality $\mathbf E|X_j|^3\le r_n\mathbf E X_j^2$, which follows from \eqref{i5}. We have from (\ref{a2}) that $|Z_k- S_k|=|g_k|\le g_n^*$. Hence, for $Q_{k,n}$ and $q_{k,n}$ defined in \eqref{a2} and \eqref{a5}, we have \begin{equation} \label{a7} q_{k,n}(y_-)\le Q_{k,n}(y)\le q_{k,n}(y_+),\quad\text{where}\quad y_\pm:=y\pm2g_n^*. \end{equation} Then we obtain from~\eqref{a6} that \begin{equation} \label{a8} \left|q_{k,n}(y_\pm)-\Psi\Big(\frac{y_\pm}{B_{k,n}}\Big)\right| \le \frac{C_A r_n}{B_{k,n}}. \end{equation} On the other hand, it is easy to see from \eqref{a3} that $$ \Big| \Psi\Big(\frac{y_\pm}{B_{k,n}}\Big)-\Psi\Big(\frac{y}{B_{k,n}}\Big)\Big| \le\frac{2\varphi(0)|y_\pm-y|}{B_{k,n}}=\frac{4\varphi(0)g_n^*}{B_{k,n}}. $$ Applying this inequality together with~\eqref{a7} and~\eqref{a8}, we immediately obtain~\eqref{a4} for $y>0$ with $C_0:=C_A+4\varphi(0)$. For $y\le 0$ inequality~\eqref{a4} follows immediately, since $Q_{k,n}(y)=0=\Psi\big(y/B_{k,n}\big)$. \end{proof} \begin{lemma} \label{L4} If $1\le m\le n$, then \begin{gather} \label{m11} \mathbf E S_m^+\ge\frac{3}{8}B_m-r_n. 
\end{gather} Moreover, for all $m$ satisfying $B_m\ge24(r_n+g_n^*)$ we have \begin{gather} \label{m12} \mathbf P(T>m)\le3\frac{\mathbf E \widehat{Z}_m}{B_m}. \end{gather} \end{lemma} \begin{proof} We will use the following extension of the Berry-Esseen inequality due to Tyurin~\cite{Tyurin2010}: \[ \sup_{x\in\mathbb{R}}|\mathbf P( S_m>x)-\mathbf P(B_m\eta>x)| \le 0.5606\frac{\sum_{j=1}^m\mathbf E|X_j|^3}{B_m^3} \le 0.5606\frac{r_n}{B_m}, \] when $B_m>0$. Here $\eta$ is a random variable that follows the standard normal distribution. This inequality implies that, for every $C>0$, \begin{align*} \mathbf E S_m^+&=\int_0^\infty\mathbf P(S_m>x)dx\ge\int_0^{CB_m}\mathbf P(S_m>x)dx\\ &\ge\int_0^{CB_m}\left(\mathbf P(B_m\eta>x)-0.5606\frac{r_n}{B_m}\right)dx =B_m\mathbf E(\eta^+\wedge C)-0.5606Cr_n. \end{align*} Further, \begin{align*} \mathbf E(\eta^+\wedge C) =\int_0^\infty(x\wedge C)\varphi(x)dx &=\int_0^C x\varphi(x)dx+C\int_C^\infty\varphi(x)dx\\ &=\varphi(0)-\varphi(C)+C\int_C^\infty\varphi(x)dx. \end{align*} Taking here $C=1/0.5606$ and using tables of the standard normal distribution, we conclude that $\mathbf E(\eta^+\wedge C)>0.38>\frac{3}{8}$ and \eqref{m11} holds. Next, according to Lemma 25 in \cite{DSW16}, \begin{gather} \label{m14} \mathbf E Z^+_m\mathbf P(T>m)\le\mathbf E \widehat{Z}_m,\qquad 1\le m\le n. \end{gather} Therefore, it remains to derive a lower bound for $\mathbf E Z^+_m$. We first note that \begin{gather*} \label{m14+} S_m=Z_m+g_m\le Z_m^++g_m^+\le Z_m^++g_n^*. \end{gather*} Hence, $S_m^+\le Z_m^++g_n^*$ and, taking into account \eqref{m11}, we get \begin{gather} \label{m15} \mathbf E Z_m^+\ge \mathbf E S_m^+-g_n^*\ge\frac{3}{8}B_m-(r_n+g_n^*). 
\end{gather} If $m$ is such that $\frac{B_m}{24}\ge r_n+g_n^*$, then we infer from \eqref{m14} and \eqref{m15} that \begin{gather*} \label{m16} \mathbf E \widehat{Z}_m\ge\mathbf E Z^+_m\mathbf P(T>m)\ge \left(\frac{3}{8}B_m-(r_n+g_n^*)\right)\mathbf P(T>m) \\ \ge \left(\frac{3}{8}-\frac{1}{24}\right) B_m\mathbf P(T>m) =\frac{1}{3} B_m\mathbf P(T>m). \end{gather*} Thus, \eqref{m12} is proven. \end{proof} \subsection{Estimates for expectations of $\widehat{Z}_k$.} \begin{lemma} \label{L3} Let $\alpha$ be a stopping time such that $1\le\alpha\le l\le n$ with probability one. Then \begin{equation} \label{m2} \mathbf{E}\widehat{Z}_\alpha- \mathbf{E}\widehat{Z}_l \le2g_n^* p(\alpha,l) \quad\text{with}\quad p(\alpha,l):=\mathbf{P}(\alpha<T,\alpha<l). \end{equation} Moreover, \begin{align} \label{m3} \mathbf{E}\widehat{Z}_\alpha- \mathbf{E}\widehat{Z}_l &\ge\mathbf E[X_T;\alpha<T\le l]-2g_n^*p(\alpha,l) \ge-(2g_n^*+r_n)p(\alpha,l). \end{align} In addition, the equality in (\ref{i20+}) takes place. \end{lemma} \begin{proof} Define events $$ A_1:=\{\alpha<T\le l\} \quad\text{and}\quad A_2:=\{\alpha<l<T\}. $$ Then, clearly, $ \{\alpha<T,\alpha<l\}=A_1\cup A_2. $ Using Lemma 20 from \cite{DSW16}, we obtain \begin{align} \nonumber \mathbf{E}\widehat{Z}_\alpha+\mathbf{E}[ S_T;T\le\alpha]&= -\mathbf{E}[ g_\alpha;\alpha<T]\\ \nonumber &=-\mathbf{E}[ g_\alpha;A_2]-\mathbf{E}[ g_l;\alpha=l<T]-\mathbf{E}[ g_\alpha;A_1], \\ \label{m6blue} \quad \mathbf{E}\widehat{Z}_l+\mathbf{E}[ S_T;T\le l]&= -\mathbf{E}[ g_l;T>l]=-\mathbf{E}[ g_l;A_2]-\mathbf{E}[ g_l;\alpha=l<T]. \end{align} Thus, \begin{gather} \label{m6} \mathbf{E}\widehat{Z}_\alpha- \mathbf{E}\widehat{Z}_l =\mathbf{E}[ S_T- g_\alpha;A_1] +\mathbf{E}[ g_l- g_\alpha;A_2]. \end{gather} Next, by the definition of $T$, $$ g_T\ge S_T=S_{T-1}+X_T> g_{T-1}+X_T. 
$$ Hence, \begin{align*} \mathbf{E}[ S_T- g_\alpha;A_1] \le\mathbf{E}[ g_T- g_\alpha;A_1] \le2g_n^*\mathbf{P}(A_1) \end{align*} and \begin{align*} \mathbf{E}[ S_T- g_\alpha;A_1] &\ge\mathbf{E}[ g_{T-1}- g_\alpha+X_T;A_1]\\ &\ge \mathbf{E}[ X_T;A_1]-2g_n^*\mathbf{P}(A_1) \ge-(2g_n^*+r_n)\mathbf{P}(A_1). \end{align*} Furthermore, \begin{gather*} \label{m7} |\mathbf{E}[ g_l- g_\alpha;A_2]|\le 2g_n^*\mathbf{P}(A_2). \end{gather*} Plugging these estimates into \eqref{m6}, we arrive at the desired bounds. The equality in (\ref{i20+}) follows from (\ref{m6blue}) with $l=n$. \end{proof} For every $h>0$ define \begin{gather} \label{d1} \nu(h):=\inf\{k\geq1:S_k\geq g_k+h\}=\inf\{k\geq1:Z_k\geq h\}. \end{gather} \begin{lemma} \label{L5} Suppose that $m\le n$ is such that the inequality \eqref{m12} takes place, \begin{gather} \label{d2} B_m\ge24g_n^* \quad\text{and}\quad h\ge6g_n^*. \end{gather} Then \begin{align} \label{d3} 2\mathbf{E}\widehat{Z}_{\nu(h)\wedge m} \le3\mathbf{E}\widehat{Z}_m\le4\mathbf{E}\widehat{Z}_n=4E_n, \quad \mathbf{P}(\widehat{Z}_{\nu(h)\wedge m}>0)\le \varkappa E_n, \\ \label{d4} 2\varkappa g_n^*E_n\ge\mathbf{E}\widehat{Z}_{\nu(h)\wedge m}-E_n \ge \delta(h)-2\varkappa g_n^*E_n, \end{align} where \begin{align} \label{d5} 0\ge \delta(h):=\mathbf E[X_T;n\ge T>\nu(h)\wedge m]\ge- \varkappa r_nE_n \quad\text{and}\quad \varkappa:=\frac2h+\frac4{B_m}. \end{align} In particular, \eqref{i33} takes place. \end{lemma} \begin{proof} First, we apply Lemma \ref{L3} with $l=m$ and $\alpha=\nu(h)\wedge m$. For this choice of the stopping time one has \begin{align*} p(\nu(h)\wedge m,m)&=\mathbf P\left(\nu(h)\wedge m<T,\nu(h)\wedge m<m\right)\\ &\le\mathbf P(\widehat{Z}_{\nu(h)\wedge m}\ge h) \le\frac{\mathbf E \widehat{Z}_{\nu(h)\wedge m}}{h}. 
\end{align*} Plugging this bound into \eqref{m2} and using the inequality $h\ge 6g_n^*$, we get \begin{align*} \mathbf{E}\widehat{Z}_{\nu(h)\wedge m}-\mathbf{E}\widehat{Z}_m \le\frac{2g_n^*}{h}\mathbf{E}\widehat{Z}_{\nu(h)\wedge m} \le\frac{\mathbf{E}\widehat{Z}_{\nu(h)\wedge m}}3 \end{align*} and hence \begin{align} \label{d6} \frac23\mathbf{E}\widehat{Z}_{\nu(h)\wedge m}\le\mathbf{E}\widehat{Z}_m. \end{align} Next, we apply Lemma \ref{L3} with $l=n$ and $\alpha=m$. In this case \mbox{$p(m,n)=\mathbf P(T>m)$} and we may use \eqref{m12}. Substituting these estimates into \eqref{m2} and using \eqref{d2}, we obtain \begin{align*} \mathbf{E}\widehat{Z}_m-\mathbf{E}\widehat{Z}_n \le 2g_n^* \mathbf P(T>m) \le\frac{6g_n^*}{B_m}\mathbf{E}\widehat{Z}_m \le\frac14\mathbf{E}\widehat{Z}_m. \end{align*} Therefore, \begin{align} \label{d7} \frac34\mathbf{E}\widehat{Z}_m\le\mathbf{E}\widehat{Z}_n. \end{align} We conclude from \eqref{d6} and \eqref{d7} that the first relation in \eqref{d3} takes place. In particular, from \eqref{m12} and \eqref{d7} we get that \eqref{i33} holds under assumptions of Lemma~\ref{L5}. At last, we are going to apply Lemma \ref{L3} with $l=n>m$ and $\alpha=\nu(h)\wedge m$. For this choice of the stopping time one has \begin{align} \label{d8} \nonumber p(\nu(h)\wedge m,n)&=\mathbf P\left(T>\nu(h)\wedge m\right) =\mathbf P(\widehat{Z}_{\nu(h)\wedge m}>0)\\ \nonumber &\le\mathbf P(\widehat{Z}_{\nu(h)\wedge m}\ge h)+\mathbf P\left(T> m\right)\\ &\le\frac{\mathbf E \widehat{Z}_{\nu(h)\wedge m}}{h}+\frac{3\mathbf E \widehat{Z}_m}{B_m} \le\frac{2E_n}{h}+\frac{4E_n}{B_m}=\varkappa E_n. \end{align} Plugging this bound into \eqref{m2} and \eqref{m3}, we immediately obtain \eqref{d4}. The second inequality in \eqref{d3} also follows from \eqref{d8}; and using \eqref{i5} together with \eqref{d8} we find~\eqref{d5}. Thus, all assertions of Lemma \ref{L5} are proved. 
\end{proof} \subsection{Proof of Theorem~\ref{thm:main}.} According to the representation (36) in \cite{DSW16}, \begin{align} \label{d11} \nonumber \mathbf{P}(T>n) &=\mathbf{E}\left[Q_{\nu(h)\wedge m,n}(Z_{\nu(h)\wedge m});T>\nu(h)\wedge m\right]\\ &=\mathbf{E}Q_{\nu(h)\wedge m,n}(\widehat Z_{\nu(h)\wedge m}). \end{align} \begin{lemma} \label{L2} Suppose that all assumptions of Lemma \ref{L5} are fulfilled and that \mbox{$B_{m,n}>0$.} Then one has \begin{align} \label{d13} \nonumber \left|{\mathbf{P}}(T>n)-\mathbf{E}\Psi\Big( \frac{\widehat Z_{\nu(h)\wedge m}} {B_{{\nu(h)\wedge m},n}}\Big)\right| &\le \frac{C_0\rho}{B_{m,n}}\mathbf{P}(\widehat Z_{\nu(h)\wedge m}>0)\\ &\le2\varphi(0)\frac{1.3C_0\varkappa\rho E_n}{B_{m,n}} . \end{align} In addition, \begin{gather} \label{d14} \mathbf E\Psi\Big(\frac {\widehat Z_{\nu(h)\wedge m}}{B_{{\nu(h)\wedge m},n}}\Big) \le \frac{2\varphi(0) E_n(1+2\varkappa g_n^*)}{B_{m,n}}, \\ \label{d15} \mathbf E\Psi\Big(\frac {\widehat Z_{\nu(h)\wedge m}}{B_{{\nu(h)\wedge m},n}}\Big) \ge 2\varphi(0) E_n\Big(1-\frac{ (r_n+h)^2}{6}-2\varkappa g_n^*-\varkappa r_n\Big). \end{gather} \end{lemma} \begin{proof} Using (\ref{a4}) with $y=\widehat Z_{\nu(h)\wedge m}$, we obtain the first inequality in (\ref{d13}) as a consequence of (\ref{d11}). The second inequality in (\ref{d13}) follows from (\ref{d3}). Next, it has been shown in \cite[p. 3328]{DSW16} that \begin{gather} \label{d16} 2\varphi(0)a\ge\Psi(a)\ge 2\varphi(0)a(1-a^2/6)\quad \text{for all }a\ge0. \end{gather} Recall that $0\le z:=\widehat Z_{\nu(h)\wedge m}\le r_n+h$ and $B_n=1$. Hence, by (\ref{d16}), \begin{gather} \label{d18} \Psi\Big(\frac z{B_{{\nu(h)\wedge m},n}}\Big)\le\Psi\Big(\frac z{B_{m,n}}\Big) \le\frac{2\varphi(0) z}{B_{m,n}}, \\ \label{d19} \Psi\Big(\frac z{B_{{\nu(h)\wedge m},n}}\Big)\ge\Psi\Big(\frac z{B_n}\Big) \ge\frac{2\varphi(0) z}{B_n}\Big(1-\frac{ z^2}{6B_n^2}\Big) \ge{2\varphi(0) z}\Big(1-\frac{ (r_n+h)^2}{6}\Big). 
\end{gather} Taking mathematical expectations in (\ref{d18}) and (\ref{d19}) with $z=\widehat Z_{\nu(h)\wedge m}$, we obtain: \begin{gather} \label{d20} \frac{2\varphi(0) \mathbf E\widehat Z_{\nu(h)\wedge m}}{B_{m,n}}\ge \mathbf E\Psi\Big(\frac {\widehat Z_{\nu(h)\wedge m}}{B_{{\nu(h)\wedge m},n}}\Big) \ge{2\varphi(0) \mathbf E\widehat Z_{\nu(h)\wedge m}}\Big(1-\frac{ (r_n+h)^2}{6}\Big). \end{gather} Now~(\ref{d14}) and~(\ref{d15}) follow from~(\ref{d20}) together with~(\ref{d3}) and~(\ref{d4}). \end{proof} \begin{lemma} \label{L7} Assume that $\rho\le1/64$. Then inequalities \eqref{i31} and \eqref{i32} take place with some absolute constants $C_1$ and $C_2$. \end{lemma} \begin{proof} Set \begin{equation} \label{d21} m:=\min\{j\le n:B_j\ge\frac32\rho^{1/3}\} \quad\text{and}\quad h:=\rho^{1/3}. \end{equation} Noting that $r_n\le\rho\le\rho^{1/3}/4^2$ we obtain \begin{gather} \label{d22} B_m^2=B_{m-1}^2+\mathbf E X_m^2<\left(\frac32\rho^{1/3}\right)^2+r_n^2 \le\frac94\rho^{2/3}+\frac1{4^6} <\frac17. \end{gather} Consequently, $B_{m,n}^2=1-B_m^2$ and we have from \eqref{d21} that \begin{equation} \label{d23} B_{m,n}^2>\frac67,\quad 24\rho\le\frac{24}{4^2}\rho^{1/3}=\frac32\rho^{1/3}\le B_m, \quad 6g_n^*<\frac{6}{4^2}\rho^{1/3}<\rho^{1/3}=h. \end{equation} Thus, all assumptions of Lemmas~\ref{L5} and~\ref{L2} are satisfied. Hence, Lemma~\ref{L2} implies that \begin{gather} \label{d24} 2\varphi(0) E_n(1-\rho_1-\rho_2 -2\varkappa \rho)\le\mathbf{P}(T>n), \\ \label{d25} \mathbf{P}(T>n)\le2\varphi(0) E_n(1+\rho_1)(1+2\varkappa \rho)(1+\rho_3), \end{gather} where we used that $2g_n^*+r_n\le2\rho$ and \begin{gather} \label{d26} \rho_1:=1.3C_0\varkappa\rho,\quad\rho_2:=\frac{ (r_n+h)^2}{6}, \quad \rho_3:=\frac1{B_{m,n}}-1. \end{gather} Now from \eqref{d5} and \eqref{d21} with $\rho^{1/3}\le1/4$ we have \begin{gather*} \label{d26+} \rho\varkappa=\frac{2\rho}h+\frac{4\rho}{B_m} \le2\rho^{2/3}+\frac{4\rho^{2/3}}{3/2}<4.7\rho^{2/3}, \quad r_n+h\le\frac1{4^2}\rho^{1/3}+\rho^{1/3}. 
\end{gather*} Then, by \eqref{d22}, \begin{align*} \frac1{B_{m,n}}=\frac{B_{m,n}}{B_{m,n}^2}=\frac{\sqrt{1-B_m^2}}{1-B_m^2} \le\frac{1-B_m^2/2}{1-B_m^2}=1+\frac{B_m^2}{2B_{m,n}^2} <1+1.4\rho^{2/3}. \end{align*} So, these calculations and \eqref{d26} yield \begin{gather} \label{d28} \rho_1<5C_0\rho^{2/3},\quad\rho_2<0.2\rho^{2/3}, \quad \rho_3<1.4\rho^{2/3}, \quad2\varkappa\rho<9.4\rho^{2/3}. \end{gather} Substituting \eqref{d28} into \eqref{d24} we obtain \eqref{i31} with any $C_1\ge 5C_0+9.6$. On the other hand, from \eqref{d28} and \eqref{d25} we may obtain \eqref{i32} with a constant $C_2$ which may be calculated in the following way: \begin{gather*} \label{d25+} C_2=\sup_{\rho^{1/3}\le1/4}\left[5C_0(1+2\varkappa \rho)(1+\rho_3)+9.4(1+\rho_3)+1.4\right]<\infty. \end{gather*} \end{proof} Thus, when $\rho\le1/4^3$, both assertions of Theorem~\ref{thm:main} immediately follow from Lemma \ref{L7}. If $\rho>1/4^3$, then \eqref{i31} is valid with any $C_1\ge4^2=16$, because in this case the right-hand side of \eqref{i31} is negative. Let us turn to the upper bound~\eqref{i32}. If $\rho\le\frac{1}{24}$ but $\rho>\frac{1}{64}$, then \eqref{i33} holds for $m=n$; as a result, we have from \eqref{i33} with any $C_2\ge32/\varphi(0)$ that $$ \mathbf P(T_n>n)\le 4E_n\le4^3E_n\rho^{2/3}\le2\varphi(0)E_n(1+C_2\rho^{2/3}) \quad\text{for}\quad \rho^{1/3}>1/4. $$ So, we have proved all assertions of Theorem~\ref{thm:main} in all cases. \subsection{Proof of Corollary~\ref{cor:ex0}.} In order to apply Corollary~\ref{cor:asymp} we introduce the following triangular array: \begin{gather} \label{ex21} X_{j,n}:=\frac{u_{j,n}X_j}{\sigma_n},\quad g_{j,n}:=\frac{G_{j,n}}{\sigma_n}, \quad 1\le j\le n,\ n\ge1. \end{gather} The assumptions in \eqref{ex6} and \eqref{ex7} imply that the array introduced in \eqref{ex21} satisfies \eqref{i5} and \eqref{i6}. 
Thus, \begin{align*} \mathbf P\left(\tau_n>n\right)=\mathbf P(T_n>n) &\sim\sqrt{\frac{2}{\pi}}\mathbf E[S_{n,n}-g_{n,n};T_n>n]\\ &=\sqrt{\frac{2}{\pi}}\Bigl(\mathbf E[S_{n,n};T_n>n] -g_{n,n}\mathbf P(T_n>n)\Bigr). \end{align*} Here we also used \eqref{i20+}. Since $g_{n,n}\to0$, we conclude that $$ \mathbf P\left(\tau_n>n\right) \sim \sqrt{\frac{2}{\pi}}\mathbf E[S_{n,n};T_n>n]. $$ Noting that $S_{n,n}=U_{n,n}/\sigma_n$, we get \begin{equation} \label{ex2.1} \mathbf P\left(\tau_n>n\right) \sim \sqrt{\frac{2}{\pi}}\frac{1}{\sigma_n} \mathbf E[U_{n,n};\tau_n>n]. \end{equation} By the optional stopping theorem, $$ \mathbf E[U_{n,n};\tau_n>n]=-\mathbf E[U_{\tau_n,n};\tau_n\le n]. $$ It follows from \eqref{ex5} that, for every fixed $k\ge1$, \begin{equation} \label{ex19} U_{k,n}\to U_k\ \text{a.s.} \end{equation} and, taking into account the continuity of distribution functions, \begin{align} \label{ex20} \nonumber \mathbf P(\tau_n>k) &=\mathbf P(U_{1,n}>G_{1,n},U_{2,n}>G_{2,n},\ldots,U_{k,n}>G_{k,n})\\ &\hspace{1cm}\to \mathbf P(U_{1}>g_1,U_{2}>g_2,\ldots,U_{k}>g_k) =\mathbf P(\tau>k). \end{align} Obviously, \eqref{ex20} implies that \begin{equation} \label{ex29} \mathbf P(\tau_n=k)\to \mathbf P(\tau=k)\quad \text{for every }k\ge1. \end{equation} Furthermore, it follows from the assumptions \eqref{ex1} and \eqref{ex6} that \begin{equation} \label{ex25} |U_{\tau_n,n}|\le M\quad\text{on the event }\{\tau_n\le n\}. \end{equation} Then, combining \eqref{ex19}, \eqref{ex29} and \eqref{ex25}, we conclude that \begin{equation} \label{ex22} \mathbf E[U_{\tau_n,n};\tau_n\le k] =\sum_{j=1}^k\mathbf E[U_{j,n};\tau_n=j] \to \sum_{j=1}^k\mathbf E[U_{j};\tau=j] =\mathbf E[U_\tau;\tau\le k]. \end{equation} Note also that, by \eqref{ex25} and \eqref{ex20}, \begin{equation*} \limsup_{n\to\infty}|\mathbf E[U_{\tau_n,n};k<\tau_n\le n]| \le M\limsup_{n\to\infty}\mathbf P(\tau_n>k). 
\end{equation*} Therefore, \begin{align} \label{ex23} \nonumber \limsup_{n\to\infty} \mathbf E[U_{\tau_n,n};\tau_n\le n] &\le \limsup_{n\to\infty} \mathbf E[U_{\tau_n,n};\tau_n\le k] +\limsup_{n\to\infty}|\mathbf E[U_{\tau_n,n};k<\tau_n\le n]|\\ &\le\mathbf E[U_\tau;\tau\le k]+M\mathbf P(\tau>k) \end{align} and \begin{align} \label{ex24} \nonumber \liminf_{n\to\infty} \mathbf E[U_{\tau_n,n};\tau_n\le n] &\ge \liminf_{n\to\infty} \mathbf E[U_{\tau_n,n};\tau_n\le k] -\limsup_{n\to\infty}|\mathbf E[U_{\tau_n,n};k<\tau_n\le n]|\\ &\ge\mathbf E[U_\tau;\tau\le k]-M\mathbf P(\tau>k). \end{align} Letting $k\to\infty$ in \eqref{ex23} and \eqref{ex24}, and noting that $\tau$ is almost surely finite, we infer that $$ \mathbf E[U_{\tau_n,n};\tau_n\le n]\to \mathbf E[U_\tau]. $$ Consequently, by the optional stopping theorem, $$ \mathbf E[U_{\tau_n,n};\tau_n>n]=-\mathbf E[U_{\tau_n,n};\tau_n\le n]\to \mathbf E[-U_\tau]. $$ Plugging this into \eqref{ex2.1}, we obtain the desired result. \subsection{Calculations related to Example~\ref{Lind2}} \begin{lemma} \label{Ex2} For the simple symmetric random walk $\{U_m\}$ one has $$ \mathbf P\left(N+\underline U_m>0\right)=\mathbf P(-N<U_m\le N) \quad\text{for all}\quad m,N\ge1 $$ and $$ \sup_{N\ge1}\left|\frac{\mathbf P(-N<U_{n}\le N)}{\Psi(N/\sqrt{n})}-1\right|\to0. $$ \end{lemma} \begin{proof} By the reflection principle for symmetric simple random walks, \begin{equation*} \mathbf P\left(N+U_{m}=k, N+\underline U_{m}\le 0\right)= \mathbf P(U_m=N+k)\quad\text{for every }k\ge1. \end{equation*} Thus, by the symmetry of the random walk $U_m$, $$ \mathbf P\left(N+U_{m}>0, N+\underline U_{m}\le 0\right) =\mathbf P(U_m<-N)=\mathbf P(U_m>N). $$ Therefore, \begin{align*} \mathbf P\left(N+\underline U_{m}>0\right) &=\mathbf P\left(N+U_{m}>0\right)-\mathbf P\left(N+U_{m}>0, N+\underline U_{m}\le 0\right)\\ &=\mathbf P(U_m>-N)-\mathbf P(U_m>N) =\mathbf P(-N< U_{m}\le N). \end{align*} We now prove the second statement. 
Recall that $U_n$ is the sum of $n-1$ independent, Rademacher distributed random variables. By the central limit theorem, $U_{n}/\sqrt{n-1}$ converges in distribution to the standard normal law. Therefore, $U_{n}/\sqrt{n}$ has the same limit. This means that $$ \varepsilon_n^2:=\sup_{x>0}|\mathbf P(-x\sqrt{n}<U_n\le x\sqrt{n})-\Psi(x)| \to0. $$ Taking into account that $\Psi(x)$ is increasing, we conclude that, for every $\delta>0$, $$ \sup_{x\ge\delta}\left|\frac{\mathbf P(-x\sqrt{n}<U_n\le x\sqrt{n})}{\Psi(x)} -1\right|\le\frac{\varepsilon_n^2}{\Psi(\delta)}. $$ Choose here $\delta=\varepsilon_n$. Noting that $\Psi(\varepsilon_n)\sim 2\varphi(0)\varepsilon_n$, we obtain $$ \sup_{N\ge\varepsilon_n\sqrt{n}}\left|\frac{\mathbf P(-N<U_n\le N)} {\Psi(N/\sqrt{n})}-1\right| \le\frac{\varepsilon_n^2}{\Psi(\varepsilon_n)} \sim\frac{\varepsilon_n}{2\varphi(0)}\to0. $$ It remains to consider the case $N\le\varepsilon_n\sqrt{n}$. Here we shall use the local central limit theorem. Since $U_n$ has period $2$, $$ \sup_{k:\ k\equiv n-1({\rm mod}2)} |\sqrt{n-1}\mathbf P(U_n=k)-2\varphi(k/\sqrt{n-1})|\to0. $$ Noting that $$ \sup_{k\le \varepsilon_n\sqrt{n}} |\varphi(k/\sqrt{n-1})-\varphi(0)|\to0, $$ we obtain $$ \sup_{N\le \varepsilon_n\sqrt{n}} \left|\frac{\sqrt{n-1}\mathbf P(-N<U_n\le N)}{2\varphi(0)m(n,N)}-1\right|\to0, $$ where $$ m(n,N)=\#\{k\in(-N,N]:\ k\equiv n-1({\rm mod}2)\}. $$ Since the interval $(-N,N]$ contains $N$ even and $N$ odd lattice points, $m(n,N)=N$ for all $n$, $N\ge 1$. Consequently, $$ \sup_{N\le \varepsilon_n\sqrt{n}} \left|\frac{\sqrt{n-1}\mathbf P(-N<U_n\le N)}{2\varphi(0)N}-1\right|\to0. $$ It remains now to notice that $$ \Psi(N/\sqrt{n})\sim \frac{2\varphi(0)N}{\sqrt{n}} $$ uniformly in $N\le \varepsilon_n\sqrt{n}$. \end{proof} \end{document}
arXiv
At Central Middle School the $108$ students who take the AMC 8 meet in the evening to talk about problems and eat an average of two cookies apiece. Walter and Gretel are baking Bonnie's Best Bar Cookies this year. Their recipe, which makes a pan of $15$ cookies, lists these items: $\bullet$ $1\frac{1}{2}$ cups of flour $\bullet$ $2$ eggs $\bullet$ $3$ tablespoons butter $\bullet$ $\frac{3}{4}$ cups sugar $\bullet$ $1$ package of chocolate drops They will make only full recipes, no partial recipes. They learn that a big concert is scheduled for the same night and attendance will be down $25\%.$ How many recipes of cookies should they make for their smaller party? The $108\cdot 0.75=81$ students need $2$ cookies each so $162$ cookies are to be baked. Since $162\div 15=10.8,$ Walter and Gretel must bake $\boxed{11}$ recipes. A few leftovers are a good thing!
\begin{document} \addtocontents{toc}{\protect\setcounter{tocdepth}{-1}} \title{Latticed $k$-Induction \\ with an Application to Probabilistic Programs\thanks{\setlength{\leftskip}{0em} This work has been partially funded by the ERC Advanced Project FRAPPANT under grant No.~787914. } } \authorrunning{K.~Batz et al.} \author{Kevin Batz\inst{1}$^{\text{(\Letter)}}$\orcidID{0000-0001-8705-2564} \and Mingshuai Chen\inst{1}$^{\text{(\Letter)}}$\orcidID{0000-0001-9663-7441} \and Benjamin Lucien Kaminski\inst{2}$^{\text{(\Letter)}}$\orcidID{0000-0001-5185-2324} \and Joost-Pieter Katoen\inst{1}$^{\text{(\Letter)}}$\orcidID{0000-0002-6143-1926} \and Christoph Matheja\inst{3}$^{\text{(\Letter)}}$\orcidID{0000-0001-9151-0441} \and Philipp Schr\"oer\inst{1}\orcidID{0000-0002-4329-530X} } \institute{ RWTH Aachen University, Aachen, Germany\\ \email{\{kevin.batz,chenms,katoen\}@cs.rwth-aachen.de} \and University College London, London, United Kingdom\\ \email{[email protected]} \and ETH Z\"urich, Z\"urich, Switzerland\\ \email{[email protected]} } \maketitle \setlength{\floatsep}{1\baselineskip} \setlength{\textfloatsep}{1\baselineskip} \setlength{\intextsep}{1\baselineskip} \setcounter{footnote}{0} \begin{abstract} We revisit two well-established verification techniques, \emph{$k$-in{\-}duc{\-}tion} and \emph{bounded model checking} (BMC), in the more general setting of fixed point theory over complete lattices. Our main theoretical contribution is \emph{latticed $k$-induction}, which (i) generalizes classical $k$-induction for verifying transition systems, (ii) generalizes Park induction for bounding fixed points of monotonic maps on complete lattices, and (iii) extends from naturals $k$ to transfinite ordinals $\ensuremath{\kappa}$, thus yielding \emph{$\ensuremath{\kappa}$-induction}. The lattice-theoretic understanding of $k$-induction and BMC enables us to apply both techniques to the \emph{fully automatic verification of infinite-state probabilistic programs}. 
Our prototypical implementation manages to automatically verify non-trivial specifications for probabilistic programs taken from the literature that---using existing techniques---cannot be verified without synthesizing a stronger inductive invariant first. \keywords{$k$-induction \and Bounded model checking \and Fixed point theory \and Probabilistic programs \and Quantitative verification} \end{abstract} \lstset{ basicstyle=\ttfamily, keywords=[3]{while,if,else,nat}, tabsize=2, breaklines=true } \section{Introduction} \input{introduction} \section{Verification as a Fixed Point Problem} \label{sec:problem_statement} \input{bmc_k_induction_intro} \section{Latticed $k$-Induction} \label{sec:k_induction} \input{k_induction_complete_lattices} \section{Latticed vs.~Classical $k$-Induction} \label{sec:latticed-vs-classical} \input{latticed_vs_classical} \section{Latticed Bounded Model Checking} \label{sec:bmc} \input{bmc_complete_lattice} \section{Probabilistic Programs} \label{sec:ppwp} \input{pgcl} \input{wp} \section{BMC and $k$-Induction for Probabilistic Programs} \label{sec:application2pps} \input{bmc_k_induction_ppl_intro} \subsection{Linear Expectations} \label{sec:linear_expectations} \input{linear_expectations} \subsection{Deciding Quantitative Entailments between Linear Expectations} \label{sec:deciding_entailments} \input{decidability} \subsection{Computing Minima of Linear Expectations} \label{sec:computing_minima} \input{admissibility} \section{Implementation} \label{sec:implementation} \input{implementation} \section{Experiments} \label{sec:impl:experiments} \input{experiments} \section{Conclusion} \label{sec:conclusion} \input{conclusion} \subsubsection*{Acknowledgements.} Benjamin Lucien Kaminski is indebted to Larry~Fischer for his linguistic advice---this time on the word \enquote{latticed}. 
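As a concrete, if tiny, illustration of the classical $k$-induction scheme that item (i) generalizes, here is an explicit-state sketch — our own toy rendition, not the paper's algorithm. The base case unrolls the transition relation $k$ steps from the initial states; the inductive step demands that any $k$ consecutive states inside the candidate invariant step only to invariant states.

```python
def k_inductive(states, init, step, inv, k):
    """Classical k-induction for an explicit-state transition system.

    Base case: every execution of length <= k from an initial state
    stays inside the invariant `inv`.
    Inductive step: after any k consecutive states satisfying `inv`,
    every successor satisfies `inv` as well.
    """
    # Base case: bounded unrolling (essentially BMC up to depth k).
    frontier = set(init)
    for _ in range(k + 1):
        if not all(inv(s) for s in frontier):
            return False
        frontier = {t for s in frontier for t in step(s)}

    # Inductive step: enumerate all length-k paths staying inside inv.
    def inv_paths(prefix):
        if len(prefix) == k:
            yield prefix
        else:
            for t in step(prefix[-1]):
                if inv(t):
                    yield from inv_paths(prefix + [t])

    for s in states:
        if inv(s):
            for path in inv_paths([s]):
                if not all(inv(t) for t in step(path[-1])):
                    return False
    return True

# Toy system: 0 <-> 1 is the reachable part; 2 -> 3 -> 3 is unreachable junk.
STATES = {0, 1, 2, 3}
STEP = {0: {1}, 1: {0}, 2: {3}, 3: {3}}.get
INIT = {0}
INV = lambda s: s != 3

print(k_inductive(STATES, INIT, STEP, INV, 1))  # -> False
print(k_inductive(STATES, INIT, STEP, INV, 2))  # -> True
```

In the toy system the unreachable transition $2\to3$ defeats plain ($k=1$) induction, while $k=2$ succeeds because no invariant path of length $2$ passes through state $2$ — the usual motivation for taking $k>1$.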
\appendix \section{Appendix} \input{app-lattices} \input{app-latticed_vs_classical} \input{app-linear} \input{app-entailment-sound} \input{app-omega-inductivity} \section{Benchmarks} \label{app:benchmarks} \input{app-benchmarks-wp} \end{document}
CMA/AMSI Research Symposium "Asymptotic Geometric Analysis, Harmonic Analysis and Related Topics"
February 21-24, 2006 | Murramarang (NSW)
Editor(s): Alan McIntosh, Pierre Portal
Proc. Centre Math. Appl., 42: 135pp. (2007).

This volume contains the proceedings of the CMA/AMSI Research Symposium on "Asymptotic Geometric Analysis, Harmonic Analysis and Related Topics", organized by Andrew Hassell, Alan McIntosh, Shahar Mendelson, Pierre Portal, and Fyodor Sukochev at Murramarang (NSW) in February 2006. The meeting was sponsored by the Centre for Mathematics and its Applications (Australian National University) and the Australian Mathematical Sciences Institute, whose support is gratefully acknowledged. The Symposium covered a variety of topics in functional, geometric, and harmonic analysis, and brought together experts, early career researchers, and doctoral students from Australia, Canada, Finland, France, Germany, Israel, and the USA. It is our hope that this volume reflects the lively research atmosphere of this conference, and we are glad to open it with a result of Ian Doust, Florence Lancien, and Gilles Lancien, which was essentially discovered during the symposium.

Book Information - Proceedings of the Centre for Mathematics and its Applications, Volume 42. Rights: Copyright © 2007, Centre for Mathematics and its Applications, Mathematical Sciences Institute, The Australian National University. This book is copyright. Apart from any fair dealing for the purpose of private study, research, criticism or review as permitted under the Copyright Act, no part may be reproduced by any process without permission. Inquiries should be made to the publisher. First available in Project Euclid: 18 November 2014. Australian National University, Mathematical Sciences Institute.

Preface and Poster. Proceedings of the Centre for Mathematics and its Applications Vol. 42 (2007).

Spectral theory for linear operators on $L^1$ or $C(K)$ spaces. Ian Doust, Florence Lancien, Gilles Lancien. Proceedings of the Centre for Mathematics and its Applications Vol. 42, 1-10 (2007). It is known that on a Hilbert space, the sum of a real scalar-type operator and a commuting well-bounded operator is well-bounded. The corresponding property has been shown to fail on $L^p$ spaces for $1 < p \neq 2 < \infty$. We show that it does hold, however, on every Banach space $X$ such that $X$ or $X^*$ is a Grothendieck space. This class notably includes $L^1$ and $C(K)$ spaces.

Vector-valued singular integrals, and the border between the one-parameter and the multi-parameter theories. Tuomas P. Hytönen. Proceedings of the Centre for Mathematics and its Applications Vol. 42, 11-41 (2007).

Function theory in sectors and the analytic functional calculus for systems of operators. Brian Jefferies. The connection between holomorphic and monogenic functions in sectors is used to construct an analytic functional calculus for several sectorial operators acting in a Banach space. The results are applied to the $H^\infty$-functional calculus for the differentiation operators on a Lipschitz surface.

On an operator-valued $T(1)$ theorem by Hytönen and Weis. Cornelia Kaiser. We consider generalized Calderón-Zygmund operators whose kernel takes values in the space of all continuous linear operators between two Banach spaces.
In the spirit of the $T(1)$ theorem of David and Journé we prove boundedness results for such operators on vector-valued Riesz potential spaces. This improves and generalizes a result by Hytönen and Weis.

A remark on the $H^\infty$-calculus. Nigel J. Kalton. If $A,B$ are sectorial operators on a Hilbert space with the same domain and range, and if $\parallel Ax \parallel \approx \parallel Bx \parallel$ and $\parallel A^{-1}x \parallel \approx \parallel B^{-1}x \parallel$, then it is a result of Auscher, McIntosh and Nahmod that if $A$ has an $H^\infty$-calculus then so does $B$. On an arbitrary Banach space this is true with the additional hypothesis on $B$ that it is almost R-sectorial, as was shown by the author, Kunstmann and Weis in a recent preprint. We give an alternative approach to this result.

Wrapping Brownian motion and heat kernels on compact Lie groups. David Maher. The fundamental solution of the heat equation on $\mathbb{R}^n$ is known as the heat kernel, which is also the transition density of a Brownian motion. Similar statements hold when $\mathbb{R}^n$ is replaced by a Lie group. We briefly demonstrate how the results on $\mathbb{R}^n$ concerning the heat kernel and Brownian motion may be easily transferred to compact Lie groups using the wrapping map of Dooley and Wildberger.

Remarks on the Rademacher-Menshov theorem. Christopher Meaney. Proceedings of the Centre for Mathematics and its Applications Vol. 42, 100-110 (2007). We describe Salem's proof of the Rademacher-Menshov Theorem, which shows that one constant works for all orthogonal expansions in all $L^2$-spaces. By changing the emphasis in Salem's proof we produce a lower bound for sums of vectors coming from bi-orthogonal sets of vectors in a Hilbert space. This inequality is applied to sums of columns of an invertible matrix and to Lebesgue constants.
Commutator estimates in the operator $L^p$-spaces. Denis Potapov, Fyodor Sukochev. We consider commutator estimates in non-commutative (operator) $L^p$-spaces associated with a general semi-finite von Neumann algebra. We discuss the difficulties which appear when one considers commutators with an unbounded operator in non-commutative $L^p$-spaces with $p \neq \infty$. We explain those difficulties using the example of the classical differentiation operator.

The atomic decomposition for tent spaces on spaces of homogeneous type. Emmanuel Russ. In the Euclidean context, tent spaces, introduced by Coifman, Meyer and Stein, admit an atomic decomposition. We generalize this decomposition to the case of spaces of homogeneous type.
\begin{document} \title{ Protection zone in a diffusive predator-prey model with Beddington-DeAngelis functional response\thanks{Supported by the National Natural Science Foundation of China (11171048).}} \author{Xiao He$^{\rm a,b}$ \quad Sining Zheng$^{\rm b,}${\thanks{Corresponding author. E-mail: [email protected]\,(X. He), [email protected]\,(S.N. Zheng)}}\\ \footnotesize $^{\rm a}$Department of Mathematics, Dalian Minzu University, Dalian 116600, P.R. China\\ \footnotesize$^{\rm b}$School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, P. R. China} \maketitle \newtheorem{theorem}{Theorem} \newtheorem{definition}{Definition}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{proposition}{Proposition}[section] \newtheorem{corollary}{Corollary}[section] \newtheorem{remark}{Remark} \renewcommand{\theequation}{\thesection.\arabic{equation}} \catcode`@=11 \@addtoreset{equation}{section} \catcode`@=12 \begin{abstract} In any reaction-diffusion system of predator-prey models, the population densities of species are determined by the interactions between them, together with the influences from the spatial environments surrounding them. Generally, the prey species would die out when their birth rate is too low, the habitat size is too small, the predator grows too fast, or the predation pressure is too high. To save the endangered prey species, some human interference is useful, such as creating a protection zone where the prey could cross the boundary freely but the predator is prohibited from entering. This paper studies the existence of positive steady states of a predator-prey model with reaction-diffusion terms, Beddington-DeAngelis type functional response and no-flux boundary conditions.
It is shown that there is a threshold value $\theta_0$ which characterizes the refuge ability of prey such that the positivity of prey population can be ensured if either the prey's birth rate satisfies $\theta\geq\theta_0$ (no matter how large the predator's growth rate is) or the predator's growth rate satisfies $\mu\le 0$, while a protection zone $\Omega_0$ is necessary for such positive solutions if $\theta<\theta_0$ with $\mu>0$ properly large. The more interesting finding is that there is another threshold value $\theta^*=\theta^*(\mu,\Omega_0)<\theta_0$, such that the positive solutions do exist for all $\theta\in(\theta^*,\theta_0)$. Letting $\mu\rightarrow\infty$, we get the third threshold value $\theta_1=\theta_1(\Omega_0)$ such that if $\theta>\theta_1(\Omega_0)$, prey species could survive no matter how large the predator's growth rate is. In addition, we get the fourth threshold value $\theta_*$ for negative $\mu$ such that the system admits positive steady states if and only if $\theta>\theta_*$. All these results match well with the mechanistic derivation for the B-D type functional response recently given by Geritz and Gyllenberg [A mechanistic derivation of the DeAngelis-Beddington functional response, J. Theoret. Biol. 314 (2012) 106--108]. Finally, we obtain the uniqueness of positive steady states for $\mu$ properly large, as well as the asymptotic behavior of the unique positive steady state as $\mu\rightarrow\infty$. \begin{description} \item[MSC:] 92D40, 35J47, 35K57 \item[Keywords:] Reaction-Diffusion; Predator-Prey; Beddington-DeAngelis type functional response; Protection zone; Bifurcation \end{description} \end{abstract} \section{Introduction} \mbox\indent Biological resources are renewable, but many have been exploited unreasonably. Nowadays, some species cannot survive in their habitat without human intervention. 
Such interventions have included establishing banned fishing areas and fishing periods to cope with over-fishing in fishery production, and setting up nature reserves to protect endangered species. These phenomena are usually described via diffusive predator-prey models, where the population evolution of the species relies on the interactions between predator and prey, as well as the influences from the spatial environments surrounding them. Naturally, prey species would die out when the prey's birth rate is too low, the habitat size is too small, the predator's growth rate is too high, or the predation rate is too high. To save the endangered prey species, various human interferences are proposed, such as creating a protection zone where the prey could cross the boundary freely but the predator is prohibited from entering. Refer to the works on protection zones by Du {\it et al.} for the Lotka-Volterra type competition system \cite{DL}, the Holling II type predator-prey system \cite{DS2006}, the Leslie type predator-prey system \cite{DPW}, as well as predator-prey systems with protection coefficients \cite{DS2007}. Oeda studied the effects of a cross-diffusive Lotka-Volterra type predator-prey system with a protection zone \cite{O}. A cross-diffusive Lotka-Volterra type competition system with a protection zone was investigated by Wang and Li \cite{WL}. Zou and Wang studied an ODE model of protection zones, where the sizes of the protection zones are reflected by restricting the functional response coefficient for the predator \cite{ZW}. Recently, Cui, Shi and Wu observed the strong Allee effect in a diffusive predator-prey system with protection zones \cite{RC}.
In this paper, we study the steady states of the following diffusive predator-prey system with Beddington-DeAngelis type functional response \begin{eqnarray}\label{a} \left\{ \begin{array}{llll}\displaystyle u_t-d_1\Delta u=u(\theta-u-\frac{a(x)v}{1+mu+kv}),&x\in \Omega,~~t>0, \\[6pt] \displaystyle v_t-d_2\Delta v=v(\mu-v+\frac{cu}{1+mu+kv}),& x\in \Omega_1,~~t>0,\\[4pt] \displaystyle \frac{\partial u}{\partial {n}}=0 ,& x\in\partial\Omega,~~t>0,\\[4pt] \displaystyle\frac{\partial v}{\partial {n}}=0 ,& x\in\partial\Omega_1,~~t>0,\\[4pt] \displaystyle u(x,0)=u_0(x)\geq(\not\equiv) 0,&x\in\Omega,\\[4pt] \displaystyle v(x,0)=v_0(x)\geq(\not\equiv) 0, & x\in \Omega_1, \end{array}\right. \end{eqnarray} where $\Omega$ is a bounded domain in $\mathbb{R}^N$ $(N\leq3)$ with smooth boundary $\partial\Omega$, ${\Omega}_0\Subset\Omega$ with $\partial \Omega_0$ smooth, $\Omega_1=\Omega\backslash\overline{\Omega}_0$, constants $d_1,d_2,\theta,c,m,k>0$, $\mu\in \mathbb{R}$, $\frac{\partial}{\partial n}$ is the outward normal derivative on the boundary, and\begin{eqnarray} a(x)=\left\{ \begin{array}{llll} 0,&x\in\overline{\Omega}_0,\\a,&x\in\Omega_1.\end{array}\right.\end{eqnarray} The fact that $a(x)=0$ in ${\Omega}_0$ implies that no predation could take place there. Eq.\,(\ref{a}) is a reaction-diffusion system of the species $u$ and $v$, and the dynamical behavior of the species is determined not only by the mechanism of the functional response between $u$ and $v$, but also by the interaction between their reaction and diffusion. Here the prey $u$ and the predator $v$ disperse at rates $d_1$ and $d_2$, and grow at rates $\theta$ and $\mu$, respectively. The prey is consumed with the functional response of Beddington-DeAngelis type $\frac{a(x)uv}{1+mu+kv}$ in $\Omega$, and contributes to the predator with growth rate $\frac{cuv}{1+mu+kv}$ in $\Omega_1$. No-flux boundary conditions mean that the habitat of the two species is closed.
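Before analyzing steady states, it can help to see the B-D kinetics in action. The sketch below is an illustration of ours with arbitrary parameter values; it integrates the space-homogeneous version of (\ref{a}) by forward Euler, ignoring diffusion and the protection zone. The trajectory stays inside the bounds $0<u\le\theta$ and $\mu<v\le\mu+\frac{c\theta}{1+m\theta+k\mu}$ obtained later for steady states.

```python
def bd_kinetics(theta=1.0, mu=0.5, a=1.0, c=1.0, m=1.0, k=1.0,
                u0=0.5, v0=0.5, dt=1e-3, steps=50_000):
    """Forward-Euler integration of the space-homogeneous kinetics:
        u' = u * (theta - u - a*v / (1 + m*u + k*v))
        v' = v * (mu - v + c*u / (1 + m*u + k*v))
    All parameter values are illustrative, not taken from the paper."""
    u, v = u0, v0
    for _ in range(steps):
        denom = 1.0 + m * u + k * v
        du = u * (theta - u - a * v / denom)
        dv = v * (mu - v + c * u / denom)
        u, v = u + dt * du, v + dt * dv
    return u, v

u, v = bd_kinetics()
print(u, v)  # settles near a componentwise positive equilibrium
```

With these (hypothetical) parameters both components remain positive and bounded, consistent with coexistence when $\theta$ and $\mu$ are both positive and moderate.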
The B-D type functional response was introduced by Beddington \cite{B} and DeAngelis \cite{DGO}. Refer to \cite{B,DGO,DK} for the background of the original predator-prey model with B-D type functional response. Guo and Wu studied the existence, multiplicity, uniqueness and stability of the positive solutions under homogeneous Dirichlet boundary conditions in \cite{GW2010}, as well as the effect of large $k$ in \cite{GW2012}. Chen and Wang established the existence of nonconstant positive steady states under Neumann boundary conditions \cite{CW,PW}. In particular, a mechanistic derivation for the B-D type functional response has been given by Geritz and Gyllenberg in \cite{GG} recently, where predators $v$ were divided into searchers $v_S$ with attack rate $a$ and handlers $v_H$ with handling time $h$, while the prey $u$ were structured into two classes: active prey $u_P$ and those prey individuals $u_R$ who have found a refuge, with total refuge number $b$ and sojourn time $\tau$. In these terms, the parameters in the B-D type functional response of (\ref{a}) can be understood as follows: $m=ah$ reflects the handling time of $v_H$, and $k=b\tau$ describes the refuge ability of the prey. The prey's refuge may come from its aggregation, reduction of its activity, or places where its predation risk is somehow reduced \cite{S}. Dynamic consequences of prey refuges were observed by Gonz\'{a}lez-Olivares and Ramos-Jiliberto: more prey, fewer predators and enhanced stability \cite{GR}. On the other hand, natural refuges usually cost the prey reduced feeding or mating opportunities \cite{S}, and hence its population could not be very large. In contrast, the protection zones, as refuges created by humans, always benefit the endangered species. Refer to \cite{HRVL,KR,MD,SMR,WW, WF, YR} for more background on prey refuges and their effects.
In this paper, we will show the effect of the prey's refuge and the size of the protection zone on the coexistence and stability of the predator-prey system with B-D type functional response. The results obtained here observe the general law that refuges and protection zones benefit the coexistence of species \cite{GR,S,ZW}. Since the model (\ref{a}) contains different coefficients $a(x)$ and $c$ in the B-D type functional response terms for $u$ and $v$ respectively, without loss of generality, suppose $d_1=d_2=1$ for simplicity. The steady-state problem corresponding to (\ref{a}) takes the form \begin{eqnarray} \left\{ \begin{array}{llll} \displaystyle-\Delta u=u(\theta-u-\frac{a(x)v}{1+mu+kv})&\mbox{in}~\Omega,\\[6pt] \displaystyle-\Delta v=v(\mu-v+\frac{cu}{1+mu+kv})&\mbox{in}~\Omega_1,\\[4pt] \displaystyle\frac{\partial u}{\partial {n}}\Big|_{\partial\Omega}=0,~~ \frac{\partial v}{\partial {n}}\Big|_{\partial\Omega_1}=0.&\end{array}\right. \label{s}\end{eqnarray} Denote by $\lambda_1(q)$ the first eigenvalue of $-\Delta+q$ over $\Omega$ under homogeneous Neumann boundary conditions with $q=q(x)\in L^\infty(\Omega)$. The following properties of $\lambda_1(q)$ are well known: \begin{itemize} \item[(i)] $\lambda_1(0)=0$; \item[(ii)] $\lambda_1(q_1)>\lambda_1(q_2)$ if $q_1\geq q_2$ and $q_1\not\equiv q_2$; \item[(iii)] $\lambda_1(q)$ is continuous with respect to $q\in L^\infty(\Omega)$. \end{itemize} Define \begin{align} \label{q2} \theta^*(\mu,\Omega_0)=\lambda_1(q(x)),~~\theta_0=\frac{a}{k},~~\theta_1(\Omega_0)=\lambda_1(q_0(x)), \end{align} with \begin{align}\label{q1}q(x)=\frac{a(x)\mu}{1+k\mu},~~q_0(x)= \left\{\begin{array}{lll}0, &x\in \overline\Omega_0,\\ \theta_0, &x\in\Omega_1. \end{array}\right.\end{align} Denote by $U_{\theta,q_0}$ the solution of the scalar problem \begin{align} \label{uq}-\Delta u=u(\theta-u-q_0(x))~{\rm in}~\Omega,~~\frac{\partial u}{\partial{n}}=0~{\rm on}~\partial\Omega. 
\end{align} Since $\theta_1=\inf\limits_{\phi\in H^1(\Omega),\int_{\Omega}\phi^2dx>0}\frac{\int_{\Omega}|\nabla\phi|^2dx+\frac{a}{k}\int_{\Omega_1}\phi^2dx}{\int_{\Omega}\phi^2dx}$, the properties (i)--(iii) of $\lambda_1(q)$ imply the following lemma: \begin{lemma}\label{l1.1} $\theta^*(\mu,\Omega_0)$ is strictly increasing with respect to $\mu$ and decreasing as $\Omega_0$ enlarges, $\theta^*(0,\Omega_0)=0$, $\theta^*(\mu,\Omega_0)<\theta_0$, $\lim_{\mu\rightarrow\infty}\theta^*(\mu,\Omega_0)=\theta_1(\Omega_0)\leq\frac{a|\Omega_1|}{k|\Omega|}$, $\lim_{|\Omega_1|\rightarrow0}\theta^*(\mu,\Omega_0)=0$, $\lim_{|\Omega_0|\rightarrow0}\theta^*(\mu,\Omega_0)=\frac{a\mu}{1+k\mu}$.\qquad$\Box$ \end{lemma} Biologically, we are interested in the positivity of the prey $u$ in the diffusive predator-prey model (\ref{s}). We state the main results of the paper one by one as follows. Obviously, either large $\theta$ or small $\mu$ benefits the prey $u$. In the first theorem, we give two sufficient conditions for keeping the prey positive without protection zones. \begin{theorem}\label{th1} If $\theta\ge \theta_0$ or $\mu\le0$, then the positivity of $u$ is ensured automatically without any protection zones. \end{theorem} The next theorem implies that a suitable protection zone guarantees the existence of positive solutions to (\ref{s}) under $\theta<\theta_0$ with $\mu>0$. \begin{theorem}\label{th2} Suppose $\mu> 0$. If $\theta^*(\mu,\Omega_0)< \theta< \theta_0$, then Eq.\,(\ref{s}) has at least one positive solution. Furthermore, if $\theta\leq\theta^*(\mu,\Omega_0)$ with $m\le\frac{(k\mu+1)^2}{a\mu}$, then Eq.\,(\ref{s}) has no positive solutions. \end{theorem} In the third theorem, we give a necessary and sufficient condition for the coexistence of $u$ and $v$ under $\mu\in(-\frac{c}{m},0]$. \begin{theorem}\label{th2'} Suppose $-\frac{c}{m}<\mu\le 0$.
Then Eq.\,(\ref{s}) has at least one positive solution if and only if $\theta>\theta_*=-\frac{\mu}{c+m\mu}=\frac{|\mu|}{c-m|\mu|}\ge 0$.\end{theorem} \begin{remark}\label{mk1}{\rm Since $\lim_{|\Omega_1|\rightarrow0}\theta^*(\mu,\Omega_0)=0$ by Lemma \ref{l1.1}, for any $\theta>0$ and $\mu\ge 0$ fixed, the key condition $\theta>\theta^*(\mu,\Omega_0)$ in Theorem \ref{th2} can be realized by enlarging the size of the protection zone $\Omega_0=\Omega\setminus\overline \Omega_1$. The same applies to the condition $\theta>\theta_1$ in Theorem \ref{th3} below. }\end{remark} \begin{remark}\label{mk2} {\rm Theorem \ref{th1} shows that no protection zones are necessary for the positivity of $u$ if $\mu\le 0$. It is known by Theorem \ref{th2'} that, in addition to the positivity of $u$, the positivity of $v$ can also be ensured if the death rate of the predator $v$ is not too high, with $\mu\in (-\frac{c}{m},0]\subset (-\infty,0]$, and the birth rate of the prey $u$ is properly large such that $\theta>\theta_*$.} \end{remark} Finally, the last theorem states that the positive solution of (\ref{s}) is in fact unique if $\theta$ is even larger than $\theta_1$ and $\mu$ is large, and determines the asymptotic behavior of the unique positive solution as $\mu\rightarrow \infty$. In fact, it follows from Lemma \ref{l1.1} and Theorem \ref{th2} that if $\theta>\theta_1$, the prey species could survive no matter how large the predator's growth rate is. \begin{theorem}\label{th3} If $\theta>\theta_1(\Omega_0)$, then there exists $\mu^*>0$ such that the positive solution of (\ref{s}) is unique and linearly stable when $\mu\geq\mu^*$. Furthermore, the unique positive solution satisfies $(u,v-\mu)\rightarrow (U_{\theta,q_0},0)$ uniformly on $\overline\Omega$ and $\overline\Omega_1$, respectively, as $\mu\rightarrow\infty$. \end{theorem} This paper is arranged as follows. In the next two sections, we give the proofs of Theorems \ref{th1}--\ref{th2'} and Theorem \ref{th3}, respectively.
The last section is devoted to a discussion of the obtained results, by analyzing them with the mechanistic derivation for the B-D type functional response in \cite{GG}. \section{Existence of positive solutions} \mbox\indent We first deal with the proof of Theorem \ref{th1}. \noindent{\bf Proof of Theorem \ref{th1}.} Assume $\mu\leq-\frac{c}{m}$. Integrate the second equation of (\ref{s}) over $\Omega_1$ to get $$0=\int_{\Omega_1}v\Big(\mu-v+\frac{cu}{1+mu+kv}\Big)dx,$$and hence $$0\leq\int_{\Omega_1}v^2dx=\int_{\Omega_1}v(\mu+\frac{cu}{1+mu+kv})dx\leq(\mu+\frac{c}{m})\int_{\Omega_1}vdx\leq 0.$$This implies $v\equiv0$, and so $u$ satisfies\begin{align}\label{uu} -\Delta u=u(\theta-u)~~\mbox{in}~\Omega,~~~\frac{\partial u}{\partial n}=0~~{\rm on}~\partial\Omega.\end{align} Obviously, (\ref{uu}) admits the solution $u=\theta>0$. The desired result for $-\frac{c}{m}<\mu\le 0$ essentially follows from Theorem \ref{th2'}. Indeed, the subcase of $\theta>\theta_*$ is covered by Theorem \ref{th2'}, while for $\theta\le \theta_*$, it can be found in the proof of Theorem \ref{th2'} that $v\equiv 0$, and so $u=\theta>0$. Next consider the first equation of (\ref{s}) with $\theta\geq\theta_0$. It is easy to see that $\frac{a(x)v}{1+mu+kv}<\frac{a(x)}{k}\le \frac{a}{k}=\theta_0$ for $v\ge 0$. Thus, for any $v(x)\ge 0$, there is $\tilde\theta_0\in (0,\theta_0)$ such that \begin{align*} -\Delta u=u(\theta-u-\frac{a(x)v}{1+mu+kv})> u(\theta-\tilde\theta_0-u)~~\mbox{in}~\Omega,~~\frac{\partial u}{\partial n}=0~~{\rm on}~\partial \Omega. \end{align*} This ensures that $u\ge \theta-\tilde\theta_0>0$.\qquad$\Box$ We need some preliminaries, stated as lemmas and propositions, for the proof of Theorem \ref{th2}, and begin with two known results on the maximum principle and the Harnack inequality.
\begin{lemma}\label{l2.1} (Maximum Principle \cite{LN}) Let $g\in C(\overline{\Omega}\times \mathbb{R})$, $w\in C^2(\Omega)\bigcap C^1(\overline\Omega)$, where $\Omega$ is a bounded domain in $\mathbb{R}^N$ with smooth boundary. \begin{description} \item{ {\rm (a)}} If ~$\Delta w+g(x,w)\leq0 $ in $\Omega$, $\frac{\partial w}{\partial {n}}\geq0$ on $\partial \Omega$ and $\min_{\overline \Omega} w=w(x_0)$, then $g(x_0,w(x_0))\leq0$. \item{ {\rm (b)}} If ~$\Delta w+g(x,w)\geq0$ in $\Omega$, $\frac{\partial w}{\partial {n}}\leq0$ on $\partial \Omega$ and $\max_{\overline \Omega} w=w(x_0)$, then $g(x_0,w(x_0))\geq0$. \end{description} \quad$\Box$ \end{lemma} \begin{lemma}\label{l2.2} (Harnack Inequality \cite{LN1999,LNT}) Let $f\in L^p(\Omega)$ with $p>\max\{\frac{N}{2},1\}$, and w be a non-negative solution of $\Delta w+f(x)w=0$ in a bounded domain $\Omega\subset\mathbb{R}^N$ with smooth boundary under homogeneous Neumann boundary condition. Then there exists a positive constant $C=C(p,N,\Omega,\| f\|_{L^p(\Omega)})$ such that $$\max\limits_{\overline\Omega}w\leq C\min\limits_{\overline\Omega}w.\qquad\Box$$ \end{lemma} The following {\it a priori} estimates are easy to get. \begin{lemma}\label{l2.3} Let $(u,v)$ be a nontrivial non-negative solution of (\ref{s}). Then \begin{eqnarray*} 0<u\leq\theta,~~\mu_+<v\leq\mu_++\frac{c\theta}{1+m\theta+k\mu_+},~~ \|u\|_{C^{1,\alpha}(\overline\Omega)}+\|v\|_{C^{1,\alpha}(\overline\Omega_1)}\leq C,\end{eqnarray*} with $\mu_+=\max\{\mu,0\}$, $\alpha\in(0,1)$, $C=C(\theta,\mu,\Omega_0)>0$. \end{lemma} {\bf Proof.} Suppose $u(x_0)=\max_{\overline \Omega}u(x)>0$. By Lemma \ref{l2.1}(b), we have $$u(x_0)\Big(\theta-u(x_0)-\frac{a(x_0)v(x_0)}{1+mu(x_0)+kv(x_0)}\Big)\geq 0,$$ and then $$ u(x_0)\leq\theta-\frac{a(x_0)v(x_0)}{1+mu(x_0)+kv(x_0)}\leq\theta.$$ Due to Lemma \ref{l2.2}, we arrive at $0<u\leq\theta$ on $\overline\Omega$. Similarly, we can show $\mu_+<v\leq\mu_++\frac{c\theta}{1+m\theta+k\mu_+}$ on ${\overline\Omega}_1$. 
The $C^{1,\alpha}$ boundedness of solutions comes from the elliptic regularity theory together with the Sobolev embedding theorem.\qquad$\Box$ We will use the local bifurcation theorem of Crandall and Rabinowitz \cite{CR} and the global bifurcation theorem of Rabinowitz \cite{R} to prove Theorem \ref{th2}. Denote the semitrivial solution curves by \begin{eqnarray*} \Gamma_u=\{(\theta,u,v)=(\theta,0,\mu);\,\theta>0\},~~ \Gamma_v=\{(\theta,u,v)=(\theta,\theta,0);\,\theta>0\}. \end{eqnarray*} Define \begin{eqnarray*} X=W^{2,p}_n(\Omega)\times W^{2,p}_n(\Omega_1),~Y=L^p(\Omega)\times L^p(\Omega_1)~{\rm with}~p>N,~Z=C_n^1(\overline\Omega)\times C_n^1(\overline\Omega_1), \end{eqnarray*} where \begin{eqnarray*} W^{2,p}_n(\Omega)=\{w\in W^{2,p}(\Omega);~\frac{\partial w}{\partial n}=0~{\rm on}~\partial \Omega\},~C_n^1(\overline\Omega)=\{w\in C^1(\overline\Omega);\frac{\partial w}{\partial n}=0~{\rm on}~\partial\Omega\}. \end{eqnarray*} The Sobolev embedding theorem implies $X\subseteq Z$. Let $(\phi^*,\psi^*)$ solve \begin{align*} &\Delta\phi^*+(\theta^*-\frac{a(x)\mu}{1+k\mu})\phi^*=0~{\rm in}~\Omega,~\frac{\partial \phi^*}{\partial {n}}=0~{\rm on}~\partial\Omega,\\ &\Delta\psi^*-\mu\psi^*+\frac{c\mu}{1+k\mu}\phi^*=0~{\rm in}~\Omega_1,~\frac{\partial \psi^*}{\partial {n}}=0~{\rm on}~\partial\Omega_1. \end{align*} Then $\psi^*=(-\Delta+\mu I)^{-1}_{\Omega_1}\frac{c\mu}{1+k\mu}\phi^*$. \begin{proposition}\label{p2.1} Let $\mu>0$. Then there are positive solutions of (\ref{s}) bifurcating from $\Gamma_u$ if and only if $\theta>\theta^*(\mu,\Omega_0)$, possessing the form \begin{eqnarray}\ \Gamma_1=\{(\theta,u,v)=(\theta(s),s\phi^*+o(|s|),\mu+s\psi^*+o(|s|));\,s\in (0,\sigma)\}\end{eqnarray} with $(\theta(0),u(0),v(0))=(\theta^*,0,\mu)$ for some $\sigma>0$ in a neighborhood of $(\theta^*,0,\mu)\in \mathbb{R}\times X$. 
\end{proposition} {\bf Proof.} Denote by $V=v-\mu$, \begin{eqnarray} F(\theta,u,V)=\Big(\begin{array}{clcr} \ \Delta u+f_1(\theta,u,V+\mu)\\ \Delta V+f_2(\mu,u,V+\mu)\end{array}\Big)^T~{\rm and}~ F_1(\theta,u,v)=\Big(\begin{array}{clcr} \ \Delta u+f_1(\theta,u,v) \\ \Delta v+f_2(\mu,u,v) \end{array}\Big)^T\end{eqnarray} with \begin{align*}f_1(\theta,u,v)=u(\theta-u-\frac{a(x)v}{1+mu+kv}),~~ f_2(\mu,u,v)=v(\mu-v+\frac{cu}{1+mu+kv}).\end{align*} Obviously, $F(\theta,u,V)=0 $ is equivalent to $ F_1(\theta,u,v)=0$, and $F_1(\theta,0,\mu)=F(\theta,0,0)=0$ for $\theta\in \mathbb{R}$. A direct calculation yields \begin{eqnarray}\ F_{(u,V)}(\theta,0,0)[\phi,\psi]=\Big(\begin{array}{clcr}\ \Delta\phi+(\theta-\frac{a(x)\mu}{1+k\mu})\phi \\ \Delta\psi-\mu\psi+\frac{c\mu}{1+k\mu}\phi \end{array}\Big)^T.\end{eqnarray}\ By the Krein-Rutman theorem, $F_{(u,V)}(\theta,0,0)[\phi,\psi]=(0,0)$ has a solution $\phi>0$ if and only if $\theta=\theta^*$. So $(\theta^*,0,\mu)$ is the only possible bifurcation point from which positive solutions of (\ref{s}) bifurcate from $\Gamma_u$. Besides, we have \begin{align*} {\rm Ker} F_{(u,V)}(\theta^*,0,0)={\rm Span}\,\{(\phi^*,\psi^*)\},~~ \makebox{dim\,Ker}\, F_{(u,V)}(\theta^*,0,0)=1. \end{align*} For $(\bar\phi,\bar\psi)\in Y\cap \makebox{Range}\,F_{(u,V)}(\theta^*,0,0)$, choose $(\phi,\psi)\in X$ such that \begin{eqnarray}\label{ee1} \left\{ \begin{array}{llll} \displaystyle\Delta\phi+(\theta-\frac{a(x)\mu}{1+k\mu})\phi=\bar\phi,\\[6pt] \displaystyle\Delta\psi-\mu\psi+\frac{c\mu}{1+k\mu}\phi=\bar\psi. \end{array} \right.\end{eqnarray} Multiplying by $\phi^*$ on both sides of the first equation of (\ref{ee1}) and integrating by parts over $\Omega$, we get $\int_\Omega\bar\phi\phi^* dx=0$. 
Then \begin{eqnarray} \makebox{Range}\,F_{(u,V)}(\theta^*,0,0)=\Big\{(\bar\phi,\bar\psi)\in Y;\int_\Omega\bar\phi\phi^* dx=0\Big\}, \end{eqnarray} and thus $$\makebox{codim\,Range}\,F_{(u,V)}(\theta^*,0,0)=1.$$ By a simple calculation, \begin{align*} &F_\theta(\theta^*,0,0)=F_{\theta\theta}(\theta^*,0,0)=(0,0),\\&F_{\theta(u,V)}(\theta^*,0,0)[\phi^*,\psi^*]= (\phi^*,0) \notin \makebox{Range\,}F_{(u,V)}(\theta^*,0,0).\end{align*} The proposition then follows from the local bifurcation theorem \cite{CR}. \qquad$\Box$ \begin{proposition}\label{p2.2} Let $-\frac{c}{m}<\mu<0$. Then there are positive solutions of (\ref{s}) bifurcating from $\Gamma_v$ if and only if $\theta>\theta_*=-\frac{\mu}{c+m\mu}$, which have the form \begin{eqnarray} \Gamma_2=\{(\theta,u,v)=(\tilde{\theta}(s),\theta_*+s\phi_*(x)+o(|s|),s+o(|s|));s\in(0,\tilde{\sigma})\} \end{eqnarray} with $\tilde{\theta}(0)=\theta_*=-\frac{\mu}{c+m\mu}$, $\phi_*=(\Delta-\theta_* I)^{-1}\frac{a(x)\theta_*}{1+m\theta_*}$ for some $\tilde{\sigma}>0$ near $(\theta_*,\theta_*,0)\in \mathbb{R}\times X$. \end{proposition} {\bf Proof.} Let $w=u-\theta$, \begin{eqnarray}\ G(\theta,w,v)=\Big(\begin{array}{clcr}\ \Delta w+(w+\theta)(-w-\frac{a(x)v}{1+m(w+\theta)+kv})\\ \Delta v+v(\mu-v+\frac{c(w+\theta)}{1+m(w+\theta)+kv}) \end{array}\Big)^T. \end{eqnarray}\ Then $F_1(\theta,u,v)=0$ is equivalent to $G(\theta,w,v)=0$.
We have \begin{align*} &G_{(w,v)}(\theta,w,v)[\phi,\psi]=\\ &\left(\begin{array}{clcr}\ \Delta\phi-(2w+\theta)\phi-\frac{a(x)v}{1+m(w+\theta)+kv}\phi+\frac{a(x)mv(w+\theta)}{(1+m(w+\theta)+kv)^2}\phi-\frac{a(x)(w+\theta)(1+m(w+\theta))}{(1+m(w+\theta)+kv)^2}\psi\\[8pt] \Delta\psi+(\mu-2v)\psi+\frac{c(w+\theta)}{1+m(w+\theta)+kv}\psi-\frac{ckv(w+\theta)}{(1+m(w+\theta)+kv)^2}\psi+\frac{cv(1+kv)}{(1+m(w+\theta)+kv)^2}\phi \end{array}\right)^T, \\[6pt] &G_\theta(\theta,w,v)= \Big(\begin{array}{clcr} -w-\frac{a(x)v}{1+m(w+\theta)+kv}+\frac{a(x)mv(w+\theta)}{(1+m(w+\theta)+kv)^2}\\ \frac{cv(1+kv)}{(1+m(w+\theta)+kv)^2} \end{array}\Big)^T.\end{align*} The equation $G_{(w,v)}(\theta,0,0)[\phi,\psi]=(0,0)$ is equivalent to \begin{eqnarray}\label{ee2} \left\{ \begin{array}{llll} \Delta\phi-\theta\phi-\frac{a(x)\theta}{1+m\theta}\psi=0~~~~~&{\rm in}~\Omega,\\[3pt] \Delta\psi+\mu\psi+\frac{c\theta}{1+m\theta}\psi=0~~~~&{\rm in}~\Omega_1,\\[3pt] \frac{\partial\phi}{\partial{n}}=0~{\rm on}~\partial\Omega,~~\frac{\partial\psi}{\partial{n}}=0~{\rm on}~\partial\Omega_1. \end{array}\right. \end{eqnarray} The second equation of (\ref{ee2}) has a solution $\psi>0$ if and only if $\mu=-\frac{c\theta}{1+m\theta}$, i.e. $\theta=-\frac{\mu}{c+m\mu}=\theta_*$. Thus $(\theta_*,\theta_*,0)$ is the only possible bifurcation point along $\Gamma_v$, and $\phi_*$ solves the first equation of (\ref{ee2}) with $\theta=\theta_*$ and $\psi\equiv 1$. It is easy to verify that $$\makebox{Ker}\,G_{(w,v)}(\theta_*,0,0)=\mbox{Span\,}\{(\phi_*,1)\},~~ \makebox{dim\,Ker\,}G_{(w,v)}(\theta_*,0,0)=1.$$A direct calculation shows \begin{align*} &G_\theta(\theta_*,0,0)=G_{\theta\theta}(\theta_*,0,0)=(0,0),\\ &\mbox{Range\,}G_{(w,v)}(\theta_*,0,0)=\{(f,g)\in Y;\int_{\Omega_1} gdx=0\},~\mbox{codim\,Range\,}G_{(w,v)}(\theta_*,0,0)=1,\\ &G_{\theta(w,v)}(\theta_*,0,0)[\phi_*,1]=\big( -\phi_*-\frac{a(x)}{(1+m\theta_*)^2}, \frac{c}{(1+m\theta_*)^2}\big)\notin \mbox{Range\,}G_{(w,v)}(\theta_*,0,0).
\end{align*} By the local bifurcation theorem \cite{CR}, we obtain the desired conclusions of Proposition \ref{p2.2}. \qquad$\Box$ In order to use the global bifurcation theorem for $\mu>0$, define $ F_2:\mathbb{R}\times Z\rightarrow Z$ by \begin{eqnarray} \ F_2(\theta,u,v)= \Big(\begin{array}{clcr}\ u \\ v-\mu\end{array}\ \Big)^T-\Big(\begin{array}{clcr}\ (-\Delta+I)^{-1}_\Omega(u+f_1(\theta,u,v))\\ (-\Delta+I)^{-1}_{\Omega_1}(v-\mu+f_2(\mu,u,v)) \end{array}\ \Big)^T. \end{eqnarray} Then (\ref{s}) is equivalent to $F_2(\theta,u,v)=0$. Let $\tilde{\Gamma}_1\subset \mathbb{R}\times Z$ be the maximal connected set satisfying \begin{align*} \Gamma_1\subset\tilde{\Gamma}_1\subset\{(\theta,u,v)\in \mathbb{R}\times Z\backslash\{(\theta^*,0,\mu)\};F_2(\theta,u,v)=(0,0)\}. \end{align*} From the global bifurcation theory of Rabinowitz \cite{R}, one of the following (not mutually exclusive) alternatives must hold (see Theorem 6.4.3 in \cite{G}): \begin{description} \item[\rm (a)] $\tilde{\Gamma}_1$ is unbounded in $\mathbb{R}\times Z$.\item[\rm(b)] There exists a constant $\bar\theta\neq\theta^*$ such that $(\bar\theta,0,\mu)\in\tilde \Gamma_1$.\item[\rm (c)] There exists $(\tilde{\theta},\tilde{\phi},\tilde{\psi})\in\tilde{\Gamma}_1$ with $(\tilde{\phi},\tilde{\psi})\in Y_1\backslash\{(0,\mu)\}$, where $Y_1=\{(\bar\phi,\bar\psi)\in Z;\int_\Omega\bar\phi\phi^*dx=0\}$. \end{description} Now we give the proofs of Theorems \ref{th2} and \ref{th2'}. \noindent{\bf Proof of Theorem \ref{th2}.} First we show that $u,v>0$ for any $(\theta,u,v)\in\tilde{\Gamma}_1$, which implies that case (c) above cannot occur, since $\phi^*>0$. Otherwise, there is a $(\bar\theta,\bar u,\bar v)\in \tilde{\Gamma}_1$ such that (1) $\bar u>0$ with $\bar v(x_0)=0$ for some $x_0\in \Omega_1$, or (2) $\bar u(x_1)=\bar v(x_2)=0$ for some $x_1\in \Omega$ and $x_2\in\Omega_1$, or (3) $\bar v>0$ with $\bar u(x_3)=0$ for some $x_3\in \Omega$.
Denote $\mathscr{B}_\Omega=\{\phi\in C^1_n(\overline\Omega);\,\phi>0~\mbox{on}~\overline\Omega\}$. Choose a sequence $\{(\theta_i,u_i,v_i)\}_{i=1}^\infty\subset\tilde{\Gamma}_1\cap(\mathbb{R}\times\mathscr{B}_\Omega\times\mathscr{B}_{\Omega_1})$ such that $\lim\limits_{i\rightarrow\infty}(\theta_i,u_i,v_i)=(\bar\theta,\bar u,\bar v)$ in $\mathbb{R}\times Z$, where $\bar\theta$ may be infinite. Obviously, $(\bar u,\bar v)$ is a non-negative solution of (\ref{s}) with $\theta=\bar\theta$. By Lemma \ref{l2.2}, one of the following must hold: $$\mbox{(1)}~\bar u>0,\bar v\equiv0;~~ \mbox{(2)}~\bar u\equiv0, \bar v\equiv 0; ~~\mbox{(3)}~ \bar u\equiv0,\bar v>0. $$ For (3), we have $-\Delta\bar v=\bar v(\mu-\bar v)$ in $\Omega_1$, $\frac{\partial\bar v}{\partial {n}}=0$ on $\partial\Omega_1$, and thus $\bar v\equiv\mu$. By Proposition \ref{p2.1}, this implies $\bar\theta=\theta^*$, contradicting the definition of $\tilde{\Gamma}_1$. Suppose (1) or (2) is true. Integrating the second equation of (\ref{s}) on $\Omega_1$ with $(u,v)=(u_i,v_i)$ gives \begin{eqnarray*} \int_{\Omega_1}v_i(\mu-v_i+\frac{cu_i}{1+mu_i+kv_i})dx=0, ~~i\in \mathbb{N}. \end{eqnarray*} On the other hand, $\mu>0$ and $\bar v\equiv0$ ensure $\mu-v_i>0$, and thus $\mu-v_i+\frac{cu_i}{1+mu_i+kv_i}>0$, for $i$ large enough, so the integrand above is positive, also a contradiction. The case (b) is excluded by Proposition \ref{p2.1}. So, the only remaining case is (a). From Lemma \ref{l2.3}, $(u,v)$ is uniformly bounded in $Z$ for $(\theta,u,v)\in\tilde{\Gamma}_1$, which shows that $\theta$ is unbounded along $\tilde{\Gamma}_1$. Combining this with Proposition \ref{p2.1}, we know that (\ref{s}) has at least one positive solution for $\theta>\theta^*(\mu,\Omega_0)$ with $\mu>0$. Now, let $(u,v)$ be a positive solution of (\ref{s}) with $m\leq\frac{(1+k\mu)^2}{a\mu}$. A direct calculation yields that $u+\frac{a(x)v}{1+mu+kv}>\frac{a(x)\mu}{1+k\mu}$.
By the monotonicity of the eigenvalue, we conclude that $$0=\lambda_1\big(-\theta+u+\frac{a(x)v}{1+mu+kv}\big)>\lambda_1\big(-\theta+\frac{a(x)\mu}{1+k\mu}\big).$$ Then $$\theta>\lambda_1(\frac{a(x)\mu}{1+k\mu})=\theta^*(\mu,\Omega_0).$$ This shows that (\ref{s}) has no positive solution whenever $\theta\leq\theta^*(\mu,\Omega_0)$ and $m\leq\frac{(1+k\mu)^2}{a\mu}$. \qquad$\Box$ \noindent{\bf Proof of Theorem \ref{th2'}}. When $\mu=0$, fix $\theta>0$. By Lemma \ref{l1.1} and Theorem \ref{th2}, we can take a sequence $\{(\mu_i,u_i,v_i)\}_{i=1}^\infty$ such that $(u_i,v_i)$ is a positive solution of (\ref{s}) with $\mu=\mu_i>0$ and $\lim_{i\rightarrow\infty}\mu_i=0$. By Lemma \ref{l2.3} and the embedding theorem, we can choose a subsequence (still denoted by $\{(\mu_i,u_i,v_i)\}_{i=1}^\infty$) such that $(u_i,v_i)$ converges to $(\tilde{u},\tilde{v})\in Z$, a non-negative solution of (\ref{s}). By Lemma \ref{l2.2}, $\tilde{u}>0$ or $\tilde{u}\equiv0$ in $\Omega$; $\tilde{v}>0$ or $\tilde{v}\equiv0$ in $\Omega_1$. If $\tilde{u}\equiv0$ and $\tilde{v}>0$, then $\mu_i-v_i+\frac{cu_i}{1+mu_i+kv_i}<0$ in $\Omega_1$ for $i$ large enough. This contradicts $\int_{\Omega_1}v_i(\mu_i-v_i+\frac{cu_i}{1+mu_i+kv_i})dx=0$. If $\tilde{u}>0$ and $\tilde{v}\equiv0$, then $\mu_i-v_i+\frac{cu_i}{1+mu_i+kv_i}>0$ in $\Omega_1$ for $i$ large enough, also a contradiction with $\int_{\Omega_1}v_i(\mu_i-v_i+\frac{cu_i}{1+mu_i+kv_i})dx=0$. If $\tilde{u}\equiv0$ and $\tilde{v}\equiv0$, then $\theta-u_i-\frac{a(x)v_i}{1+mu_i+kv_i}>0$ in $\Omega$ for $i$ large enough, a contradiction to $\int_{\Omega}u_i(\theta-u_i-\frac{a(x)v_i}{1+mu_i+kv_i})dx=0$. In summary, we must have $\tilde{u},\tilde{v}>0$ in $\Omega$ and $\Omega_1$, respectively. This means that (\ref{s}) possesses positive solutions for all $\theta>0$ if $\mu=0$. Now, suppose $-\frac{c}{m}<\mu<0$.
For $\theta>-\frac{\mu}{c+m\mu}>0$, the existence of positive solutions can be obtained from Proposition \ref{p2.2} by a global bifurcation analysis of the branch bifurcating from $\Gamma_v$, similar to that performed above for the branch bifurcating from $\Gamma_u$ when $\mu>0$. We omit the details. Conversely, let $(u,v)$ be a positive solution of (\ref{s}) with $\mu\in(-\frac{c}{m},0]$. Then $0<u\leq\theta$ by Lemma \ref{l2.3}, and hence $$\mu=\lambda_1(v-\frac{cu}{1+mu+kv})>\lambda_1(-\frac{cu}{1+mu}) \geq\lambda_1(-\frac{c\theta}{1+m\theta})=-\frac{c\theta}{1+m\theta},$$ namely, $\theta>-\frac{\mu}{c+m\mu}$. \qquad$\Box$ \section{Uniqueness of positive solutions} \mbox\indent In this section, we use the topological degree to prove Theorem \ref{th3} for $\theta>\theta_1$ and large $\mu$. We first introduce the auxiliary problem \begin{eqnarray} \left\{ \begin{array}{llll} \displaystyle-\Delta u=u\Big(\theta-u-\frac{a(x)v}{1+mu+kv}\Big)&\mbox{in}~\Omega,\\[4pt] \displaystyle-\Delta v=v\Big(\mu-v+t\frac{cu}{1+mu+kv}\Big)&\mbox{in}~\Omega_1,\\[4pt] \displaystyle\frac{\partial u}{\partial {n}}\Big|_{\partial \Omega}=0,\quad \frac{\partial v}{\partial {n}}\Big|_{\partial\Omega_1}=0 \end{array}\right. \label{st}\end{eqnarray} with the parameter $t\in[0,1]$. Eq.\,(\ref{st}) reduces to (\ref{s}) when $t=1$. When $t=0$, the second equation of (\ref{st}) gives $v\equiv \mu$, and we obtain the scalar problem \begin{eqnarray}\label{su} \left\{ \begin{array}{llll}\displaystyle-\Delta u=u(\theta-u-\frac{a(x)\mu}{1+mu+k\mu})&{\rm in}~\Omega,\\ \displaystyle\frac{\partial u}{\partial{n}}=0&{\rm on}~\partial\Omega, \end{array}\right.\end{eqnarray} which yields Eq.\,(\ref{uq}) as $\mu\rightarrow\infty$. \begin{lemma}\label{l3.1} Problem (\ref{uq}) has a unique positive solution if and only if $\theta>\theta_1$. \end{lemma} {\bf Proof.} Suppose $\theta>\theta_1$. Let $\phi>0$ be the normalized eigenfunction with respect to $\theta_1$. Set $\underline{u}=\epsilon\phi$.
Then \begin{align*} -\Delta\underline{u}=-\epsilon\Delta\phi=\epsilon(\theta_1-q_0(x))\phi =\epsilon\phi(\theta-q_0(x)-\epsilon\phi)+\epsilon\phi(\theta_1-\theta+\epsilon\phi). \end{align*} Choosing $\epsilon$ small enough such that $\theta_1-\theta+\epsilon\phi<0$, we get $$-\Delta\underline{u}\leq\underline{u}(\theta-q_0(x)-\underline{u})~{\rm in}~\Omega, ~~\frac{\partial\underline{u}}{\partial n}=0~{\rm on}~\partial\Omega.$$ Obviously, $\underline{u}=\epsilon\phi$ and $\overline{u}=\theta$ are a pair of positive sub- and supersolutions of (\ref{uq}) with $\underline{u}\leq\overline{u}$, so we obtain a positive solution of Eq.\,(\ref{uq}) by the sub-supersolution method. Let $\tilde{u}$ and $\hat{u}$ be the minimal and maximal positive solutions to (\ref{uq}), respectively. Since $$\int_{\Omega}\nabla\tilde{u}\cdot\nabla\hat{u}dx=\int_{\Omega}\tilde{u}\hat{u}(\theta-\tilde{u}-q_0(x))dx= \int_{\Omega}\tilde{u}\hat{u}(\theta-\hat{u}-q_0(x))dx,$$ we conclude $$\int_{\Omega}\tilde{u}\hat{u}(\tilde{u}-\hat{u})dx=0.$$ Therefore $\tilde{u}\equiv \hat{u}$. On the other hand, for any positive solution $u_1$ of (\ref{uq}) we clearly have $\theta=\lambda_1(u_1+q_0(x))>\lambda_1(q_0(x))=\theta_1$. \qquad$\Box$ Next, we show the uniqueness of positive solutions to (\ref{su}). \begin{proposition}\label{p3.1} Suppose $\theta>\theta_1$. There is a $\tilde{\mu}=\tilde{\mu}(\theta)>0$ such that for any $\mu>\tilde{\mu}$, problem (\ref{su}) has a unique positive solution. \end{proposition} {\bf Proof.} Since $-q_0(x)<-\frac{a(x)\mu}{1+mU_{\theta,q_0}+k\mu}$, the function $U_{\theta,q_0}$ is a subsolution of (\ref{su}). Obviously, $\theta$ is a supersolution of (\ref{su}) and $U_{\theta,q_0}\leq\theta$. Then there exist positive solutions to (\ref{su}). To prove the uniqueness of the positive solutions to (\ref{su}), we first show that the positive solutions of (\ref{su}) are linearly stable for large $\mu$. Let $U$ be a positive solution of (\ref{su}).
Consider the eigenvalue problem \begin{eqnarray} -\Delta\phi=\theta\phi-2U\phi-\frac{a(x)\mu(1+k\mu)}{(1+mU+k\mu)^2}\phi+\eta\phi~{\rm in}~\Omega, ~~\frac{\partial\phi}{\partial{n}}=0~{\rm on}~\partial\Omega \end{eqnarray} whose principal eigenvalue is given by \begin{eqnarray}\label{ee} \eta=\eta(\mu)=\inf_{\phi\in H^{1}(\Omega),\,\| \phi\|_2=1}\int_{\Omega}[|\nabla\phi|^2-\theta\phi^2+2U\phi^2+\frac{a(x)\mu(1+k\mu)}{(1+mU+k\mu)^2}\phi^2]dx. \end{eqnarray}\ We have \begin{align*}0=\lambda_1\big(-\theta+2U+\frac{a(x)\mu(1+k\mu)}{(1+mU+k\mu)^2}-\eta\big)>\lambda_1(-\theta-\eta)=-\theta-\eta,\end{align*} i.e., $\eta>-\theta$. Denote by $\eta^*$ the principal eigenvalue of the problem \begin{eqnarray}\label{eee} -\Delta\phi=\theta\phi-2U_{\theta,q_0}(x)\phi-q_0(x)\phi+\eta^*\phi~{\rm in}~\Omega,~~ \frac{\partial \phi}{\partial{n}}=0~{\rm on}~\partial\Omega \end{eqnarray} with the normalized eigenfunction $\phi^*>0$. Then $\eta^*=\frac{\int_\Omega U_{\theta,q_0}^2\phi^* dx}{\int_\Omega U_{\theta,q_0}\phi^* dx}>0$. Due to $\frac{a(x)\mu}{1+mU+k\mu}\rightarrow q_0(x)$ uniformly on $\overline\Omega$ as $\mu\rightarrow\infty$, we know $U\rightarrow U_{\theta,q_0}(x)$ uniformly on $\overline\Omega$. It follows from (\ref{ee}) that \begin{align}\eta&\le\int_{\Omega}[|\nabla\phi^*|^2-\theta{\phi^*}^2+2U{\phi^*}^2+\frac{a(x)\mu(1+k\mu)}{(1+mU+k\mu)^2}{\phi^*}^2]dx \nonumber\\&=\eta^*+\int_\Omega[2(U-U_{\theta,q_0})+\frac{a(x)\mu(1+k\mu)}{(1+mU+k\mu)^2}-q_0(x)]{\phi^*}^2dx. \end{align} Thus $-\theta<\eta<M$ with $M>0$ independent of $\mu$. We claim that $\liminf_{\mu\rightarrow\infty}\eta=r>0$. In fact, choose a sequence $\mu_n\rightarrow\infty$ such that $\eta_n\rightarrow r$, and \begin{eqnarray}\label{een} -\Delta\phi_n=\theta\phi_n-2u_n\phi_n-\frac{a(x) \mu_n(1+k\mu_n)}{(1+mu_n+k\mu_n)^2}\phi_n+\eta_n\phi_n~{\rm in}~\Omega,~~\frac{\partial \phi_n}{\partial{n}}=0~{\rm on}~\partial\Omega \end{eqnarray} with normalized $\phi_n>0$, i.e. $\|\phi_n\|_2=1$.
As $\int_\Omega|\nabla\phi_n|^2dx$ is uniformly bounded in $n$, there exists a subsequence $\phi_{n_k}\rightharpoonup\phi_0$ weakly in $H^1(\Omega)$. Obviously, $\phi_0\geq0$ and $\|\phi_0\|_2=1$. Multiplying (\ref{een}) by $\varphi\in C_0^{\infty}(\Omega)$ and integrating by parts, we have $$\int_\Omega\nabla\phi_n\cdot\nabla\varphi dx=\int_\Omega[\theta\phi_n\varphi-2u_n\phi_n\varphi-\frac{a(x)\mu_n(1+k\mu_n)}{(1+mu_n+k\mu_n)^2}\phi_n\varphi+\eta_n\phi_n\varphi]dx.$$ Since $u_n\rightarrow U_{\theta,q_0}$ uniformly on $\overline{\Omega}$ as $n\rightarrow\infty$, we have $$\int_\Omega\nabla\phi_0\cdot\nabla\varphi dx=\int_\Omega[\theta\phi_0\varphi-2U_{\theta,q_0}\phi_0\varphi-q_0(x)\phi_0\varphi+r\phi_0\varphi]dx.$$ Comparing with (\ref{eee}), we conclude that $r=\eta^*>0$, which proves the claim. So, there exists $\tilde{\mu}>0$ such that $\eta=\eta(\mu)>0$ when $\mu>\tilde{\mu}$, which implies the linear stability of the positive solutions to (\ref{su}). Let \begin{align}&H(t,u)=[MI-\Delta]^{-1}(M+\theta-u-t\frac{a(x)\mu}{1+mu+k\mu})u,\label{H}\\ &A=\{u\in C(\overline\Omega);~\varepsilon_0<u<\theta+1\}\nonumber \end{align} with $0<\varepsilon_0<\min_{x\in\overline{\Omega}}U_{\theta,q_0}(x)$, $0\leq t\leq 1$ and $M$ large. Define $$S(t,u)=u-H(t,u).$$ It is easy to see that $S(t,u)\neq0$ for all $u\in\partial A$, $0\leq t\leq 1$. For large $M$, by the compactness of $H$, there are only finitely many isolated fixed points in $A$, denoted by $u_1,\dots,u_m$. Together with the linear stability of the positive solutions and the homotopy invariance of the fixed point index, we have $$1=\mbox{index}(S(0,u),A,0)=\mbox{index}(S(1,u),A,0)=\sum_{i=1}^{m}\mbox{index}(H,u_i)=m.$$ Therefore, there is a unique positive fixed point of (\ref{H}) with $t=1$ whenever $\mu>\tilde{\mu}$, i.e. problem (\ref{su}) has a unique positive solution. \qquad$\Box$ Now we can prove the uniqueness result, Theorem \ref{th3}.
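The linear-stability argument above rests on the sign of the principal Neumann eigenvalue $\eta$ in (\ref{ee}). Purely as an illustration (not part of the proof), the variational characterization can be checked numerically in a one-dimensional toy setting: the sketch below computes the principal eigenvalue of $-\varphi''+q(x)\varphi=\eta\varphi$ on $[0,1]$ with Neumann boundary conditions via a symmetric finite-difference discretization. Here the potential $q$ is a hypothetical stand-in for $-\theta+2U+\frac{a(x)\mu(1+k\mu)}{(1+mU+k\mu)^2}$; the function name and parameters are illustrative only.

```python
import numpy as np

def principal_neumann_eigenvalue(q, n=200):
    """Smallest eigenvalue of -phi'' + q(x) phi = eta phi on [0, 1] with
    Neumann (zero-flux) boundary conditions, using a symmetric
    finite-difference discretization on n grid points."""
    h = 1.0 / (n - 1)
    main = np.full(n, 2.0)
    main[0] = main[-1] = 1.0  # modified end rows encode the Neumann condition
    A = (np.diag(main)
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    x = np.linspace(0.0, 1.0, n)
    A = A + np.diag(q(x))
    # A is symmetric, so eigvalsh returns real eigenvalues in ascending order
    return np.linalg.eigvalsh(A)[0]

# For a constant potential the principal eigenfunction is constant, so eta = q;
# for a variable potential, min q <= eta <= mean q, the discrete counterpart of
# the two-sided bounds on eta used in the proof.
eta_const = principal_neumann_eigenvalue(lambda x: np.full_like(x, 3.0))
eta_var = principal_neumann_eigenvalue(lambda x: x)  # q(x) = x on [0, 1]
```

Testing with a constant potential recovers that constant exactly (the discrete Neumann Laplacian annihilates constants), while a variable potential yields an eigenvalue strictly between the minimum and the mean of $q$, mirroring the Rayleigh-quotient bounds used above.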
\noindent {\bf Proof of Theorem \ref{th3}.} Let $(u,v)$ be a positive solution of (\ref{s}) with large $\mu$. Linearizing (\ref{s}) at $(u,v)$, we obtain the eigenvalue problem \begin{eqnarray}\label{31} \left\{ \begin{array}{llll}\displaystyle-\Delta\phi=\theta\phi-2u\phi-\frac{a(x)v(1+kv)}{(1+mu+kv)^2}\phi-\frac{a(x)u(1+mu)}{(1+mu+kv)^2}\psi +\eta\phi&{\rm in}~\Omega, \\ \displaystyle-\Delta\psi=\mu\psi-2v\psi+\frac{cu(1+mu)}{(1+mu+kv)^2}\psi+\frac{cv(1+kv)}{(1+mu+kv)^2}\phi+\eta\psi&{\rm in}~\Omega_1, \\ \displaystyle\frac{\partial\phi}{\partial n}=0~{\rm on}~\partial\Omega,~~~~\frac{\partial\psi}{\partial n}=0~{\rm on}~\partial\Omega_1. \end{array}\right. \end{eqnarray} Here $\phi$, $\psi$ and $\eta$ may be complex-valued. From Kato's inequality, we have \begin{align}\label{32} -\Delta|\phi|&\leq-{\rm Re}\Big(\frac{\bar{\phi}}{|\phi|}\Delta\phi\Big)\nonumber\\ &={\rm Re}\big(\theta|\phi|-2u|\phi|-\frac{a(x)v(1+kv)}{(1+mu+kv)^2}|\phi|-\frac{a(x)u(1+mu)}{(1+mu+kv)^2}\psi\cdot\frac{\bar{\phi}}{|\phi|}+\eta|\phi|\big)\nonumber\\ &\leq\theta|\phi|-2u|\phi|-\frac{a(x)v(1+kv)}{(1+mu+kv)^2}|\phi|+\frac{a(x)u(1+mu)}{(1+mu+kv)^2}|\psi|+{\rm Re}(\eta)|\phi|. \end{align} To obtain the linear stability, it suffices to prove that for any $\delta>0$, there exists $\mu_{\delta}>0$ such that the eigenvalues $\eta$ of (\ref{31}) satisfy ${\rm Re}(\eta)\geq\eta^*-\delta$ when $\mu\geq\mu_{\delta}$. Otherwise, there exist a $\delta_0>0$ and a sequence $\{(\mu_n,\eta_n,u_n,v_n,\phi_n,\psi_n)\}_{n=1}^{\infty}$ satisfying (\ref{31}) with $\|\phi_n\|_2+\|\psi_n\|_2=1$ and $\mu_n\rightarrow \infty$ as $n\rightarrow\infty$, such that $\mbox{Re}(\eta_n)<\eta^*-\delta_0$.
Replacing $(\mu,\eta,u,v,\phi,\psi)$ in (\ref{32}) with $(\mu_n,\eta_n,u_n,v_n,\phi_n,\psi_n)$, multiplying by $|\phi_n|$ and integrating by parts over $\Omega$, we have \begin{align}\label{pp} \int_\Omega|\nabla|\phi_n||^2dx\nonumber&\leq\int_\Omega\big(\theta|\phi_n|^2-2u_n|\phi_n|^2-\frac{a(x)v_n(1+kv_n)}{(1+mu_n+kv_n)^2}|\phi_n|^2\\ &~~+\frac{a(x)u_n(1+mu_n)}{(1+mu_n+kv_n)^2}|\psi_n||\phi_n|\big) dx+(\eta^*-\delta_0)\int_\Omega|\phi_n|^2dx. \end{align} Let $r_n$ be the principal eigenvalue of the eigenvalue problem \begin{align*} -\Delta\varphi=\theta\varphi-2u_n\varphi-\frac{a(x)v_n(1+kv_n)}{(1+mu_n+kv_n)^2}\varphi+r_n\varphi~{\rm in}~\Omega,~~\frac{\partial\varphi}{\partial n}=0~{\rm on}~\partial\Omega. \end{align*} We know that \begin{align*}r_n-\eta^*=\inf_{\varphi\in H^1(\Omega)}\frac{\int_\Omega[|\nabla\varphi|^2-\theta\varphi^2+2u_n\varphi^2+\frac{a(x)v_n(1+kv_n)}{(1+mu_n+kv_n)^2}\varphi^2-\eta^*\varphi^2]dx} {\int_\Omega\varphi^2dx}, \end{align*} and $r_n\rightarrow\eta^*$ by the proof of Proposition \ref{p3.1}. So, there exists an $N>0$ such that $r_n-\eta^*>-\frac{\delta_0}{2}$ for $n>N$. Thus by (\ref{pp}), \begin{align*} -\frac{\delta_0}{2}\int_\Omega|\phi_n|^2dx&<(r_n-\eta^*)\int_\Omega|\phi_n|^2dx \nonumber\\&\leq\int_\Omega[|\nabla|\phi_n||^2-\theta|\phi_n|^2+2u_n|\phi_n|^2+\frac{a(x)v_n(1+kv_n)}{(1+mu_n+kv_n)^2}|\phi_n|^2-\eta^*|\phi_n|^2]dx \nonumber\\&\leq-\delta_0\int_{\Omega}|\phi_n|^2dx+\int_{\Omega_1}\frac{au_n(1+mu_n)}{(1+mu_n+kv_n)^2}|\psi_n||\phi_n| dx.\end{align*} Since $\frac{au_n(1+mu_n)}{(1+mu_n+kv_n)^2}\rightarrow 0$ in $C(\overline\Omega_1)$ as $n\rightarrow \infty$, we obtain $\int_\Omega|\phi_n|^2dx\rightarrow 0$. Using Kato's inequality again, we have \begin{align*}-\Delta|\psi_n|\leq\mu_n|\psi_n|-2v_n|\psi_n|+\frac{cu_n(1+mu_n)}{(1+mu_n+kv_n)^2}|\psi_n|+ \frac{cv_n(1+kv_n)}{(1+mu_n+kv_n)^2}|\phi_n|+(\eta^*-\delta_0)|\psi_n|.
\end{align*} Multiplying by $|\psi_n|$ and integrating by parts over $\Omega_1$, we get \begin{align*}\int_{\Omega_1}|\nabla|\psi_n||^2dx\nonumber&\leq\int_{\Omega_1}[\mu_n|\psi_n|^2-2v_n|\psi_n|^2+ \frac{cu_n(1+mu_n)}{(1+mu_n+kv_n)^2}|\psi_n|^2 \nonumber\\&~~+\frac{cv_n(1+kv_n)}{(1+mu_n+kv_n)^2}|\phi_n||\psi_n|+(\eta^*-\delta_0)|\psi_n|^2]dx \nonumber\\&\leq\int_{\Omega_1}[-\mu_n+\frac{cu_n(1+mu_n)}{(1+mu_n+kv_n)^2}+\eta^*-\delta_0]|\psi_n|^2dx \nonumber\\&~~+\int_{\Omega_1}\frac{cv_n(1+kv_n)}{(1+mu_n+kv_n)^2}|\phi_n||\psi_n| dx. \end{align*} Consequently, \begin{align*}\int_{\Omega_1}|\psi_n|^2dx\nonumber &\leq\frac{1}{\mu_n}\int_{\Omega_1}[\frac{cu_n(1+mu_n)}{(1+mu_n+kv_n)^2}+\eta^*-\delta_0]|\psi_n|^2dx\\ &~~+\frac{1}{\mu_n}\int_{\Omega_1}\frac{cv_n(1+kv_n)}{(1+mu_n+kv_n)^2}|\phi_n||\psi_n|dx\nonumber\\ &\leq\frac{1}{\mu_n}\int_{\Omega_1}(\frac{c\theta}{1+m\theta}+\eta^*-\delta_0)|\psi_n|^2dx +\frac{1}{\mu_n}\int_{\Omega_1}\frac{c}{k}|\phi_n||\psi_n|dx. \end{align*} It follows that $\int_{\Omega_1}|\psi_n|^2dx\rightarrow 0$ as $n\rightarrow\infty$, since $\mu_n\rightarrow \infty$ and $|\phi_n|,|\psi_n|$ are bounded in $L^2(\Omega_1)$. In summary, we have obtained $\int_\Omega|\phi_n|^2dx\rightarrow 0$ and $\int_{\Omega_1}|\psi_n|^2dx\rightarrow 0$ as $n\rightarrow\infty$, which contradicts $\|\phi_n\|_2+\|\psi_n\|_2=1$. By an argument similar to that in the proof of Proposition \ref{p3.1}, we get from the linear stability of the positive solutions to (\ref{s}) that the solution of (\ref{s}) must be unique when $\mu>\max\{\tilde{\mu},\mu_0\}$, where $\mu_0=\inf\{\mu_\delta;\, \delta\in (0,\eta^*)\}$. Finally, we consider the asymptotic behavior of the unique positive solution $(u,v)$ as $\mu\rightarrow \infty$. Since $\frac{cu}{1+mu+kv}\le \frac{c\theta}{1+m\theta+k\mu}\rightarrow 0$ as $\mu\rightarrow \infty$, for any $\epsilon>0$, there is a $\mu_\epsilon>0$ such that $\frac{cu}{1+mu+kv}<\epsilon$ for $\mu>\mu_\epsilon$.
Then \begin{align*} \mu v-v^2\leq-\Delta v\leq(\mu+\epsilon)v-v^2~{\rm in}~\Omega_1, \end{align*} which yields $\mu\leq v\leq\mu+\epsilon$ for $\mu>\mu_\epsilon$. Thus $v-\mu\rightarrow 0$ as $\mu\rightarrow\infty$. It follows that $\frac{a(x)v}{1+mu+kv}\rightarrow q_0(x)$, and hence $u\rightarrow U_{\theta,q_0}(x)$ uniformly on $\overline\Omega$ as $\mu\rightarrow\infty$. \qquad$\Box$ \section{Discussion} \mbox\indent In a reaction-diffusion predator-prey model, in addition to the interaction mechanism between the species, the dynamics are also affected by the diffusion of the species, as well as by the size and geometry of the habitat. Obviously, the prey species would die out under excessive predation from nature or humans. The results obtained in this paper show the way in which the created protection zone saves the endangered prey species in the diffusive predator-prey model with Beddington-DeAngelis type functional response and no-flux boundary conditions. Compared with previous results on protection zone problems with other functional responses, such as the Lotka-Volterra type competition system \cite{DL}, the Holling II type predator-prey system \cite{RC,DS2006}, and the Leslie type predator-prey system \cite{DPW}, richer dynamic properties have been observed for the model (\ref{a}) with B-D type functional response in this paper. A total of four threshold values are obtained here for the prey birth rate $\theta$, namely $\theta_0$, $\theta^*$, $\theta_1$ (for the predator growth rate $\mu>0$) and $\theta_*$ (for $\mu\le 0$). The first threshold value $\theta_0$ gives the necessary condition for establishing a protection zone to save the prey $u$.
By Theorem \ref{th1}, the survival of $u$ is automatically ensured without protection zones whenever $\theta>\theta_0=\frac{a}{k}$, which holds when the refuge ability of the prey is sufficiently large, i.e. $k>\frac{a}{\theta}$, or the predation rate is sufficiently small, i.e. $a<\theta k$. In other words, the protection zones have to be made only if the prey's refuge ability is too weak with respect to its birth rate $\theta$ and the predation rate $a$. This matches the mechanistic derivation of the B-D type functional response proposed in \cite{GG}. In addition, Theorem \ref{th1} also says that protection zones are unnecessary if the predator's growth rate $\mu\le 0$, in which case the predator species $v$ cannot live without the prey $u$, and thus the extinction of $v$ cannot take place after that of $u$. The second threshold value is $\theta^*=\theta^*(\mu,\Omega_0)=\lambda_1(q(x))$ with $q(x)=\frac{a(x)\mu}{1+k\mu}$ and $\mu>0$. By Theorem \ref{th2}, positive steady states can be attained for $\theta\in(\theta^*,\theta_0)$. Due to the monotonicity of the principal eigenvalue $\lambda_1=\lambda_1(q(x))$ with respect to $q(x)$, we know that the threshold value $\theta^*$ is enlarged (and hence harmful for the prey $u$) when the predation rate $a(x)$ or the predator's growth rate $\mu$ increases, or when the prey refuge $k$ or the size of the protection zone $\Omega_0$ decreases. Conversely, Theorem \ref{th2} also says that the prey $u$ must become extinct when $\theta\le\theta^*$, provided the handling time $m$ of $v_H$ is shorter than $\frac{(k\mu+1)^2}{a\mu}$. All of these results match those in \cite{GG}.
In addition, since $\theta^*{(\mu,\Omega_0)}\le\theta_0$ is strictly increasing with respect to $\mu$ and decreasing when enlarging $\Omega_0$, letting $\mu\rightarrow\infty$, we get the third threshold value $\theta_1=\theta_1(\Omega_0)$ such that if $\theta>\theta_1(\Omega_0)$, the prey species survives no matter how large the predator's growth rate is. The critical value $\theta=\theta_1(\Omega_0)$ implies a critical size of the protection zone as well, namely, if the actual protection zone satisfies $\widetilde{\Omega}_0\Supset \Omega_0$, then the survival of the prey with such a birth rate $\theta$ is independent of the predator's growth rate. Also, the uniqueness and linear stability obtained in Theorem \ref{th3} for $\mu$ large enough are reasonable because $\frac{cu}{1+mu+kv}\rightarrow 0$, and hence $v-\mu\rightarrow 0$, as $\mu\rightarrow\infty$. Since the condition $\mu \le 0$ yields the survival of $u$ without protection zones by Theorem \ref{th1}, the fourth threshold value $\theta_*$, obtained in Theorem \ref{th2'} with $\mu\in (-\frac{c}{m},0]$, only concerns the survival of $v$. In fact, the conversion rate of prey is limited by $\frac{c}{m}$; as shown in the proof of Theorem \ref{th1}, the predator $v$ must die out if its growth rate $\mu\le-\frac{c}{m}$. With such a non-positive growth rate $\mu\in (-\frac{c}{m},0]$, a sufficiently large number of prey is needed for the predator to survive, just as described by the criterion $\theta>\theta_*=\frac{|\mu|}{c-m|\mu|}\ge 0$ in Theorem \ref{th2'}. It is worth pointing out that the threshold value $\theta_*$ for the survival of the predator $v$ is enlarged as $m$ (the handling time of $v_H$) increases. This matches well with the mechanism in \cite{GG}. We have shown in this paper the effect of the refuge ability of the prey and of protection zones on the coexistence and stability of predator-prey species.
In fact, a protection zone can be regarded as another refuge offered by human intervention, which is necessary to prevent the extinction of the prey population when the prey's own refuge ability in the predator-prey system is too weak. The critical sizes of protection zones, obtained in this paper and represented by the principal eigenvalues $\theta^*=\lambda_1(q(x))$ and $\theta_1=\lambda_1(q_0(x))$, show the basic requirement (depending on the predator's growth rate) and the sufficient one (working under any predator's growth rate), respectively. The results of the present paper would be helpful in designing nature reserves, no-fishing zones, etc. \end{document}
arXiv
Using Fraction Notation: Addition, Subtraction, Multiplication & Division Instructor: Jeff Calareso Show bio Jeff teaches high school English, math and other subjects. He has a master's degree in writing and literature. In mathematics, a fraction is a number that is not whole, and mathematical equations that contain fractions can be challenging to understand and solve. Learn how to use fraction notations to make it easier to perform addition, subtraction, multiplication, and division in problems with fractions. Updated: 10/02/2021 Fraction Notation The term fraction notation just means a fraction written as a/b. We call the number above the line the numerator. The one below the line is the denominator. If it rains five days in a week, well, that's a dreary week. In fraction notation, we'd say it rained 5/7 days. The denominator represents the total number of days in the week. The numerator is the part of the whole, or the number of days it rained. What if we're in a Beatles song, and it rains eight days a week? Our fraction would be 8/7. That's called an improper fraction. It also breaks the calendar. But it's still a fraction written in fraction notation. In this lesson, we're going to learn how to do all the fun things you might want to do with fractions: addition, subtraction, multiplication and division. Whoa. That's a lot. But don't worry. We'll start simple and build from there. Coming up next: Factoring Out Variables: Instructions & Examples 0:02 Fraction Notation 0:59 Multiplication 1:57 Division 3:15 Addition 4:55 Subtraction You might think we'd start with addition, which is so often the simplest operation. But with fraction notation, multiplication is actually the easiest. When we multiply fractions, a/b * c/d = ac/bd. In other words, 2/3 * 5/7 equals 2 * 5 over 3 * 7. That's 10/21. Let's see that in action. Let's say there's 1/2 of a pie just sitting on the kitchen counter, begging to be eaten. You decide to eat 1/3 of what's there. That 1/2 * 1/3. 
We just multiply the numerators, 1 * 1, to get 1. Then we multiply the denominators, 2 * 3, to get 6. How much of the pie did you eat? 1/6. As you can see, there were originally 6 pieces, so your 1/3 of 1/2 is 1/6 of the original pie. Let's tackle division next. When we divide fractions, (a/b) / (c/d) = a/b * d/c. Wait, what? When we divide fractions, we take the reciprocal of the second fraction, and then multiply them together. In other words, flip the second fraction upside down, then multiply. So, 2/3 divided by 5/7 equals 2/3 * 7/5. That's 14/15. Should we see it in action? Ok. Let's say you're working off that pie by running a half marathon. But you only had a little pie, so you're running as part of a 4-person relay team. What fraction of a marathon are you running? That's 1/2, or half the marathon, divided by 4 people, or 4/1. To figure out (1/2) / (4/1), we take the reciprocal of 4/1. Again, just flip it upside down, like how your stomach feels if you go running too soon after eating pie. So 4/1 becomes 1/4. Then multiply 1/2 * 1/4. That's 1/8. So you'll run 1/8 of a full marathon. That's not bad! Ok, time to talk addition. When we add fractions, we find a common denominator. Then add the numerators. We can't add 1/2 and 1/4, but we can add 2/4 and 1/4, which is 3/4. Let's think about what this means. Let's say you have a box of 12 doughnuts. You eat one, or 1/12, of the doughnuts. Your friend eats 1/3 of the doughnuts. How do you compare 1/12 and 1/3? It's like your friend is trying to hide how many doughnuts he ate. Not cool. You need to figure out what 1/3 is in terms of the 12 doughnuts. That's what we mean by the common denominator. Remember that the denominator represents the whole, while the numerator is the part. If your doughnut-loving friend eats 1/3 of the doughnuts, how many out of 12 is that? To find the common denominator, you can multiply 1/3 * 4/4. Why? Because 3 * 4 is 12. 
And it's ok to multiply a fraction by some version of 1, which is what 4/4 is. That gets us 4/12. So your friend ate 4 doughnuts. Oh, man, that's a lot. I hope there's still a chocolate-frosted one left. If we want to know how many doughnuts were eaten, we'd be adding 1/12 and 1/3. To add these fractions, we find the common denominator, 12 - so it's 1/12 + 4/12 - and then we add the numerators: 1 + 4 = 5. So, 5 out of 12 doughnuts were eaten.

Subtraction
To subtract fractions, we also find a common denominator, and then we just subtract the numerators. Let's try this out. What if you and your friend have a falling out over what you now refer to as 'the doughnut incident.' You walked to the store to get those doughnuts, even though your friend lives closer. You live 3/4 of a mile from the store and he lives 1/8 of a mile from the store. How much closer is he? This is a classic fraction subtraction problem. What is 3/4 minus 1/8? We need a common denominator. That will be 8. Let's multiply 3/4 * 2/2 to get 6/8. We can work with 6/8 - 1/8. That's 5/8. So your doughnut-hogging friend is 5/8ths of a mile closer to the store.

To summarize, we learned about using fraction notation to perform basic operations. To multiply, we just multiply the numerators, then multiply the denominators. With division, we first flip the second fraction. This flipped fraction is called the reciprocal. Then we multiply them together. When adding or subtracting, we need to find common denominators. Then we add or subtract the numerators. At the end of this lesson, you should be able to add, subtract, multiply and divide fractions.
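As an illustrative aside (not part of the original lesson), all four rules can be checked with Python's built-in fractions module, which does the common-denominator and reciprocal work for you:

```python
from fractions import Fraction

# Multiply: 1/3 of the 1/2 pie on the counter
assert Fraction(1, 2) * Fraction(1, 3) == Fraction(1, 6)

# Divide: half a marathon split across a 4-person relay
assert Fraction(1, 2) / Fraction(4, 1) == Fraction(1, 8)

# Add: 1/12 + 1/3 of the doughnuts (common denominator 12)
assert Fraction(1, 12) + Fraction(1, 3) == Fraction(5, 12)

# Subtract: 3/4 of a mile minus 1/8 of a mile (common denominator 8)
assert Fraction(3, 4) - Fraction(1, 8) == Fraction(5, 8)
```

Each answer matches the worked examples above.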
The solution depends only on a little algebra and some clear mathematical thinking. Pierre, Tarbert Comprehensive, Ireland, Prateek, Riccarton High School, Christchurch, New Zealand and Vassil from Lawnswood Sixth Form, Leeds started by taking small values of $n$, usually a good way to begin. They all found the answer, which is $30$. This solution comes from Arun Iyer, S.I.A High School and Junior College, India.

Factorising gives $n^5-n = n(n-1)(n+1)(n^2+1)$, and it is quite easy to see that $n(n-1)(n+1)(n^2+1)$ is divisible by $2$, $3$ and $5$ for all values of $n$. As $n$, $(n-1)$ and $(n+1)$ are three consecutive integers, their product must be divisible by $2$ and by $3$. If none of these numbers is divisible by $5$ then $n$ is either of the form $5k+2$ or $5k+3$ for some integer $k$, and in both of these cases we can check that $n^2 + 1$ is divisible by $5$. Since $2$, $3$ and $5$ are pairwise coprime, $n^5 - n$ is divisible by $2 \times 3 \times 5$, i.e. by $30$. Since the second term of the sequence is $2^5-2 = 30$, the divisor cannot be greater than $30$. Therefore $30$ is the largest number that divides each member of the sequence.
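The argument can be confirmed computationally (an illustrative aside): every term $n^5-n$ is divisible by $30$, and the greatest common divisor of the first several terms is exactly $30$, so no larger divisor works.

```python
from math import gcd
from functools import reduce

# n^5 - n for the first several integers
terms = [n**5 - n for n in range(2, 21)]

# Every term is divisible by 30...
assert all(t % 30 == 0 for t in terms)

# ...and since 2^5 - 2 = 30 is itself a term, the gcd of the
# sequence is exactly 30 -- no larger number divides every member.
assert reduce(gcd, terms) == 30
```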
Solve the differential equation. (Use C for any needed constant.)
$$4\frac{dy}{d\theta} = \frac{e^y \sin^2\theta}{y \sec\theta}$$

Integration by parts and the variable-separable method

To solve the given equation we use integration by parts together with the variable-separable method.

Integration by parts. Let $u$ and $v'$ be two functions of $t$; then
$$\int u v' \, dt = uv - \int u' v \, dt + C.$$

Variable-separable method. If the differential equation is of the form
$$\frac{dy}{dx} = F(x)\,G(y),$$
where $F$ and $G$ are functions of $x$ and $y$ respectively, then we can rewrite it as
$$\frac{dy}{G(y)} = F(x)\,dx \;\Rightarrow\; \int \frac{dy}{G(y)} = \int F(x)\,dx.$$

The given equation is
$$4\frac{dy}{d\theta} = \frac{e^y \sin^2\theta}{y \sec\theta} \;\Rightarrow\; 4 y e^{-y}\,dy = \sin^2\theta \cos\theta \, d\theta.$$
Integrating the left side by parts and the right side by substitution gives the implicit solution
$$-4e^{-y}(y+1) = \frac{\sin^3\theta}{3} + C.$$
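As a numerical sanity check (an illustrative aside, using only the Python standard library), we can verify that the two antiderivatives used in the separation step differentiate back to the separated integrands:

```python
import math

# Antiderivatives from the separated equation
F = lambda y: -4.0 * (y + 1.0) * math.exp(-y)   # candidate for ∫ 4 y e^{-y} dy
G = lambda t: math.sin(t)**3 / 3.0              # candidate for ∫ sin^2(t) cos(t) dt

def deriv(f, x, h=1e-6):
    """Central finite-difference derivative."""
    return (f(x + h) - f(x - h)) / (2 * h)

# F' should recover 4 y e^{-y}; G' should recover sin^2(t) cos(t)
for y in (0.5, 1.0, 2.0):
    assert abs(deriv(F, y) - 4 * y * math.exp(-y)) < 1e-6
for t in (0.3, 1.0, 2.0):
    assert abs(deriv(G, t) - math.sin(t)**2 * math.cos(t)) < 1e-6
```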
How can I fix jerky movement in a continuous action space

I am training an agent to do object avoidance. The agent has control over its steering angle and its speed. The steering angle and speed are normalized in a $[−1,1]$ range, where the sign encodes direction (i.e. a speed of −1 means that it is going backwards at its maximum speed). My reward function penalises the agent for colliding with an obstacle and rewards it for moving away from its starting position. At a time $t$, the reward, $R_t$, is defined as $$ R_t= \begin{cases} r_{\text{collision}},&\text{if collides,}\\ \lambda_d\left(\|\mathbf{p}^{x,y}_t-\mathbf{p}_0^{x,y}\|_2-\|\mathbf{p}_{t-1}^{x,y}-\mathbf{p}_0^{x,y}\|_2 \right),&\text{otherwise,} \end{cases} $$ where $\lambda_d$ is a scaling factor and $\mathbf{p}_t$ gives the pose of the agent at a time $t$. The idea being that we should reward the agent for moving away from the initial position (and in a sense 'exploring' the map—I'm not sure if this is a good way of incentivizing exploration but I digress). My environment is an unknown two-dimensional map that contains circular obstacles (with varying radii). And the agent is equipped with a sensor that measures the distance to nearby obstacles (similar to a 2D LiDAR sensor). The figure below shows the environment along with the agent. Since I'm trying to model a car, I want the agent to be able to go forward and reverse; however, when training, the agent's movement is very jerky. It quickly switches between going forward (positive speed) and reversing (negative speed). This is what I'm talking about. One idea I had was to penalise the agent when it reverses. While that did significantly reduce the jittery behaviour, it also caused the agent to collide into obstacles on purpose. In fact, over time, the average episode length decreased. I think this is the agent's response to the reverse penalties. Negative rewards incentivize the agent to reach a terminal point as fast as possible.
In our case, the only terminal point is obstacle collision. So then I tried rewarding the agent for going forward instead of penalising it for reversing, but that did not seem to do much. Evidently, I don't think trying to correct the jerky behaviour directly through rewards is the proper approach. But I'm also not sure how I can do it any other way. Maybe I just need to rethink what my reward signal wants the agent to achieve? How can I rework the reward function to have the agent move around the map, covering as much distance as possible, while also maintaining smooth movement? reinforcement-learning deep-learning deep-rl rewards reward-shaping Shon Verch

I think you should try to reason in terms of the total "area" explored by the agent rather than "how far" it moves from the initial point, and you should also add some reward terms to push the agent to steer more often. I think the problem with your setting is more or less this: the agent goes as straight as it can because you're rewarding it for that; it starts sensing an obstacle, so it stops; and since there is no reward for steering, the best strategy to get away from the obstacle without ending the episode is simply to go backwards.
Considering that you have information about the grid points at any time, you could rewrite the reward function in terms of grid squares explored, by checking at each move whether the agent ends up in a new grid square: $$ R_t= \begin{cases} r_{\text{collision}},&\text{if collides,}\\ \lambda_d\left(\|\mathbf{p}^{x,y}_t-\mathbf{p}_0^{x,y}\|_2-\|\mathbf{p}_{t-1}^{x,y}-\mathbf{p}_0^{x,y}\|_2 \right) + r_{\text{new-square-explored}},&\text{otherwise.} \end{cases} $$ Moreover, it would be useful to add some reward terms related to how the agent avoids obstacles: for example, a penalisation when the sensor reading drops and remains under a certain threshold (to make the agent learn not to get and stay too close to an obstacle), but also a rewarding term when an obstacle is detected and the agent manages to maintain a certain distance from it (even though, if not well tuned, this term could lead the agent to learn to just run in circles around a single obstacle; if tuned properly, I think it might help make the agent's movements smoother). Edoardo Guerriero $\begingroup$ I hadn't thought of it like that..you're right! Mind clarifying the $r_{new-squared-explored}$ term? Should this be the new cells the agent visited at time $t$? $\endgroup$ – Shon Verch Aug 31 '20 at 15:29 $\begingroup$ @ShonVerch yes exactly. Reward n if it visits a new square at time t or 0 if it doesn't (even a penalisation could be a possibility, it depends on how much the agent should move around). Of course it works only if you have complete information about the environment, but it seems to be the case for your setting. $\endgroup$ – Edoardo Guerriero Aug 31 '20 at 15:41 $\begingroup$ Alright cool. Quick question about more general reinforcement learning. Is it fine if my reward function uses information that is not accessible to the agent (i.e. not part of the observation space) at inference. Because while I have information about the grid, I only have that in the simulated environment.
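A minimal sketch of how such a reward could be computed, assuming access to the agent's pose and a fixed cell size; all names and constants here (`cell_size`, `r_new_cell`, the collision penalty value, etc.) are illustrative choices, not from the original post:

```python
import numpy as np

def compute_reward(pos, prev_pos, start_pos, collided, visited,
                   cell_size=1.0, r_collision=-100.0, lam_d=1.0, r_new_cell=1.0):
    """Distance-gain reward plus a one-time bonus for entering a new grid cell.

    Returns (reward, updated visited-cell set)."""
    if collided:
        return r_collision, visited
    # Original term: gain in distance from the starting position
    gain = lam_d * (np.linalg.norm(np.subtract(pos, start_pos))
                    - np.linalg.norm(np.subtract(prev_pos, start_pos)))
    # Exploration bonus: reward the first visit to each grid cell
    cell = (int(pos[0] // cell_size), int(pos[1] // cell_size))
    bonus = 0.0
    if cell not in visited:
        visited = visited | {cell}
        bonus = r_new_cell
    return gain + bonus, visited
```

With this shape, repeatedly shuttling back and forth over the same cells earns only the (possibly negative) distance-gain term, while genuinely new territory earns the bonus.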
$\endgroup$ – Shon Verch Aug 31 '20 at 15:49
Evaluation of the interaction between insecticide resistance-associated genes and malaria transmission in Anopheles gambiae sensu lato in central Côte d'Ivoire

Rosine Z. Wolie (ORCID: 0000-0002-3499-3114), Alphonsine A. Koffi, Ludovic P. Ahoua Alou, Eleanore D. Sternberg, Oulo N'Nan-Alla, Amal Dahounto, Florent H. A. Yapo, Kpahe M. H. Kanh, Soromane Camara, Welbeck A. Oumbouke, Innocent Z. Tia, Simon-Pierre A. Nguetta, Matthew B. Thomas & Raphael NGuessan

There is evidence that the knockdown resistance gene (Kdr) L1014F and acetylcholinesterase-1 gene (Ace-1R) G119S mutations involved in pyrethroid and carbamate resistance in Anopheles gambiae influence malaria transmission in sub-Saharan Africa. This is likely due to changes in the behaviour, life history and vector competence and capacity of An. gambiae. In the present study, performed as part of a two-arm cluster randomized controlled trial evaluating the impact of household screening plus a novel insecticide delivery system (In2Care Eave Tubes), we investigated the distribution of insecticide target site mutations and their association with infection status in wild An. gambiae sensu lato (s.l.) populations. Mosquitoes were captured in 40 villages around Bouaké by human landing catch from May 2017 to April 2019. Randomly selected samples of An. gambiae s.l. that were infected or not infected with Plasmodium sp. were identified to species and then genotyped for Kdr L1014F and Ace-1R G119S mutations using quantitative polymerase chain reaction assays. The frequencies of the two alleles were compared between Anopheles coluzzii and Anopheles gambiae and then between infected and uninfected groups for each species. The presence of An. gambiae (49%) and An. coluzzii (51%) was confirmed in Bouaké. Individuals of both species infected with Plasmodium parasites were found.
Over the study period, the average frequency of the Kdr L1014F and Ace-1R G119S mutations did not vary significantly between study arms. However, the frequencies of the Kdr L1014F and Ace-1R G119S resistance alleles were significantly higher in An. gambiae than in An. coluzzii [odds ratio (95% confidence interval): 59.64 (30.81–131.63) for Kdr, and 2.79 (2.17–3.60) for Ace-1R]. For both species, there were no significant differences in Kdr L1014F or Ace-1R G119S genotypic and allelic frequency distributions between infected and uninfected specimens (P > 0.05). Either alone or in combination, Kdr L1014F and Ace-1R G119S showed no significant association with Plasmodium infection in wild An. gambiae and An. coluzzii, demonstrating the similar competence of these species for Plasmodium transmission in Bouaké. Additional factors including behavioural and environmental ones that influence vector competence in natural populations, and those other than allele measurements (metabolic resistance factors) that contribute to resistance, should be considered when establishing the existence of a link between insecticide resistance and vector competence. Mosquitoes of the Anopheles gambiae species complex are the main malaria vectors in sub-Saharan Africa [1]. The remarkable vector capacity of these mosquitoes [2] is largely due to their propensity to blood feed on humans and rest indoors [3]. The great ability of these mosquitoes to adapt to human behaviour has led to the development of insecticide-based vector control measures targeting indoor biting and resting. These measures primarily comprise the use of long-lasting insecticidal nets and indoor residual spraying, which are used to limit human-vector contact and reduce mosquito survival [4]. These insecticide-based vector control tools have been highly effective against malaria vectors, as shown by considerable reductions in disease burden [5]. 
However, the long-term effectiveness of both of these strategies is threatened by the emergence of insecticide resistance in malaria vector populations [6, 7]. There are several mechanisms responsible for insecticide resistance, of which metabolic and target site resistance are the most common [8,9,10]. Metabolic resistance leads to an increase in the activities of enzymes responsible for an insecticide's degradation, while modification of the insecticide target site prevents the insecticide molecule from binding to the site. The molecular basis of resistance mediated by target site mutations has been characterized for several mosquito populations [11,12,13]. For example, the G119S mutation in the acetylcholinesterase-1 gene (Ace-1R) (a single amino acid substitution from glycine to serine at locus 119 at the acetylcholinesterase catalytic site) is responsible for organophosphate and carbamate resistance among malaria vectors in West Africa [14]. Likewise, the L1014F mutation of the knockdown resistance (Kdr) gene, also called the Kdr-west mutation (an amino acid substitution from leucine to phenylalanine in the voltage gated sodium channel gene, at the 1014 locus, typically causing knock down resistance) is responsible for pyrethroid and dichlorodiphenyltrichloroethane resistance in mosquito populations [12]. Despite the rise of insecticide resistance, its operational significance for vector control is controversial. In many instances, insecticide-based tools seem to continue to protect against malaria [15,16,17,18], whereas a community trial of long-lasting insecticidal nets clearly demonstrated that resistance is having an impact on their effectiveness [19]. Resistance is dynamic and therefore cannot be randomized to assess its epidemiological impact. Several studies have evaluated the association between single insecticide resistance gene mutations (of Kdr or Ace-1R) and vector competence in An. gambiae [20,21,22]. 
However, these involved laboratory assays utilizing mosquito colonies or wild strains infected with malaria parasites in the laboratory. The coexistence of both Kdr and Ace-1R in wild populations of An. gambiae sensu lato (s.l.) is common in west Africa, including Côte d'Ivoire [23, 24]. To our knowledge, the impact of this association on vector competence has never been studied. We took advantage of a two-arm cluster randomized controlled trial evaluating the impact of household screening plus a novel insecticide delivery system (In2Care Eave Tubes) to capture mosquitoes in study villages around Bouaké by human landing catches, between May 2017 and April 2019. Mosquitoes were identified to species and then genotyped for Kdr L1014F and Ace-1R G119S mutations using quantitative polymerase chain reaction (qPCR) assays, and the frequencies of the two alleles were compared between Anopheles coluzzii and Anopheles gambiae and then between infected and uninfected groups for each species.

The trial was conducted from May 2017 to April 2019 in central Côte d'Ivoire. The methodology used in this study has been well described by Sternberg et al. [25]. Briefly, 40 villages within a 60-km radius in the district of Bouaké were identified for inclusion in the study. All the households in the 40 study villages received insecticide-treated nets, while those of half of the study villages (20 villages) also had household screening (S) and In2Care Eave Tubes (ET) installed (SET).

Mosquito collection and processing

The mosquito-collection process has been previously described by Sternberg et al. [25]. Each month during the trial, mosquitoes were sampled by human landing catches (HLC) both indoors and outdoors at four randomly selected houses in each of the 40 study villages. HLC were undertaken from 6 p.m. to 8 a.m. the following day for two consecutive nights during the first 5 months of the trial and then on one night per month until the end of the trial.
The collected mosquitoes were sorted and morphologically identified to species using the key described by Gillies and Meillon [26] and counted. All malaria vectors were stored for further analysis, but for the interaction study, only An. gambiae s.l., the main malaria vector in Côte d'Ivoire, was considered. PCR assays were used to assess sporozoite prevalence in a monthly random sub-sample of up to 30 female mosquitoes per village. Mosquitoes were identified to sibling species and Kdr L1014F and Ace-1R G119S mutations detected. Genomic DNA was extracted from the head and thorax of individual females using cetyltrimethylammonium bromide, as described by Yahouedo et al. [27].

Detection of Plasmodium infection

Plasmodium spp. (Plasmodium malariae, Plasmodium falciparum, Plasmodium ovale and Plasmodium vivax) infections were detected by real-time PCR in accordance with Mangold et al. [28]. The primers were synthesized and supplied by Eurofins Genomics (Ebersberg, Germany) and were as follows: forward PL1473F18 (5′-TAA CGA AGA ACG TCT TAA-3′) and reverse PL1679R18 (5′-GTT CCT CTA AGA AGC TTT-3′). The reactions were prepared in a total reaction volume of 10 μl, which contained 2 μl of 5× HOT FIREPol EvaGreen qPCR Mix Plus (Solis Biodyne, Tartu, Estonia), 0.3 μl of each primer, 6.4 μl of sterile water, and 1 μl of DNA template. The real-time PCR mixtures were pre-incubated at 95 °C for 12 min followed by amplification for 50 cycles of 10 s at 95 °C, 5 s at 50 °C and 20 s at 72 °C, with fluorescence acquisition at the end of each cycle. Characterisation of the PCR product was performed with melting curve analysis of the amplicons (95 °C for 60 s, 60 °C for 60 s, then 60–90 °C for 1 s), with fluorescence acquisition at each temperature transition. Plasmodium species were identified by the melting curves generated at different temperatures (i.e., for P. malariae, 73.5–75.5 °C; for P. falciparum, 75.5–77.5 °C; for P. ovale, 77.5–79.5 °C; and for P. vivax, 79.5–81.5 °C).
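As an illustrative aside (not part of the paper's protocol), the species call from a measured melting temperature amounts to a lookup over the reported ranges; the half-open intervals below are an assumption about how boundary values between adjacent ranges would be resolved:

```python
# Melting-temperature ranges (°C) reported above for each Plasmodium species.
MELT_RANGES = [
    ("P. malariae", 73.5, 75.5),
    ("P. falciparum", 75.5, 77.5),
    ("P. ovale", 77.5, 79.5),
    ("P. vivax", 79.5, 81.5),
]

def classify_species(tm):
    """Return the species whose range contains tm, or None for no call."""
    for species, lo, hi in MELT_RANGES:
        if lo <= tm < hi:  # half-open: a shared boundary goes to the upper range
            return species
    return None  # outside all ranges: no call
```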
A subsample of 1392 An. gambiae s.l. (686 infected with Plasmodium sp. and 706 uninfected, which were randomly selected) was analysed for molecular identification of sibling species. The molecular identification was performed using a classic PCR assay in accordance with Favia et al. [29]. The following primers were used: R3 (5'-GCC AAT CCG AGC TGA TAG CGC-3'), R5 (5'-CGA ATT CTA GGG AGC TCC AG-3'), Mopint (5'-GCC CCT TCC TCG ATG GCA T-3') and B/Sint (5'-ACC AAG ATG GTT CGT TGC-3'). The reaction mixture consisted of 14 μl of sterile water, 0.75 μl of each primer R3 and R5, 1.5 μl of each primer Mopint and B/Sint, and 5 µl of Master Mix. A 23.5-µl volume of the reaction mixture was inserted into each 0.5-ml PCR tube along with 1 µl of each DNA sample. Amplification was performed on a MJ Research PTC-100 Thermal Cycler PCR machine (Marshall Scientific, Watertown, MA) with cycling conditions of 95 °C for 3 min, followed by 30 cycles at 95 °C for 30 s, 72 °C for 45 s and 72 °C for 60 s. Amplified fragments were analysed on 2% agarose gel with 4 μl of SYBR Green. The results were analysed as described in Favia et al. [29] to determine An. coluzzii [1300-bp band (R3/R5) plus 727-bp band (Mop-int)] or An. gambiae [1300-bp band (R3/R5) plus 475-bp band (B/S-int)]. Detection of Kdr L1014F mutation in An. gambiae s.l. Detection of the Kdr L1014F mutation was performed using the TaqMan real-time PCR assay, as described by Bass et al. [30]. The reactions were carried out in a total reaction volume of 10 μl, which contained 2 μl of 5× HOT FIREPol Probe Universal qPCR Mix (Solis Biodyne), 0.125 µl primer/probe mix, 6.875 μl of sterile water, and 1 μl of DNA template. Primers Kdr-forward (5'-CATTTTTCTTGGCCACTGTAGTGAT-3') and Kdr-reverse (5'-CGATCTTGGTCCATGTTAATTTGCA-3') were standard oligonucleotides with no modification. The probes were labelled with two distinct fluorophores: VIC to detect the susceptible allele, and FAM to detect the resistant allele. 
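The gel-based interpretation of Favia et al. described above (a 1300-bp R3/R5 band plus either a 727-bp Mop-int band or a 475-bp B/S-int band) can be sketched as a simple band-pattern classifier; the helper name is hypothetical and the logic is only what the text states.

```python
def identify_sibling_species(band_sizes_bp):
    """Assign An. coluzzii vs An. gambiae from observed agarose-gel band
    sizes (bp), following the interpretation described in the text:
    1300 bp (R3/R5) + 727 bp (Mop-int) -> An. coluzzii
    1300 bp (R3/R5) + 475 bp (B/S-int) -> An. gambiae
    """
    bands = set(band_sizes_bp)
    if 1300 not in bands:
        return None  # no R3/R5 product: assay uninterpretable
    if 727 in bands:
        return "An. coluzzii"
    if 475 in bands:
        return "An. gambiae"
    return None
```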
Amplifications were performed on a LightCycler 96 Systems real-time qPCR machine (Roche LifeScience, Meylan, France) with cycling conditions of 95 °C for 10 min, followed by 45 cycles at 95 °C for 10 s, 60 °C for 45 s and 72 °C for 1 s. FAM and VIC fluorescence was captured at the end of each cycle and genotypes were called from endpoint fluorescence using LightCycler 96 software (Roche LifeScience) for the analysis of the results. Detection of Ace-1 R G119S mutation in An. gambiae s.l. Allelic and genotypic frequencies for insensitive acetylcholinesterase phenotypes characterized by the G119S mutation were determined for An. gambiae s.l. by using the TaqMan assay, in accordance with Bass et al. [31]. The reactions were carried out in a total reaction volume of 10 μl, which contained 2 μl of the 5× HOT FIREPol Probe Universal qPCR Mix (Solis Biodyne), 0.125 µl primer/probe mix, 6.875 μl of sterile water, and 1 μl of DNA template. Primers Ace-1-Forward (5'-GGC CGT CAT GCT GTG GAT-3'), and Ace-1-Reverse (5'-GCG GTG CCG GAG TAG A-3') were standard oligonucleotides with no modification. The probes were labelled with two distinct fluorophores: VIC to detect the susceptible allele and FAM to detect the resistant allele. Amplifications were performed on a LightCycler 96 Systems real-time qPCR machine (Roche LifeScience) with cycling conditions of 95 °C for 10 min, followed by 55 cycles at 92 °C for 15 s, 60 °C for 60 s and 72 °C for 1 s. FAM and VIC fluorescence was captured at the end of each cycle and genotypes were called from endpoint fluorescence using LightCycler 96 software (Roche LifeScience) for the analysis of the results. To analyse the distribution of Kdr L1014F and Ace-1R G119S genotypic and allelic frequencies, data collected for the same study arm between May 2017 and April 2019 were compared between the species. 
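Endpoint-fluorescence genotype calling of the kind described above can be illustrated with a minimal sketch. Note this is a toy assumption, not the LightCycler 96 software's actual algorithm: it simply thresholds the FAM (resistant-allele) and VIC (susceptible-allele) signals, and the threshold value is arbitrary.

```python
def call_genotype(fam_signal, vic_signal, threshold=0.2):
    """Toy endpoint-fluorescence genotype call. FAM reports the resistant
    allele and VIC the susceptible allele, as in the assays described.
    Returns 'RR', 'RS', 'SS', or None if neither dye amplified.
    """
    resistant = fam_signal > threshold
    susceptible = vic_signal > threshold
    if resistant and susceptible:
        return "RS"
    if resistant:
        return "RR"
    if susceptible:
        return "SS"
    return None  # failed reaction: no call
```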
The association between the genotypic and allelic frequencies of these mutations and infection status was determined using Pearson's chi-square test in R (version 4.0.3). The combined Kdr L1014F and Ace-1R G119S genotypic frequency distributions by infection status were also examined for each species. Fisher's exact test was used when the number of individual samples available for a test was less than 30. The significance threshold was set at 5%. Odds ratios (ORs) were computed to assess the strength of the association between resistance alleles and infection status. Allelic frequencies were tested for conformity to Hardy–Weinberg equilibrium (HWE) using the exact HW test, and were calculated as follows: $$R\ \text{allelic frequency}=\frac{RS+2(RR)}{2(RS+RR+SS)}$$ where RR indicates the resistant homozygous genotype, RS the heterozygous genotype, and SS the susceptible homozygous genotype. Nota bene: the Kdr L1014F and Ace-1R G119S mutations each comprise three genotypes expressing different allelic variants at the targeted loci; the resistant (R) and susceptible (S) alleles are the possible versions of these genes. Ethical clearance Ethical approval was obtained from the ethics committee of the Côte d'Ivoire Ministry of Health (reference 039/MSLS/CNER-dkn), the Pennsylvania State University Human Research Protection Program under the Office for Research Protections (references STUDY00003899 and STUDY00004815), and the ethical review board of the London School of Hygiene and Tropical Medicine (no. 11223). Verbal and written informed consent, using the language spoken locally, was obtained from all the participants (mosquito collectors and the head of each household) prior to their enrolment in the study. Mosquito collectors were vaccinated against yellow fever, and the project provided treatment of confirmed malaria cases free of charge for any study participant, in accordance with national policies. 
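The allelic-frequency formula and the odds-ratio calculation used in the statistical analysis can be sketched as follows. The function names are ours, and the study performed its tests in R, but the arithmetic is identical.

```python
def r_allele_frequency(rr, rs, ss):
    """Resistant-allele frequency from genotype counts, as in the formula
    in the text: (RS + 2*RR) / (2*(RS + RR + SS))."""
    total = rr + rs + ss
    if total == 0:
        raise ValueError("no genotyped individuals")
    return (rs + 2 * rr) / (2 * total)

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table with cells
    a = resistant & infected,   b = susceptible & infected,
    c = resistant & uninfected, d = susceptible & uninfected.
    OR = (a/b) / (c/d) = (a*d) / (b*c)."""
    return (a * d) / (b * c)
```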
Genotypic and allelic frequency distributions of Kdr L1014F and Ace-1R G119S mutations in An. gambiae s.l. Out of the 1392 mosquitoes analysed by PCR, 1255 were successfully identified to species (< 10% failure rate). Both An. gambiae (n = 624; 49.7%) and An. coluzzii (n = 631; 50.3%) were found. For each species, the proportions of infected vs uninfected individuals were similar (Fig. 1). There were no significant differences in the allelic frequency of Kdr or Ace-1R between the control and Eave Tube areas for each species (P > 0.05) (Table 1). Genotypic and allelic frequencies of Kdr L1014F and Ace-1R G119S mutations for An. coluzzii and An. gambiae are shown in Table 2. Kdr allelic frequency was significantly greater in An. gambiae than in An. coluzzii [OR (95% confidence interval; 95% CI): 59.64 (30.81–131.63)] (Table 2). By contrast, the frequency of heterozygous individuals was significantly higher for An. coluzzii (42.95%) than for An. gambiae (1.12%), indicating deviation from HWE in the An. gambiae populations, with an excess of resistant homozygous genotypes (Table 2) (P < 0.001). Anopheles gambiae sensu lato distribution by infection status. Error bars represent 95% confidence intervals (CIs). SET Screening plus In2Care Eave Tubes Table 1 Allelic frequencies of knockdown resistance gene (Kdr) L1014F mutation and acetylcholinesterase-1 gene (Ace-1R) G119S mutation between study arms Table 2 Genotypic and allelic frequencies of Kdr L1014F and Ace-1R G119S gene mutations in Anopheles gambiae and Anopheles coluzzii The allelic frequency of the Ace-1R G119S mutation was low in both An. coluzzii and An. gambiae, although it was significantly more prevalent in An. gambiae than in An. coluzzii [OR (95% CI): 2.79 (2.17–3.60)]. Deviation from HWE for Ace-1R G119S was observed for both An. gambiae and An. coluzzii populations. 
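The HWE deviations reported here can be illustrated with a chi-square goodness-of-fit sketch against expected genotype proportions. Note that the study used the exact HW test, so this chi-square version is an approximation for illustration only; an excess of homozygotes, as described for An. gambiae, inflates the statistic.

```python
def hwe_chi_square(rr, rs, ss):
    """Chi-square statistic for deviation from Hardy-Weinberg equilibrium,
    computed from observed genotype counts (1 degree of freedom).
    Expected counts under HWE: RR = p^2*n, RS = 2pq*n, SS = q^2*n,
    where p is the resistant-allele frequency and q = 1 - p."""
    n = rr + rs + ss
    p = (2 * rr + rs) / (2 * n)  # resistant-allele frequency
    q = 1 - p
    expected = {"RR": p * p * n, "RS": 2 * p * q * n, "SS": q * q * n}
    observed = {"RR": rr, "RS": rs, "SS": ss}
    return sum((observed[g] - e) ** 2 / e for g, e in expected.items() if e > 0)
```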
Insecticide-resistance genes and infection status The genotypic and allelic frequencies of Kdr L1014F and Ace-1R G119S gene mutations among infected and uninfected mosquitoes are shown in Table 3. Regardless of the species, there were no significant differences in genotypic or allelic frequencies between infected and uninfected individuals (P > 0.05) (Table 3). Table 3 Genotypic and allelic frequencies of Kdr L1014F and Ace-1R G119S gene mutations between infected and uninfected Anopheles gambiae and Anopheles coluzzii Frequencies of Kdr and Ace-1R genotypic combinations and infection status Nine possible genotypic combinations for the Kdr L1014F and Ace-1R G119S mutations were recorded in this study (Fig. 2). For all genotypic combinations, the first two alleles refer to Kdr genotypes whereas the last two alleles refer to Ace-1R genotypes: Kdr-Ace-1R (RRRR), Kdr-Ace-1R (RRRS), Kdr-Ace-1R (RRSS), Kdr-Ace-1R (RSRR), Kdr-Ace-1R (RSRS), Kdr-Ace-1R (RSSS), Kdr-Ace-1R (SSRR), Kdr-Ace-1R (SSRS), and Kdr-Ace-1R (SSSS). Figure 2 shows that the frequency of individuals bearing Kdr RR genotypes, either when present alone or together with Ace-1R genotypes, was significantly higher in wild An. gambiae than in wild An. coluzzii; this was observed in both control and SET areas. By contrast, the frequencies of mosquitoes bearing the Kdr heterozygous genotype were significantly higher for An. coluzzii than for An. gambiae, confirming that the former species is better adapted to insecticide pressure than the latter one (Fig. 2). Overall, there were no significant differences between infected and uninfected groups for each of the genotypic combinations for An. coluzzii or An. gambiae. Frequencies of Kdr and Ace-1R genotypic combinations between infected and uninfected groups in each study arm. Error bars represent 95% CIs. For all combined genotypes, the first two alleles refer to Kdr genotypes and the last two refer to Ace-1R genotypes. 
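The nine genotypic combinations enumerated above are simply the Cartesian product of the three possible genotypes at each locus, which can be generated directly:

```python
from itertools import product

# Three genotypes at each locus (Kdr and Ace-1R) give 3 x 3 = 9 combined
# genotypes; the first two letters are the Kdr genotype, the last two Ace-1R.
genotypes = ["RR", "RS", "SS"]
combinations = [kdr + ace for kdr, ace in product(genotypes, genotypes)]
```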
RR Resistant homozygous genotype, RS heterozygous genotype, SS susceptible homozygous genotype; for other abbreviations, see Fig. 1 This study evaluated the effects of the Kdr L1014F and Ace-1R G119S gene mutations on Plasmodium spp. infection status in natural An. gambiae s.l. populations. The presence of both An. coluzzii and An. gambiae in similar proportions in this longitudinal study was consistent with the results of previous studies carried out in the area of Bouaké [24, 32], but it contrasts with the results of another study conducted in adjacent areas within Bouaké that found An. coluzzii to be predominant [33]. The observed difference is likely due to our sampling period covering both the rainy and dry seasons, compared to the rainy season only in the other study [33]. We observed no difference in infection rate between An. gambiae and An. coluzzii. This aligns with the results of previous studies conducted in Burkina Faso and Senegal [21, 34], which reported equivalent susceptibility of these species to Plasmodium. The results presented here demonstrate that these sibling species are equally competent vectors of malaria in humans in the central region of Côte d'Ivoire. With regard to resistance genes, there were no significant differences in the allelic frequency of Kdr or Ace-1R between the control and Eave Tube areas regardless of mosquito species. This is because Kdr was already close to fixation (> 80%) in An. gambiae s.l. species prior to the intervention employing the Eave Tubes [24], leaving a tiny window for further selection. Also, the insecticide deployed in the Eave Tube trial was a pyrethroid (β-cyfluthrin) [35], which could not induce selection pressure on Ace-1R since this gene is associated with organophosphate and carbamate resistance [14, 24]. We found significantly higher Kdr L1014F and Ace-1R G119S genotypic and allelic frequencies in An. gambiae than in An. 
coluzzii, which was in agreement with the observations of Koukpo et al. [36] in Benin and Zogo et al. [37] in Côte d'Ivoire. There was a 59 times greater probability of encountering the Kdr L1014F resistance allele in An. gambiae than in An. coluzzii, whereas the frequency of individuals heterozygous for Kdr L1014F was higher for An. coluzzii (42.95%) than for An. gambiae (1.12%). These results clearly highlight a deviation from HWE within both malaria vector species for the Kdr L1014F mutation. It is possible that evolutionary factors affect mosquito population structure through the excessive use of insecticides. These factors induce the selection of rare and existing mutations in natural populations of both species, which later become variably widespread [38]. Furthermore, Ace-1R G119S allelic frequency was significantly higher in An. gambiae than in An. coluzzii, although the difference was moderate. The low proportion (< 10%) of homozygous resistant (RR) genotypes observed in An. gambiae and An. coluzzii populations could indicate a high fitness cost associated with the Ace-1R G119S gene [39, 40]. Conversely, this potential fitness cost associated with Ace-1R may be counteracted by duplication of this gene, which gives rise to various heterozygous genotypes and increases their proportions [41]. Further studies focusing on Ace-1R genotype distribution, including duplication in An. gambiae s.l., are needed. Our study showed that in areas where Kdr L1014F and Ace-1R G119S coexist in An. gambiae s.l., the frequency of individuals bearing the Kdr L1014F RR genotype appeared to be significantly higher for An. gambiae than for An. coluzzii. By contrast, the frequencies of those bearing the Kdr L1014F heterozygous genotype were significantly higher for An. coluzzii than for An. gambiae, confirming the trend seen when this genotype is present in isolation. To our knowledge, this is the first study to evaluate the distribution of An. gambiae s.l. 
individuals bearing both of these mutations. The results presented here call for further studies to better understand the genotypic structure of their combinations. The vector competence in association with resistance genes was investigated. We found no evidence of an association between Plasmodium infection status and Kdr L1014F or Ace-1R G119S gene mutations. These results are similar to those found in a study undertaken in Guinea where these target site mutations (Kdr L1014F or Ace-1R G119S) were not associated with Plasmodium infection in wild An. gambiae [42], but phenotypic resistance was rather associated with infection. By contrast, a study in Tanzania found a link between Kdr-east and Plasmodium infection in wild An. gambiae [43]. The lack of an association between Plasmodium infection status and resistance genes under natural conditions contrasts with the findings of several other studies, which reported that resistance-associated genes affect vector competence for the transmission of Plasmodium parasites [20, 21, 44]. There are three possible reasons for these differences. First, these contrasting results could derive from studies that used colonies maintained in the laboratory over years, which can decrease resistance, including a loss of genetic diversity [45, 46]. Second, some genetic susceptibility studies do not take into account additional factors that influence competence in natural vector populations, e.g. mosquito blood-feeding rate, age at infection, longevity, and exposure to an insecticide and to other pathogens that could influence mosquito immune status [47,48,49,50,51]. A natural infection study also implies that the effects of ecology and behaviour on vector competence have been assessed [52, 53]. Third, resistance involves mutations and metabolic components with different functions; therefore studying one in isolation from the other may not be representative of phenotypic resistance. 
The absence of an association between a combination of genotypes (Kdr L1014F-Ace-1R G119S) and infection status in An. coluzzii or An. gambiae needs to be considered further in the context of control programmes, given that this is now a common observation in many parts of west Africa [13, 24]. We found no significant association between the Kdr L1014F and Ace-1R G119S mutations, when present alone or together, and infection status in wild An. gambiae and An. coluzzii, which demonstrates the similar competence of these species for Plasmodium transmission within areas of Bouaké. However, the frequencies of the Kdr and Ace-1R genotypes and alleles were significantly higher in An. gambiae than in An. coluzzii. Additional factors that influence vector competence in natural vector populations and measurements of factors besides alleles or genotypes that contribute to resistance should be considered when investigating the existence of a link between insecticide resistance and vector competence. The data supporting the conclusions of this manuscript are included within the manuscript and are available from the corresponding author on reasonable request. Ace-1R: Acetylcholinesterase-1 gene HWE: Hardy–Weinberg equilibrium Kdr: Knockdown resistance gene qPCR: Quantitative polymerase chain reaction SET: Screening plus In2Care Eave Tubes SNP: Single nucleotide polymorphism Sinka ME, Bangs MJ, Manguin S, Rubio-Palis Y, Chareonviriyaphap T, Coetzee M, et al. A global map of dominant malaria vectors. Parasit Vectors. 2012;5:69. https://doi.org/10.1186/1756-3305-5-69. Garrett-Jones C, Shidrawi GR. Malaria vectorial capacity of a population of Anopheles gambiae. Bull WHO. 1969;40:531–545. https://apps.who.int/iris/handle/10665/267721. Coluzzi M, Sabatini A, Petrarca V, Di Deco MA. Chromosomal differentiation and adaptation to human environments in the Anopheles gambiae complex. Trans R Soc Trop Med Hyg. 1979;73:483–97. https://doi.org/10.1016/0035-9203(79)90036-1. 
Zaim M, Aitio A, Nakashima N. Safety of pyrethroid-treated mosquito nets. Med Vet Entomol. 2000;14:1–5. https://doi.org/10.1046/j.1365-2915.2000.00211.x. Bhatt S, Weiss DJ, Cameron E, Bisanzio D, Mappin B, Dalrymple U, et al. The effect of malaria control on Plasmodium falciparum in Africa between 2000 and 2015. Nature. 2015;526:207–11. https://doi.org/10.1038/nature15535. Ranson H, N'Guessan R, Lines J, Moiroux N, Nkuni Z, Corbel V. Pyrethroid resistance in African anopheline mosquitoes: what are the implications for malaria control? Trends Parasitol. 2011;27:91–8. https://doi.org/10.1016/j.pt.2010.08.004. Strode C, Donegan S, Garner P, Enayati AA, Hemingway J. The impact of pyrethroid resistance on the efficacy of insecticide-treated bed nets against African anopheline mosquitoes: systematic review and meta-analysis. PLoS Med. 2014;11: e1001619. https://doi.org/10.1371/journal.pmed.1001619. Mitchell SN, Rigden DJ, Dowd AJ, Lu F, Wilding CS, Weetman D, et al. Metabolic and target-site mechanisms combine to confer strong DDT resistance in Anopheles gambiae. PLoS ONE. 2014;9: e92662. https://doi.org/10.1371/journal.pone.0092662. Chouaïbou M, Zivanovic GB, Knox TB, Jamet HP, Bonfoh B. Synergist bioassays: a simple method for initial metabolic resistance investigation of field Anopheles gambiae s.l. populations. Acta Trop. 2014;130:108–11. https://doi.org/10.1016/j.actatropica.2013.10.020. Stevenson BJ, Bibby J, Pignatelli P, Muangnoicharoen S, O'Neill PM, Lian L-Y, et al. Cytochrome P450 6M2 from the malaria vector Anopheles gambiae metabolizes pyrethroids: sequential metabolism of deltamethrin revealed. Insect Biochem Mol Biol. 2011;41:492–502. https://doi.org/10.1016/j.ibmb.2011.02.003. Chouaïbou M, Kouadio FB, Tia E, Djogbenou L. First report of the East African kdr mutation in an Anopheles gambiae mosquito in Côte d'Ivoire. Wellcome Open Res. 2017; 2:8. https://doi.org/10.12688/wellcomeopenres.10662.1. 
Martinez-Torres D, Chandre F, Williamson MS, Darriet F, Berge JB, Devonshire AL, et al. Molecular characterization of pyrethroid knockdown resistance (Kdr) in the major malaria vector Anopheles gambiae s.s. Insect Mol Biol. 1998;7:179–84. https://doi.org/10.1046/j.1365-2583.1998.72062.x. Dabiré RK, Namountougou M, Diabaté A, Soma DD, Bado J, Toé HK, et al. Correction: distribution and frequency of kdr mutations within Anopheles gambiae s.l. populations and first report of the Ace-1 G119S mutation in Anopheles arabiensis from Burkina Faso (West Africa). PLoS ONE. 2015;10:e0141645. https://doi.org/10.1371/journal.pone.0101484. Essandoh J, Yawson AE, Weetman D. Acetylcholinesterase (Ace-1) target site mutation 119S is strongly diagnostic of carbamate and organophosphate resistance in Anopheles gambiae s.s. and Anopheles coluzzii across southern Ghana. Malaria J. 2013;12:404. https://doi.org/10.1186/1475-2875-12-404. Cook J, Hergott D, Phiri W, Rivas MR, Bradley J, Segura L, et al. Trends in parasite prevalence following 13 years of malaria interventions on Bioko island, Equatorial Guinea: 2004–2016. Malaria J. 2018;17:62. https://doi.org/10.1186/s12936-018-2213-9. Dossou-Yovo J, Guillet P, Rogier C, Chandre F, Carnevale P, Assi S-B, et al. Protective efficacy of lambda-cyhalothrin treated nets in Anopheles gambiae pyrethroid resistance areas of Côte d'Ivoire. Am J Trop Med Hyg. 2005;73:859–64. https://doi.org/10.4269/ajtmh.2005.73.859. Kleinschmidt I, Bradley J, Knox TB, Mnzava AP, Kafy HT, Mbogo C, et al. Implications of insecticide resistance for malaria vector control with long-lasting insecticidal nets: a WHO-coordinated, prospective, international, observational cohort study. Lancet Infect Dis. 2018;18:640–9. https://doi.org/10.1016/S1473-3099(18)30172-5. Tokponnon FT, Sissinto Y, Ogouyémi AH, Adéothy AA, Adechoubou A, Houansou T, et al. 
Implications of insecticide resistance for malaria vector control with long-lasting insecticidal nets: evidence from health facility data from Benin. Malaria J. 2019;11:550. https://doi.org/10.1186/s13071-018-3101-4. Protopopoff N, Mosha JF, Lukole E, Charlwood JD, Wright A, Mwalimu CD, et al. Effectiveness of a long-lasting piperonyl butoxide-treated insecticidal net and indoor residual spray interventions, separately and together, against malaria transmitted by pyrethroid-resistant mosquitoes: a cluster, randomised controlled, two-by-two factorial design trial. Lancet. 2018;391:1577–88. https://doi.org/10.1016/S0140-6736(18)30427-6. Alout H, Ndam NT, Sandeu MM, Djégbe I, Chandre F, Dabiré RK, et al. Insecticide resistance alleles affect vector competence of Anopheles gambiae s.s. for Plasmodium falciparum field isolates. PLoS ONE. 2013;8:e63849. https://doi.org/10.1371/journal.pone.0063849. Ndiath M, Cailleau A, Diedhiou S, Gaye A, Boudin C, Richard V, et al. Effects of the kdr resistance mutation on the susceptibility of wild Anopheles gambiae populations to Plasmodium falciparum: a hindrance for vector control. Malaria J. 2014;13:340. https://doi.org/10.1186/1475-2875-13-340. Mitri C, Markianos K, Guelbeogo WM, Bischoff E, Gneme A, Eiglmeier K, et al. The kdr-bearing haplotype and susceptibility to Plasmodium falciparum in Anopheles gambiae: genetic correlation and functional testing. Malaria J. 2015;14:391. https://doi.org/10.1186/s12936-015-0924-8. Dabiré RK, Namountougou M, Diabaté A, Soma DD, Bado J, Toé HK, et al. Distribution and frequency of kdr mutations within Anopheles gambiae s.l. populations and first report of the Ace1G119S mutation in Anopheles arabiensis from Burkina Faso (West Africa). PLoS ONE. 2014;9:e101484. https://doi.org/10.1371/journal.pone.0101484. Camara S, Koffi AA, Ahoua Alou LP, Koffi K, Kabran JPK, Koné A, et al. Mapping insecticide resistance in Anopheles gambiae (s.l.) from Côte d'Ivoire. Parasit Vectors. 2018;11:19. 
https://doi.org/10.1186/s13071-017-2546-1. Sternberg ED, Cook J, Ahoua Alou LP, Aoura CJ, Assi SB, Doudou DT, et al. Evaluating the impact of screening plus eave tubes on malaria transmission compared to current best practice in central Côte d'Ivoire: a two armed cluster randomized controlled trial. BMC Public Health. 2018;18:894. https://doi.org/10.1186/s12889-018-5746-5. Gillies MT, Coetzee M. A supplement to the Anophelinae of Africa south of the Sahara (Afrotropical Region). Johannesburg. 1987;143:15. Yahouédo GA, Chandre F, Rossignol M, Ginibre C, Balabanidou V, Mendez NGA, et al. Contributions of cuticle permeability and enzyme detoxification to pyrethroid resistance in the major malaria vector Anopheles gambiae. Sci Rep. 2017;7:11091. https://doi.org/10.1038/s41598-017-11357-z. Mangold KA, Manson RU, Koay ESC, Stephens L, Regner M, Thomson RB, et al. Real-Time PCR for detection and identification of Plasmodium spp. J Clin Microbiol. 2005;43:2435–40. https://doi.org/10.1128/JCM.43.5.2435-2440.2005. Favia G, Lanfrancotti A, Spanos L, Sidén-Kiamos I, Louis C. Molecular characterization of ribosomal DNA polymorphisms discriminating among chromosomal forms of Anopheles gambiae s.s.: An. gambiae s.s. rDNA polymorphisms. Insect Mol Biol. 2001;10:19–23. https://doi.org/10.1046/j.1365-2583.2001.00236.x. Bass C, Nikou D, Donnelly MJ, Williamson MS, Ranson H, Ball A, et al. Detection of knockdown resistance (kdr) mutations in Anopheles gambiae: a comparison of two new high-throughput assays with existing methods. Malaria J. 2007;6:111. https://doi.org/10.1186/1475-2875-6-111. Bass C, Nikou D, Vontas J, Williamson MS, Field LM. Development of high-throughput real-time PCR assays for the identification of insensitive acetylcholinesterase (ace-1R) in Anopheles gambiae. Pestic Biochem Physio. 2010;96:80–5. https://doi.org/10.1016/j.pestbp.2009.09.004. Koffi AA, Ahoua-Alou LP, Djenontin A, Kabran JPK, Dosso Y, Kone A, et al. 
Efficacy of Olyset ® Duo, a permethrin and pyriproxyfen mixture net against wild pyrethroid-resistant Anopheles gambiae s.s. from Côte d'Ivoire: an experimental hut trial. Parasite. 2015;22:28. https://doi.org/10.1051/parasite/2015028. Zoh DD, Ahoua Alou LP, Toure M, Pennetier C, Camara S, Traore DF, et al. The current insecticide resistance status of Anopheles gambiae (s.l.) (Culicidae) in rural and urban areas of Bouaké, Côte d'Ivoire. Parasit Vectors. 2018;11:118. https://doi.org/10.1186/s13071-018-2702-2. Gnémé A, Guelbéogo WM, Riehle MM, Sanou A, Traoré A, Zongo S, et al. Equivalent susceptibility of Anopheles gambiae M and S molecular forms and Anopheles arabiensis to Plasmodium falciparum infection in Burkina Faso. Malar J. 2013;12:204. https://doi.org/10.1186/1475-2875-12-204. Sternberg ED, Cook J, Alou LPA, Assi SB, Koffi AA, Doudou DT, et al. Impact and cost-effectiveness of a lethal house lure against malaria transmission in central Côte d'Ivoire: a two-arm, cluster-randomised controlled trial. Lancet. 2021;397:805–15. https://doi.org/10.1016/S0140-6736(21)00250-6. Koukpo CZ, Fassinou AJYH, Ossè RA, Agossa FR, Sovi A, Sewadé WT, et al. The current distribution and characterization of the L1014F resistance allele of the kdr gene in three malaria vectors (Anopheles gambiae, Anopheles coluzzii, Anopheles arabiensis) in Benin (West Africa). Malaria J. 2019;18:175. https://doi.org/10.1186/s12936-019-2808-9. Zogo B, Soma DD, Tchiekoi BN, Somé A, Ahoua Alou LP, Koffi AA, et al. Anopheles bionomics, insecticide resistance mechanisms, and malaria transmission in the Korhogo area, northern Côte d'Ivoire: a pre-intervention study. Parasite. 2019;26:40. https://doi.org/10.1051/parasite/2019040. Nkya TE, Poupardin R, Laporte F, Akhouayri I, Mosha F, Magesa S, et al. Impact of agriculture on the selection of insecticide resistance in the malaria vector Anopheles gambiae: a multigenerational study in controlled conditions. Parasit Vectors. 2014;7:480. 
https://doi.org/10.1186/s13071-014-0480-z. Alout H, Dabiré RK, Djogbénou LS, Abate L, Corbel V, Chandre F, et al. Interactive cost of Plasmodium infection and insecticide resistance in the malaria vector Anopheles gambiae. Sci Rep. 2016;6:29755. https://doi.org/10.1038/srep29755. Djogbénou L, Noel V, Agnew P. Costs of insensitive acetylcholinesterase insecticide resistance for the malaria vector Anopheles gambiae homozygous for the G119S mutation. Malar J. 2010;9:12. https://doi.org/10.1186/1475-2875-9-12. Djogbénou LS, Assogba B, Essandoh J, Constant EAV, Makoutodé M, Akogbéto M, et al. Estimation of allele-specific Ace-1 duplication in insecticide-resistant Anopheles mosquitoes from West Africa. Malar J. 2015;14:507. https://doi.org/10.1186/s12936-015-1026-3. Collins E, Vaselli NM, Sylla M, Beavogui AH, Orsborne J, Lawrence G, et al. The relationship between insecticide resistance, mosquito age and malaria prevalence in Anopheles gambiae s.l. from Guinea. Sci Rep. 2019;9:8846. https://doi.org/10.1038/s41598-019-45261-5. Kabula B, Tungu P, Rippon EJ, Steen K, Kisinza W, Magesa S, et al. A significant association between deltamethrin resistance, Plasmodium falciparum infection and the Vgsc-1014S resistance mutation in Anopheles gambiae highlights the epidemiological importance of resistance markers. Malaria J. 2016;15:289. https://doi.org/10.1186/s12936-016-1331-5. Ndiath MO, Cohuet A, Gaye A, Konate L, Mazenot C, Faye O, et al. Comparative susceptibility to Plasmodium falciparum of the molecular forms M and S of Anopheles gambiae and Anopheles arabiensis. Malar J. 2011;10:269. https://doi.org/10.1186/1475-2875-10-269. Fournier DA, Skaug HJ, Ancheta J, Ianelli J, Magnusson A, Maunder MN, et al. AD Model Builder: using automatic differentiation for statistical inference of highly parameterized complex nonlinear models. Optim Methods Softw. 2012;27:233–49. https://doi.org/10.1080/10556788.2011.597854. 
\begin{definition}[Definition:Semantic Tableau/Closed] Let $T$ be a semantic tableau. Then $T$ is '''closed''' {{iff}} all of its leaves are marked closed. \end{definition}
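The condition "all of its leaves are marked closed" can be checked recursively. The following Python sketch is illustrative only and is not part of the ProofWiki entry; the `Node` class is a hypothetical minimal tableau representation.

```python
class Node:
    """A node of a semantic tableau; an empty children list marks a leaf."""
    def __init__(self, children=None, closed=False):
        self.children = children or []
        self.closed = closed  # the "closed" marking on a leaf

def is_closed(node):
    """A tableau is closed iff every one of its leaves is marked closed."""
    if not node.children:          # leaf: look at its marking
        return node.closed
    return all(is_closed(c) for c in node.children)
```

For example, a tableau whose two branches both end in closed leaves is closed, while one open leaf makes the whole tableau open.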
The area of rectangle $ABCD$ with vertices $A$(0, 0), $B$(0, 4), $C$($x$, 4) and $D$($x$, 0) is 28 square units. If $x > 0$, what is the value of $x$? Plotting the points, we see that the dimensions of the rectangle are $x$ and $4$. The area of the rectangle is $(\text{length})(\text{width})=4x$, so $4x=28$ and $x=\boxed{7}$. [asy] size(5cm); import graph; defaultpen(linewidth(0.7)+fontsize(12)); real x = 7; pair A=(0,0), B=(0,4), C=(x,4), D=(x,0); pair[] dots = {A,B,C,D}; dot(dots); draw(A--B--C--D--cycle); xaxis(-2,9,Arrows(4)); yaxis(-2,7,Arrows(4)); label("$A$",A,SW); label("$B$",B,NW); label("$C$",C,NE); label("$D$",D,SE); label("$x$",(B+C)/2,N); label("$4$",(C+D)/2,E);[/asy]
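The arithmetic can be double-checked with a trivial script (a sketch, not part of the original solution):

```python
# Vertices give a rectangle of length 4 (from B = (0, 4)) and unknown width x,
# with area length * x = 28; solve for x.
area, length = 28, 4
x = area / length  # width of the rectangle
```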
\begin{definition}[Definition:Sociable Chain/Order] Let $m$ be a positive integer. Let $s \left({m}\right)$ be the aliquot sum of $m$. Let a sequence $\left\langle{a_k}\right\rangle$ be a sociable chain. The '''order''' of $a_k$ is the '''smallest''' $r \in \Z_{>0}$ such that :$a_r = a_0$ Category:Definitions/Sociable Numbers \end{definition}
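The definitions above can be sketched directly in Python (illustrative only, not part of the ProofWiki entry): `aliquot_sum` implements $s(m)$ as the sum of proper divisors, and `chain_order` iterates $s$ until the chain returns to its starting value.

```python
def aliquot_sum(m):
    """s(m): the sum of the proper divisors of m."""
    return sum(d for d in range(1, m // 2 + 1) if m % d == 0)

def chain_order(a0, max_steps=30):
    """Smallest r > 0 with a_r = a_0 under iteration of s, or None if
    the chain does not return within max_steps iterations."""
    a = a0
    for r in range(1, max_steps + 1):
        a = aliquot_sum(a)
        if a == a0:
            return r
    return None
```

For instance, the amicable pair 220 and 284 forms a sociable chain of order 2, and a perfect number such as 6 has order 1.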
A newly discovered Bordetella species carries a transcriptionally active CRISPR-Cas with a small Cas9 endonuclease

Yury V. Ivanov1, Nikki Shariat2,3, Karen B. Register4, Bodo Linz1, Israel Rivera1, Kai Hu1, Edward G. Dudley2 & Eric T. Harvill1,5

Clustered regularly interspaced short palindromic repeats (CRISPR) and CRISPR-associated genes (cas) are widely distributed among bacteria. These systems provide adaptive immunity against mobile genetic elements specified by the spacer sequences stored within the CRISPR. The CRISPR-Cas system was identified using the Basic Local Alignment Search Tool (BLAST) against other sequenced and annotated genomes and confirmed with the CRISPRfinder program. Using polymerase chain reactions (PCR) and Sanger DNA sequencing, we discovered CRISPRs in additional bacterial isolates of the same species of Bordetella. Transcriptional activity and processing of the CRISPR were assessed via RT-PCR. Here we describe a novel Type II-C CRISPR and its associated genes—cas1, cas2, and cas9—in several isolates of a newly discovered Bordetella species. The CRISPR-cas locus, which is absent in all other Bordetella species, has a significantly lower GC-content than the genome-wide average, suggesting acquisition of this locus via horizontal gene transfer from a currently unknown source. The CRISPR array is transcribed and processed into mature CRISPR RNAs (crRNA), some of which have homology to prophages found in the closely related species B. hinzii. Expression of the CRISPR-Cas system and processing of crRNAs with perfect homology to prophages present in a closely related species, but absent from the genome containing this CRISPR-Cas system, suggest that it provides protection against phage predation. The 3,117-bp cas9 endonuclease gene from this novel CRISPR-Cas system is 990 bp smaller than that of Streptococcus pyogenes, the 4,107-bp allele currently used for genome editing, which may make it a useful tool in various CRISPR-Cas technologies.
Clustered regularly interspaced short palindromic repeats (CRISPR)-Cas (CRISPR-associated) systems serve as an adaptive immune mechanism in prokaryotes that confers protection against bacteriophages and other mobile elements and vectors [1]. A typical CRISPR-cas locus includes a CRISPR array containing direct repeats (DR) separated by spacers (Sp) and adjacent cas genes [2]. In response to invading DNA, CRISPRs acquire short fragments of the foreign nucleic acid sequences and insert those as new spacers at the beginning of the CRISPR array, with each spacer flanked on both sides by direct repeat sequences. This acquisition step involves the Cas1, Cas2, and Cas9 proteins [3–5]. Cas9, the signature of Type II CRISPR systems [6], is an RNA-guided endonuclease. CRISPR arrays are transcribed and subsequently processed into small individual CRISPR RNAs (crRNA). This "maturation" step of the array precursor requires a trans-activating crRNA (tracrRNA), an endogenous ribonuclease RNase III, and Cas9 [7, 8], although RNase-III-independent systems exist for some bacteria with Type II-C CRISPRs [9]. In Streptococcus pyogenes, which harbors one of the best-studied Type II CRISPR-Cas systems, both tracrRNA and crRNA guide the Cas9 endonuclease to a complementary target sequence (protospacer) to mediate a double-stranded DNA break during target interference. For additional specificity, and to avoid cutting within the array itself (autoimmunity), RNA-guided Cas9 cleavage requires a protospacer adjacent motif (PAM; in S. pyogenes: 5′-NGG-3′) flanking the target site. The specifically targeted endonuclease activity of the S. pyogenes Type II-A CRISPR-Cas system has allowed for important breakthrough applications in RNA-guided control of gene expression, genome engineering, and genome editing of multiple organisms [7, 10]. But limitations of this particular system have led to a search for new CRISPR-Cas systems with altered features.
The publicly available CRISPRfinder program [11] identified CRISPR-cas loci in 45 % (1176/2612) of the bacterial genomes analyzed, but CRISPR-Cas systems have not been identified within the genus Bordetella. This genus, which comprises nine species, is historically subdivided into "classical" and "non-classical" bordetellae. The extensively studied classical bordetellae consist of the three respiratory pathogens: B. pertussis and B. parapertussis, the causative agents of "whooping cough" in humans, and B. bronchiseptica, which causes a broad variety of respiratory diseases in many different mammals. The non-classical bordetellae are both genotypically and phenotypically different from the classical bordetellae [12]. They consist of the six recently described species B. hinzii, B. holmesii, B. ansorpii, B. trematum, B. petrii, and B. avium, all of which are only partially characterized [13–17]. While the classical bordetellae are usually associated with respiratory disease, several non-classical species have also been isolated from wound and ear infections, septicemia and endocarditis, predominantly from immunocompromised patients. For example, B. hinzii, which is a respiratory pathogen in poultry [18] and rodents [19], has also been isolated from humans with chronic cholangitis [20], bacteremia [21], or fatal septicemia [22]. We set out to define the sequence diversity within the Bordetella genus and recently published the genome sequences of numerous isolates from several species [23–26]. During these studies, we discovered a novel species that we named Bordetella pseudohinzii (manuscript in preparation). This species is a close relative of B. hinzii and naturally infects laboratory-raised mice. B. hinzii and B. pseudohinzii are distinguishable based on substantial divergence in sequence and gene content, as well as the presence of a CRISPR-Cas system that is unique to the genome of B. pseudohinzii.
Here, we describe this novel CRISPR-Cas system, demonstrate that it is transcriptionally active and present evidence that it acts as an adaptive immune system against mobile genetic elements, including bacteriophage sequences present in B. hinzii. These data suggest that both species have recently shared an ecological niche with phages, which are represented by the prophages in B. hinzii genomes and the matching spacers in the genome of B. pseudohinzii, and that acquisition of this CRISPR-Cas system protects against those.

Bacterial strains and culture conditions

Bacterial isolates used in this study are described in Additional file 1: Table S1. Cultures used for preparation of DNA were grown at 37 °C on Bordet-Gengou agar containing 10 % sheep's blood. Stainer-Scholte broth cultures inoculated with colonies from Bordet-Gengou agar and incubated at 37 °C with shaking were used for RNA purification. Growth in broth culture was monitored periodically by checking optical density values (at 600 nm wavelength).

DNA isolation, PCR, and sequencing

DNA used for amplification and sequencing of cas genes and the CRISPR array was purified using a commercially available kit (Promega) and was quantified with a Nanodrop 2000 (Thermo Scientific). Primers for amplification of the complete CRISPR array and of the cas9, cas1, cas2, and 16S rRNA genes from other B. pseudohinzii isolates (Additional file 2: Table S2) were designed based on the genome sequence of isolate 8-296-03 [GenBank:JHEP01000084]. PCR reactions included 200 μM of dNTPs, 0.5 μM of each primer, 1.5 mM of MgCl2, 2 U of Taq polymerase (Roche), 5.0 μl of 10× Buffer II, 10 % DMSO and ~150 ng of purified DNA template in a final volume of 50 μl. Cycling conditions for the amplification of cas9 were 95 °C for 15 min and 35 cycles of 95 °C for 30 s, 54 °C for 30 s and 72 °C for 3 min, followed by a final elongation step of 72 °C for 4 min.
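For illustration only (all numbers are taken from the cas9 protocol above, and the helper is hypothetical, not part of the authors' methods), the cycling profile can be encoded as data to compute the total thermocycler block time:

```python
# cas9 cycling profile from the Methods (all times in seconds)
profile = {
    "initial_denaturation": 15 * 60,   # 95 degC for 15 min
    "cycles": 35,
    "per_cycle": [30, 30, 3 * 60],     # 95 degC 30 s, 54 degC 30 s, 72 degC 3 min
    "final_elongation": 4 * 60,        # 72 degC for 4 min
}

def total_seconds(p):
    """Total block time: initial hold + repeated cycles + final hold."""
    return (p["initial_denaturation"]
            + p["cycles"] * sum(p["per_cycle"])
            + p["final_elongation"])
```

Under this encoding the cas9 program occupies 9,540 s (159 min) of block time, excluding ramping.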
Cycling conditions used with the remaining primer pairs were identical except that the extension time was shortened from 3 min to 1 min. PCR amplicons used for sequencing were purified with ExoSAP-IT (USB Corporation) and sequenced at the National Animal Disease Center Genomics Unit using Applied Biosystems Big Dye Terminator v3.1 on an Applied Biosystems 3130 XL Genetic Analyzer sequencer.

CRISPR-cas locus annotation and protospacer prediction

All Bordetella genome sequences available at GenBank were searched for the presence of CRISPR systems using CRISPRfinder [11]. Predicted spacer sequences were submitted to BLAST search to query the nucleotide collection (nr/nt) and whole-genome shotgun contigs (wgs) databases at NCBI. Because the E-value is inadequate when using short nucleotide sequences as BLAST queries, we introduce a "percent hit quality" score (% HQ) to identify and rank the most significant BLAST hits: $$ \%HQ=\frac{\%\operatorname{cov}\times \% ID}{100\%}, $$ where %cov represents the percentage of coverage between the spacer and predicted protospacer sequences and %ID stands for percent nucleotide identity between the two.

GC-content

The guanine and cytosine content (GC-content) was calculated within a 120-bp sliding window. The difference in GC-content between the CRISPR-cas locus and the genome average was determined using a two-proportion test implemented in Minitab 17 (www.minitab.com). Briefly, numbers of G + C ("positive events") and A + T ("negative events") were calculated separately for the chromosome [GenBank:JHEP01000084] and for the CRISPR-cas locus. Because the CRISPR array consists of repetitive sequences, its GC-content is skewed and, therefore, the array sequence was not included. The significance of the difference (P-value) was calculated using a two-tailed Fisher's exact test.
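The hit-quality score and the two-proportion comparison of G + C vs. A + T counts can be sketched as follows. This is an illustrative re-implementation, not the authors' Minitab workflow; the Fisher's exact test is coded directly from the hypergeometric distribution to keep the sketch self-contained.

```python
from math import comb

def hit_quality(cov_pct, id_pct):
    """%HQ = (%cov * %ID) / 100, as defined in the text."""
    return cov_pct * id_pct / 100.0

def fisher_exact_2x2(a, b, c, d):
    """Two-tailed Fisher's exact test for the 2x2 table [[a, b], [c, d]],
    e.g. rows = (locus, genome) and columns = (G+C count, A+T count)."""
    row1, col1, n = a + b, a + c, a + b + c + d

    def p_table(x):  # hypergeometric probability of the table with top-left = x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    # two-tailed: sum probabilities of all tables at most as likely as observed
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs + 1e-12)
```

For example, a BLAST hit with 100 % coverage and 90 % identity scores 90 % HQ, matching the ranking rule used for Table 1.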
RNA purification and RT-PCR analysis

Total RNA was isolated from bacterial cultures during logarithmic growth at OD600 = 0.5 and during the stationary phase after overnight growth using the TRIzol® Plus RNA Purification System (Life Technologies). To eliminate any residual DNA in the samples, a DNase treatment was implemented during RNA extraction, following the manufacturer's protocol. Reverse transcription reactions were carried out using Superscript III reverse transcriptase (Invitrogen), random hexamer primers and 150 ng of total RNA, following the manufacturer's instructions. Primers for the amplification of cas9, cas1, and cas2 gene fragments (Additional file 2: Table S2) were designed to yield PCR amplicons of ~100 bp in size (Fig. 1c). The PCR reaction mixture consisted of 2 μl of cDNA template (150 ng/μl), 0.2 μl of 10-mM dNTP mix, 1 μl of 10-mM forward and reverse oligonucleotide primer, 0.2 μl of Taq DNA polymerase (1 unit), 2 μl of 10× ThermoPol reaction buffer, and 14.6 μl of ddH2O, in a total volume of 20 μl. Amplification was carried out at 95 °C for 10 min, followed by 35 cycles of 95 °C for 30 s, 55 °C for 30 s, and 72 °C for 30 s. A final extension step was carried out at 72 °C for 8 min. PCR products were electrophoresed in 2 % agarose gels and visualized with ethidium bromide under UV-light.

Fig. 1 Organization and expression of the Type II-C CRISPR-cas locus of B. pseudohinzii. a Graphical representation of the CRISPR-cas locus. The red block upstream of the cas9 gene is a putative tracrRNA flanked by the predicted promoter (arrow) and stem-loop terminator (up-side-down sigma symbol). The CRISPR array is enlarged relative to the cas genes for visual clarity. b Nucleotide sequence of the CRISPR array. SP# is the spacer sequence number; DR# is the direct repeat number. Nucleotides deviating from the DR consensus in DR-1 and DR-4 are highlighted in red.
c Confirmation of cas9 (C9), cas1 (C1), and cas2 (C2) expression during logarithmic (Log) and stationary phases of growth. Each PCR amplicon was designed to have a similar size. M: 100-bp DNA ladder. d The CRISPR array is processed into individual, mature crRNAs. Positive and negative strands are relative to the orientation shown in Fig. 1a. M: 50-bp DNA ladder.

Mature crRNAs were PCR amplified using the Quanti-Mir RT Kit (Systems Biosciences) following the manufacturer's instructions. Briefly, a poly(A)-tail with an attached adaptor sequence was ligated to the mRNA transcripts, and the product was converted to cDNA. crRNAs corresponding to spacers Sp2, Sp3, Sp10, and Sp19 were PCR-amplified from the resulting cDNA library with primers (Additional file 2: Table S2) complementary to the attached adaptor and the individual spacer sequences. The PCR only yielded amplicons from mature crRNAs but not from the unprocessed transcript of the CRISPR array. Each PCR product consisted of a spacer sequence, a flanking part of the direct repeat and an attached poly(A)-tail with a universal primer sequence. The expected size of each of the four tested amplicons is ~85 bp. PCR products were electrophoresed in 2.8 % agarose gels and visualized with ethidium bromide under UV-light.

Annotation of the CRISPR-Cas elements and expression in vitro

The genome of Bordetella pseudohinzii strain 8-296-03 contains three consecutive, apparently co-transcribed, genes (Fig. 1a) that are homologous to cas9, cas1, and cas2 of Alicycliphilus denitrificans (Additional file 3: Figure S1, Additional file 4: Table S3). Upstream of those genes, a putative tracrRNA is encoded divergently, flanked by a putative promoter and a rho-independent stem-loop terminator. Downstream of the cas genes, the CRISPR array contains 22 direct repeats (DR) and 21 spacer sequences (Sp).
Of these, 19 direct repeats are identical, one repeat (DR-4) has a single nucleotide polymorphism (SNP), and the terminal direct repeat (DR-1) has 3 SNPs (Fig. 1b). While each direct repeat is exactly 36 nucleotides in length, the spacer sequences vary: 19 spacers are 30 nucleotides and two are 29 nucleotides long. The sequence of each spacer is unique. Based on the presence and organization of the cas9, cas1, and cas2 genes within the operon, we typed this CRISPR-Cas system as Type II-C, according to the classification of the CRISPR-Cas systems established by Makarova et al. [6]. A functional CRISPR-Cas system requires expression of the cas genes and the CRISPR array, followed by maturation of individual crRNAs. Therefore, we performed an RT-PCR to test whether the cas genes are transcribed during growth in vitro. Amplicons of cas9, cas1, and cas2 were observed from RNA obtained during both logarithmic and stationary phases of growth (Fig. 1c). Processing of the precursor CRISPR array transcript into mature crRNAs was also confirmed by RT-PCR (Fig. 1d). To predict putative protospacer targets we submitted each spacer sequence to BLAST search. Table 1 summarizes hits with >80 % hit quality (HQ). Two spacers, Sp8 and Sp9, are identical to prophage elements found in B. hinzii. Sp16, Sp10, and Sp20 show high HQ (97 %, 86 %, and 83 %, respectively) with different prophages found in B. hinzii and B. bronchiseptica and with a capsid gene of a Microviridae-family phage (subfamily Gokushovirinae), respectively. Spacer Sp13 matches a transposase of the IS3/IS911 family (90 % HQ). Importantly, several prophages identified as likely sources of spacer elements are not found in the genome of B. pseudohinzii 8-296-03 but are present in closely related B. hinzii, which appears to lack a CRISPR-Cas system. Collectively, these observations suggest that acquisition of the CRISPR-Cas system by B. pseudohinzii conferred CRISPR-mediated protection against these bacteriophages and other mobile genetic elements.

Table 1 Highest-scoring BLASTn hits for spacer sequences

The S. pyogenes Cas9 protein (SpyCas9) contains RuvC-like and HNH motifs that were shown to be essential for its function [7]. We searched for both of these motifs in the corresponding Cas9 from B. pseudohinzii (BpsuCas9) (Additional file 5: Figure S2). The RuvC-like endonuclease motif showed 80 % amino acid (aa) similarity (47 % aa identity) and the HNH motif had 66 % aa similarity (33 % aa identity). In each of the motifs, amino acid residues that are conserved among different type-II Cas9 proteins and that were shown to be essential for Cas9 function are identical in BpsuCas9. Targeted cleavage by Cas9 requires a protospacer adjacent motif (PAM), a short sequence, which is 5′-NGG-3′ in S. pyogenes [7]. We attempted to determine in silico a possible PAM sequence that is recognized by BpsuCas9, but the few available protospacer sequences with a high HQ score (Table 1) limited the number of potential sequence candidates. Although eight predicted protospacers and their flanking sequences are not sufficient to conclusively determine the exact PAM sequence, we propose 5′-WGR-3′ as a potential motif used by BpsuCas9 (Additional file 6: Figure S3).

Additional B. pseudohinzii isolates possess the CRISPR-cas locus

Eleven other isolates identified as B. pseudohinzii on the basis of their 16S rRNA genes were tested for the presence of a CRISPR-cas locus. PCR using gene-specific primers confirmed the presence of cas9, cas1, and cas2 in all isolates (Fig. 2a-c). A CRISPR array was also found in all isolates but some variation in size was observed (Fig. 2d). Sequencing of the CRISPR array PCR amplicons revealed that their lengths are affected by the loss of Sp14 in isolate#2, of Sp8 to Sp11 in isolate#6, and of Sp9 in isolate#10, consistent with the difference in size observed among amplicons (Fig. 2d and e).
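The 5′-WGR-3′ motif proposed above can be matched against candidate flanking sequences using the standard IUPAC degenerate nucleotide codes (W = A/T, R = A/G). The following Python sketch is illustrative and is not part of the study's analysis pipeline:

```python
# IUPAC degenerate nucleotide codes needed for the motifs discussed here
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "W": "AT", "R": "AG", "N": "ACGT"}

def matches_pam(seq, pam="WGR"):
    """True if seq matches the degenerate PAM motif position by position."""
    return len(seq) == len(pam) and all(s in IUPAC[p]
                                        for s, p in zip(seq, pam))
```

Under this motif, flanks such as TGG or AGA would be accepted while CGG would not; the same helper with `pam="NGG"` reproduces the S. pyogenes rule.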
Each missing spacer is accompanied by the loss of an adjacent direct repeat, so that the overall architecture of the array, (−DR-Sp-)n, remains intact. The array sequences from all isolates are otherwise identical to one another (Fig. 2e).

Fig. 2 Eleven additional B. pseudohinzii isolates possess the CRISPR-cas locus. PCR amplicons of cas9 (a), cas1 (b), cas2 (c) and the CRISPR array (d) from 12 B. pseudohinzii isolates. Lane 1: isolate 8-296-03; lanes 2–12: sequentially obtained isolates #2 to #12, respectively (Additional file 1: Table S1); Lane 13: negative control PCR. e Schematic representation of the CRISPR array in each of the isolates. Squares represent spacer sequences. Diamonds represent direct repeats. Arrows in panel d denote isolates that are missing spacer sequences Sp14 (isolate #2), Sp8 to Sp11 (#6) and Sp9 (#10) in panel e.

The insertion site of the CRISPR-Cas system is a recombination hotspot in Bordetella

Since we found no evidence for a CRISPR-Cas system in any other Bordetella species, suggesting it was acquired solely by the B. pseudohinzii lineage, we assessed the local gene organization near the insertion site of the CRISPR-cas locus in other Bordetella genomes (Fig. 3). Only B. pseudohinzii and B. hinzii exhibit synteny of both the upstream dapB—murB—tRNA-Gly gene cluster and the downstream cluster consisting of a reductase (redoxin) and a disulfide-isomerase (S 2 isom). In other species, synteny is conserved only upstream of the point at which the CRISPR-cas locus is located in B. pseudohinzii. Genes downstream vary both in identity and orientation, even among isolates of the same species (B. bronchiseptica), suggesting this region is a hotspot for recombination.

Fig. 3 Local gene organization near the insertion site of the B. pseudohinzii CRISPR-cas locus in multiple Bordetella genomes. The genome synteny is conserved upstream of the insertion site, while downstream genes vary in both identity and orientation.
Dashed lines denote absence of sequence as compared to B. pseudohinzii 8-296-03. Genes with the same functional annotations are colored identically. The CRISPR array is shown schematically with black diamonds for direct repeats and colored rectangles for spacer sequences; the total number of spacers is 21.

Since absence of the CRISPR-Cas system in the other Bordetella species suggests that it was acquired via horizontal gene transfer (HGT), we examined the GC-content of the region including the upstream dapB—murB—tRNA-Gly genes, the CRISPR-cas locus, and the downstream redoxin-S 2 isom genes. The GC-profile of the upstream and downstream genes is consistent with the genome average of 66.5 % (Fig. 4, grey horizontal line). In contrast, the CRISPR-cas locus has a GC-content of 56 %, which is significantly lower (two-tailed Fisher's exact test, P < 0.01). These data strongly suggest that the CRISPR-cas locus of B. pseudohinzii has been horizontally acquired from an unknown source, likely one with a lower GC-content.

Fig. 4 The GC-content of the B. pseudohinzii 8-296-03 CRISPR-cas locus is significantly lower than that of the genome. The grey horizontal line indicates the average GC-content of 66.5 % for the genome. White rectangles represent genes. The CRISPR array is represented by diamonds (♦, direct repeats) and squares (■, spacer sequences). On the x-axis, 0 corresponds to nucleotide coordinate 24,537 bp of the contig [GenBank:JHEP02000007].

Evolutionary relationship and horizontal gene transfer

The Cas9 protein is a signature feature of all type-II CRISPR-Cas systems. To identify a possible source of the B. pseudohinzii CRISPR-Cas system, we performed BLAST searches for Cas9 protein sequences (Fig. 5). The two highest-scoring hits, both from Alicycliphilus denitrificans, have 74 % aa identity (Additional file 4: Table S3), suggesting that the proposed recent acquisition of this CRISPR-Cas system into the genome of B. pseudohinzii was probably from an unknown vector.
The Cas9-based phylogeny depicted in Fig. 5 includes the highest-scoring hits together with a subset of selected Cas9 sequences previously published elsewhere [27]. Notably, the 5 closest hits are from related genera, all of which belong to the order Burkholderiales in the class Betaproteobacteria. Immediately outside of this clade is the Cas9 from other Betaproteobacteria and from gamma proteobacterium HdN1. The six-member clade of the Burkholderiales, including B. pseudohinzii, is not the only occurrence of Cas9 in the Burkholderiales; Ralstonia syzygii and Oligella urethralis also belong to this order but possess divergent Cas9 sequences more closely related to those from a variety of Alphaproteobacteria (Fig. 5). The presence of closely related bacteria within several clades of the tree suggests multiple, independent HGT events associated with the acquisition of CRISPR systems.

Fig. 5 Cas9-based phylogeny and GC-content of B. pseudohinzii and the highest-scoring BLAST hits. a Maximum likelihood tree based on Cas9 proteins. The green rectangle outlines taxa from the order Burkholderiales. Taxa are colored according to their class-level taxonomic assignment: Gammaproteobacteria in black, Bacilli in red, Betaproteobacteria in blue, Alphaproteobacteria in purple, Bacteroidia in orange, Flavobacteriia in golden, Epsilonproteobacteria in grey, Clostridia in green, Actinobacteria in cyan. All nodes have >50 % bootstrap support (10,000 replicates). b The GC-content of cas9 and the corresponding bacterial genome. ∆ is the arithmetic difference between the cas9 and genome GC-contents; +/− indicates whether cas9 has a lower (−) or higher (+) GC-content.

To further explore horizontal acquisition of CRISPR-Cas systems, we calculated the GC-contents of both the cas9 gene and the genome for all taxa on the tree. The cas9 sequences ranged from 73.5 % to 29 % GC-content. Likewise, the genomes varied in a similar range, from 70.7 % to 30.5 % GC-content.
However, in several cases a discrepancy is apparent between the GC-contents of cas9 and the corresponding genome (Fig. 5b, column ∆). The largest difference was found in S. pyogenes M1, whose cas9 has a GC-content 24.9 % higher than the average for the genome. Discrepancy between the 16S-rRNA-gene tree relating bacterial species and the tree relating their cas9 gene products suggests horizontal acquisition of the CRISPR-Cas system. Similarly, GC-content differences between the CRISPR-cas locus and the rest of the genome further support this HGT.

Several lines of evidence suggest that the novel CRISPR-Cas system described here is functional. We observed active transcription of the cas genes and array sequence, as well as maturation of the array transcript. Further, the array contains multiple spacer sequences with homology to prophages in genomes of the most closely related species, B. hinzii. Yet, those prophages are absent from B. pseudohinzii, suggesting that the CRISPR-Cas may have provided protection against them as an adaptive immune system. Interestingly, B. hinzii contains prophages and B. pseudohinzii contains CRISPR-associated spacer sequences that perfectly match those prophages. These observations indicate that both species have been predated by the same phage and have survived that predation in these two different ways. Acquisition of the prophage or the CRISPR-Cas system, either of which would prevent further phage predation, could have also accelerated the divergent evolution of B. hinzii and B. pseudohinzii by differently affecting uptake or loss of various other genes, contributing to the observed differences in gene content of these closely related species. It is often observed that horizontally acquired DNA has a lower GC-content than the genome that receives it [28], and the GC-content of the CRISPR-Cas system in B. pseudohinzii follows this trend.
However, our comparison of multiple genomes revealed several cases in which the GC-content of the acquired CRISPR-Cas system is higher than the genome average (Fig. 5). The most striking example is S. pyogenes, whose cas9 gene is functional and is successfully used in genome manipulations. This gene has a 25 % higher GC-content than the genome that contains it, suggesting that S. pyogenes acquired its CRISPR-Cas system by HGT and that substantial differences in GC-content do not prevent the function of the Cas9 protein.

Recent advances in genome editing, genome engineering, and transcriptional control of genes in multiple organisms take advantage of the endonuclease SpyCas9. However, an important limitation of SpyCas9 is its size. The S. pyogenes cas9 allele measures 4,107 base pairs, a size that stretches the carrying capacities of some commonly employed vectors. To address this problem, a recent paper described the use of a 3,159-bp gene encoding Cas9 from Staphylococcus aureus (SaCas9), which recognizes a different PAM sequence (5′-NNGRR-3′) [29]. We introduce BpsuCas9, which is of a similarly small size (3,117 bp) and employs a PAM consensus sequence that putatively consists of 5′-WGR-3′ (Additional file 6: Figure S3) and may provide further flexibility with regards to designing guide RNAs. Future experiments will determine the specific features of the B. pseudohinzii CRISPR-Cas system and its potential utility as an additional or alternative tool for genome editing and other applications.

This study revealed for the first time the presence of a CRISPR-Cas system within the genus Bordetella, in the genome of the newly discovered B. pseudohinzii sp. nov. We confirmed that this CRISPR-Cas system is actively transcribed and that its crRNAs are processed during bacterial growth. Importantly, the CRISPR array carries spacer sequences matching bacteriophages that infect this and the most closely related species, B. hinzii, thus conferring adaptive immunity in B. pseudohinzii against these phages. The GC-content analysis of the CRISPR-cas locus and homology searches of Cas9 protein sequences indicate that a single species of Bordetella acquired this system horizontally from a yet unknown source. The most important observation made about this Bordetella CRISPR-Cas system is its Cas9 endonuclease, which differs both in sequence and size from the endonucleases commonly employed in CRISPR-Cas technology. While the smaller size of BpsuCas9 is of potential utility for more efficient use of biological shuttle vectors during transformations and viral transductions, the unique sequence of BpsuCas9 might allow for alternative uses of these endonucleases, for example, in addition to genome editing and genome engineering.

Availability of supporting data

The data set supporting the results of this article is available in the GenBank repository, [GenBank:JHEP00000000.2] at http://www.ncbi.nlm.nih.gov.

CRISPR: Clustered regularly interspaced short palindromic repeats
Sp: Spacer sequence
DR: Direct repeat
PAM: Protospacer adjacent motif
crRNA: CRISPR RNA
tracrRNA: Trans-activating CRISPR RNA
SNP: Single-nucleotide polymorphism
GC-content: Guanine + cytosine content
SpyCas9: Cas9 endonuclease of S. pyogenes
SaCas9: Cas9 endonuclease of S. aureus
BpsuCas9: Cas9 endonuclease of B. pseudohinzii
HQ: Hit quality

Wiedenheft B, Sternberg SH, Doudna JA. RNA-guided genetic silencing systems in bacteria and archaea. Nature. 2012;482(7385):331–8.
Barrangou R. The roles of CRISPR-Cas systems in adaptive immunity and beyond. Curr Opin Immunol. 2015;32:36–41.
Heler R, Samai P, Modell JW, Weiner C, Goldberg GW, Bikard D, et al. Cas9 specifies functional viral targets during CRISPR-Cas adaptation. Nature. 2015;519(7542):199–202.
Yosef I, Goren MG, Qimron U. Proteins and DNA elements essential for the CRISPR adaptation process in Escherichia coli. Nucleic Acids Res. 2012;40(12):5569–76.
Nunez JK, Kranzusch PJ, Noeske J, Wright AV, Davies CW, Doudna JA. Cas1-Cas2 complex formation mediates spacer acquisition during CRISPR-Cas adaptive immunity. Nat Struct Mol Biol. 2014;21(6):528–34.
Makarova KS, Haft DH, Barrangou R, Brouns SJ, Charpentier E, Horvath P, et al. Evolution and classification of the CRISPR-Cas systems. Nat Rev Microbiol. 2011;9(6):467–77.
Jinek M, Chylinski K, Fonfara I, Hauer M, Doudna JA, Charpentier E. A programmable dual-RNA-guided DNA endonuclease in adaptive bacterial immunity. Science. 2012;337(6096):816–21.
Deltcheva E, Chylinski K, Sharma CM, Gonzales K, Chao Y, Pirzada ZA, et al. CRISPR RNA maturation by trans-encoded small RNA and host factor RNase III. Nature. 2011;471(7340):602–7.
Zhang Y, Heidrich N, Ampattu BJ, Gunderson CW, Seifert HS, Schoen C, et al. Processing-independent CRISPR RNAs limit natural transformation in Neisseria meningitidis. Mol Cell. 2013;50(4):488–503.
Sander JD, Joung JK. CRISPR-Cas systems for editing, regulating and targeting genomes. Nat Biotechnol. 2014;32(4):347–55.
Grissa I, Vergnaud G, Pourcel C. CRISPRFinder: a web tool to identify clustered regularly interspaced short palindromic repeats. Nucleic Acids Res. 2007;35(Web Server issue):W52–7.
Gross R, Keidel K, Schmitt K. Resemblance and divergence: the "new" members of the genus Bordetella. Med Microbiol Immunol. 2010;199(3):155–63.
Gross R, Guzman CA, Sebaihia M, dos Santos VA, Pieper DH, Koebnik R, et al. The missing link: Bordetella petrii is endowed with both the metabolic versatility of environmental bacteria and virulence traits of pathogenic bordetellae. BMC Genomics. 2008;9:449.
Pittet LF, Emonet S, Schrenzel J, Siegrist CA, Posfay-Barbe KM. Bordetella holmesii: an under-recognised Bordetella species. Lancet Infect Dis. 2014;14(6):510–9.
Temple LM, Weiss AA, Walker KE, Barnes HJ, Christensen VL, Miyamoto DM, et al. Bordetella avium virulence measured in vivo and in vitro. Infect Immun. 1998;66(11):5244–51.
Vandamme P, Heyndrickx M, Vancanneyt M, Hoste B, De Vos P, Falsen E, et al. Bordetella trematum sp. nov., isolated from wounds and ear infections in humans, and reassessment of Alcaligenes denitrificans Ruger and Tan. Int J Syst Bacteriol 1996. 1983;46(4):849–58. Ko KS, Peck KR, Oh WS, Lee NY, Lee JH, Song JH. New species of Bordetella, Bordetella ansorpii sp. nov., isolated from the purulent exudate of an epidermal cyst. J Clin Microbiol. 2005;43(5):2516–9. Register KB, Sacco RE, Nordholm GE. Comparison of ribotyping and restriction enzyme analysis for inter- and intraspecies discrimination of Bordetella avium and Bordetella hinzii. J Clin Microbiol. 2003;41(4):1512–9. Jiyipong T, Morand S, Jittapalapong S, Raoult D, Rolain JM. Bordetella hinzii in rodents. Southeast Asia Emerg Infect Dis. 2013;19(3):502–3. Arvand M, Feldhues R, Mieth M, Kraus T, Vandamme P. Chronic cholangitis caused by Bordetella hinzii in a liver transplant recipient. J Clin Microbiol. 2004;42(5):2335–7. Cookson BT, Vandamme P, Carlson LC, Larson AM, Sheffield JV, Kersters K, et al. Bacteremia caused by a novel Bordetella species, "B. hinzii". J Clin Microbiol. 1994;32(10):2569–71. Kattar MM, Chavez JF, Limaye AP, Rassoulian-Barrett SL, Yarfitz SL, Carlson LC, et al. Application of 16S rRNA gene sequencing to identify Bordetella hinzii as the causative agent of fatal septicemia. J Clin Microbiol. 2000;38(2):789–94. Harvill ET, Goodfield LL, Ivanov Y, Meyer JA, Newth C, Cassiday P, et al. Genome sequences of 28 Bordetella pertussis U.S. outbreak strains dating from 2010 to 2012. Genome Announc. 2013;1(6): doi:10.1128/genomeA.01075-13. Harvill ET, Goodfield LL, Ivanov Y, Smallridge WE, Meyer JA, Cassiday PK, et al. Genome sequences of nine Bordetella holmesii strains isolated in the United States. Genome Announc. 2014;2(3):doi:10.1128/genomeA.00438-14. Register KB, Ivanov YV, Harvill ET, Brinkac L, Kim M, Losada L. 
Acknowledgements

We thank Ken Boschert for deriving, preserving, and providing B. pseudohinzii isolates for this study; Brian Faddis for additional information and helpful discussions; and Liliana Losada, Lauren Brinkac, and JCVI staff for sequencing the genome of B. pseudohinzii. We thank William Boatwright for excellent technical assistance, and David Alt, Lea Ann Hobbs and Allen Jensen at the NADC Genomics Unit for DNA sequence data. We thank the ARS Culture (NRRL) Collection for preserving bacterial isolates used in this study. The study was supported by National Institutes of Health grants GM083113, AI107016, AI116186, GM113681 (to E.T.H.). Mention of trade names or commercial products in this publication is solely for the purpose of providing specific information and does not imply recommendation or endorsement by the U.S. Department of Agriculture or Pennsylvania State University.

Author information

Department of Veterinary and Biomedical Sciences, Center for Infectious Disease Dynamics, Center for Molecular Immunology and Infectious Diseases, Pennsylvania State University, University Park, W213 Millennium Science Complex, University Park, PA, 16802, USA: Yury V.
Ivanov, Bodo Linz, Israel Rivera, Kai Hu & Eric T. Harvill

Department of Food Science, Center for Infectious Disease Dynamics, Center for Molecular Immunology and Infectious Diseases, Pennsylvania State University, University Park, PA, 16802, USA: Nikki Shariat & Edward G. Dudley

Present address: Department of Biology, Gettysburg College, Gettysburg, PA, 17325, USA: Nikki Shariat

USDA, Agricultural Research Service, National Animal Disease Center, Ames, IA, 50010, USA: Karen B. Register

Lee Kong Chian School of Medicine and Singapore Centre on Environmental Life Sciences Engineering, Nanyang Technological University, Singapore, 637551, Singapore: Eric T. Harvill

Correspondence to Yury V. Ivanov.

Author contributions: YVI conceived the study, designed and performed experiments and analyses, analyzed the data, and wrote the manuscript; NS and KBR conceived the study, designed and conducted experiments, and reviewed and edited the manuscript; BL designed experiments and wrote the manuscript; IR designed PCR primers for the CRISPR-Cas, and conducted RNA purifications and RT-PCR analyses of crRNA and cas transcripts; KH designed and performed GC-content comparisons between cas9 and the genome; ETH wrote and edited the manuscript; EGD and ETH conceived and oversaw the study. All authors read and approved the final manuscript. Nikki Shariat and Karen B. Register contributed equally to this work.

Additional files

Bacterial isolates used in this study. (DOC 39 kb)

Oligonucleotide primer sequences used in this study. (DOC 49 kb)

Top three homologous loci encoding cas9, cas1, and cas2. Top panel, with gene annotations, represents the cas9-cas1-cas2 locus of B. pseudohinzii 8-296-03 (query sequence). Bottom panel summarizes the top three BLASTn hit results and illustrates their corresponding alignments against the query. Genome GenBank numbers are shown in blue, above each alignment. (DOC 136 kb)

BLASTp comparisons of Type II Cas proteins.
(DOC 77 kb)

RuvC-like and HNH motifs in SpyCas9 and BpsuCas9. RuvC-like motif residue Asp10 and HNH motif residue His840, which are essential for endonuclease activity, are shown in red. Underlined residues are highly conserved among Cas9 proteins from different bacterial species. An * (asterisk) indicates positions at which residues are identical. A : (colon) indicates positions at which residues have strongly similar properties. A . (period) indicates conservation between residues of weakly similar properties. (DOC 24 kb)

Signatures of the protospacer adjacent motif (PAM). Vertical lines denote the same nucleotides, not base pairing between them. Coloring indicates the same nucleotides between predicted target sites. (DOC 62 kb)

Ivanov, Y.V., Shariat, N., Register, K.B. et al. A newly discovered Bordetella species carries a transcriptionally active CRISPR-Cas with a small Cas9 endonuclease. BMC Genomics 16, 863 (2015). https://doi.org/10.1186/s12864-015-2028-9

Keywords: Bordetella pseudohinzii, Type II CRISPR, SpyCas9, Protospacer
I have elsewhere remarked on the apparent lack of benefit to taking multivitamins and the possible harm; so one might well wonder about a specific vitamin like vitamin D. However, a multivitamin is not vitamin D, so it's no surprise that they might do different things. If a multivitamin had no vitamin D in it, or if it had vitamin D in different doses, or if it had substances which interacted with vitamin D (such as calcium), or if it had substances which had negative effects which outweigh the positive (such as vitamin A?), we could well expect differing results. In this case, all of those are true to varying extents. Some multivitamins I've had contained no vitamin D. The last multivitamin I was taking contained both vitamins used in the negative trials and some calcium; the listed vitamin D dosage was a trivial ~400IU, while I take >10x as much now (5000IU).

Brain focus pills mostly contain chemical components like L-theanine, which is naturally found in green and black tea. It is associated with enhanced alertness, cognition, relaxation, and arousal, and with substantially reduced anxiety. Theanine is an amino acid, structurally related to glutamic acid, that has proven to be a safe psychoactive substance. Some studies suggest that this compound influences the expression of genes in the brain responsible for aggression, fear, and memory. This, in turn, helps balance behavioral responses to stress and can improve specific conditions, like post-traumatic stress disorder (PTSD).

Flaxseed oil is, ounce for ounce, about as expensive as fish oil, and also must be refrigerated and goes bad within months anyway. Flax seeds on the other hand, do not go bad within months, and cost dollars per pound.
Various resources I found online estimated the ALA content of human-edible flaxseed to be around 20%, so Amazon's 6lbs for $14 is ~1.2lbs of ALA, compared to 16fl-oz of fish oil weighing ~1lb and costing ~$17, while also keeping better and being a calorically useful part of my diet. The flaxseeds can be ground in an ordinary food processor or coffee grinder. It's not a hugely impressive cost-savings, but I think it's worth trying when I run out of fish oil.

Sounds too good to be true? Welcome to the world of 'Nootropics', popularly known as 'Smart Drugs', that can help boost your brain's power. Do you recall the scene from the movie Limitless, where Bradley Cooper's character uses a smart drug that makes him brilliant? Yes! The effect of Nootropics on your brain is such that the results come as a no-brainer. Smart drugs act within the brain speeding up chemical transfers, acting as neurotransmitters, or otherwise altering the exchange of brain chemicals. There are typically very few side effects, and they are considered generally safe when used as indicated. Special care should be used by those who have underlying health conditions, are on other medications, pregnant women, and children, as there is no long-term data on the use and effects of nootropics in these groups.

Drug abuse can lead to seriously negative outcomes. If you take Ritalin (methylphenidate) or Adderall (mixed amphetamine salts) but don't have ADHD, you may experience more focus. But what many people don't know is that Ritalin is pharmacologically very similar to amphetamines, and its use is associated with serious adverse events such as drug dependence, overdose and suicide attempts [80]. Taking a drug for a reason other than originally intended is stupid, irresponsible and very dangerous.

How much of the nonmedical use of prescription stimulants documented by these studies was for cognitive enhancement?
Prescription stimulants could be used for purposes other than cognitive enhancement, including for feelings of euphoria or energy, to stay awake, or to curb appetite. Were they being used by students as smart pills or as "fun pills," "awake pills," or "diet pills"? Of course, some of these categories are not entirely distinct. For example, by increasing the wakefulness of a sleep-deprived person or by lifting the mood or boosting the motivation of an apathetic person, stimulants are likely to have the secondary effect of improving cognitive performance. Whether and when such effects should be classified as cognitive enhancement is a question to which different answers are possible, and none of the studies reviewed here presupposed an answer. Instead, they show how the respondents themselves classified their reasons for nonmedical stimulant use.

Remember: The strictest definition of nootropics today says that for a substance to be a true brain-boosting nootropic it must have low toxicity and few side effects. Therefore, by definition, a nootropic is safe to use. However, when people start stacking nootropics indiscriminately, taking megadoses, or importing them from unknown suppliers that may have poor quality control, it's easy for safety concerns to start creeping in.

P.S. Even though Thrive Natural's Super Brain Renew is the best brain and memory supplement we have found, we would still love to hear about other Brain and Memory Supplements that you have tried! If you have had a great experience with a memory supplement that we did not cover in this article, let us know! E-mail me at : [email protected] We'll check it out for you and if it looks good, we'll post it on our site!

Sleep itself is an underrated cognition enhancer. It is involved in enhancing long-term memories as well as creativity. For instance, it is well established that during sleep memories are consolidated, a process that "fixes" newly formed memories and determines how they are shaped.
Indeed, not only does lack of sleep make most of us moody and low on energy, cutting back on those precious hours also greatly impairs cognitive performance. Exercise and eating well also enhance aspects of cognition. It turns out that both drugs and "natural" enhancers produce similar physiological changes in the brain, including increased blood flow and neuronal growth in structures such as the hippocampus. Thus, cognition enhancers should be welcomed but not at the expense of our health and well being. These are some of the best Nootropics for focus and other benefits that they bring with them. They might intrigue you in trying out any of these Nootropics to boost your brain's power. However, you need to do your research before choosing the right Nootropic. One way of doing so is by consulting a doctor to know the best Nootropic for you. Another way to go about selecting a Nootropic supplement is choosing the one with clinically tested natural Nootropic substances. There are many sources where you can find the right kind of Nootropics for your needs, and one of them is AlternaScript. AMP and MPH increase catecholamine activity in different ways. MPH primarily inhibits the reuptake of dopamine by pre-synaptic neurons, thus leaving more dopamine in the synapse and available for interacting with the receptors of the postsynaptic neuron. AMP also affects reuptake, as well as increasing the rate at which neurotransmitter is released from presynaptic neurons (Wilens, 2006). These effects are manifest in the attention systems of the brain, as already mentioned, and in a variety of other systems that depend on catecholaminergic transmission as well, giving rise to other physical and psychological effects. Physical effects include activation of the sympathetic nervous system (i.e., a fight-or-flight response), producing increased heart rate and blood pressure. 
Psychological effects are mediated by activation of the nucleus accumbens, ventral striatum, and other parts of the brain's reward system, producing feelings of pleasure and the potential for dependence.

Either way, if more and more people use these types of stimulants, there may be a risk that we will find ourselves in an ever-expanding neurological arms race, argues philosophy professor Nicole Vincent. But is this necessarily a bad thing? No, says Farahany, who sees the improvement in cognitive functioning as a social good that we should pursue. Better brain functioning would result in societal benefits, she argues, "like economic gains or even reducing dangerous errors."

The blood half-life is 12-36 hours; hence two or three days ought to be enough to build up and wash out. A week-long block is reasonable since that gives 5 days for effects to manifest, although month-long blocks would not be a bad choice either. (I prefer blocks which fit in round periods because it makes self-experiments easier to run if the blocks fit in normal time-cycles like day/week/month. The most useless self-experiment is the one abandoned halfway.)

The effect? 3 or 4 weeks later, I'm not sure. When I began putting all of my nootropic powders into pill-form, I put half a lithium pill in each, and nevertheless ran out of lithium fairly quickly (3kg of piracetam makes for >4000 OO-size pills); those capsules were buried at the bottom of the bucket under lithium-less pills. So I suddenly went cold-turkey on lithium. Reflecting on the past 2 weeks, I seem to have been less optimistic and productive, with items now lingering on my To-Do list which I didn't expect to. An effect? Possibly.
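The washout-and-blocking logic above (12-36 h half-life, week-long blind blocks) can be made mechanical. A minimal sketch, not the author's actual procedure; the pair-randomization scheme is one reasonable choice among several:

```python
import random

def remaining_fraction(hours: float, half_life_h: float) -> float:
    """Fraction of a drug still in the body after `hours`, given its half-life."""
    return 0.5 ** (hours / half_life_h)

# Worst-case 36 h half-life: after a 3-day gap only a quarter remains,
# so week-long blocks leave ample room to build up and wash out.
print(remaining_fraction(72, 36))  # 0.25

def make_blocks(n_pairs: int, seed: int = 0):
    """Assign week-long blocks in randomized active/placebo pairs.

    Pairing keeps the two arms balanced even if the experiment is
    abandoned halfway -- the most useless self-experiment being
    exactly that one.
    """
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_pairs):
        pair = ["active", "placebo"]
        rng.shuffle(pair)
        schedule.extend(pair)
    return schedule

print(make_blocks(4))
```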
The oldest and most commonly used type of working memory task in this literature is the Sternberg short-term memory scanning paradigm (Sternberg, 1966), in which subjects hold a set of items (typically letters or numbers) in working memory and are then presented with probe items, to which they must respond "yes" (in the set) or "no" (not in the set). The size of the set, and hence the working memory demand, is sometimes varied, and the set itself may be varied from trial to trial to maximize working memory demands or may remain fixed over a block of trials. Taken together, the studies that have used a version of this task to test the effects of MPH and d-AMP on working memory have found mixed and somewhat ambiguous results. No pattern is apparent concerning the specific version of the task or the specific drug. Four studies found no effect (Callaway, 1983; Kennedy, Odenheimer, Baltzley, Dunlap, & Wood, 1990; Mintzer & Griffiths, 2007; Tipper et al., 2005), three found faster responses with the drugs (Fitzpatrick, Klorman, Brumaghim, & Keefover, 1988; Ward et al., 1997; D. E. Wilson et al., 1971), and one found higher accuracy in some testing sessions at some dosages, but no main effect of drug (Makris et al., 2007). The meaningfulness of the increased speed of responding is uncertain, given that it could reflect speeding of general response processes rather than working memory–related processes. Aspects of the results of two studies suggest that the effects are likely due to processes other than working memory: D. E. Wilson et al. (1971) reported comparable speeding in a simple task without working memory demands, and Tipper et al. (2005) reported comparable speeding across set sizes. Barbara Sahakian, a neuroscientist at Cambridge University, doesn't dismiss the possibility of nootropics to enhance cognitive function in healthy people. 
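For readers unfamiliar with the Sternberg paradigm described above, its trial structure is easy to see in code. A hypothetical minimal sketch — the set sizes, alphabet, and 50/50 probe balance are illustrative choices, not the protocol of any study cited here:

```python
import random

def sternberg_trial(rng: random.Random, set_size: int,
                    alphabet: str = "BCDFGHJKLMNPQRSTVWXZ"):
    """One Sternberg short-term memory scanning trial: the subject holds
    `memory_set` in working memory, then answers whether the probe was
    in the set. Probes are drawn from the set on half of trials."""
    memory_set = rng.sample(alphabet, set_size)
    if rng.random() < 0.5:
        probe, answer = rng.choice(memory_set), "yes"
    else:
        outside = [c for c in alphabet if c not in memory_set]
        probe, answer = rng.choice(outside), "no"
    return memory_set, probe, answer

rng = random.Random(42)
for size in (2, 4, 6):   # varying the set size varies working-memory load
    mem, probe, ans = sternberg_trial(rng, size)
    print(size, mem, probe, ans)
```

Response time as a function of set size is the quantity of interest, which is why speeding that is flat across set sizes (as in Tipper et al.) suggests a general response effect rather than a working-memory one.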
She would like to see society think about what might be considered acceptable use and where it draws the line – for example, for young people whose brains are still developing. But she also points out a big problem: long-term safety studies in healthy people have never been done. Most efficacy studies have only been short-term. "Proving safety and efficacy is needed," she says.

Frustrated by the lack of results, pharmaceutical companies have been shutting down their psychiatric drug research programmes. Traditional methods, such as synthesising new molecules and seeing what effect they have on symptoms, seem to have run their course. A shift of strategy is looming, towards research that focuses on genes and brain circuitry rather than chemicals. The shift will prolong the wait for new blockbuster drugs further, as the new systems are developed, and offers no guarantees of results.

Phenylpiracetam (Phenotropil) is one of the best smart drugs in the racetam family. It has the highest potency and bioavailability among racetam nootropics. This substance is almost the same as Piracetam; only it contains an additional phenyl group. This addition to its chemical structure improves blood-brain barrier permeability and allows Phenylpiracetam to work faster than other racetams. Its cognitive-enhancing effects can last longer as well.

Despite some positive findings, a lot of studies find no effects of enhancers in healthy subjects. For instance, although some studies suggest moderate enhancing effects in well-rested subjects, modafinil mostly shows enhancing effects in cases of sleep deprivation. A recent study by Martha Farah and colleagues found that Adderall (mixed amphetamine salts) had only small effects on cognition but users believed that their performance was enhanced when compared to placebo.

Finally, two tasks measuring subjects' ability to control their responses to monetary rewards were used by de Wit et al. (2002) to assess the effects of d-AMP.
When subjects were offered the choice between waiting 10 s between button presses for high-probability rewards, which would ultimately result in more money, and pressing a button immediately for lower probability rewards, d-AMP did not affect performance. However, when subjects were offered choices between smaller rewards delivered immediately and larger rewards to be delivered at later times, the normal preference for immediate rewards was weakened by d-AMP. That is, subjects were more able to resist the impulse to choose the immediate reward in favor of the larger reward. Another popular option is nicotine. Scientists are increasingly realising that this drug is a powerful nootropic, with the ability to improve a person's memory and help them to focus on certain tasks – though it also comes with well-documented obvious risks and side effects. "There are some very famous neuroscientists who chew Nicorette in order to enhance their cognitive functioning. But they used to smoke and that's their substitute," says Huberman. Cytisine is not known as a stimulant and I'm not addicted to nicotine, so why give it a try? Nicotine is one of the more effective stimulants available, and it's odd how few nicotine analogues or nicotinic agonists there are available; nicotine has a few flaws like short half-life and increasing blood pressure, so I would be interested in a replacement. The nicotine metabolite cotinine, in the human studies available, looks intriguing and potentially better, but I have been unable to find a source for it. One of the few relevant drugs which I can obtain is cytisine, from Ceretropic, at 2x1.5mg doses. There are not many anecdotal reports on cytisine, but at least a few suggest somewhat comparable effects with nicotine, so I gave it a try. We included studies of the effects of these drugs on cognitive processes including learning, memory, and a variety of executive functions, including working memory and cognitive control. 
These studies are listed in Table 2, along with each study's sample size, gender, age and tasks administered. Given our focus on cognition enhancement, we excluded studies whose measures were confined to perceptual or motor abilities. Studies of attention are included when the term attention refers to an executive function but not when it refers to the kind of perceptual process taxed by, for example, visual search or dichotic listening or when it refers to a simple vigilance task. Vigilance may affect cognitive performance, especially under conditions of fatigue or boredom, but a more vigilant person is not generally thought of as a smarter person, and therefore, vigilance is outside of the focus of the present review. The search and selection process is summarized in Figure 2. "In the hospital and ICU struggles, this book and Cavin's experience are golden, and if we'd have had this book's special attention to feeding tube nutrition, my son would be alive today sitting right here along with me saying it was the cod liver oil, the fish oil, and other nutrients able to be fed to him instead of the junk in the pharmacy tubes, that got him past the liver-test results, past the internal bleeding, past the brain difficulties controlling so many response-obstacles back then. Back then, the 'experts' in rural hospitals were unwilling to listen, ignored my son's unexpected turnaround when we used codliver oil transdermally on his sore skin, threatened instead to throw me out, but Cavin has his own proof and his accumulated experience in others' journeys. Cavin's boxed areas of notes throughout the book on applying the brain nutrient concepts in feeding tubes are powerful stuff, details to grab onto and run with… hammer them! Companies already know a great deal about how their employees live their lives. With the help of wearable technologies and health screenings, companies can now analyze the relation between bodily activities — exercise, sleep, nutrition, etc. 
— and work performance. With the justification that healthy employees perform better, some companies have made exercise mandatory by using sanctions against those who refuse to perform. And according to The Kaiser Family Foundation, of the large U.S. companies that offer health screenings, nearly half of them use financial incentives to persuade employees to participate.

"In 183 pages, Cavin Balaster's new book, How to Feed A Brain provides an outline and plan for how to maximize one's brain performance. The "Citation Notes" provide all the scientific and academic documentation for further understanding. The "Additional Resources and Tips" listing takes you to Cavin's website for more detail than could be covered in 183 pages. Cavin came to this knowledge through the need to recover from a severe traumatic brain injury and he did not keep his lessons learned to himself. This book is enlightening for anyone with a brain. We all want to function optimally, even to take exams, stay dynamic, and make positive contributions to our communities. Bravo Cavin for sharing your lessons learned!"

Integrity & Reputation: Go with a company that sells more than just a brain formula. If a company is just selling this one item, buyer beware! It is an indication that it is just trying to capitalize on a trend and make a quick buck. Also, if a website selling a brain health formula does not have a highly visible 800# for customer service, you should walk away.

The amphetamine mix branded Adderall is terribly expensive to obtain even compared to modafinil, due to its tight regulation (a lower schedule than modafinil), popularity in college as a study drug, and reported moves by its manufacturer to exploit its privileged position as a licensed amphetamine maker to extract more consumer surplus. I paid roughly $4 a pill but could have paid up to $10.
Good stimulant hygiene involves recovery periods to avoid one's body adapting to eliminate the stimulating effects, so even if Adderall was the answer to all my woes, I would not be using it more than 2 or 3 times a week. Assuming 50 uses a year (for specific projects, let's say, and not ordinary aimless usage), that's a cool $200 a year. My general belief was that Adderall would be too much of a stimulant for me, as I am amphetamine-naive and Adderall has a bad reputation for letting one waste time on unimportant things. We could say my prediction was 50% that Adderall would be useful and worth investigating further. The experiment was pretty simple: blind randomized pills, 10 placebo & 10 active. I took notes on how productive I was and the next day guessed whether it was placebo or Adderall before breaking the seal and finding out. I didn't do any formal statistics for it, much less a power calculation, so let's try to be conservative by penalizing the information quality heavily and assume it had 25%. So (200 − 0) / ln(1.05) × 0.50 × 0.25 ≈ 512! The experiment probably used up no more than an hour or two total.

I largely ignored this since the discussions were of sub-RDA doses, and my experience has usually been that RDAs are a poor benchmark and frequently far too low (consider the RDA for vitamin D). This time, I checked the actual RDA - and was immediately shocked and sure I was looking at a bad reference: there was no way the RDA for potassium was seriously 3700-4700mg or 4-5 grams daily, was there? Just as an American, that implied that I was getting less than half my RDA. (How would I get 4g of potassium in the first place? Eat a dozen bananas a day⸮) I am not a vegetarian, nor is my diet that fantastic: I figured I was getting some potassium from the ~2 fresh tomatoes I was eating daily, but otherwise my diet was not rich in potassium sources.
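The value-of-information arithmetic in the Adderall experiment above checks out; here it is spelled out with the numbers from the text ($200/year at stake, a 5% discount rate, a 50% prior that Adderall helps, and evidence quality penalized to 25%). Reading the ln(1.05) term as the net present value of a perpetual annual benefit is my interpretation, not a quote:

```python
import math

annual_value = 200 - 0   # $/year saved by a definitive answer
discount     = 0.05      # annual discount rate
prior        = 0.50      # prior probability Adderall is useful
quality      = 0.25      # heavy penalty for an informal, underpowered test

# NPV of a perpetual annual benefit, scaled by how likely the answer is
# to matter and how informative the experiment actually is:
voi = annual_value / math.log(1 + discount) * prior * quality
print(round(voi))  # 512
```

A few hundred dollars of expected value for an hour or two of effort is the point being made.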
I have no blood tests demonstrating deficiency, but given the figures, I cannot see how I could not be deficient.

A 100mg dose of caffeine (half of a No-Doz or one cup of strong coffee) with 200mg of L-theanine is what the nootropics subreddit recommends in their beginner's FAQ, and many nootropic sellers, like Peak Nootropics, suggest the same. In my own experiments, I used a pre-packaged combination from Nootrobox called Go Cubes. They're essentially chewable coffee cubes (not as gross as it sounds) filled with that same beginner dose of caffeine, L-theanine, as well as a few B vitamins thrown into the mix. After eating an entire box of them (12 separate servings—not all at once), I can say eating them made me feel more alert and energetic, but less jittery than my usual three cups of coffee every day. I noticed enough of a difference in the past two weeks that I'll be looking into getting some L-theanine supplements to take with my daily coffee.

Endoscopy surgeries, being minimally invasive, have become more popular in recent times. Recent studies show an increasing demand for single-incision or small-incision surgery as an alternative to traditional surgery. As aging patients are susceptible to complications, the use of minimally invasive procedures is of utmost importance for them. In cases of unexplained bleeding, iron deficiency, or abdominal pain, in the search for polyps, ulcers, and tumors of the small intestine, and in inflammatory bowel disease such as Crohn's disease, capsule endoscopy fares better than traditional endoscopy at diagnosis. Also, as capsule endoscopy is less invasive than traditional endoscopy and requires no recovery time, patients increasingly prefer it, which is driving the smart pill market.
These substances are used to make smart pills that deliver results for enhancing memory and learning ability, improving brain function, enhancing the firing control mechanisms in neurons, and providing protection for the brain. College students, adult professionals, and elderly people are turning to supplements to get the advantages of nootropic substances for memory, focus, and concentration. A new all-in-one nootropic mix/company run by some people active on /r/nootropics; they offered me a month's supply for free to try & review for them. At ~$100 a month (it depends on how many months one buys), it is not cheap (John Backus estimates one could buy the raw ingredients for $25/month) but it provides convenience & is aimed at people uninterested in spending a great deal of time reviewing research papers & anecdotes or capping their own pills (ie. people with lives) and it's unlikely I could spare the money to subscribe if TruBrain worked well for me - but certainly there was no harm in trying it out. Hericium erinaceus (Examine.com) was recommended strongly by several on the ImmInst.org forums for its long-term benefits to learning, apparently linked to Nerve growth factor. Highly speculative stuff, and it's unclear whether the mushroom powder I bought was the right form to take (ImmInst.org discussions seem to universally assume one is taking an alcohol or hotwater extract). It tasted nice, though, and I mixed it into my sleeping pills (which contain melatonin & tryptophan). I'll probably never know whether the $30 for 0.5lb was well-spent or not. Amphetamines have a long track record as smart drugs, from the workaholic mathematician Paul Erdös, who relied on them to get through 19-hour maths binges, to the writer Graham Greene, who used them to write two books at once. More recently, there are plenty of anecdotal accounts in magazines about their widespread use in certain industries, such as journalism, the arts and finance. 
Many laboratory tasks have been developed to study working memory, each of which taxes to varying degrees aspects such as the overall capacity of working memory, its persistence over time, and its resistance to interference either from task-irrelevant stimuli or among the items to be retained in working memory (i.e., cross-talk). Tasks also vary in the types of information to be retained in working memory, for example, verbal or spatial information. The question of which of these task differences correspond to differences between distinct working memory systems and which correspond to different ways of using a single underlying system is a matter of debate (e.g., D'Esposito, Postle, & Rypma, 2000; Owen, 2000). For the present purpose, we ignore this question and simply ask, Do MPH and d-AMP affect performance in the wide array of tasks that have been taken to operationalize working memory? If the literature does not yield a unanimous answer to this question, then what factors might be critical in determining whether stimulant effects are manifest?

Enhanced learning was also observed in two studies that involved multiple repeated encoding opportunities. Camp-Bruno and Herting (1994) found MPH enhanced summed recall in the Buschke Selective Reminding Test (Buschke, 1973; Buschke & Fuld, 1974) when 1-hr and 2-hr delays were combined, although individually only the 2-hr delay approached significance. In contrast, de Wit, Enggasser, and Richards (2002) found no effect of d-AMP on the Hopkins Verbal Learning Test (Brandt, 1991) after a 25-min delay. Willett (1962) tested rote learning of nonsense syllables with repeated presentations, and his results indicate that d-AMP decreased the number of trials needed to reach criterion.

Going back to the 1960s, although it was a Romanian chemist who is credited with discovering nootropics, a substantial amount of research on racetams was conducted in the Soviet Union.
This resulted in the birth of another category of substances entirely: adaptogens, which, in addition to benefiting cognitive function, were thought to allow the body to better adapt to stress.
\begin{document} \date{} \title{The Einstein-Hilbert type action\\ on metric-affine almost-product manifolds} \begin{abstract} We continue our study of the mixed Einstein-Hilbert action as a functional of a pseudo-Riemannian metric and a linear connection. Its geometrical part is the total mixed scalar curvature on a smooth manifold endowed with a~distribution or a foliation. We develop variational formulas for quantities of extrinsic geometry of a distribution on a metric-affine space and use them to derive Euler-Lagrange equations (which in the case of space-time are analogous to those in Einstein-Cartan theory) and to characterize critical points of this action on vacuum space-time. Together with arbitrary variations of metric and connection, we also consider variations that partially preserve the metric, e.g., along the distribution, and variations among distinguished classes of connections (e.g., statistical and metric compatible; this is expressed in terms of restrictions on the contorsion tensor). One of the Euler-Lagrange equations of the mixed Einstein-Hilbert action is an analog of the Cartan spin connection equation, and the other can be presented in a form similar to the Einstein equation, with the Ricci curvature replaced by a new Ricci type tensor. This tensor generally has a complicated form, but is given in the paper explicitly for variations among semi-symmetric~connections. \end{abstract} \vskip 2mm\noindent \textbf{Keywords}: Pseudo-Riemannian metric, distribution, foliation, totally umbilical, variation, mixed scalar curvature, affine connection, mixed Einstein-Hilbert action, Sasaki manifold. \vskip1mm\noindent \textbf{MSC (2010)} {\small Primary 53C12; Secondary 53C44.} \section{Introduction} We study the mixed Einstein-Hilbert action as a functional of two variables: a pseudo-Riemannian metric and a linear connection. Its geometrical part is the total mixed scalar curvature on a smooth manifold endowed with a~distribution or a foliation.
Our goals are to obtain the Euler-Lagrange equations of the action, present them in the classical form of Einstein equations and find their solutions for the vacuum case. \textbf{1.1. State-of-the-art}. The {Metric-Affine~Geo\-metry} (founded by E.\,Cartan) generalizes pseudo-Riemannian Geometry: it uses a linear connection $\bar\nabla$ with torsion, instead of the Levi-Civita connection $\nabla$ of metric $g=\langle\cdot,\,\cdot\rangle$ on a manifold $M$, e.g.,~\cite{mikes}, and appears in such contexts as almost Hermitian and Finsler manifolds and the theory of gravity. To~describe geometric properties of $\bar\nabla$, we use the~difference $\mathfrak{T}=\bar\nabla-\nabla$ (called the \textit{contorsion tensor}) and also auxiliary (1,2)-tensors $\mathfrak{T}^*$ and $\mathfrak{T}^\wedge$~defined by \[ \langle\mathfrak{T}^*_X Y,Z\rangle = \langle\mathfrak{T}_X Z, Y\rangle,\quad \mathfrak{T}^\wedge_X Y = \mathfrak{T}_Y X, \quad X,Y,Z\in\mathfrak{X}_M . \] The~following distinguished classes of metric-affine manifolds $(M,g,\bar\nabla)$ are considered important. $\bullet$~\textit{Riemann-Cartan manifolds}, where the $\bar\nabla$-parallel transport along the curves preserves the metric, i.e., $\bar\nabla g =0$, e.g., \cite{gps,rze-27b}. This condition is equivalent to $\mathfrak{T}^*=-\mathfrak{T}$, and $\bar\nabla$ is then called a metric compatible (or simply metric) connection. Such connections appear, e.g., in the Einstein-Cartan theory of gravity~\cite{cb19}, where the torsion tensor is involved in the {Cartan spin connection equation}, see \eqref{Eq-EC-nabla}. More specific types of metric connections (e.g., the \textit{semi-symmetric connections} \cite{FR, Yano} and \textit{adapted metric connections} \cite{bf}) also find applications in geometry and theoretical physics. $\bullet$~\textit{Statistical manifolds}, where the tensor $\bar\nabla g$ is symmetric in all its entries and the connection $\bar\nabla$ is torsion-free, e.g., \cite{ch12,op2016,pss-2020}.
These conditions are equivalent to $\mathfrak{T}^\wedge=\mathfrak{T}$ and $\mathfrak{T}^* = \mathfrak{T}$. The theory of affine hypersurfaces in $\mathbb{R}^{n+1}$ is a natural source of such manifolds; they also find applications in the theory of probability and statistics. The above classes of connections admit a natural definition of the sectional curvature: in the case of metric connections by the same formula as for the Levi-Civita connection, and for statistical connections by the analogue introduced in \cite{op2016}. For the curvature tensor $\bar R_{X,Y}=[\bar\nabla_Y,\bar\nabla_X]+\bar\nabla_{[X,Y]}$ of an affine connection $\bar\nabla$, we have \begin{equation}\label{E-RC-2} \bar R_{X,Y} -R_{X,Y} = (\nabla_Y\,\mathfrak{T})_X -(\nabla_X\,\mathfrak{T})_Y +[\mathfrak{T}_Y,\,\mathfrak{T}_X], \end{equation} where $R_{X,Y}=[\nabla_Y,\nabla_X]+\nabla_{[X,Y]}$ is the~Riemann curvature tensor of $\nabla$. As in Riemannian geometry, one can also consider the scalar curvature $\overline{\rm S}$ of $\bar R$. Many notable examples of pseudo-Riemannian metrics come (as critical points) from variational problems, the most famous of which is the \textit{Einstein-Hilbert action}, e.g., \cite{besse}. Its Einstein-Cartan genera\-lization in the framework of metric-affine geometry, given (on a smooth manifold $M$) by \begin{equation}\label{Eq-EH} \bar J: (g,\mathfrak{T}) \to \int_M \big\{\frac1{2\mathfrak{a}}\,(\overline{\rm S}-2{\Lambda})+{\cal L}\big\}\,{\rm d}{\rm vol}_g , \end{equation} extends the original formulation of general relativity and provides interesting examples of metrics as well as connections. Here, $\Lambda$ is a constant (the ``cosmological constant''), ${\cal L}$ is a Lagrangian describing the matter content, and $\mathfrak{a}=8\pi G/c^{4}$ is the coupling constant involving the gravitational constant $G$ and the speed of light $c$.
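The sign conventions in \eqref{E-RC-2} can be checked componentwise with a computer algebra system. The sketch below is illustrative only (it is not part of the paper): it works on the flat plane in polar coordinates, so that the Christoffel symbols of $\nabla$ are nonzero while $R=0$, and uses an arbitrarily chosen position-dependent contorsion tensor.

```python
import sympy as sp

# Componentwise check of (E-RC-2) on flat R^2 in polar coordinates
# (g = diag(1, r^2): nonzero Christoffel symbols, vanishing curvature),
# with an arbitrary position-dependent contorsion tensor T.
r, th = sp.symbols('r theta', positive=True)
x = (r, th)
n = 2
g = sp.diag(1, r**2)
ginv = g.inv()

def Gamma(k, i, j):
    """Christoffel symbols of the Levi-Civita connection."""
    return sp.Rational(1, 2) * sum(
        ginv[k, m] * (sp.diff(g[m, i], x[j]) + sp.diff(g[m, j], x[i])
                      - sp.diff(g[i, j], x[m])) for m in range(n))

# T[k][i][j] = (T_{d_i} d_j)^k; two components picked arbitrarily
T = [[[sp.S(0)] * n for _ in range(n)] for _ in range(n)]
T[0][0][1] = r * th
T[1][1][0] = sp.sin(th)

def barGamma(k, i, j):
    return Gamma(k, i, j) + T[k][i][j]

def Riem(G, l, i, j, k):
    """(R_{d_i, d_j} d_k)^l for the paper's convention
    R_{X,Y} = [nabla_Y, nabla_X] + nabla_{[X,Y]}."""
    return (sp.diff(G(l, i, k), x[j]) - sp.diff(G(l, j, k), x[i])
            + sum(G(m, i, k) * G(l, j, m) - G(m, j, k) * G(l, i, m)
                  for m in range(n)))

def nablaT(j, l, i, k):
    """((nabla_{d_j} T)_{d_i} d_k)^l."""
    return (sp.diff(T[l][i][k], x[j])
            + sum(Gamma(l, j, m) * T[m][i][k] - Gamma(m, j, i) * T[l][m][k]
                  - Gamma(m, j, k) * T[l][i][m] for m in range(n)))

ok = all(
    sp.simplify(
        Riem(barGamma, l, i, j, k) - Riem(Gamma, l, i, j, k)
        - (nablaT(j, l, i, k) - nablaT(i, l, j, k)
           + sum(T[l][j][m] * T[m][i][k] - T[l][i][m] * T[m][j][k]
                 for m in range(n)))) == 0
    for l in range(n) for i in range(n) for j in range(n) for k in range(n))
print(ok)  # True
```

The same script with a different metric or contorsion provides a quick way to detect sign errors in one's own conventions.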
To deal also with non-compact manifolds, it is assumed that the integral above is taken over $M$ if it converges; otherwise, one integrates over an arbitrarily large, relatively compact domain $\Omega\subset M$, which also contains the supports of variations of $g$ and $\mathfrak{T}$. The Euler-Lagrange equation for \eqref{Eq-EH} when $g$ varies is \begin{subequations} \begin{equation}\label{Eq-EC-R} \overline{\operatorname{Ric}} -\,(1/2)\,\overline{\rm S}\cdot g +\,\Lambda\,g = \mathfrak{a}\,\Xi \end{equation} (called the Einstein equation) with the non-symmetric Ricci curvature $\overline{\operatorname{Ric}}$ and the asymmetric ener\-gy-momentum tensor $\Xi$ (generalizing the stress tensor of Newtonian physics), given in coordinates by $\Xi_{\mu\nu}=-2\,{\partial{\cal L}}/{\partial g^{\mu\nu}} +g_{\mu\nu}{\cal L}$. The Euler-Lagrange equation for \eqref{Eq-EH} when $\mathfrak{T}$ varies is an algebraic constraint with the torsion tensor ${\cal S}$ of $\bar\nabla$ and the spin tensor $s_{\mu\nu}^c=2\,{\partial{\cal L}}/{\partial \mathfrak{T}_{\mu\nu}^c}$ (used to describe the intrinsic angular momentum of particles in spacetime, e.g.,~\cite{tr}): \begin{equation}\label{Eq-EC-nabla} {\cal S}(X,Y) +\operatorname{Tr\,}({\cal S}(\cdot,Y) - {\cal S}(X,\cdot)) = \mathfrak{a}\,s(X,Y),\quad X,Y\in\mathfrak{X}_M. \end{equation} \end{subequations} Since ${\cal S}(X,Y)=\mathfrak{T}_XY-\mathfrak{T}_YX$, \eqref{Eq-EC-nabla} can be rewritten using the contorsion tensor. A solution of (\ref{Eq-EC-R},b) is a pair $(g,{\mathfrak T})$ satisfying this system, where the pair of tensors $(\Xi,s)$ (describing a specified type of matter) is given. In vacuum space-time, the Einstein and Einstein-Cartan theories coincide. The~classification of solutions of (\ref{Eq-EC-R},b) is a deep and largely unsolved problem~\cite{besse}. \textbf{1.2. Objectives}.
On a manifold equipped with an additional structure (e.g., almost product, complex or contact), one can consider an analogue of \eqref{Eq-EH} adjusted to that structure. In pseudo-Riemannian geometry, it may mean restricting $g$ to a certain class of metrics (e.g., conformal to a given one, in the Yamabe problem \cite{besse}) or even constructing a new, related action (e.g., the Futaki functional on a Kahler manifold \cite{besse}, or several actions on contact manifolds \cite{Blairsurvey}), to cite only a few examples. The~latter approach was taken in the authors' previous papers, where the scalar curvature in the Einstein-Hilbert action on a pseudo-Riemannian manifold was replaced by the mixed scalar curvature of a given distribution or a foliation. In this paper, a similar change in \eqref{Eq-EH} will be considered on a connected smooth $(n+p)$-dimensional manifold $M$ endowed with an affine connection and a smooth $n$-dimensional distribution $\widetilde\mD$ (a subbundle of the tangent bundle $TM$). Distributions and foliations (which can be viewed as integrable distributions) on manifolds appear in various situations, e.g., \cite{bf,rov-m}. When a pseudo-Riemannian metric $g$ on $M$ is non-degenerate along $\widetilde\mD$, it defines the orthogonal $p$-dimensional distribution $\mD$ such that both distributions span the tangent bundle: $TM=\widetilde\mD\oplus\mD$ and define a Riemannian almost-product structure on $(M,g)$, e.g., \cite{g1967}. From a mathematical point of view, a \textit{space-time} of general relativity is an $(n+1)$-dimensional time-oriented (i.e., with a given timelike vector field) Lorentzian manifold, see~\cite{bee}. A~space-time admits a global \textit{time function} (i.e., a function increasing along each future directed nonspacelike curve) if and only if it is stably causal; in particular, a {glo\-bally hyperbolic} spacetime is naturally endowed with a codimension-one foliation (the level hypersurfaces of a given time-function), see \cite{bs,fs}.
The~\textit{mixed Einstein-Hilbert action} on $(M,\widetilde\mD)$, \begin{equation}\label{Eq-Smix} \bar J_{{\mD}}: (g,{\mathfrak T})\mapsto\!\int_{M} \Big\{\frac1{2\mathfrak{a}}\, ( \overline{\rm S}_{{\rm mix}} -2\,{\Lambda}) +{\cal L}\Big\}\,{\rm d}\operatorname{vol}_g, \end{equation} is an analog of \eqref{Eq-EH}, where $\overline{\rm S}\,$ is replaced by the mixed scalar curvature $\overline{\rm S}_{\,\rm mix}$, see~\eqref{eq-wal2}, for the affine connection $\bar\nabla=\nabla+\mathfrak{T}$. The physical meaning of \eqref{Eq-Smix} is discussed in~\cite{bdrs} for the case of $\mathfrak{T}=0$. In view of the formula $\overline{\rm S}=\overline{\rm S}_{\rm mix}+\overline{\rm S}^{\,\top}\!+\overline{\rm S}^{\,\bot}$, where $\overline{\rm S}^{\,\top}$ and $\overline{\rm S}^{\,\bot}$ are the scalar curvatures along the distributions $\widetilde\mD$ and $\mD$, one can combine the actions \eqref{Eq-EH} and \eqref{Eq-Smix} to obtain the new \textit{perturbed Einstein-Hilbert action} on $(M,\widetilde\mD)$: $\bar J_{\varepsilon}: (g,{\mathfrak T})\mapsto\!\int_{M} \big\{\frac1{2\mathfrak{a}}\,(\overline{\rm S}+\varepsilon\,\overline{\rm S}_{\rm mix} -2\,{\Lambda}) +{\cal L}\big\}\,{\rm d}\operatorname{vol}_g$ with $\varepsilon\in\mathbb{R}$, whose critical points may describe the geometry of space-time in an extended theory of gravity. The mixed scalar curvature (being an averaged mixed sectional curvature) is one of the simplest curvature invariants of a pseudo-Riemannian almost-product structure. If~a distribution is spanned by a unit vector field $N$, i.e., $\<N,N\rangle=\varepsilon_N\in\{-1,1\}$, then $\overline{\rm S}_{\rm mix} = \varepsilon_N\overline{\operatorname{Ric}}_{N,N}$, where $\overline{\operatorname{Ric}}_{N,N}$ is the Ricci curvature in the $N$-direction. If $\dim M=2$ and $\dim\mD=1$, then obviously $\overline{\rm S}_{\rm mix}=\overline{\rm S}$.
If~${\mathfrak T}=0$ then $\overline{\rm S}_{{\rm mix}}$ reduces to the mixed scalar curvature ${\rm S}_{{\rm mix}}$ of $\nabla$, see~\eqref{eq-wal2-0}, which can be defined as a sum of sectional curvatures of planes that non-trivially intersect both of the distributions. Investigation of ${\rm S}_{{\rm mix}}$ led to multiple results regarding the existence of foliations and submersions with interesting geometry, e.g., integral formulas and splitting results, curvature prescribing and variational problems, see the survey \cite{rov-5}. The~trace of the partial Ricci curvature (rank 2) tensor $r_{\cal D}$ is ${\rm S}_{\rm mix}$, see Section~\ref{sec:prel}. The understanding of the mixed curvature, especially $r_{\cal D}$ and ${\rm S}_{\rm mix}$, is a fundamental problem of the extrinsic geometry of foliations, see~\cite{rov-m}. Varying \eqref{Eq-Smix} with fixed $\mathfrak{T}=0$, as a functional of $g$ only, we obtain the Euler-Lagrange equations in a form similar to \eqref{Eq-EC-R}, see \cite{bdrs} for space-times and \cite{rz-1,rz-2} for $\widetilde{\mD}$ of any~dimension, i.e., \begin{equation}\label{E-gravity} \operatorname{Ric}_{\,{\mD}} -\,(1/2)\,{\rm S}_{\,\mD}\cdot g +\,\Lambda\,g = \mathfrak{a}\,\Xi, \end{equation} where the Ricci and scalar curvature are replaced by the \textit{mixed Ricci curvature} $\operatorname{Ric}_{\,{\mD}}$, see \eqref{E-main-0ij}, and its~trace ${\rm S}_{\mD}$. In~\cite{RZconnection}, we obtained the Euler--Lagrange equations for \eqref{Eq-Smix} with fixed $g$ and variable $\mathfrak{T}$, see (\ref{ELconnection1}-h), and examined critical contorsion tensors (and corresponding connections) in general and in distinguished classes of (1,2)-tensors. We have shown that $\mathfrak{T}$ is critical for \eqref{Eq-Smix} with fixed $g$ if and only if $\mathfrak{T}$ obeys a certain system of algebraic equations; however, unlike \eqref{Eq-EC-nabla}, these equations also heavily involve the pseudo-Riemannian geometry of the distributions.
In this article we generalize these results, considering variations of \eqref{Eq-Smix} with respect to both $g$ and $\mathfrak{T}$, at their arbitrary values. As we are less inclined to discuss particular physical theories, we basically confine ourselves to studying the total mixed scalar curvature -- the geometric part of the mixed Einstein-Hilbert action, i.e., we set $\Lambda={\cal L}=0$ in \eqref{Eq-Smix}, which in physics corresponds to vacuum space-time and no ``cosmological~constant'': \begin{equation}\label{actiongSmix} \bar J_{{\rm mix}} : (g, \mathfrak{T}) \mapsto \int_M \overline{\rm S}_{\rm mix}\,{\rm d}\operatorname{vol}_g. \end{equation} Considering variations of the metric that preserve the volume of the manifold, we can also obtain the Euler-Lagrange equations for \eqref{actiongSmix}, which coincide with those for \eqref{Eq-Smix} with ${\cal L}=0$ and $\Lambda \neq 0$. The terms of $\bar{\rm S}_{\,\rm mix}$ without covariant derivatives of $\mathfrak{T}$ make up the \textit{mixed scalar $\mathfrak{T}$-curvature} ${\rm S}_{\,\mathfrak{T}}$, see Section~\ref{sec:prel}, which we find interesting on its own. In particular, ${\rm S}_{\,\mathfrak{T}}$ can be viewed as the Riemannian mixed scalar curvature of a distribution with all sectional curvatures of planes replaced by their $\mathfrak{T}$-curvatures (see \cite{op2016}), and for statistical connections we have $\bar{\rm S}_{\,\rm mix} = {\rm S}_{\,\rm mix} + {\rm S}_{\,\mathfrak{T}}$. Thus, in Section~\ref{sec: 2-1} we also study the following action on $(M,\widetilde{\mD})$, closely related to \eqref{actiongSmix}: \begin{equation}\label{actiongISmix} I: (g, \mathfrak{T}) \mapsto \int_M {{\rm S}}_{\,\mathfrak{T}}\,{\rm d} \operatorname{vol}_g . \end{equation} For each of the examined actions \eqref{actiongSmix} and \eqref{actiongISmix}, we obtain the Euler-Lagrange equations and formulate results about the existence and examples of their solutions, which we describe in more detail further below.
In~particular, from \cite{RZconnection} we know that if $\mathfrak{T}$ is critical for the action \eqref{actiongSmix}, then $\mD$ and $\widetilde\mD$ are totally umbilical with respect to $\nabla$ -- and to express this together with other conditions, a pair of equations like (\ref{Eq-EC-R},b) is not sufficient. Due to this fact, only in the special case of semi-symmetric connections do we present the Euler-Lagrange equation in a form which directly generalizes \eqref{E-gravity}: \begin{equation}\label{E-gravity-gen} \overline{\operatorname{Ric}}_{\,{\mD}} -\,(1/2)\,\overline{\rm S}_{\,\mD}\cdot g +\Lambda\,g = \mathfrak{a}\,\Xi \end{equation} and a separate condition (\ref{ELconnection1}-h), similar to \eqref{Eq-EC-nabla}, for the vector field parameterizing this type of connection. In~the paper we study solutions of \eqref{E-gravity-gen} and (\ref{ELconnection1}-h) for the vacuum case. \textbf{1.3. Structure of the paper}. The article consists of the Introduction and three further sections. Section~\ref{sec:prel} contains background definitions and necessary results from \cite{bdr,rz-1,rz-2,RZconnection}, among which the notions of the mixed scalar curvature and the mixed and partial Ricci tensors are~central. Section~\ref{sec:main} contains the main results, described in detail below. Section~\ref{sec:aux} contains auxiliary lemmas with necessary, but lengthy, computations, and the References include 30 items. In~Section~\ref{sec:main}, we derive the Euler--Lagrange equations for \eqref{actiongSmix} and \eqref{actiongISmix} and find some of their solutions -- critical pairs $(g,\mathfrak{T})$ for different kinds of variations of metric and connection. Apart from varying among all metrics that are non-degenerate on $\widetilde\mD$, we also restrict to the case when the metric remains fixed on the distribution, and the complementary case when the metric varies only on the distribution -- preserving its orthogonal complement and the metric on it.
This approach (first applied in \cite{RWa-1} to codimension-one foliations) can be used to find an optimal extension of a metric given only on the distribution -- which is the problem of the relationship between sub-Riemannian and Riemannian geometries. Moreover, in analogy to the Einstein-Hilbert action, all variations are considered in two kinds: with and without preserving the volume of the manifold, see~\cite{besse}. In~addition, together with arbitrary variations of connection, we consider variations among such distinguished classes as statistical and metric connections, and this is expressed in terms of constraints on $\mathfrak{T}$. Section~\ref{sec:main} is divided into four subsections, according to the additional conditions we impose on connections (e.g., metric, adapted and statistical) or the actions we consider (defined by the mixed scalar curvature $\overline{\rm S}_{\,\rm mix}$ and the algebraic curvature-type invariant of a contorsion tensor ${\rm S}_{\,\mathfrak{T}}$). In Section \ref{sec: 2-1}, we vary the functional \eqref{actiongISmix} with respect to the metric $g$. Compared to its variation with fixed $g$, which was considered in \cite{RZconnection}, we obtain additional conditions for general and metric connections. On the other hand, a metric-affine doubly twisted product is critical for \eqref{actiongISmix} if and only if it is critical for the action with fixed~$g$. Similarly, restricting \eqref{actiongISmix} to pairs of metrics and statistical connections does not give any new Euler-Lagrange equations beyond those obtained in \cite{RZconnection}. In Section~\ref{sec: 2-2}, for arbitrary variations of $(g,\mathfrak{T})$ we show that statistical connections critical for \eqref{actiongSmix} on a closed $M$ are exactly those that are critical for \eqref{actiongSmix} with fixed $g$, and for $n+p>2$ these exist only on metric products.
On the other hand, for every $g$ critical for \eqref{actiongSmix} with fixed $\mathfrak{T}=0$, there exist statistical connections, satisfying the algebraic conditions {\rm(\ref{ELSmixIstat1},b)}, such that $(g,\mathfrak{T})$ is critical for \eqref{actiongSmix} restricted to all metrics, but only statistical connections. Note that \eqref{ELSmixIstat2} is equivalent to $\mathfrak{T}$ acting invariantly on each distribution, i.e., with only the components $\mathfrak{T}:\widetilde{\mD}\times\widetilde{\mD}\rightarrow\widetilde{\mD}$ and $\mathfrak{T}:{\mD}\times{\mD}\rightarrow{\mD}$. Equations {\rm(\ref{ELSmixIstat1},b)} also imply that the traces $\operatorname{Tr\,}^\top\mathfrak{T}$ and $\operatorname{Tr\,}^\perp\mathfrak{T}$ vanish, and these are the only restrictions for $\mathfrak{T}$ critical among statistical connections. In Section \ref{sec: 2-3} we show that for $n,p>1$ the critical value of \eqref{actiongSmix} attained by $(g, \mathfrak{T})$, where $\mathfrak{T}$ corresponds to a metric connection, depends only on $g$ and is non-negative on a Riemannian manifold. In other words, pseudo-Riemannian geometry determines the mixed scalar curvature of any critical metric connection. For general metric connections, we consider only adapted variations of the metric (see Definition \ref{defintionvariationsofg}) due to the complexity of the variational formulas. Compared to \eqref{actiongSmix} with fixed $g$, we get a new condition \eqref{ELmetric}, involving the symmetric part of $\mathfrak{T}|_{{\mD}\times{\mD}}$ and of $\mathfrak{T}|_{\widetilde{\mD}\times\widetilde{\mD}}$ in the dual equation. Under some assumptions, the trace of \eqref{ELmetric} depends only on the pseudo-Riemannian geometry of $(M,g,\widetilde\mD)$ and thus gives a necessary condition for the metric to admit a critical point of \eqref{actiongSmix} in a large class of connections (e.g., adapted), or for integrable distributions $\mD$.
On the other hand, in the case of adapted variations, the antisymmetric parts of $(\mathfrak{T}|_{{\mD}\times{\mD}})^\perp$ and $(\mathfrak{T}|_{\widetilde{\mD}\times\widetilde{\mD}})^\top$ remain free parameters of any critical metric connection, as they do not appear in the Euler-Lagrange equations (note that these components define part of the critical connection's torsion). Thus, for a given metric $g$ that admits critical points of \eqref{actiongSmix}, one can expect to have multiple critical metric connections, and the examples in Section~\ref{sec: 2-3} confirm~this. Section~\ref{sec:2-4} deals with a semi-symmetric connection (parameterized by a vector field), as a simple case of a metric connection. Although such connections are critical for \eqref{actiongSmix} under arbitrary variations of the connection only on metric-affine products, when we restrict variations of the mixed scalar curvature to semi-symmetric connections, we obtain meaningful Euler-Lagrange equations (in Theorem~\ref{propUconnectionEL}), which allow us to explicitly present the mixed Ricci tensor -- analogous to the Ricci tensor in the Einstein equation. \section{Preliminaries} \label{sec:prel} Here, we recall the definitions of some functions and tensors used also in \cite{bdr,rz-1,rz-2,RZconnection,wa1}, and introduce several new notions related to the geometry of $(M,g,\bar\nabla)$ endowed with a non-degenerate distribution. \textbf{2.1. The mixed scalar curvature}. Let ${\rm Sym}^2(M)$ be the space of symmetric $(0,2)$-tensors tangent to a smooth connected manifold~$M$. A~\textit{pseudo-Riemannian metric} $g=\langle\cdot,\cdot\rangle$ of index $q$ on $M$ is an element $g\in{\rm Sym}^2(M)$ such that each $g_x\ (x\in M)$ is a {non-degenerate bilinear form of index} $q$ on the tangent space $T_xM$. For~$q=0$ (i.e., $g_x$ is positive definite) $g$ is a Riemannian metric and for $q=1$ it is called a Lorentz metric.
Let~${\rm Riem}(M)\subset{\rm Sym}^2(M)$ be the subspace of pseudo-Riemannian metrics of a given signature. A smooth subbundle $\widetilde{\mD}\subset TM$ (that is a regular distribution) is \textit{non-degenerate}, if $g_x$ is non-degenerate on $\widetilde{\mD}_x\subset T_x M$ for $x\in M$; in this case, the orthogonal complement ${\mD}$ of~$\widetilde{\mD}$ is also non-degenerate, and we have $\widetilde{\mD}_x\cap\,{\mD}_x=0$, $\widetilde{\mD}_x\oplus\,{\mD}_x=T_xM$ for all $x \in M$. Let~$\mathfrak{X}_M, \mathfrak{X}^\bot,\mathfrak{X}^\top$ be the modules over $C^\infty(M)$ of sections (vector fields) of $TM,{\mD}$ and $\widetilde{\mD}$, respectively. Let ${\rm Riem}(M,\widetilde{\mD},{\mD})\subset {\rm Riem}(M)$ be the subspace of pseudo-Riemannian metrics making $\widetilde{\mD}$ and ${\mD}$ (of ranks $\dim\widetilde{\mD}=n\ge1$ and $\dim{\mD}=p\ge1$) orthogonal and non-degenerate. Given $g\in{\rm Riem}(M,\widetilde{\mD},{\mD})$, a local adapted orthonormal frame $\{E_a,\,{\cal E}_{i}\}$, where $\{E_a\}\subset\widetilde{\mD}$ and $\varepsilon_i=\langle{\cal E}_{i},{\cal E}_{i}\rangle\in\{-1,1\}$, $\varepsilon_a=\<E_a,E_a\rangle\in\{-1,1\}$, always exists on $M$. The~following convention is adopted for the range of~indices: \begin{equation*} a,b,c\ldots{\in}\{1\ldots n\},\quad i,j,k\ldots{\in}\{1\ldots p\}. \end{equation*} All the quantities defined below with the use of an adapted orthonormal frame do not depend on the choice of this frame. We~have $X=\widetilde{X} + X^\perp$, where $\widetilde{X} \equiv X^\top$ is the $\widetilde{\mD}$-component of $X\in\mathfrak{X}_M$ (respectively, $X^\perp$ is the ${\mD}$-component of $X$) with respect to $g$. Set~$\operatorname{id}^\top(X)=X^\top$ and $\,\operatorname{id}^\bot(X)=X^\bot$. 
\begin{definition}\rm The function on $(M,g,\bar\nabla)$ endowed with a non-degenerate distribution $\widetilde{\mD}$, \begin{equation}\label{eq-wal2} \bar{\rm S}_{\,\rm mix} = \frac{1}{2}\sum\nolimits_{a,i} \varepsilon_a \varepsilon_i \big(\langle{\bar R}_{\,E_a, {\cal E}_i} E_a, {\cal E}_i\rangle +\langle{\bar R}_{\,{\cal E}_i, E_a}\, {\cal E}_i, E_a\rangle \big), \end{equation} is called the \textit{mixed scalar curvature with respect to the connection} $\bar\nabla$. In the particular case of the Levi-Civita connection $\nabla$, the function on $(M,g)$, \begin{equation}\label{eq-wal2-0} {\rm S}_{\rm mix} = \operatorname{Tr\,}_{g}{r}_{\mD} =\sum\nolimits_{\,a,i}\varepsilon_a \varepsilon_i\,\<R_{\,E_a, {\cal E}_{i}}\,E_a,\, {\cal E}_{i}\rangle \end{equation} is called the \textit{mixed scalar curvature} (with respect to $\nabla$). The symmetric $(0,2)$-tensor \begin{equation}\label{E-Rictop2} {r}_{{\mD}}(X,Y) = \sum\nolimits_{a} \varepsilon_a\, \<R_{\,E_a,\,X^\perp}\,E_a, \, Y^\perp\rangle, \quad X,Y\in \mathfrak{X}_M, \end{equation} is called the \textit{partial Ricci tensor} related to $\widetilde\mD$. \end{definition} Note that on $(M,\widetilde\mD)$, both ${\rm S}_{\rm mix}$ and the $g$-orthogonal complement to $\widetilde{\mD}$ are determined by the choice of the metric~$g$. In particular, if $\dim\widetilde\mD=1$ then $r_\mD=\varepsilon_N\,R_N$, where $R_N=R_{N,\,^\centerdot}\,N$ is the Jacobi operator, and if $\dim\mD=1$ then $r_\mD=\operatorname{Ric}_{N,N}g^\bot$, where the symmetric (0,2)-tensor $g^\perp$ is defined by $g^\perp (X,Y) = \langle X^\perp, Y^\perp\rangle$ for $X,Y \in \mathfrak{X}_M$. We use the following convention for the components of various $(1,1)$-tensors in an adapted orthonormal frame $\{E_a , {\cal E}_i \}$: $\mathfrak{T}_a = \mathfrak{T}_{E_a} ,\ \mathfrak{T}_i = \mathfrak{T}_{{\cal E}_i}$, etc.
Following the notion of the $\mathfrak{T}$-{sectional curvature} of a symmetric $(1,2)$-tensor $\mathfrak{T}$ on a vector space endowed with a scalar product and a cubic form, see~\cite{op2016}, we define the {mixed scalar $\mathfrak{T}$-curvature} by \eqref{E-SK}, as a sum of $\mathfrak{T}$-sectional curvatures of planes that non-trivially intersect both of the distributions, \begin{equation}\label{E-SK} {\rm S}_{\,\mathfrak{T}} = \sum\nolimits_{\,a,i} \varepsilon_a \varepsilon_i ( \langle[\mathfrak{T}_i,\, \mathfrak{T}_a] E_a, {\cal E}_i\rangle +\langle[\mathfrak{T}_a,\, \mathfrak{T}_i]\, {\cal E}_i, E_a\rangle ). \end{equation} The definitions \eqref{E-SK}, \eqref{eq-wal2}--\eqref{eq-wal2-0} do not depend on the choice of an adapted local orthonormal frame. Thus, we can consider $\bar{\rm S}_{\,\rm mix}$ and ${\rm S}_{\mathfrak{T}}$ on $(M,\widetilde\mD)$ as functions of $g$ and $\mathfrak{T}$. If $\mathfrak{T}$ is either symmetric or anti-symmetric then \eqref{E-SK} reads as ${\rm S}_{\,\mathfrak{T}} = \sum\nolimits_{a,i}\varepsilon_a\varepsilon_i\,\langle[\mathfrak{T}_i,\,\mathfrak{T}_a]\, E_a, {\cal E}_i\rangle$. As was mentioned in the Introduction, the~\textit{mixed scalar $\mathfrak{T}$-curvature} (for the contorsion tensor $\mathfrak{T}$) is a part of $\bar{\rm S}_{\,\rm mix}$; in~fact we have \cite[Eq.~(6)]{RZconnection}: \begin{equation}\label{barSmix} \bar{\rm S}_{\,\rm mix} = {\rm S}_{\,\rm mix} + {\rm S}_{\,\mathfrak{T}} + {\bar Q}/2, \end{equation} where ${\bar Q}$ consists of all terms with covariant derivatives of $\mathfrak{T}$, \begin{equation*} {\bar Q} = \sum\nolimits_{a,i}\varepsilon_a\varepsilon_i\big(\langle(\nabla_i \mathfrak{T})_a E_a, {\cal E}_i\rangle -\langle(\nabla_a \mathfrak{T})_i E_a, {\cal E}_i\rangle +\langle(\nabla_a \mathfrak{T})_i\, {\cal E}_i, E_a\rangle -\langle(\nabla_i \mathfrak{T})_a\, {\cal E}_i, E_a\rangle\big).
\end{equation*} The formulas for the mixed scalar curvature in the next two propositions are essential in our calculations. These propositions use tensors defined in \cite{rz-1}, which are briefly recalled below. \begin{proposition} The following presentation of the partial Ricci tensor in \eqref{E-Rictop2} is valid, see \cite{bdr,rz-1}: \begin{equation}\label{E-genRicN} r_{{\mD}}=\operatorname{div}\tilde h +\langle\tilde h,\,\tilde H\rangle-\widetilde{\cal A}^\flat-\widetilde{\cal T}^\flat-\Psi+{\rm Def}_{\mD}\,H. \end{equation} Tracing \eqref{E-genRicN}, we have, see {\rm \cite{wa1}}, \begin{equation}\label{eq-ran-ex} {\rm S}_{\rm mix} = \<H,H\rangle +\langle{\tilde H}, {\tilde H}\rangle -\<h,h\rangle -\langle {\tilde h},{\tilde h}\rangle +\<T,T\rangle +\langle\tilde T,\tilde T\rangle +\operatorname{div}(H+\tilde H)\,. \end{equation} For totally umbilical distributions, i.e., $h=\frac1nH\,{g^\top}$ and $\tilde h=\frac1p\,\tilde H\,{g^\bot}$, \eqref{eq-ran-ex} reads as \begin{equation}\label{E-PW-Smix-umb} {\rm S}_{\rm mix} = \frac{n-1}{n} \<H,H\rangle +\frac{p-1}{p} \langle{\tilde H},{\tilde H}\rangle +\<T, T\rangle +\langle {\tilde T}, {\tilde T} \rangle +\operatorname{div}(H +{\tilde H}). \end{equation} \end{proposition} Denote by $\<B,C\rangle_{|V}$ the inner product of tensors $B,C$ restricted to $V=(\tilde{\mD}\times{\mD})\cup({\mD}\times\tilde{\mD})$.
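Formula \eqref{eq-ran-ex} can be illustrated on flat $\mathbb{R}^3\setminus\{0\}$ foliated by concentric spheres, with $\widetilde\mD$ tangent to the spheres and $\mD$ radial. For the outward unit radial field $n$ one has $h=-\frac1r\,g^\top\otimes n$, so $\<H,H\rangle=4/r^2$ and $\<h,h\rangle=2/r^2$, while $\tilde h$, $\tilde H$, $T$, $\tilde T$ vanish (radial lines are geodesics and both distributions are integrable). The sketch below (illustrative only, not part of the paper) checks symbolically that the right-hand side of \eqref{eq-ran-ex} then sums to ${\rm S}_{\rm mix}=0$, as it must for the flat metric.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)

# Spheres {r = const} in flat R^3: h = -(1/r) g^top (x) n for the outward
# unit normal n, hence H = -(2/r) n and
HH = 4 / r**2          # <H, H>
hh = 2 / r**2          # <h, h>
# The radial distribution is spanned by straight lines (geodesics) and both
# distributions are integrable: tilde h = tilde H = 0 and T = tilde T = 0.

# div H for H = -(2/r) n = -2 (x, y, z) / r^2:
Hvec = [-2 * x / r**2, -2 * y / r**2, -2 * z / r**2]
divH = sum(sp.diff(Hvec[i], v) for i, v in enumerate((x, y, z)))

print(sp.simplify(HH - hh + divH))  # 0 = S_mix of the flat metric
```

Here $\operatorname{div}H=-2/r^2$ exactly cancels $\<H,H\rangle-\<h,h\rangle=2/r^2$.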
\begin{proposition}[see \cite{r-affine}]\label{L-QQ-first} Using \eqref{E-RC-2}, we have \begin{equation}\label{E-Q1Q2-gen} 2\,(\bar{\rm S}_{\,\rm mix} -{\rm S}_{\,\rm mix}) = \operatorname{div}\big( (\operatorname{Tr\,}^\top(\mathfrak{T} -\mathfrak{T}^*))^\bot +(\operatorname{Tr\,}^\bot(\mathfrak{T} -\mathfrak{T}^*))^\top \big) - Q, \end{equation} where \begin{eqnarray}\label{E-defQ} \nonumber && Q = - \langle\operatorname{Tr\,}^\bot\mathfrak{T},\, \operatorname{Tr\,}^\top\mathfrak{T}^*\rangle -\langle \operatorname{Tr\,}^\top\mathfrak{T},\,\operatorname{Tr\,}^\bot\mathfrak{T}^*\rangle +\langle\mathfrak{T}^*, \mathfrak{T}^\wedge\rangle_{\,|\,V}\\ && \hskip-5mm -\,\langle\operatorname{Tr\,}^\top(\mathfrak{T}- \mathfrak{T}^*) -\operatorname{Tr\,}^\bot(\mathfrak{T} -\mathfrak{T}^*),\, H -{\tilde H}\rangle -\langle \mathfrak{T} -\mathfrak{T}^* +\mathfrak{T}^\wedge - \mathfrak{T}^{* \wedge} ,\ {\tilde A}-{\tilde T}^\sharp + A-T^\sharp\rangle , \end{eqnarray} and the partial traces of $\,\mathfrak{T}$ (similarly, for $\mathfrak{T}^*$, etc.) are given~by \begin{equation}\label{E-defTT} \operatorname{Tr\,}^\top\mathfrak{T} = \sum\nolimits_a\varepsilon_a \mathfrak{T}_a E_a,\quad \operatorname{Tr\,}^\bot\mathfrak{T} = \sum\nolimits_i\varepsilon_i\, \mathfrak{T}_i \,{\cal E}_i. \end{equation} \end{proposition} The tensors used in the above propositions (and other ones) are defined below for one of the distributions (say, ${\mD}$; similar tensors for $\widetilde{\mD}$ are denoted using $^\top$ or $\ \widetilde{}\ $ notation). The integrability tensor and the second fundamental form $T, h:\widetilde{\mD}\times \widetilde{\mD}\to{\mD}$ of $\widetilde{\mD}$ are given by \begin{equation*} T(X,Y)=(1/2)\,[X,\,Y]^\perp,\quad h(X,Y) = (1/2)\,(\nabla_X Y+\nabla_Y X)^\perp, \quad X, Y \in \mathfrak{X}^\top. \end{equation*} The mean curvature vector field of $\widetilde{\mD}$ is given by $H=\operatorname{Tr\,}_{g} h=\sum\nolimits_a\varepsilon_a h(E_a,E_a)$.
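The frame-independence of the definition \eqref{E-SK} can also be tested numerically. The sketch below (illustrative only, not part of the paper) takes Riemannian signature (all $\varepsilon=1$), a randomly generated $(1,2)$-tensor on $\mathbb{R}^{n+p}$ with the Euclidean metric, and evaluates ${\rm S}_{\,\mathfrak{T}}$ in two different adapted orthonormal frames obtained by rotating each distribution separately.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2, 3                          # ranks of tildeD and D; M = R^(n+p), g Euclidean
N = n + p
K = rng.standard_normal((N, N, N))   # K[u, v, w] = <T_{e_u} e_v, e_w>, generic (1,2)-tensor

def T_op(u, v):
    """The vector T_u v for vectors u, v."""
    return np.einsum('uvw,u,v->w', K, u, v)

def S_T(top, bot):
    """Mixed scalar T-curvature (E-SK) in an adapted orthonormal frame
    (Riemannian signature: all epsilons equal 1)."""
    s = 0.0
    for Ea in top:
        for Ei in bot:
            b1 = T_op(Ei, T_op(Ea, Ea)) - T_op(Ea, T_op(Ei, Ea))  # [T_i, T_a] E_a
            b2 = T_op(Ea, T_op(Ei, Ei)) - T_op(Ei, T_op(Ea, Ei))  # [T_a, T_i] E_i
            s += b1 @ Ei + b2 @ Ea
    return s

E = np.eye(N)
top, bot = [E[a] for a in range(n)], [E[i] for i in range(n, N)]

# Rotate each distribution separately: the new frame is still adapted
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((p, p)))
R = np.zeros((N, N)); R[:n, :n] = Q1; R[n:, n:] = Q2
top2, bot2 = [R @ v for v in top], [R @ v for v in bot]

print(np.isclose(S_T(top, bot), S_T(top2, bot2)))  # True
```

A block rotation mixing the two distributions would, in general, change the value, since ${\rm S}_{\,\mathfrak{T}}$ depends on the splitting $TM=\widetilde\mD\oplus\mD$.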
We call $\widetilde{\mD}$ \textit{totally umbilical}, \textit{minimal}, or \textit{totally geodesic}, if $h=\frac1nH\,{g^\top},\ H =0$, or $h=0$, respectively. The ``musical" isomorphisms $\sharp$ and $\flat$ will be used for rank one and symmetric rank 2 tensors. For~example, if $\omega \in\Lambda^1(M)$ is a 1-form and $X,Y\in {\mathfrak X}_M$ then $\omega(Y)=\langle\omega^\sharp,Y\rangle$ and $X^\flat(Y) =\<X,Y\rangle$. For arbitrary (0,2)-tensors $A$ and $B$ we also have $\<A, B\rangle =\operatorname{Tr\,}_g(A^\sharp B^\sharp)=\<A^\sharp, B^\sharp\rangle$. The Weingarten operator $A_Z$ of $\widetilde{\mD}$ with $Z\in\mathfrak{X}^\bot$, and the operator $T^\sharp_Z$ are defined~by \[ \<A_Z(X),Y\rangle=\<h(X,Y),Z\rangle,\quad \<T^\sharp_Z(X),Y\rangle=\<T(X,Y),Z\rangle, \quad X,Y \in \mathfrak{X}^\top . \] The norms of tensors are obtained using \begin{equation*} \<h,h\rangle=\sum\nolimits_{\,a,b}\varepsilon_a\varepsilon_b\,\<h({E}_a,{E}_b),h({E}_a,{E}_b)\rangle, \quad \<T,T\rangle=\sum\nolimits_{\,a,b}\varepsilon_a\varepsilon_b\,\<T({E}_a,{E}_b),T({E}_a,{E}_b)\rangle,\quad {\rm etc}. \end{equation*} The \textit{divergence} of a vector field $X\in\mathfrak{X}_M$ is given by \begin{equation}\label{eq:div} (\operatorname{div} X)\,{\rm d}\operatorname{vol}_g = {\cal L}_{X}({\rm d}\operatorname{vol}_g), \end{equation} where ${\rm d} \operatorname{vol}_g$ is the volume form of $g$. One may show that \[ \operatorname{div} X=\sum\nolimits_{i}\varepsilon_i\,\langle\nabla_{i}\,X, {\cal E}_i\rangle +\sum\nolimits_{a}\varepsilon_a\,\langle\nabla_{a}\,X, {E}_a\rangle. \] The~${\mD}$-\textit{divergence} of a vector field $X\in\mathfrak{X}_M$ is given by $\operatorname{div}^\perp X=\sum\nolimits_{i} \varepsilon_i\,\langle\nabla_{i}\,X, {\cal E}_i\rangle$. Thus, $\operatorname{div} X=\operatorname{Tr\,}(\nabla X) = \operatorname{div}^\perp X +\widetilde{\operatorname{div}}\,X$.
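For completeness, we sketch the standard argument behind the last frame formula. Evaluating ${\cal L}_{X}({\rm d}\operatorname{vol}_g)$ on a local adapted orthonormal frame $\{e_\mu\}=\{E_a,{\cal E}_i\}$ and using that ${\rm d}\operatorname{vol}_g(e_1,\ldots,e_{n+p})$ is constant, we get
\begin{equation*}
{\cal L}_{X}({\rm d}\operatorname{vol}_g)(e_1,\ldots,e_{n+p}) = -\sum\nolimits_{\mu}\varepsilon_\mu\,\langle[X,e_\mu],\,e_\mu\rangle\,{\rm d}\operatorname{vol}_g(e_1,\ldots,e_{n+p}).
\end{equation*}
Since $\nabla$ is torsion-free, $[X,e_\mu]=\nabla_X\,e_\mu-\nabla_{e_\mu}X$, and $\langle\nabla_X\,e_\mu,\,e_\mu\rangle=\frac12\,X\<e_\mu,e_\mu\rangle=0$; thus $\operatorname{div} X=\sum\nolimits_\mu\varepsilon_\mu\,\langle\nabla_{\mu}X,\,e_\mu\rangle$, which splits into the two sums above.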
Observe that for $X\in\mathfrak{X}^\bot$ we have \begin{equation}\label{E-divN} {\operatorname{div}}^\bot X = \operatorname{div} X +\<X,\,H\rangle. \end{equation} For a $(1,2)$-tensor $P$ define a $(0,2)$-tensor ${\operatorname{div}}^\bot P$ by \[ ({\operatorname{div}}^\bot P)(X,Y) = \sum\nolimits_i \varepsilon_i\,\langle(\nabla_{i}\,P)(X,Y), {\cal E}_i\rangle,\quad X,Y \in \mathfrak{X}_M. \] For a~${\mD}$-valued $(1,2)$-tensor $P$, similarly to \eqref{E-divN}, we have \begin{eqnarray*} && ({\operatorname{div}}^\top P)(X,Y) =\sum\nolimits_a \varepsilon_a\,\langle(\nabla_{a}\,P)(X,Y), E_a\rangle =-\<P(X,Y), H\rangle,\\ && {\operatorname{div}}^\bot P = \operatorname{div} P+\<P,\,H\rangle\,, \end{eqnarray*} where $\<P,\,H\rangle(X,Y)=\<P(X,Y),\,H\rangle$ is a $(0,2)$-tensor. For example, $\operatorname{div}^\perp h = \operatorname{div} h+\<h,\,H\rangle$. For~a~function $f$ on $M$, we use the notation $\nabla^{\perp} f = (\nabla f)^{\perp}$ for the projection of $\nabla f$ onto $\mD$. The ${\mD}$-\textit{deformation tensor} ${\rm Def}_{\mD}\,Z$ of $Z\in\mathfrak{X}_M$ is the symmetric part of $\nabla Z$ restricted to~${\mD}$, \begin{equation*} 2\,{\rm Def}_{\mD}\,Z(X,Y)=\langle\nabla_X Z, Y\rangle +\langle\nabla_Y Z, X\rangle,\quad X,Y\in \mathfrak{X}^\bot. \end{equation*} The self-adjoint $(1,1)$-tensors: ${\cal A}$ (the \textit{Casorati type operator}) and ${\cal T}$ and the symmetric $(0,2)$-tensor $\Psi$, see \cite{bdr,rz-1}, are defined by \begin{eqnarray*} && {\cal A}=\sum\nolimits_{\,i}\varepsilon_i A_{i}^2,\quad {\cal T}=\sum\nolimits_{\,i}\varepsilon_i(T_{i}^\sharp)^2,\\ && \Psi(X,Y) = \operatorname{Tr\,} (A_Y A_X+T^\sharp_Y T^\sharp_X), \quad X,Y\in\mathfrak{X}^\bot. \end{eqnarray*} For the reader's convenience, we also gather below the definitions of all other basic tensors that will be used in further parts of the paper.
We define a self-adjoint $(1,1)$-tensor ${\cal K}$ by the formula \[ {\cal K} = \sum\nolimits_i \varepsilon_{\,i} [T^\sharp_i, A_i] = \sum\nolimits_{\,i} \varepsilon_i (T^\sharp_i A_i - A_i T^\sharp_i), \] and the $(1,2)$-tensors $\alpha,\theta$ and ${\tilde\delta}_{Z}$ (defined for a given vector field $Z \in \mathfrak{X}_M$) on $(M, \widetilde{\mD}, g)$: \begin{eqnarray*} && \alpha(X,Y) = \frac{1}{2}\,(A_{X^{\perp}} (Y^{\top}) + A_{Y^{\perp}} (X^{\top})), \quad \theta(X,Y) = \frac{1}{2}\,(T^{\sharp}_{X^{\perp}}(Y^{\top}) + T^{\sharp}_{Y^{\perp}}(X^{\top})),\\ && {\tilde\delta}_{Z}(X,Y) = \frac{1}{2}\,\big(\langle\nabla_{X^{\top}} Z,\, Y^{\perp}\rangle +\langle\nabla_{Y^{\top}} Z,\, X^{\perp}\rangle\big), \quad X,Y \in \mathfrak{X}_M. \end{eqnarray*} For any $(1,2)$-tensors $P,Q$ and a $(0,2)$-tensor $S$ on $TM$, define the following $(0,2)$-tensor $\Upsilon_{P,Q}$: \[ \langle\Upsilon_{P,Q}, S\rangle = \sum\nolimits_{\,\lambda, \mu} \varepsilon_\lambda\, \varepsilon_\mu\, [S(P(e_{\lambda}, e_{\mu}), Q( e_{\lambda}, e_{\mu})) + S(Q(e_{\lambda}, e_{\mu}), P( e_{\lambda}, e_{\mu}))], \] where on the left-hand side we have the inner product of $(0,2)$-tensors induced by $g$, $\{e_{\lambda}\}$ is a local orthonormal basis of $TM$ and $\varepsilon_\lambda = \<e_{\lambda}, e_{\lambda}\rangle\in\{-1,1\}$. Note that \[ \Upsilon_{P,Q} = \Upsilon_{Q,P},\quad \Upsilon_{P, fQ_{1} + Q_{2}} = f\Upsilon_{P,Q_1}+\Upsilon_{P,Q_2}. \] Finally, for the contorsion tensor and $X \in TM$ we define ${\mathfrak{T}}^\top_X : \tilde{\mD} \rightarrow \tilde{\mD}$ by \[ {\mathfrak{T}}^\top_X Y = (\mathfrak{T}_X (Y^\top))^\top , \quad Y \in TM. \] \begin{remark} \label{remarkepsilons} \rm From now on, we shall omit factors $\varepsilon_\mu$ in all expressions with sums over an adapted frame (or its part), effectively identifying symbols $\sum_\mu$ with $\sum_\mu \varepsilon_\mu$ etc. 
As we assume in this paper that $g$ is non-degenerate on the distribution $\widetilde{\mD}$, the presence of factors $\varepsilon_\mu$ in the sums is the only difference in formulas with adapted frames for a Riemannian and a pseudo-Riemannian metric $g$. With the definitions given in this section, all tensor equations that follow look exactly the same in both these cases. In more complicated formulas we shall also omit summation indices, assuming that every sum is taken over all indices that appear repeatedly after the summation sign, and contains appropriate factors $\varepsilon_\mu$. \end{remark} \textbf{2.2. The mixed Ricci curvature}. Let $(M,g)$ be a pseudo-Riemannian manifold endowed with a non-degenerate distribution $\widetilde{\mD}$. We consider smooth $1$-parameter variations $\{g_t\in{\rm Riem}(M):\,|t|<\varepsilon\}$ of the metric $g_0 = g$. Let the infinitesimal variations, represented by a symmetric $(0,2)$-tensor \[ {B}_t\equiv\partial g_t/\partial t, \] be supported in a relatively compact domain $\Omega$ in $M$, i.e., $g_t =g$ outside $\Omega$ for all $|t|<\varepsilon$. We~call a variation $g_t$ \emph{volume-preserving} if ${\rm Vol}(\Omega,g_t) = {\rm Vol}(\Omega,g)$ for all $t$. We~adopt the notations $\partial_t \equiv \partial/\partial t,\ {B}\equiv{\partial_t g_t}_{\,|\,t=0}=\dot g$, but we shall also write $B$ instead of $B_t$ to make formulas easier to read, wherever it does not lead to confusion. Since $B$ is symmetric, we have $\<C,\,B\rangle=\langle{\rm Sym}(C),\,B\rangle$ for any $(0,2)$-tensor $C$. We denote by $\otimes$ the product of tensors and use the symmetrization operator to define the symmetric product of tensors: $B\odot C = {\rm Sym}(B\otimes C)=\frac12\,(B\otimes C+ C\otimes B)$.
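The identity $\<C,\,B\rangle=\langle{\rm Sym}(C),\,B\rangle$ is checked in one line: writing $C={\rm Sym}(C)+\frac12\,(C-C^t)$ with $C^t(X,Y)=C(Y,X)$, the skew-symmetric part pairs trivially with the symmetric tensor $B$,
\begin{equation*}
\langle C-C^t,\,B\rangle=\sum\nolimits_{\,\lambda,\mu}\varepsilon_\lambda\varepsilon_\mu\,\big(C(e_\lambda,e_\mu)-C(e_\mu,e_\lambda)\big)\,B(e_\lambda,e_\mu)=0,
\end{equation*}
since the summand changes sign under the interchange $\lambda\leftrightarrow\mu$.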
\begin{definition} \label{defintionvariationsofg} \rm A family of metrics $\{g_t\in{\rm Riem}(M):\, |t|<\varepsilon\}$ such that $g_0 =g$ will be called (i) $g^\pitchfork$-\textit{variation} if $g_{t}(X,Y)= g_0(X,Y)$ for all $X,Y\in \mathfrak{X}^\top$ and $|t|<\varepsilon$. (ii) \textit{adapted variation}, if the $g_t$-orthogonal complement ${\mD}(t)$ remains $g_0$-orthogonal to $\widetilde{\mD}$ for all~$t$. (iii) \textit{${{g^\top}}$-variation}, if it is adapted and $g_{t}(X,Y)=g_0(X,Y)$ for all $X,Y\in \mathfrak{X}^\bot$ and $|t|<\varepsilon$. (iv) \textit{${{g^\perp}}$-variation}, if it is an adapted $g^\pitchfork$-variation. \end{definition} In other words, for $g^\pitchfork$-variations the metric on $\widetilde{\mD}$ is preserved. For adapted variations we have $g_t\in{\rm Riem}(M,\widetilde{\mD},{\mD})$ for all~$t$. For ${{g^\top}}$-variations only the metric on $\widetilde{\mD}$ changes, and for ${g^\bot}$-\textit{variations} only the metric on ${\mD}$ changes, and ${\mD}$ remains $g_t$-orthogonal to~$\widetilde{\mD}$. The symmetric tensor $B_t=\dot g_t$ (of any variation) can be decomposed into the sum of derivatives of $g^\pitchfork$- and ${g^\top}$-variations, see \cite{rz-2}. Namely, $B_t=B_t^\pitchfork + {\tilde B}_t$, where \[ B^\pitchfork_t =\bigg(\begin{array}{cc} B_{ t \,|\,{\cal D}\times{\cal D}} & {B}_{ t \,|\,{\cal D}\times\widetilde{\cal D}} \\ {B}_{ t \,|\,\widetilde{\cal D}\times{\cal D}} & 0 \end{array}\bigg), \quad {\tilde B}_t =\bigg(\begin{array}{cc} 0 & 0 \\ 0 & {B}_{ t \,|\,\widetilde{\cal D}\times\widetilde{\cal D}} \end{array}\bigg). \] Thus, for $g^\pitchfork$-variations $B(X,Y) =0$ for all $X,Y \in \mathfrak{X}^\top$. Denote by $^\top$ and $^\perp$ the $g_t$-orthogonal projections of vectors onto $\widetilde{\mD}$ and ${\mD}(t)$ (the $g_t$-orthogonal complement of $\widetilde{\mD}$), respectively.
\begin{proposition}[see \cite{rz-2}]\label{prop-Ei-a} Let $g_t$ be a $g^\pitchfork$-variation of $g\in{\rm Riem}(M,\widetilde{\mD},{\mD})$. Let $\{E_a,\,{\cal E}_{i}\}$ be a local frame, adapted to $(\widetilde{\mD},\,{\mD})$ and orthonormal at $t=0$, that evolves according to \begin{equation}\label{E-frameE} \partial_t E_a = 0,\qquad \partial_t {\cal E}_{i}=-(1/2)\, ({B}_t^\sharp({\cal E}_{i}))^{\perp} -({B}_t^\sharp({\cal E}_{i}))^{\top}. \end{equation} Then, for all $t$, $\{E_a(t),{\cal E}_{i}(t)\}$ is a $g_t$-orthonormal frame adapted to $(\widetilde{\mD},{\mD}(t))$. \end{proposition} For any $g^\pitchfork$-variation of metric the evolution of ${\mD}(t)$ gives rise to the evolution of both $\widetilde{\mD}$- and ${\mD}(t)$-components of any $X\in\mathfrak{X}_M$: \begin{equation*} \partial_t (X^{\top}) = (\partial_t X)^{\top} + (B^{\sharp} (X^{\perp}))^{\top},\quad \partial_t (X^{\perp}) = (\partial_t X)^{\perp} -(B^{\sharp} (X^{\perp}))^{\top}. \end{equation*} The Divergence Theorem (with $X\in\mathfrak{X}_M$) states that \begin{equation}\label{E-DivThm} \int_{M} (\operatorname{div} X)\,{\rm d}\operatorname{vol}_g =0, \end{equation} when $M$ is closed (compact and without boundary); this is also true if $M$ is open and $X$ is supported in a relatively compact domain $\Omega\subset M$. For any variation $g_t$ of metric $g$ on $M$ with $B=\partial_t g$ we have \begin{equation}\label{E-dotvolg} \partial_t\,\big({\rm d}\operatorname{vol}_{g}\!\big) = \frac12\,(\operatorname{Tr\,}_{g} B)\,{\rm d}\operatorname{vol}_{g}, \end{equation} e.g., \cite{topp}.
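As a consistency check of \eqref{E-frameE} in Proposition~\ref{prop-Ei-a}, we verify at $t=0$ (the computation for each $t$ is analogous) that orthonormality is preserved. Since only the ${\mD}$-part of $\partial_t {\cal E}_i$ pairs with ${\cal E}_j$, and only the $\widetilde{\mD}$-part pairs with $E_a$,
\begin{eqnarray*}
&& \partial_t\,g_t({\cal E}_i,{\cal E}_j) = B({\cal E}_i,{\cal E}_j) -\tfrac12\,B({\cal E}_i,{\cal E}_j) -\tfrac12\,B({\cal E}_j,{\cal E}_i) = 0,\\
&& \partial_t\,g_t(E_a,{\cal E}_i) = B(E_a,{\cal E}_i) -\<E_a,\,B^\sharp({\cal E}_i)\rangle = 0,
\end{eqnarray*}
while $\partial_t\,g_t(E_a,E_b)=B(E_a,E_b)=0$, because $B$ vanishes on $\widetilde{\mD}\times\widetilde{\mD}$ for a $g^\pitchfork$-variation.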
By Lemma~\ref{L-divX} and \eqref{E-DivThm}--\eqref{E-dotvolg}, \begin{equation}\label{E-DivThm-2} \frac{d}{dt}\int_M (\operatorname{div} X)\,{\rm d}\operatorname{vol}_g =\int_M \operatorname{div}\big(\partial_t X+\frac12\,(\operatorname{Tr\,}_g B) X\big)\,{\rm d}\operatorname{vol}_g = 0 \end{equation} for any variation $g_t$ of metric with ${\rm supp}\,(\partial_t g)\subset\Omega$, and $t$-dependent $X\in\mathfrak{X}_M$ with ${\rm supp}\,(\partial_t X)\subset\Omega$. Let ${\rm V}$ be the linear subspace of $TM\times TM$ spanned by $({\cal D}\times\widetilde{\cal D})\cup(\widetilde{\cal D}\times{\cal D})$. Thus, the product $TM\times TM$ is the sum of three subbundles, $\widetilde{\mD}\times\widetilde\mD$, ${\mD}\times\mD$ and ${\rm V}$. Using this decomposition, we define the tensor in \eqref{E-gravity}. \begin{definition}[see \cite{r2018}]\label{D-Ric-D}\rm The symmetric $(0,2)$-tensor $\operatorname{Ric}_{\,\mD}$ in \eqref{E-gravity}, defined by its restrictions on three complementary subbundles of $TM\times TM$, is referred to as the \textit{mixed Ricci curvature}: \begin{equation}\label{E-main-0ij} \left\{\begin{array}{c} \operatorname{Ric}_{\,\mD\,|\,\mD\times\mD} = {r}_{{\mD}} -\langle\tilde h,\,\tilde H\rangle +\widetilde{\cal A}^{\,\flat} -\widetilde{\cal T}^{\,\flat} +\Psi -{\rm Def}_{\cal D}\,H +\widetilde{\cal K}^{\,\flat} \\ \hskip10mm +\,H^\flat\otimes H^\flat -\frac{1}{2}\,\Upsilon_{\,h,h} -\frac12\,\Upsilon_{\,T,T} -\frac{n-1}{p+n-2}\,\operatorname{div}(\tilde H-{H})\,g^\perp, \\ \operatorname{Ric}_{\,\mD\,|\,V} = -4\langle\theta,\, {\tilde H}\rangle -2(\operatorname{div}(\alpha - \tilde\theta))_{\,|{\rm V}} -2\langle{\tilde\theta} - {\tilde\alpha}, H\rangle \\ \hskip10mm -\,2\,{\rm Sym}(H^{\flat}\otimes{\tilde H}^{\flat}) +2\,\tilde\delta_{H} - 4\,\Upsilon_{{\tilde\alpha}, \theta} - 2\,\Upsilon_{\alpha, {\tilde\alpha}} - 2\,\Upsilon_{\theta, {\tilde\theta}},\\ \operatorname{Ric}_{\,\mD|\,\widetilde\mD\times\widetilde\mD} = 
{r}_{\widetilde{\mD}}-\<h,\,H\rangle+{\cal A}^\flat-{\cal T}^\flat +\widetilde\Psi -{\rm Def}_{\widetilde{\cal D}}\,\tilde H +{\cal K}^\flat \\ \hskip10mm +\,\tilde H^\flat\otimes \tilde H^\flat -\frac{1}{2}\,\Upsilon_{\,\tilde h,\tilde h} -\frac12\,\Upsilon_{\,\tilde T,\tilde T} +\frac{p-1}{p+n-2}\,\operatorname{div}(\tilde H-{H})\,g^\top. \end{array} \right. \end{equation} Here \eqref{E-main-0ij}$_3$ is dual to \eqref{E-main-0ij}$_1$ with respect to interchanging distributions $\widetilde{\cal D}$ and $\cal D$, and their last terms vanish if $n=p=1$. Also, $\,{\rm S}_{\mD} := \operatorname{Tr\,}_g\operatorname{Ric}_{\,\mD} = {\rm S}_{\rm mix} + \frac{p-n}{n+p-2}\,\operatorname{div}(H-\tilde{H})$. \end{definition} The following theorem, which allows us to recover the mixed Ricci curvature \eqref{E-main-0ij}, is based on calculating the variations with respect to $g$ of components in \eqref{eq-ran-ex} and using \eqref{E-DivThm-2} for divergence~terms. According to this theorem and Definition~\ref{D-Ric-D} we conclude that a metric $g\in{\rm Riem}(M,\widetilde{\mD})$ is critical for the action~\eqref{actiongSmix} with fixed $\mathfrak{T} =0$ (i.e., considered as a functional of $g$ only), with respect to volume-preserving variations of metric if and only if \eqref{E-gravity} holds.
\begin{theorem}[see \cite{rz-2}] \label{T-main00} A metric $g\in{\rm Riem}(M,\widetilde{\mD})$ is critical for the action~\eqref{actiongSmix} with fixed $\mathfrak{T} =0$, with respect to volume-preserving ${g}^\pitchfork$-variations if and only if \begin{subequations} \begin{eqnarray}\label{E-main-0i} \nonumber &&\hskip-12mm {r}_{\mD} -\langle\tilde h,\,\tilde H\rangle +\widetilde{\cal A}^\flat -\widetilde{\cal T}^\flat +\Psi -{\rm Def}_{\mD}\,H + \widetilde{\cal K}^\flat + H^\flat\otimes H^\flat -\frac{1}{2}\,\Upsilon_{h,h} -\frac12\,\Upsilon_{T,T} \\ && -\frac12\,\big({\rm S}_{\rm mix} +\operatorname{div}(\tilde H -H)\big)\,g^\perp = \lambda\,g^\perp, \\ &&\hskip-15mm -4 \langle\theta,\,{\tilde H}\rangle - 2(\operatorname{div}(\alpha -\tilde \theta))_{\,|{\rm V}} -2\langle{\tilde\theta} -{\tilde\alpha}, H\rangle - 2\,H^{\flat}\odot{\tilde H}^{\flat} +2\,{\tilde\delta}_{H} - 4\Upsilon_{{\tilde\alpha},\theta} -2\,\Upsilon_{\alpha,{\tilde\alpha}} -2\,\Upsilon_{\theta,{\tilde\theta}} = 0 \end{eqnarray} \end{subequations} for some $\lambda\in\mathbb{R}$. The Euler-Lagrange equation for volume-preserving ${g}^\top$-variations is dual to \eqref{E-main-0i}. 
\end{theorem} \begin{example}\label{Ex-2-1}\rm For a \textit{space-time} $(M^{p+1},g)$ endowed with ${\widetilde\mD}$ spanned by a timelike unit vector field $N$, the tensor $\operatorname{Ric}_{\mD}$, see \eqref{E-main-0ij} with $n=1$, and its trace have the following particular form: \begin{eqnarray}\label{E-RicD-flow} && \left\{\begin{array}{c} \operatorname{Ric}_{\,\mD\,|\,\mD\times\mD} = \varepsilon_{N}(R_N +(\widetilde A_N)^2 -(\widetilde T^{\sharp}_N)^2 +[\,\widetilde T^{\sharp}_N,\,\widetilde A_N])^\flat +H^\flat\otimes H^\flat -\tilde\tau_1\,\widetilde h_{sc} -{\rm Def}_{\mD}\,H,\\ {\operatorname{Ric}_{\,\mD}(\cdot\,,N)}_{\,|\,\mD} = {\operatorname{div}}^{\perp}\widetilde T^{\sharp}_{N}|_{\,\cal D} +2\,(\widetilde T^{\sharp}_{N}({H}))^{\flat}, \\ \operatorname{Ric}_{\,\mD}(N,N) = \varepsilon_{N}\operatorname{Ric}_{N,N} -2\,\|\widetilde{T}\|^2 -\operatorname{div} H, \end{array} \right. \\ \label{E-RicD-flow-S} &&\quad\ {\rm S}_{\,\mD} = \varepsilon_N\operatorname{Ric}_{N,N}+\operatorname{div}(\varepsilon_N\,\tilde\tau_1 N - {H}). \end{eqnarray} Here $\tilde\tau_i=\operatorname{Tr\,}((\widetilde A_N)^i)$, $\widetilde A_N$ is the shape operator, $\widetilde T$ is the integrability tensor and $\widetilde h_{sc}$ is the scalar second fundamental form of $\mD$. Note that the right-hand side of \eqref{E-RicD-flow}$_2$ vanishes when $\mD$ is integrable. \end{example} \textbf{2.3. Variations with respect to $\mathfrak{T}$}. The next theorem is based on calculating the variations with respect to $\mathfrak{T}$ of the components ${\rm S}_{\,\mathfrak{T}}$ and ${\bar Q}/2$ in \eqref{barSmix} and using \eqref{E-DivThm-2} for the divergence~terms. Here $\{ e_\lambda \}$ are vectors of an adapted frame, without distinguishing the distribution to which they belong.
\begin{theorem} The Euler-Lagrange equation for \eqref{Eq-Smix} with fixed $g$, considered as a functional of an arbitrary $(1,2)$-tensor $\mathfrak{T}$, for all variations of \,$\mathfrak{T}$, is the following algebraic system with spin tensor $s_{\mu\nu}^c=2\,{\partial{\cal L}}/{\partial {\mathfrak T}_{\mu\nu}^c}$ $($hence $s_{\alpha\beta}^\gamma=\<s(e_\alpha,e_\beta),e_\gamma\rangle\,)$: \begin{subequations} \begin{eqnarray}\label{ELconnection1} \langle\operatorname{Tr\,}^\bot\mathfrak{T}^*-\Hb, Z\rangle\,\<X,Y\rangle + \langle\operatorname{Tr\,}^\bot\mathfrak{T}+\Hb, Y\rangle\,\<X,Z\rangle =-(\mathfrak{a}/2)\,\<s(X,Y),Z\rangle,\\ \label{ELconnection2} \langle\operatorname{Tr\,}^\top\mathfrak{T}^*-\Ht, W\rangle\,\<U,V\rangle + \langle\operatorname{Tr\,}^\top\mathfrak{T}+\Ht, V\rangle\,\<U,W\rangle=-(\mathfrak{a}/2)\,\<s(U,V),W\rangle,\\ \label{ELconnection3} \langle\operatorname{Tr\,}^\bot\mathfrak{T}^*+\Ht,\, U\rangle\,\<X,Y\rangle -\langle({A}_U - {T}^\sharp_U + \mathfrak{T}_U) X,\, Y\rangle = -(\mathfrak{a}/2)\,\<s(X,Y),U\rangle,\\ \label{ELconnection4} \langle\operatorname{Tr\,}^\top\mathfrak{T}^*+\Hb,\, X\rangle\,\<U,V\rangle -\langle( {\tilde A}_X - {\tilde T^\sharp}_X + \mathfrak{T}_X) U,\, V\rangle = -(\mathfrak{a}/2)\,\<s(U,V),X\rangle,\\ \label{ELconnection5} \langle\operatorname{Tr\,}^\bot\mathfrak{T}-\Ht,\, U\rangle\,\<X,Y\rangle + \langle({A}_U + {T}^\sharp_U-\mathfrak{T}_U) Y,\, X\rangle = -(\mathfrak{a}/2)\,\<s(X,U),Y\rangle,\\ \label{ELconnection6} \langle\operatorname{Tr\,}^\top\mathfrak{T}-\Hb,\, X\rangle\,\<U,V\rangle +\langle({{{\tilde A}}}_X + {{{\tilde T^\sharp}}}_X-\mathfrak{T}_X) V,\, U\rangle =-(\mathfrak{a}/2)\,\<s(U,X),V\rangle,\\ \label{ELconnection7} 2\,\langle{{{\tilde T^\sharp}}}_X\,U,\, V\rangle + \langle\mathfrak{T}_U\, V + \mathfrak{T}_V^*\, U,\, X\rangle = (\mathfrak{a}/2)\,\<s(X,U),V\rangle,\\ \label{ELconnection8} 2\,\langle{T}^\sharp_U X,\, Y\rangle + \langle\mathfrak{T}_X Y + \mathfrak{T}_Y^* X,\, U\rangle = 
(\mathfrak{a}/2)\,\<s(U,X),Y\rangle, \end{eqnarray} \end{subequations} for all $X,Y,Z\in\widetilde\mD$ and $U,V,W\in\mD$, see {\rm \cite[Eqs.~(15a-h)]{RZconnection}}, where variations of Lagrangian ${\cal L}$, i.e., spin tensor in {\rm (\ref{ELconnection1}-h)}, are omitted. Here, {\rm (\ref{ELconnection2},d,f,h)} are dual to {\rm (\ref{ELconnection1},c,e,g)}. \end{theorem} \begin{proof} Set $S=\partial_t\mathfrak{T}^t_{\,|\,t=0}$ for a one-parameter family $\mathfrak{T}^t\ (|t|<\varepsilon)$ of $(1,2)$-tensors. Using Proposition~\ref{L-QQ-first} and removing integrals of divergences of compactly supported (in a domain $\Omega$) vector fields, we~get \begin{eqnarray*} &&\quad {\frac{\rm d}{\rm dt}\int_M \bar{\rm S}_{\,\rm mix}(\mathfrak{T}^t)\,{\rm d} \operatorname{vol}_g}\,|_{\,t=0} \\ && =\frac12\int_M \sum\Big\{ \<S_a E_b,E_c\rangle\big(\langle\operatorname{Tr\,}^\bot{\mathfrak T}^*-\Hb, E_c\rangle\,\<E_a, E_b\rangle +\langle\operatorname{Tr\,}^\bot{\mathfrak T}+\Hb, E_b\rangle\,\<E_a, E_c\rangle\big) \\ && +\,\<S_a E_b, {\cal E}_i\rangle\big( \langle\operatorname{Tr\,}^\bot{\mathfrak T}^*+\Ht, {\cal E}_i\rangle\,\<E_a, E_b\rangle -\langle({A}_i - {T}^\sharp_i)E_a, E_b\rangle - \langle\mathfrak{T}_i E_a, E_b\rangle \big) \\ && +\,\<S_a {\cal E}_i, E_b\rangle\big( \langle\operatorname{Tr\,}^\bot{\mathfrak T}-\Ht, {\cal E}_i\rangle\,\<E_a, E_b\rangle + \langle({A}_i + {T}^\sharp_i)E_b, E_a\rangle - \langle\mathfrak{T}_i E_b, E_a\rangle\big) \\ && +\,\<S_a {\cal E}_i, {\cal E}_j\rangle \big( \langle(\tilde{A}_a - \tilde{T}^\sharp_a) {\cal E}_i, {\cal E}_j\rangle -\langle(\tilde{A}_a + \tilde{T}^\sharp_a) {\cal E}_i, {\cal E}_j\rangle - \langle\mathfrak{T}_i{\cal E}_j + \mathfrak{T}^*_j {\cal E}_i, E_a\rangle \big) \\ && +\,\<S_i {\cal E}_j, {\cal E}_k\rangle\big( \langle\operatorname{Tr\,}^\top{\mathfrak T}^*-\Ht, {\cal E}_k\rangle\,\langle{\cal E}_i, {\cal E}_j\rangle +\langle\operatorname{Tr\,}^\top{\mathfrak T} +\Ht, {\cal E}_j\rangle\,\langle{\cal E}_i, {\cal 
E}_k\rangle \big) \\ && +\,\<S_i {\cal E}_j, E_a\rangle\big(\langle\operatorname{Tr\,}^\top{\mathfrak T}^*+\Hb, E_a\rangle\,\langle{\cal E}_i, {\cal E}_j\rangle -\langle(\tilde{A}_a +\tilde{T}^\sharp_a){\cal E}_j, {\cal E}_i\rangle -\langle\mathfrak{T}_a{\cal E}_i, {\cal E}_j\rangle \big) \\ && +\,\<S_i E_a, {\cal E}_j\rangle\big(\langle\operatorname{Tr\,}^\top{\mathfrak T}-\Hb, E_a\rangle\,\langle{\cal E}_i, {\cal E}_j\rangle +\langle(\tilde{A}_a + \tilde{T}^\sharp_a){\cal E}_j, {\cal E}_i\rangle - \langle\mathfrak{T}_a {\cal E}_j, {\cal E}_i\rangle \big) \\ \nonumber && +\,\<S_i E_a, E_b\rangle(\langle({A}_i - {T}^\sharp_i) E_a, E_b\rangle {-}\langle({A}_i + {T}^\sharp_i) E_a, E_b\rangle {-} \langle\mathfrak{T}_a E_b + \mathfrak{T}^*_b E_a, {\cal E}_i\rangle ) \Big\}\,{\rm d}\operatorname{vol}_g. \end{eqnarray*} Since no further assumptions are made about $S$ or $\mathfrak{T}$, all the components $\<S_\mu e_{\lambda}, e_{\rho}\rangle$ are independent and the above formula gives rise to (\ref{ELconnection1}-h), where $X,Y,Z\in\widetilde\mD$ and $U,V,W\in\mD$ are any vectors from an adapted frame. Observe that in every equation from (\ref{ELconnection1}-h) each term contains the same set of those vectors and is trilinear in them, so all these equations in fact hold for all vectors $X,Y,Z\in\widetilde\mD$ and $U,V,W\in\mD$. Further below, we obtain many other formulas from computations in adapted frames, in the same way. \end{proof} Taking the difference of the symmetric (in $X,Y$) parts of (\ref{ELconnection3},e) with $s=0$ yields that $\widetilde{\cal D}$ is totally umbilical; a similar result for $\mD$ follows from the dual equations (e.g., \cite{RZconnection}). For a vacuum space-time (${\cal L}=0$), equations (\ref{ELconnection1}-h) simplify to the following equations (\ref{ELconnectionNew1}-j).
\begin{corollary}[see Theorem~1 in \cite{RZconnection}]\label{T-main1} Let a metric-affine manifold $(M,g,\bar\nabla)$, with contorsion tensor $\mathfrak{T}=\bar\nabla-\nabla$, be endowed with a non-degenerate distribution~$\widetilde{\mD}$. Then $\mathfrak{T}$ is critical for the action \eqref{actiongSmix} with fixed $g$ for all variations of $\mathfrak{T}$ if and only if $\,\widetilde{\mD}$ and $\mD$ are {totally umbilical} and $\mathfrak{T}$ satisfies the following linear algebraic system for all $X,Y\in\widetilde\mD$ and $U,V\in\mD$: \begin{subequations} \begin{eqnarray}\label{ELconnectionNew1} && (\mathfrak{T}_U\, V +\mathfrak{T}^{*}_V\, U)^\top = -2\, {\widetilde{T}}(U, V), \\ \label{ELconnectionNew2} && (\operatorname{Tr\,}^\bot\mathfrak{T}^*)^\top = \Hb = -(\operatorname{Tr\,}^\bot\mathfrak{T})^\top \quad {\rm for~} n>1, \\ \label{ELconnectionNew4} && \mathfrak{T}^\top_U -\mathfrak{T}^{*\top}_U = 2\, {T}^\sharp_U, \\ \label{ELconnectionNew5} && \mathfrak{T}_U^\top + \mathfrak{T}_U^{*\top} = \langle\operatorname{Tr\,}^\bot(\mathfrak{T} +\mathfrak{T}^*),\,U\rangle\operatorname{id}^\top , \\ \label{ELconnectionNew7} && (\operatorname{Tr\,}^\bot(\mathfrak{T} -\mathfrak{T}^*))^\bot = (2-2/n)\,\Ht, \\ \label{ELconnectionNew8} && (\mathfrak{T}_X\, Y +\mathfrak{T}^{*}_Y\, X)^\bot = -2\,\Tb (X, Y), \\ \label{ELconnectionNew9} && (\operatorname{Tr\,}^\top\mathfrak{T}^*)^\bot = \Ht = -(\operatorname{Tr\,}^\top\mathfrak{T})^\bot \quad {\rm for~} p>1, \\ \label{ELconnectionNew11} && \mathfrak{T}_X^\bot -\mathfrak{T}_X^{*\bot} = 2\, {{\tilde T}}^\sharp_X, \\ \label{ELconnection1ab} && {\mathfrak{T}}_X^\bot + \mathfrak{T}_X^{*\bot} = \langle\operatorname{Tr\,}^\top(\mathfrak{T} +\mathfrak{T}^*),\,X\rangle\operatorname{id}^\bot , \\ \label{ELconnectionNew14} && (\operatorname{Tr\,}^\top(\mathfrak{T} -\mathfrak{T}^*))^\top = (2-2/p)\,\Hb .
\end{eqnarray} \end{subequations} \end{corollary} \begin{example}\rm For our $(M^{p+1},g,{\widetilde\mD})$, see Example~\ref{Ex-2-1}, the system (\ref{ELconnection1}-h) reduces to \begin{eqnarray*} \langle\operatorname{Tr\,}^\bot({\mathfrak T}^*+{\mathfrak T}), N\rangle =-(\mathfrak{a}/2)\,\<s(N,N),N\rangle,\\ \langle\operatorname{Tr\,}^\top{\mathfrak T}^* -\Ht,\, W\rangle\,\<U,V\rangle +\langle\operatorname{Tr\,}^\top{\mathfrak T} +\Ht,\, V\rangle\,\<U,W\rangle =-(\mathfrak{a}/2)\,\<s(U,V),W\rangle,\\ \langle\operatorname{Tr\,}^\bot{\mathfrak T}^*,\, U\rangle - \langle{\mathfrak{T}}_U\,N, N\rangle = -(\mathfrak{a}/2)\,\<s(N,N),U\rangle,\\ (\langle\operatorname{Tr\,}^\top{\mathfrak T}^*, N\rangle+\tilde\tau_1)\,\<U,V\rangle -\langle(\tilde{A}_N -\tilde{T}^\sharp_N + {\mathfrak{T}}_N) U,\, V\rangle = -(\mathfrak{a}/2)\,\<s(U,V),N\rangle,\\ \langle\operatorname{Tr\,}^\bot{\mathfrak T},\, U\rangle - \langle{\mathfrak{T}}_U\,N, N\rangle = -(\mathfrak{a}/2)\,\<s(N,U),N\rangle,\\ (\langle\operatorname{Tr\,}^\top{\mathfrak T}, N\rangle-\tilde\tau_1)\,\<U,V\rangle +\langle(\tilde{A}_N +\tilde{T}^\sharp_N-{\mathfrak{T}}_N) V,\, U\rangle =-(\mathfrak{a}/2)\,\<s(U,N),V\rangle,\\ \langle\,2\,\tilde{T} (U, V) + {\mathfrak{T}}_U\, V + {\mathfrak{T}}_V^*\,U,\, N\rangle = (\mathfrak{a}/2)\,\<s(N,U),V\rangle,\\ \langle({\mathfrak{T}} + {\mathfrak{T}}^*)_N\,N,\, U\rangle = (\mathfrak{a}/2)\,\<s(U,N),N\rangle, \end{eqnarray*} where $U,V,W\in\mD$. \end{example} \section{Main results} \label{sec:main} In Section~\ref{sec: 2-1} we consider the total mixed scalar curvature of contorsion tensor for general and particular connections, e.g., metric and statistical. In Section~\ref{sec: 2-2} we consider the total mixed scalar curvature of statistical manifolds endowed with a distribution and metric-affine doubly twisted products. In Section~\ref{sec: 2-3} we consider the total mixed scalar curvature of Riemann-Cartan manifolds endowed with a distribution. 
In~Section~\ref{sec:2-4}, we derive the Euler-Lagrange equations for semi-symmetric connections and present the mixed Ricci tensor explicitly in \eqref{E-Ric-D-semi-sym}. Our aims are to find out which metrics admit critical points of the examined functionals, and which components of $\mathfrak{T}$ in these particular cases determine whether or not its mixed scalar curvature is critical in its class of connections. This might help to achieve a better understanding of both the mixed scalar curvature invariant and the role played by certain components of the contorsion tensor. \subsection{Variational problem with contorsion tensor} \label{sec: 2-1} By Proposition~\ref{L-QQ-first} and \eqref{E-SK}, we have the following decomposition \cite{r-affine} (note that these are terms of $-Q$ in the first line of \eqref{E-defQ}): \[ 2\,{\rm S}_{\,\mathfrak{T}} = \langle\operatorname{Tr\,}^\top\mathfrak{T},\,\operatorname{Tr\,}^\bot\mathfrak{T}^*\rangle +\langle\operatorname{Tr\,}^\bot\mathfrak{T},\,\operatorname{Tr\,}^\top\mathfrak{T}^*\rangle -\langle \mathfrak{T}^\wedge, \mathfrak{T}^* \rangle_{| V} . \] We consider arbitrary variations $\mathfrak{T}(t),\ \mathfrak{T}(0)=\mathfrak{T},\ |t|<\varepsilon$, and variations corresponding to metric and statistical connections, while $\Omega\subset M$ contains the supports of the infinitesimal variations $\partial_t\mathfrak{T}(t)$. In such cases, the Divergence Theorem states that if $X\in\mathfrak{X}_M$ is supported in $\Omega$ then \eqref{E-DivThm} holds.
\begin{theorem}\label{propELSmixI} A pair $(g,\,\mathfrak{T})$ is critical for the action \eqref{actiongISmix} with respect to all variations of $\,\mathfrak{T}$ and $g$ if and only if $\mathfrak{T}$ satisfies the following algebraic systems (for all $X,Y,Z\in\widetilde\mD$ and $U,V,W\in\mD$): \begin{subequations} \begin{eqnarray}\label{ELSmixIadapted} && \operatorname{Tr\,}^\top(\mathfrak{T}_V \mathfrak{T}^\wedge_U) -\frac{1}{2}\,\langle\mathfrak{T}_V\,U +\mathfrak{T}_U\,V,\ \operatorname{Tr\,}^\top \mathfrak{T}^* \rangle = 0,\\ \label{ELSmixImixed} \nonumber &&\langle\operatorname{Tr\,}^\bot \mathfrak{T} -\operatorname{Tr\,}^\top \mathfrak{T},\ \mathfrak{T}^*_Y\, U\rangle -\langle\mathfrak{T}_Y U +\mathfrak{T}_U\, Y,\,\operatorname{Tr\,}^\top \mathfrak{T}^*\rangle - \operatorname{Tr\,}^\bot(\mathfrak{T}^*_Y (\mathfrak{T}^*)^\wedge_{\,U}) \\ &&\quad +\operatorname{Tr\,}^\top \big( \mathfrak{T}^*_Y (\mathfrak{T}^*)^\wedge_{\,U} +\mathfrak{T}_U \mathfrak{T}^\wedge_{\,Y} +\mathfrak{T}_Y \mathfrak{T}^\wedge_{\,U}\big) = 0 \\ \label{ELSmixIadapteddual} && \operatorname{Tr\,}^\top(\mathfrak{T}_Y \mathfrak{T}^\wedge_X) -\frac{1}{2}\,\langle\mathfrak{T}_Y\, X +\mathfrak{T}_X\, Y,\ \operatorname{Tr\,}^\bot \mathfrak{T}^* \rangle = 0, \end{eqnarray} \end{subequations} and \begin{subequations} \begin{eqnarray}\label{E-34} &&(\mathfrak{T}^*_X\,Y +\mathfrak{T}_Y\,X)^\bot =0,\\ && (\mathfrak{T}_U\,V +\mathfrak{T}^*_V\,U)^\top =0, \\ \label{E-34c} &&\<X,Z\rangle\langle\operatorname{Tr\,}^\bot\mathfrak{T}, Y\rangle +\<X,Y\rangle\,\langle\operatorname{Tr\,}^\bot\mathfrak{T}^*, Z\rangle = 0,\\ &&\<U,V\rangle\langle\operatorname{Tr\,}^\top\mathfrak{T}^*,\, W\rangle +\<U,W\rangle\langle\operatorname{Tr\,}^\top\mathfrak{T},\, V\rangle=0,\\ &&\mathfrak{T}_U^\top = \langle\operatorname{Tr\,}^\bot\mathfrak{T},\, U\rangle\operatorname{id}^\top, \\ &&\mathfrak{T}_X^\bot = \langle\operatorname{Tr\,}^\top\mathfrak{T}^*,\, X\rangle\operatorname{id}^\bot, \\ 
&&(\operatorname{Tr\,}^\bot(\mathfrak{T} -\mathfrak{T}^*))^\bot =0,\qquad (\operatorname{Tr\,}^\top(\mathfrak{T} -\mathfrak{T}^*))^\top =0. \end{eqnarray} \end{subequations} Moreover, if $n>1$ and $p>1$ then {\rm (\ref{E-34c},d)} read as \begin{eqnarray} \label{criticaltrIinlargedimensions} (\operatorname{Tr\,}^\bot\mathfrak{T})^\top = 0 = (\operatorname{Tr\,}^\bot\mathfrak{T}^*)^\top,\quad (\operatorname{Tr\,}^\top\mathfrak{T}^*)^\bot = 0 = (\operatorname{Tr\,}^\top\mathfrak{T})^\bot. \end{eqnarray} \end{theorem} \begin{proof} From Proposition~\ref{L-QQ-first} and Lemma~\ref{L-dT-3}, for a $g^\pitchfork$-variation $g_t$ of metric $g$ we obtain \begin{eqnarray}\label{Eq-47} && 2 \, \partial_t {\rm S}_{\,\mathfrak{T}} (g_t) = \partial_t \langle\operatorname{Tr\,}^\top\mathfrak{T},\,\operatorname{Tr\,}^\bot\mathfrak{T}^*\rangle +\partial_t \langle\operatorname{Tr\,}^\bot\mathfrak{T},\,\operatorname{Tr\,}^\top\mathfrak{T}^*\rangle -\partial_t\langle\mathfrak{T}^\wedge, \mathfrak{T}^* \rangle_{| V} \nonumber \\ && = \frac{1}{2} \sum B({\cal E}_i, {\cal E}_j) \big(\langle\operatorname{Tr\,}^\top\mathfrak{T}, \mathfrak{T}^*_i {\cal E}_j - \mathfrak{T}^*_j {\cal E}_i\rangle -\langle\mathfrak{T}_j {\cal E}_i + \mathfrak{T}_i {\cal E}_j , \operatorname{Tr\,}^\top\mathfrak{T}^*\rangle +2\,\langle\mathfrak{T}^*_{j} E_a, \mathfrak{T}_a {\cal E}_i\rangle\big)\nonumber \\ && +\sum B({\cal E}_i, E_b) \big(\langle\operatorname{Tr\,}^\bot\mathfrak{T} - \operatorname{Tr\,}^\top\mathfrak{T},\, \mathfrak{T}^*_b {\cal E}_i\rangle -\langle\mathfrak{T}_b {\cal E}_i + \mathfrak{T}_i E_b, \operatorname{Tr\,}^\top\mathfrak{T}^*\rangle \nonumber \\ && +\,\langle\mathfrak{T}^*_a {\cal E}_i, \mathfrak{T}_{ b} E_a\rangle +\langle\mathfrak{T}^*_{b} E_a, \mathfrak{T}_a {\cal E}_i\rangle +\langle\mathfrak{T}^*_i E_a, \mathfrak{T}_a E_b\rangle -\langle\mathfrak{T}^*_j {\cal E}_i, \mathfrak{T}_b {\cal E}_j\rangle \big). 
\end{eqnarray} Thus, $\partial_t {\rm S}_{\,\mathfrak{T}}(g_t) =0$ if and only if the right-hand side of \eqref{Eq-47} vanishes for all symmetric tensors $B=\partial_t g$. For the $({\mD}\times{\mD})$-part of $B$ we get \begin{equation*} \sum B({\cal E}_i, {\cal E}_j)\big(\frac{1}{2}\,\langle\operatorname{Tr\,}^\top\mathfrak{T}, \mathfrak{T}^*_i {\cal E}_j -\mathfrak{T}^*_j {\cal E}_i\rangle -\frac{1}{2}\,\langle\mathfrak{T}_j {\cal E}_i +\mathfrak{T}_i {\cal E}_j, \operatorname{Tr\,}^\top\mathfrak{T}^*\rangle +\operatorname{Tr\,}^\top(\mathfrak{T}_j \mathfrak{T}^\wedge_i)\big) = 0, \end{equation*} but since $B$ is arbitrary and symmetric, while $\mathfrak{T}^*_i {\cal E}_j -\mathfrak{T}^*_j {\cal E}_i$ is skew-symmetric in $i,j$, this can be written as \eqref{ELSmixIadapted}. For the mixed part of $B$ (i.e., $B$ restricted to the subspace $V$) we get the following Euler-Lagrange equation: \begin{eqnarray*} && \sum B({\cal E}_i, E_b) \big( \langle\operatorname{Tr\,}^\bot\mathfrak{T}, \mathfrak{T}^*_b {\cal E}_i\rangle - \langle\operatorname{Tr\,}^\top\mathfrak{T}, \mathfrak{T}^*_b {\cal E}_i\rangle - \langle\mathfrak{T}_b {\cal E}_i + \mathfrak{T}_i E_b, \operatorname{Tr\,}^\top\mathfrak{T}^*\rangle \\ && +\,\langle\mathfrak{T}^*_a {\cal E}_i, \mathfrak{T}_{ b} E_a\rangle + \langle\mathfrak{T}^*_i E_a, \mathfrak{T}_a E_b\rangle + \langle\mathfrak{T}^*_{b} E_a, \mathfrak{T}_a {\cal E}_i\rangle - \langle\mathfrak{T}^*_j {\cal E}_i, \mathfrak{T}_b {\cal E}_j\rangle \big) = 0. \end{eqnarray*} From this we obtain \eqref{ELSmixImixed}. Taking the equation dual to \eqref{ELSmixIadapted} with respect to interchanging the distributions $\widetilde{\mD}$ and ${\mD}$, we obtain \eqref{ELSmixIadapteddual}, which is the Euler-Lagrange equation for $g^\top$-variations. The proof of (\ref{E-34}-g), see \cite{RZconnection}, is based on the calculation of variations of ${\rm S}_{\,\mathfrak{T}}$ with respect to $\mathfrak{T}$, using \eqref{E-DivThm-2}.
\end{proof} \begin{definition}[see Section~4 in \cite{RZconnection}]\rm The \textit{doubly twisted product} $B\times_{(v,u)} F$ of metric-affine manifolds $(B,g_B,\mathfrak{T}_B)$ and $(F, g_F,\mathfrak{T}_F)$ (or the \textit{metric-affine doubly twisted product}) is a mani\-fold $M=B\times F$ with the metric $g = g^\top + g^\bot$ and the affine connection whose contorsion tensor is $\mathfrak{T}={\mathfrak{T}}^\top+{\mathfrak{T}}^\bot$,~where \begin{eqnarray*} g^\top(X,Y) \hspace*{-2.mm}&=&\hspace*{-2.mm} v^2 g_B(X^\top,Y^\top),\quad g^\bot(X,Y)=u^2 g_F(X^\bot,Y^\bot),\\ {\mathfrak{T}}^\top_XY \hspace*{-2.mm}&=&\hspace*{-2.mm} u^2(\mathfrak{T}_B)_{X^\top}Y^\top,\quad {\mathfrak{T}}^\bot_XY=v^2(\mathfrak{T}_F)_{X^\bot}Y^\bot, \end{eqnarray*} and the warping functions $u,v\in C^\infty(M)$ are positive. \end{definition} From Theorem~\ref{propELSmixI} we obtain the following \begin{corollary} A metric-affine doubly twisted product $B\times_{(v,u)} F$ with $\sum\nolimits_a\varepsilon_a \neq 0 \ne \sum\nolimits_i\varepsilon_i$ is critical for \eqref{actiongISmix} with respect to all variations of $\,\mathfrak{T}$ and $g$ if and only if \begin{equation} \label{BFtraces} \operatorname{Tr\,}\mathfrak{T}_B=0=\operatorname{Tr\,}\mathfrak{T}_F. \end{equation} \end{corollary} \begin{proof} It was proven in \cite[Corollary~13]{RZconnection} that a metric-affine doubly twisted product $B\times_{(v,u)} F$ is critical for \eqref{actiongISmix} with fixed $g$ and for variations of $\,\mathfrak{T}$ if and only if \eqref{BFtraces} holds. It can be easily seen that for such a doubly twisted product satisfying $\operatorname{Tr\,}\mathfrak{T}_B=0=\operatorname{Tr\,}\mathfrak{T}_F$, all terms in (\ref{ELSmixIadapted}-c) vanish.
\end{proof} \begin{corollary} \label{statisticalcritSmixI} A pair $(g, \mathfrak{T})$, where $\mathfrak{T}$ is the contorsion tensor of a statistical connection on $(M, g)$, is critical for the action \eqref{actiongISmix} with respect to all variations of metric, and variations of $\mathfrak{T}$ corresponding to statistical connections, if and only if $\mathfrak{T}$ satisfies the following algebraic system: \begin{subequations} \begin{eqnarray} \label{ELSmixIstat1} && (\operatorname{Tr\,}^\top\mathfrak{T})^\top = 0 = (\operatorname{Tr\,}^\bot\mathfrak{T})^\bot, \\ \label{ELSmixIstat2} && (\mathfrak{T}_X\,Y)^\bot = 0 = (\mathfrak{T}_U\,V)^\top,\quad X,Y\in\widetilde\mD,\ \ U,V\in\mD. \end{eqnarray} \end{subequations} \end{corollary} \begin{proof} By \cite[Corollary~7]{RZconnection}, $\mathfrak{T}$ is critical for the action $\mathfrak{T} \mapsto \int_M {\rm S}_{\,\mathfrak{T}}\,{\rm d}\operatorname{vol}_g$, see \eqref{actiongISmix}, with respect to variations of $\mathfrak{T}$ corresponding to statistical connections if and only if the following equations hold: \begin{subequations} \begin{eqnarray}\label{ELSmixIstatI1} && (\operatorname{Tr\,}^\top \mathfrak{T})^\perp = 0 = (\operatorname{Tr\,}^\perp \mathfrak{T})^\top , \\ \label{ELSmixIstatI2} && (\mathfrak{T}_U V)^\top = \frac{1}{2}\,\<U,V\rangle (\operatorname{Tr\,}^\top \mathfrak{T})^\top , \\ && (\mathfrak{T}_X Y)^\perp = \frac{1}{2}\,\<X,Y\rangle (\operatorname{Tr\,}^\perp \mathfrak{T})^\perp , \end{eqnarray} \end{subequations} for all $X,Y\in\widetilde\mD$ and $U,V\in\mD$. If (\ref{ELSmixIstat1},b) hold, then also (\ref{ELSmixIstatI1}-c) hold; moreover, if \eqref{ELSmixIstat2} is satisfied and $\mathfrak{T}$ corresponds to a statistical connection, then all terms in equations (\ref{ELSmixIadapted}-c) vanish.
On the other hand, if (\ref{ELSmixIstatI1}-c) hold, then \eqref{ELSmixIadapted} becomes \begin{equation} \label{ELSmixIadaptedstat} \frac{n}{4} (\operatorname{Tr\,}^\perp \mathfrak{T})^{ \perp \flat} \otimes (\operatorname{Tr\,}^\perp \mathfrak{T})^{ \perp \flat} - \frac{3}{4} \langle (\operatorname{Tr\,}^\top \mathfrak{T})^\top , (\operatorname{Tr\,}^\top \mathfrak{T})^\top \rangle\, g^\perp =0, \end{equation} and \eqref{ELSmixIadapteddual} becomes dual to the above. If $p>1$ and $\langle (\operatorname{Tr\,}^\perp \mathfrak{T})^{ \perp} ,(\operatorname{Tr\,}^\perp \mathfrak{T})^{ \perp}\rangle \ne 0$, then there is $W \in \mD$ such that $\<W ,W \rangle \neq 0$ and $\langle W , (\operatorname{Tr\,}^\perp \mathfrak{T})^{ \perp} \rangle =0$, and evaluating \eqref{ELSmixIadaptedstat} on $W \otimes W$ we obtain $(\operatorname{Tr\,}^\top \mathfrak{T})^\top =0$ and then it also follows from \eqref{ELSmixIadaptedstat} that $(\operatorname{Tr\,}^\perp \mathfrak{T})^{ \perp } =0$. If $p>1$ and $(\operatorname{Tr\,}^\perp \mathfrak{T})^{ \perp} =0$, then we obtain $(\operatorname{Tr\,}^\top \mathfrak{T})^\top =0$ from \eqref{ELSmixIadapted}, as $g^\perp$ is non-degenerate. If $p>1$ and $\langle (\operatorname{Tr\,}^\perp \mathfrak{T})^{\perp}, (\operatorname{Tr\,}^\perp \mathfrak{T})^{\perp} \rangle = 0$ but $(\operatorname{Tr\,}^\perp \mathfrak{T})^{ \perp} \ne 0$, then \eqref{ELSmixIadapted} evaluated on $(\operatorname{Tr\,}^\perp\mathfrak{T})^{\perp}\otimes W$, where $W\in\mD$, implies that \[ \langle(\operatorname{Tr\,}^\top\mathfrak{T})^\top, (\operatorname{Tr\,}^\top\mathfrak{T})^\top\rangle\langle(\operatorname{Tr\,}^\perp\mathfrak{T})^{\perp}, W\rangle =0 \] and since $W$ here is arbitrary, it follows that $\langle(\operatorname{Tr\,}^\top\mathfrak{T})^\top, (\operatorname{Tr\,}^\top\mathfrak{T})^\top\rangle = 0$, and then it also follows from \eqref{ELSmixIadaptedstat} that $(\operatorname{Tr\,}^\perp \mathfrak{T})^{\perp}=0$. 
Equalities $(\operatorname{Tr\,}^\top \mathfrak{T})^\top = 0 = (\operatorname{Tr\,}^\perp \mathfrak{T})^{ \perp }$ together with (\ref{ELSmixIstatI2},c) yield \eqref{ELSmixIstat2}. If $n>1$, we can similarly use \eqref{ELSmixIadapteddual} to the same effect, and if $n=p=1$ then \eqref{ELSmixIadaptedstat} becomes \[ \langle(\operatorname{Tr\,}^\perp \mathfrak{T})^{ \perp} , (\operatorname{Tr\,}^\perp \mathfrak{T})^{ \perp} \rangle = 3 \langle (\operatorname{Tr\,}^\top \mathfrak{T})^{ \top} , (\operatorname{Tr\,}^\top \mathfrak{T})^{ \top} \rangle, \] which together with its dual implies $(\operatorname{Tr\,}^\top \mathfrak{T})^\top =0 = (\operatorname{Tr\,}^\perp \mathfrak{T})^{ \perp }$, and again we obtain \eqref{ELSmixIstat2} from (\ref{ELSmixIstatI2},c). \end{proof} Next we consider metric connections. Using (\ref{E-34}-g), we obtain the following. \begin{corollary}[see \cite{RZconnection}] A contorsion tensor $\mathfrak{T}$ corresponding to a {metric connection} is critical for the action \eqref{actiongISmix} with fixed $g$ for all variations of $\mathfrak{T}$ corresponding to metric connections if and only if $\mathfrak{T}$ satisfies the following linear algebraic system (for all $X,Y\in\widetilde\mD$ and $U,V\in\mD$): \begin{subequations} \begin{eqnarray} \label{Tmetriccrit1} && (\mathfrak{T}_Y\, X +\mathfrak{T}^*_X\, Y)^\bot = 0 = (\mathfrak{T}_U\, V +\mathfrak{T}^*_V\, U)^\top,\\ \label{Tmetriccrit3} && (\operatorname{Tr\,}^\top\mathfrak{T})^\top = 0 = (\operatorname{Tr\,}^\bot\mathfrak{T})^\bot,\\ \label{Tmetriccrit5} && \mathfrak{T}_X^\bot = 0 = \mathfrak{T}_U^\top, \\ \label{SmixImetricdimnp} && (\operatorname{Tr\,}^\bot\mathfrak{T})^\top=0\quad \text{for}\ n>1,\quad (\operatorname{Tr\,}^\top\mathfrak{T})^\bot=0\quad \text{for}\ p>1.
\end{eqnarray} \end{subequations} \end{corollary} \begin{corollary} A pair $(g, \mathfrak{T})$, where $\mathfrak{T}$ is the contorsion tensor of a metric connection on $(M, g)$, is critical for \eqref{actiongSmix} with respect to all variations of metric, and variations of $\mathfrak{T}$ corresponding to metric connections, if and only if {\rm (\ref{Tmetriccrit1}-d)} are satisfied and the following algebraic system (where $X\in\widetilde\mD$ and $U\in\mD$) holds: \begin{eqnarray*} \nonumber && \operatorname{Tr\,}^\top((\mathfrak{T}_U)^\bot(\mathfrak{T}_X^\wedge)^\bot +2\,(\mathfrak{T}_U^\wedge)^\top(\mathfrak{T}_X)^\top) -\operatorname{Tr\,}^\bot((\mathfrak{T}_U^\wedge)^\top(\mathfrak{T}_X)^\top) +\langle\operatorname{Tr\,}^\bot\mathfrak{T},\, (\mathfrak{T}_X\, U)^\top\rangle = 0 . \end{eqnarray*} \end{corollary} \begin{proof} In \eqref{ELSmixIadapted}, by \eqref{Tmetriccrit5} we have $\langle\mathfrak{T}_a {\cal E}_i, E_b\rangle =0 = \langle\mathfrak{T}_a\, {\cal E}_i,\, {\cal E}_k\rangle$, and by \eqref{Tmetriccrit3} also $\langle\mathfrak{T}^*_a\, E_a,\, E_b\rangle=0$. Hence, what remains in \eqref{ELSmixIadapted} is \[ \langle(\mathfrak{T}_j\, {\cal E}_i +\mathfrak{T}_i\, {\cal E}_j)^\bot,\, \operatorname{Tr\,}^\top\mathfrak{T}^*\rangle = 0,\quad \forall\, i,j. \] By \eqref{SmixImetricdimnp}, this is an identity if $p>1$. On the other hand, for $p=1$ it reduces to \[ 2 \langle\mathfrak{T}_1\,{\cal E}_1, {\cal E}_1\rangle\langle\operatorname{Tr\,}^\top\mathfrak{T}^*, {\cal E}_1\rangle = 0, \] and by \eqref{Tmetriccrit3}, $\langle\mathfrak{T}_1\, {\cal E}_1,\, {\cal E}_1\rangle=0$. Therefore, \eqref{ELSmixIadapted} is satisfied if (\ref{Tmetriccrit1}-c) and the second equation in \eqref{SmixImetricdimnp} are satisfied. Using the dual parts of (\ref{Tmetriccrit1}-d), we obtain an analogous result for \eqref{ELSmixIadapteddual}.
From (\ref{Tmetriccrit1}-d) we~have, for all $b,c,i,k$, \begin{eqnarray*} && \sum \langle\mathfrak{T}_a E_a , E_c\rangle=0,\quad \langle\mathfrak{T}^*_b {\cal E}_i,\,{\cal E}_k\rangle=0,\quad \sum \langle\mathfrak{T}^*_a E_a, E_c\rangle=0,\\ && \langle\mathfrak{T}^*_i E_b, E_c\rangle=0,\quad \langle\mathfrak{T}_b {\cal E}_i,\,{\cal E}_k\rangle=0. \end{eqnarray*} Thus, in \eqref{ELSmixImixed} we have only the following terms: \begin{eqnarray*} && \sum \langle\mathfrak{T}_j {\cal E}_j, E_c\rangle \langle\mathfrak{T}^*_b {\cal E}_i, E_c\rangle + \sum \langle\mathfrak{T}^*_a {\cal E}_i, E_c\rangle \langle\mathfrak{T}_{ b} E_a, E_c\rangle + \sum \langle\mathfrak{T}^*_i E_a, {\cal E}_k\rangle \langle\mathfrak{T}_a E_b, {\cal E}_k\rangle\\ && +\, \sum \langle\mathfrak{T}^*_{b} E_a, E_c\rangle \langle\mathfrak{T}_a {\cal E}_i, E_c\rangle - \sum \langle\mathfrak{T}^*_j {\cal E}_i, E_c\rangle \langle\mathfrak{T}_b {\cal E}_j, E_c\rangle = 0 \end{eqnarray*} for all $b,i$. Using $\mathfrak{T}^* = -\mathfrak{T}$ (metric compatibility of $\mathfrak{T}$), we obtain that \eqref{ELSmixImixed} is equivalent to \begin{eqnarray*} && \sum \langle\mathfrak{T}_j {\cal E}_j, E_c\rangle \langle\mathfrak{T}_b {\cal E}_i, E_c\rangle +2 \sum \langle\mathfrak{T}_a {\cal E}_i, E_c\rangle \langle\mathfrak{T}_{ b} E_a, E_c\rangle \\ && + \sum \langle\mathfrak{T}_i E_a, {\cal E}_j\rangle \langle\mathfrak{T}_a E_b, {\cal E}_j\rangle - \sum \langle\mathfrak{T}_j {\cal E}_i, E_c\rangle \langle\mathfrak{T}_b {\cal E}_j, E_c\rangle = 0 \end{eqnarray*} for all $b,i$. This completes the proof. \end{proof} The results obtained when considering the action \eqref{actiongISmix} on metric-affine doubly twisted products allow us to determine which of these structures are critical for the action \eqref{actiongSmix}.
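\begin{remark}\rm
The role of the trace condition \eqref{BFtraces} can be illustrated by a direct computation (a sketch, assuming the convention $\operatorname{Tr\,}^\top\mathfrak{T}=\sum_a\varepsilon_a\,\mathfrak{T}_{E_a}E_a$ in a $g$-orthonormal adapted frame). For a metric-affine doubly twisted product, if $\{e_a\}$ is a local $g_B$-orthonormal frame of $TB$, then $\{E_a=v^{-1}e_a\}$ is $g^\top$-orthonormal with the same signs $\varepsilon_a$, hence
\[
\operatorname{Tr\,}^\top\mathfrak{T} = \sum\nolimits_a \varepsilon_a\,\mathfrak{T}^\top_{E_a}E_a
= \frac{u^2}{v^2}\sum\nolimits_a \varepsilon_a\,(\mathfrak{T}_B)_{e_a}e_a
= \frac{u^2}{v^2}\operatorname{Tr\,}\mathfrak{T}_B,
\]
and, dually, $\operatorname{Tr\,}^\bot\mathfrak{T} = (v^2/u^2)\operatorname{Tr\,}\mathfrak{T}_F$. Thus \eqref{BFtraces} is equivalent to $\operatorname{Tr\,}^\top\mathfrak{T}=0=\operatorname{Tr\,}^\bot\mathfrak{T}$: the warping functions only rescale the traces and do not affect their vanishing.
\end{remark}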
\begin{proposition} A metric-affine doubly twisted product $B\times_{(v,u)} F$ is critical for \eqref{actiongSmix} with respect to all variations of $g$ and $\mathfrak{T}$ if and only if \eqref{BFtraces} holds~and \begin{equation} \label{BFtotallygeodesic} \nabla^\top u =0 =\nabla^\perp v. \end{equation} \end{proposition} \begin{proof} It was proven in \cite{RZconnection} that a metric-affine doubly twisted product $B\times_{(v,u)} F$ is critical for the action \eqref{actiongSmix} with fixed $g$, with respect to all variations of $\,\mathfrak{T}$, if and only if \eqref{BFtotallygeodesic} and \eqref{BFtraces} hold. Note that \eqref{BFtotallygeodesic} means that $TB$ and $TF$ as (integrable) distributions on $B\times_{(v,u)} F$ are totally geodesic. It can be easily seen that if \eqref{BFtraces} holds and the distributions are integrable and totally geodesic, then all terms in all variation formulas obtained in Lemma \ref{L-dT-3} vanish. \end{proof} \subsection{Statistical connections} \label{sec: 2-2} We define a new tensor $\Theta = \mathfrak{T} -\mathfrak{T}^* +\mathfrak{T}^\wedge - \mathfrak{T}^{* \wedge }$, composed of some terms appearing in \eqref{E-defQ}. \begin{theorem}\label{propstatcrit} Let $(g, \mathfrak{T})$ correspond to a statistical connection. Then $(g, \mathfrak{T})$ is critical for \eqref{actiongSmix} with respect to volume-preserving variations of $g$ and variations of $\mathfrak{T}$ among all $(1,2)$-tensors if and only if the following conditions are satisfied: 1. $\widetilde{\mD}$ and ${\mD}$ are both integrable, 2. $(\operatorname{Tr\,}^\top \mathfrak{T})^\top = 0 = (\operatorname{Tr\,}^\perp \mathfrak{T})^\perp$, see (\ref{ELSmixIstat1},b), 3. $\mathfrak{T}_X : \widetilde{\mD} \rightarrow \widetilde{\mD}$ for all $X\in\widetilde\mD$, 4. $\mathfrak{T}_U : {\mD} \rightarrow {\mD}$ for all $U\in\mD$, 5. if $n>1$ then ${\tilde H}=0$, 6. if $p>1$ then $H=0$, 7.
$\widetilde{\mD}$ and ${\mD}$ are both totally umbilical, \newline and the following equations $($trivial when $n>1$ and $p>1$, see 5. and 6. above$)$ hold for some $\lambda\in\mathbb{R}$: \begin{subequations} \begin{eqnarray}\label{E-main-0iumbint} && \frac{n-1}{n}\,H^\flat\otimes H^\flat -\frac{1}{2} \big(\frac{n-1}{n}\,\<H,H\rangle +\frac{p-1}{p}\,\langle{\tilde H}, {\tilde H}\rangle +\frac{2(p-1)}{p}\,\operatorname{div}{\tilde H}\big)\,g^\perp = \lambda\,g^\perp, \\ \label{E-main-0iiumbint} && \frac{n-1}{n}\,\big({\tilde \delta}_H -\frac{p-1}{p}\,H^\flat \odot {\tilde H}^\flat\big) =0, \\ \label{E-main-0iiiumbint} && \frac{p-1}{p}\,{\tilde H}^\flat \otimes {\tilde H}^\flat -\frac{1}{2}\,\big(\frac{p-1}{p}\,\langle{\tilde H},{\tilde H}\rangle +\frac{n-1}{n}\,\<H, H\rangle +\frac{2(n-1)}{n}\,\operatorname{div} H \big) g^\top = \lambda\,g^\top . \end{eqnarray} \end{subequations} \end{theorem} \begin{proof} For any $\mathfrak{T}$ that corresponds to a statistical connection, we have $\mathfrak{T}^\wedge = \mathfrak{T}$ and $\mathfrak{T}^* = \mathfrak{T}$. Condition~1 follows from (\ref{ELconnectionNew1},f) and $\mathfrak{T} = \mathfrak{T}^\wedge$. Then (\ref{ELconnectionNew1},f), condition 1 and \[ \langle\mathfrak{T}_i {\cal E}_j, E_a\rangle=\langle\mathfrak{T}^*_j {\cal E}_i, E_a\rangle=\langle\mathfrak{T}_a {\cal E}_i, {\cal E}_j\rangle,\quad\forall\, i,j,a, \] yield condition 3. We get condition 5 from $\mathfrak{T}=\mathfrak{T}^*$ and \eqref{ELconnectionNew2}. Conditions 4 and 6 are dual to conditions 3 and 5, and are obtained analogously. Condition 2 follows from $\mathfrak{T}= \mathfrak{T}^*$, condition 3 (and its dual condition 5) and \eqref{ELconnectionNew4} (and its dual \eqref{ELconnectionNew9}). Condition 7 follows from Corollary~\ref{T-main1}. Let $g_t$ be a $g^\pitchfork$-variation of $g$. 
Although for statistical manifolds, \eqref{E-Q1Q2-gen} reads as \begin{equation}\label{eqvarstat} \bar{{\rm S}}_{\,\rm mix} -{{\rm S}}_{\,\rm mix} = {{\rm S}}_{\mathfrak{T}} = \langle\operatorname{Tr\,}^\top\mathfrak{T},\,\operatorname{Tr\,}^\bot\mathfrak{T}\rangle +\frac12\,\langle\mathfrak{T},\,\mathfrak{T}\rangle_{\,|\,V} , \end{equation} we cannot vary this formula with respect to metric with fixed $\mathfrak{T}$, because when $g$ changes, $\mathfrak{T}$ may no longer correspond to statistical connections (condition $\mathfrak{T} = \mathfrak{T}^*$ may not be preserved by the variation). Instead, we use Lemma~\ref{L-dT-3} and derive from \eqref{dtIIproduct} for $\mathfrak{T}$ corresponding to a statistical connection (for which $\mathfrak{T} = \mathfrak{T}^* = \mathfrak{T}^\wedge$ and $\Theta=0$) that \begin{eqnarray*} && \partial_t \langle\mathfrak{T}^*, \mathfrak{T}^\wedge\rangle_{\,|\,V} =\sum B({\cal E}_i, E_b)\big(\langle\mathfrak{T}_j {\cal E}_i, \mathfrak{T}_b {\cal E}_j\rangle -3 \langle\mathfrak{T}_a {\cal E}_i, \mathfrak{T}_{ b} E_a\rangle\big) -\sum B({\cal E}_i, {\cal E}_j) \langle\mathfrak{T}_{j} E_a, \mathfrak{T}_a {\cal E}_i\rangle . \end{eqnarray*} From conditions 3-4: $\partial_t \langle\mathfrak{T}^*, \mathfrak{T}^\wedge\rangle_{\,|\,V} = 0$. From \eqref{dtThetaA} with $\Theta=0$ we have \begin{eqnarray*} && \partial_t \langle \Theta, A \rangle = 2\sum B({\cal E}_j, E_b) \big(\langle h (E_a, E_b), {\cal E}_i\rangle \langle\mathfrak{T}_a {\cal E}_i, {\cal E}_j\rangle -\langle h (E_a, E_c), {\cal E}_j\rangle \langle\mathfrak{T}_a E_b, E_c\rangle\big) \nonumber \\ && -\,2\sum B({\cal E}_i, {\cal E}_j) \langle h (E_a, E_b), {\cal E}_i\rangle \langle\mathfrak{T}_a {\cal E}_j, E_b\rangle . 
\end{eqnarray*} For a totally umbilical distribution, the last equation further simplifies to \begin{eqnarray*} && \partial_t \langle \Theta, A \rangle = \frac{2}{n} \sum B({\cal E}_j, E_b) \big(\langle H, {\cal E}_i\rangle \langle\mathfrak{T}_b {\cal E}_i, {\cal E}_j\rangle -\langle H, {\cal E}_j\rangle \langle\mathfrak{T}_a E_b, E_a\rangle \big) \nonumber \\ && -\frac{2}{n} \sum B({\cal E}_i, {\cal E}_j) \langle H, {\cal E}_i\rangle \langle\mathfrak{T}_a {\cal E}_j, E_a\rangle . \end{eqnarray*} From conditions 2-4 we obtain in the above $\partial_t \langle \Theta, A \rangle = 0$. For integrable distributions, since $\Theta=0$, we have \begin{eqnarray*} \partial_t \langle \Theta, T^\sharp \rangle \hspace*{-2.mm}&=&\hspace*{-2.mm} 0,\quad \partial_t \langle \Theta, {\tilde T}^\sharp \rangle = 0, \end{eqnarray*} and from \eqref{dtThetatildeA}, with $\Theta=0$ and totally umbilical distributions, we have \begin{eqnarray*} \partial_t \langle \Theta, {\tilde A} \rangle \hspace*{-2.mm}&=&\hspace*{-2.mm} \frac{2}{p} \sum B({\cal E}_j, E_b) \big(\langle{\tilde H}, E_a\rangle \langle\mathfrak{T}_a {\cal E}_j, E_b\rangle -\langle{\tilde H}, E_b\rangle \langle\mathfrak{T}_j {\cal E}_i, {\cal E}_i\rangle \big) \\ \hspace*{-1.5mm}&+&\hspace*{-1.5mm}\frac{2}{p} \sum B({\cal E}_i, {\cal E}_j) \langle{\tilde H}, E_a\rangle \langle\mathfrak{T}_a {\cal E}_j, {\cal E}_i\rangle . \end{eqnarray*} From conditions 3-4 and 2 we get in the above \begin{equation*} \partial_t \langle \Theta, {\tilde A}\rangle = -\frac{2}{p}\sum B({\cal E}_j, E_b) \langle{\tilde H}, E_b\rangle \langle\mathfrak{T}_i {\cal E}_i, {\cal E}_j\rangle=0.
\end{equation*} From conditions 3-4, using \eqref{dttracetopI} and \eqref{dttraceperpI}, we get \begin{eqnarray*} && \partial_t \langle\operatorname{Tr\,}^\top \mathfrak{T}, \operatorname{Tr\,}^\perp \mathfrak{T}^*\rangle = -\sum B({\cal E}_i, E_b) \langle\operatorname{Tr\,}^\top\mathfrak{T}, E_c\rangle \<E_c, \mathfrak{T}_b {\cal E}_i\rangle =0 ,\\ && \partial_t \langle\operatorname{Tr\,}^\top \mathfrak{T}^*, \operatorname{Tr\,}^\perp \mathfrak{T}\rangle =\!\sum B({\cal E}_j, E_b) \langle\operatorname{Tr\,}^\bot\mathfrak{T} -2\operatorname{Tr\,}^\top\mathfrak{T},\, \mathfrak{T}_b {\cal E}_j\rangle -\!\sum B({\cal E}_i, {\cal E}_j) \langle\mathfrak{T}_j {\cal E}_i, \operatorname{Tr\,}^\top\mathfrak{T}\rangle . \end{eqnarray*} From conditions 3-4 and 2 we get $\partial_t \langle\operatorname{Tr\,}^\top \mathfrak{T}^*, \operatorname{Tr\,}^\perp \mathfrak{T}\rangle = 0$. From $\mathfrak{T}^* = \mathfrak{T}$, using \eqref{dtIEaH}, we obtain \begin{eqnarray*} && \partial_t \langle \operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T}), {\tilde H} -H\rangle =\sum B({\cal E}_i, {\cal E}_j) \langle\operatorname{Tr\,}^\top\mathfrak{T}, {\cal E}_j\rangle \langle{\cal E}_i, H\rangle\\ && +\sum B({\cal E}_j, E_b) \big( \langle\mathfrak{T}_b {\cal E}_j, {\tilde H} -H\rangle +\langle\operatorname{Tr\,}^\top\mathfrak{T}, E_b\rangle \langle{\cal E}_j, H\rangle -\langle\operatorname{Tr\,}^\top\mathfrak{T}, {\cal E}_j\rangle \<E_b, {\tilde H}\rangle \big) . \end{eqnarray*} From conditions 3-4 and 2 we get $\partial_t \langle\operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T}), {\tilde H} -H\rangle = 0$.
Similarly, from \eqref{dtIeiH} we obtain \begin{eqnarray*} && \partial_t \langle\operatorname{Tr\,}^\bot(\mathfrak{T}^* -\mathfrak{T}), {\tilde H} -H\rangle = \sum B({\cal E}_i, {\cal E}_j) \big(\langle\mathfrak{T}_i {\cal E}_j, {\tilde H} -H\rangle + \langle\operatorname{Tr\,}^\bot\mathfrak{T}, {\cal E}_i\rangle \langle H, {\cal E}_j\rangle \big)\\ && +\!\sum B({\cal E}_j, E_b) \big(\langle\operatorname{Tr\,}^\bot\mathfrak{T}, E_b\rangle \langle H, {\cal E}_j\rangle + \langle\mathfrak{T}_j E_b, {\tilde H} -H\rangle -\langle\operatorname{Tr\,}^\bot\mathfrak{T}, {\cal E}_j\rangle \langle{\tilde H}, E_b\rangle \big) . \end{eqnarray*} From conditions 3-4 and 2 we get in the above \begin{equation*} \partial_t \langle\operatorname{Tr\,}^\bot(\mathfrak{T}^* -\mathfrak{T}), {\tilde H} -H\rangle = - \sum B({\cal E}_i, {\cal E}_j)\langle\mathfrak{T}_j\, {\cal E}_i, H\rangle . \end{equation*} By condition 6 we have $H=0$ if $p>1$, and if $p=1$ we only have $i=j=k=1$ and, by condition~2, \[ \langle\mathfrak{T}_j {\cal E}_i, {\cal E}_k\rangle=\langle\operatorname{Tr\,}^\perp \mathfrak{T}, \,{\cal E}_1\rangle=0. \] Hence, for $\mathfrak{T}$ corresponding to a statistical connection satisfying the assumptions, any variation of $\overline{{\rm S}}_{\rm mix}$ with respect to $g$ is just a variation of ${\rm S}_{\rm mix}$ with respect to $g$. Thus, the remaining equations (\ref{E-main-0iumbint}-c) are the equations of Theorem~\ref{T-main00}, written for the case when both distributions are integrable and totally umbilical. \end{proof} \begin{corollary} Let $M$ be a closed manifold. Then $(g, \mathfrak{T})$, where $\mathfrak{T}$ corresponds to a statistical connection on $(M,g)$, is critical for the action \eqref{actiongSmix} with respect to all variations of $g$ and $\mathfrak{T}$ if and only if $(g, \mathfrak{T})$ satisfies conditions 1-7 of Theorem~\ref{propstatcrit}; furthermore, either $n=p=1$ or $H=0={\tilde H}$. \end{corollary} \begin{proof} Clearly, (\ref{E-main-0iumbint}-c) hold when $n=p=1$.
If $n,p>1$ then conditions 5 and 6 imply $H={\tilde H} =0$. Suppose that $n>1$, $p=1$ and $H \neq 0$, and let $N \in {\cal D}$ be a local unit vector field. Then, evaluating \eqref{E-main-0iumbint} on $N \otimes N$, we obtain \begin{equation} \label{H2const} \frac{n-1}{2n}\,\<H,H\rangle = \lambda. \end{equation} For $p=1$ we have $H = - (\operatorname{div} N) N$ and $\int_M \tau_1 \,{\rm d} \operatorname{vol}_g =0$ for $\tau_1=\<H,N\rangle$, see, e.g., \cite{RWa-1}. The integral formula shows that $\tau_1$ vanishes somewhere on $M$. On the other hand, \eqref{H2const} yields that $\<H,H\rangle = \tau_1^2$ is constant on $M$, hence $H=0$. Since $n>1$, condition 5 in Theorem~\ref{propstatcrit} implies also ${\tilde H}=0$. \end{proof} Equation \eqref{eqvarstat} and Corollary \ref{statisticalcritSmixI} imply the following \begin{corollary} Let $(g, \mathfrak{T})$ correspond to a statistical connection. Then $(g, \mathfrak{T})$ is critical for the action \eqref{actiongSmix} with respect to all variations of metric and variations of $\mathfrak{T}$ corresponding to statistical connections if and only if {\rm(\ref{ELSmixIstat1},b)} and the equations of Theorem~\ref{T-main00} hold. \end{corollary} \subsection{Metric connections} \label{sec: 2-3} Here, we consider $g$ and $\mathfrak{T}$ as independent variables in the action \eqref{actiongSmix}; hence, for every pair $(g,\mathfrak{T})$ critical for \eqref{actiongSmix}, the contorsion tensor $\mathfrak{T}$ must be critical for \eqref{actiongSmix} with fixed $g$, and thus satisfy the conditions of Corollary~\ref{T-main1}. Using this fact, we characterize those critical values of \eqref{actiongSmix} that are attained on the set of pairs $(g, \mathfrak{T})$, where $\mathfrak{T}$ is the contorsion tensor of a metric (in particular, adapted) connection for $g$. \begin{proposition}\label{propQ} Let the contorsion tensor $\mathfrak{T}$ of a metric connection $\bar\nabla$ be critical for the action \eqref{actiongSmix} with fixed $g$.
Then $\widetilde{\mD}$ and $\mD$ are both totally umbilical and for $Q$ given in \eqref{E-defQ} we have \begin{eqnarray} \label{criticalQmetricconnection} && \frac12\,Q = \frac{2n-1}{n}\,\langle\operatorname{Tr\,}^\top{\mathfrak{T}}, H\rangle + \frac{2p-1}{p}\,\langle\operatorname{Tr\,}^\perp \mathfrak{T} , {\tilde H}\rangle \nonumber \\ && +\,\frac{p-1}{p}\,\langle{\tilde H},{\tilde H}\rangle + \frac{n-1}{n}\,\<H, H\rangle + \<T, T\rangle + \langle {\tilde T}, {\tilde T} \rangle . \end{eqnarray} \end{proposition} \begin{proof} By Corollary~\ref{T-main1}, both distributions are totally umbilical. In this case, using (\ref{ELconnectionNew1}-j), we have \begin{eqnarray*} \langle\operatorname{Tr\,}^\top({\mathfrak{T}}- {\mathfrak{T}}^*), H -{\tilde H}\rangle \hspace*{-2.mm}&=&\hspace*{-2.mm} 2\,\langle \operatorname{Tr\,}^\top{\mathfrak{T}}, H\rangle -2\,\frac{p-1}{p}\,\langle{\tilde H},{\tilde H}\rangle,\\ \langle\operatorname{Tr\,}^\perp({\mathfrak{T}} -{\mathfrak{T}}^*),\, H -{\tilde H}\rangle \hspace*{-2.mm}&=&\hspace*{-2.mm} 2\frac{n-1}{n}\,\langle H, H\rangle -2\,\langle\operatorname{Tr\,}^\perp \mathfrak{T}, {\tilde H}\rangle, \\ -\langle\operatorname{Tr\,}^\top{\mathfrak{T}},\,\operatorname{Tr\,}^\bot{\mathfrak{T}}^*\rangle \hspace*{-2.mm}&=&\hspace*{-2.mm} \frac{n-1}{n}\,\langle\operatorname{Tr\,}^\top \mathfrak{T}, H\rangle +\frac{p-1}{p}\,\langle\operatorname{Tr\,}^\perp\mathfrak{T}, {\tilde H}\rangle , \\ -\langle\operatorname{Tr\,}^\perp{\mathfrak{T}}, \operatorname{Tr\,}^\top{\mathfrak{T}}^*\rangle \hspace*{-2.mm}&=&\hspace*{-2.mm} \frac{p-1}{p}\,\langle\operatorname{Tr\,}^\perp \mathfrak{T}, {\tilde H}\rangle +\frac{n-1}{n}\,\langle\operatorname{Tr\,}^\top \mathfrak{T}, H\rangle . 
\end{eqnarray*} For totally umbilical distributions and critical metric connection, (\ref{ELconnectionNew1}-j) yield \begin{eqnarray*} -2 \langle \mathfrak{T} +\mathfrak{T}^\wedge, A \rangle \hspace*{-2.mm}&=&\hspace*{-2.mm} 4 \langle\operatorname{Tr\,}^\top \mathfrak{T} , H\rangle,\\ -2 \langle \mathfrak{T} +\mathfrak{T}^\wedge, {\tilde A} \rangle \hspace*{-2.mm}&=&\hspace*{-2.mm} 4 \langle \operatorname{Tr\,}^\bot \mathfrak{T} , {\tilde H}\rangle , \\ \langle \mathfrak{T} +\mathfrak{T}^\wedge, T^\sharp \rangle \hspace*{-2.mm}&=&\hspace*{-2.mm} 2 \sum \langle\mathfrak{T}_a {\cal E}_i +\mathfrak{T}_i E_a, T^\sharp_i E_a\rangle = 4 \<T, T\rangle,\\ \langle \mathfrak{T} +\mathfrak{T}^\wedge, {\tilde T^\sharp} \rangle \hspace*{-2.mm}&=&\hspace*{-2.mm} 4\langle {\tilde T}, {\tilde T} \rangle , \\%. \langle{\mathfrak{T}}^*, \mathfrak{T}^\wedge \rangle_{\,|\,V} \hspace*{-2.mm}&=&\hspace*{-2.mm} - \langle \mathfrak{T}, \mathfrak{T}^\wedge \rangle_{\,|\,V} = - 2 \sum \langle\mathfrak{T}_i E_a, \mathfrak{T}_a {\cal E}_i\rangle = - 2\langle T, T\rangle - 2\langle {\tilde T}, {\tilde T} \rangle. \end{eqnarray*} Using the above in \eqref{E-defQ}, and simplifying the expression, completes the proof. \end{proof} \begin{remark}\rm Let $n, p >1$. By \eqref{criticaltrIinlargedimensions}, for critical metric connection equation \eqref{criticalQmetricconnection} becomes \begin{equation*} \frac12\,Q = -\langle H, H\rangle -\langle{\tilde H}, {\tilde H}\rangle + \<T, T\rangle + \langle {\tilde T}, {\tilde T} \rangle. \end{equation*} By this and \eqref{E-Q1Q2-gen}, for any critical metric connection on a closed manifold $(M,g)$ we have \begin{eqnarray*} \int_M \overline{{\rm S}}_{\rm mix}\,{\rm d} \operatorname{vol}_g &\overset{\eqref{eq-ran-ex}}=& \int_M \big(\frac{2n-1}{n} \<H,H\rangle +\frac{2p-1}{p} \langle{\tilde H},{\tilde H}\rangle \big)\,{\rm d} \operatorname{vol}_g . 
\end{eqnarray*} Thus, the right-hand side of the above equation is the only critical value of the action \eqref{actiongSmix} (with fixed $g$ on a closed manifold $M$) restricted to metric connections for $g$. Notice that it does not depend on $\mathfrak{T}$, but only on the pseudo-Riemannian geometry of the distributions on $(M,g)$. Moreover, on a Riemannian manifold it is always non-negative. \end{remark} Consider pairs $(g, \mathfrak{T})$, where $\mathfrak{T}$ corresponds to a metric connection, critical for \eqref{actiongSmix} with respect to $g^\perp$-variations. We apply only adapted variations, as they allow us to obtain the Euler-Lagrange equations without explicit use of an adapted frame or the definition of multiple new tensors. The case of general variations is significantly more involved, mostly due to the complicated form of the tensor $F$, defined by \eqref{formulaF}, that appears in the variation formulas, and it is beyond the scope of this~paper. Set \begin{equation}\label{E-chi} \chi = \sum\nolimits_{a,j} (\mathfrak{T}_j E_a)^{\perp \flat} \odot ({\tilde T^\sharp}_a {\cal E}_j)^{\perp \flat},\qquad \phi(X,Y) = (\mathfrak{T} +\mathfrak{T}^\wedge)_{X^\perp} Y^\perp . \end{equation} Define also $\phi^\top$ and $\phi^\perp$ by $\phi^\top(X,Y) = (\phi(X,Y))^\top$ and $\phi^\perp(X,Y) = (\phi(X,Y))^\perp$ for $X,Y \in \mathfrak{X}_M$.
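Note that, assuming the usual convention $\mathfrak{T}^\wedge_X Y=\mathfrak{T}_Y X$, the tensor $\phi$ in \eqref{E-chi} is symmetric:
\[
\phi(X,Y) = \mathfrak{T}_{X^\perp} Y^\perp +\mathfrak{T}_{Y^\perp} X^\perp = \phi(Y,X),
\qquad X,Y \in \mathfrak{X}_M,
\]
so the terms built from $\phi^\top$ and $\phi^\perp$ pair naturally with the symmetric variation tensor $B=\partial_t g$.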
\begin{theorem} A pair $(g, \mathfrak{T})$, where $\mathfrak{T}$ corresponds to a metric connection on $M$, is critical for \eqref{actiongSmix} with respect to $g^\perp$-variations of metric and arbitrary variations of $\,\mathfrak{T}$ if and only if all the following conditions hold: $\widetilde{\mD}$ and $\mD$ are totally umbilical, the following Euler-Lagrange equation~holds: \begin{subequations} \begin{eqnarray} \label{ELmetric} && -\frac{5n-5}{n} H^\flat \otimes H^\flat -\frac12\,\Upsilon_{T,T} +2\,\widetilde{\cal{T}}^\flat \nonumber\\ && +\big(\frac{3p-3}{p}\operatorname{div} {\tilde H} - \frac{2n-1}{n}\,\langle\operatorname{Tr\,}^\top {\mathfrak{T}},\, H\rangle -\frac{2p-1}{p}\langle\operatorname{Tr\,}^\perp \mathfrak{T},\, {\tilde H}\rangle -\operatorname{div}( (\operatorname{Tr\,}^\perp \mathfrak{T} )^\top) \big)\, g^{\perp} \nonumber \\ && -2 \operatorname{div} \phi^\top +\langle\phi, \frac{3}{2}\,{\tilde H} -\frac{1}{2}\,H +\frac{1}{2}\,(\operatorname{Tr\,}^\top \mathfrak{T})^\perp \rangle + 7 \chi + \frac{3n+2}{n}\,H^\flat \odot (\operatorname{Tr\,}^\top \mathfrak{T})^{\perp \flat} =0, \end{eqnarray} $\mathfrak{T}$ satisfies the following linear algebraic system: \begin{eqnarray} \label{critcontorsion1} && (\mathfrak{T}_V\, U -\mathfrak{T}_U\, V)^\top = 2\,{\tilde T}(U, V), \\ \label{critcontorsiontTab} && \mathfrak{T}_U^\top = T^\sharp_U , \\ \label{trperpIperpH} && (\operatorname{Tr\,}^\bot \mathfrak{T})^\perp = \frac{n-1}{n} H, \\ \label{critcontorsionTab} && (\mathfrak{T}_Y\, X -\mathfrak{T}_X\, Y)^\perp = 2\,T(X, Y), \\ \label{critcontorsionTij} && \mathfrak{T}_X^\bot = \tilde T^\sharp_X, \\ \label{critcontorsionlast} && (\operatorname{Tr\,}^\top \mathfrak{T})^\top = \frac{p-1}{p} {\tilde H}, \end{eqnarray} for all $X,Y\in\widetilde\mD$ and $U,V\in\mD$; and \begin{eqnarray}\label{critcontorsionspec1} (\operatorname{Tr\,}^\top \mathfrak{T} )^\perp = -H,\quad \text{if}\ p>1, \qquad (\operatorname{Tr\,}^\perp \mathfrak{T} )^\top = -{\tilde H},\quad \text{if}\ n>1.
\end{eqnarray} \end{subequations} \end{theorem} \begin{proof} By Corollary~\ref{T-main1}, $\mathfrak{T}$ is critical for \eqref{actiongSmix} (with fixed $g$) if and only if distributions $\widetilde{\mD}$ and $\mD$ are totally umbilical and (\ref{critcontorsion1}-g) (together with \eqref{critcontorsionspec1} if their respective assumptions on $n$ and $p$ hold) are satisfied. Let $\mathfrak{T}$ be critical for the action \eqref{actiongSmix} with fixed $g$. We shall prove that a pair $(g, \mathfrak{T})$ is critical for the action \eqref{actiongSmix} with respect to $g^\perp$-variations of metric if and only if \eqref{ELmetric} holds. By Proposition~\ref{L-QQ-first}, for any variation $g_t$ of metric such that ${\rm supp}(B)\subset\Omega$, and $Q$ in \eqref{E-defQ}, we have \[ \frac{d}{dt}\int_M \big(2(\bar{\rm S}_{\,\rm mix}-{\rm S}_{\,\rm mix})+Q\big)\,{\rm d}\operatorname{vol}_g = \frac{d}{dt}\int_M (\operatorname{div} X)\,{\rm d}\operatorname{vol}_g, \] where $X=(\operatorname{Tr\,}^\top(\mathfrak{T} -\mathfrak{T}^*))^\bot +(\operatorname{Tr\,}^\bot(\mathfrak{T} -\mathfrak{T}^*))^\top$. Although $X$ is not necessarily zero on $\partial\Omega$, we have ${\rm supp}\,(\partial_t X)\subset\Omega$, thus, $\frac{d}{dt}\int_M(\operatorname{div} X)\,{\rm d}\operatorname{vol}_g=0$, see \eqref{E-DivThm-2}, and hence: \begin{eqnarray*} && \frac{d}{dt}\int_M (\,\bar{\rm S}_{\,\rm mix} - {\rm S}_{\,\rm mix})\,{\rm d}\operatorname{vol}_g = -\frac{1}{2} \int_M (\partial_t Q)\,{\rm d}\operatorname{vol}_g -\frac{1}{4} \int_M Q\,\langle B, g \rangle \,{\rm d}\operatorname{vol}_g , \end{eqnarray*} where, up to divergence of a compactly supported vector field, $\partial_t Q$ is given in Lemma~\ref{dtQadapted}. 
For $g^\perp$-variations we get (see \cite[Eq.~(29)]{rz-2} for the more general case of $g^\pitchfork$-variations), \begin{eqnarray*} \frac{d}{dt}\int_M {\rm S}_{\,\rm mix}\ {\rm d}\operatorname{vol}_g &=& \int_M \big\langle -\operatorname{div}{\tilde h} -\widetilde{\cal K}^\flat - H^\flat \otimes H^\flat +\frac{1}{2}\,\Upsilon_{h,h} +\frac12\,\Upsilon_{T,T} +2\,\widetilde{\cal{T}}^\flat \\ \nonumber \hspace*{-1.5mm}&+&\hspace*{-1.5mm}\frac{1}{2}\,\big({{\rm S}}_{\,\rm mix} +\operatorname{div}({\tilde H} - H)\big)\, g^{\perp},\ B\big\rangle\,{\rm d}\operatorname{vol}_g. \end{eqnarray*} For totally umbilical distributions we have \[ \widetilde{\cal K}^\flat =0,\quad \operatorname{div}{\tilde h} = \frac{1}{p}\,(\operatorname{div}{\tilde H})\, g^\perp,\quad \big\langle \frac{1}{2}\,\Upsilon_{h,h}, B \big\rangle = \big\langle \frac{1}{n}\,H^\flat \otimes H^\flat, B\big\rangle . \] Hence, \begin{eqnarray*} && \frac{d}{dt}\int_M {\rm S}_{\,\rm mix}\, {\rm d}\operatorname{vol}_g = \int_M \big\langle \frac12\,\Upsilon_{T,T} +2\,\widetilde{\cal{T}}^\flat -\frac{n-1}{n}\, H^\flat \otimes H^\flat \\ \hspace*{-1.5mm}&+&\hspace*{-1.5mm}\frac{1}{2}\,\big({{\rm S}}_{\,\rm mix} +\operatorname{div}(\frac{p-2}{p} {\tilde H} - H) -\frac{1}{2} Q\,\big)\, g^{\perp} +\frac{1}{2}\,\delta Q,\,B\big\rangle\, {\rm d} \operatorname{vol}_g , \end{eqnarray*} where $\delta Q$ is defined by the equality $\langle\delta Q, B \rangle = -\partial_t Q$, see Lemma~\ref{dtQadapted}. Thus, the Euler-Lagrange equation for $g^\perp$-variations of metric and totally umbilical distributions is the following: \begin{equation}\label{ELQ} -\frac{2n-2}{n} H^\flat \otimes H^\flat +\Upsilon_{T,T} + 4\,\widetilde{\cal{T}}^\flat +\big({{\rm S}}_{\,\rm mix} +\operatorname{div}(\,\frac{p-2}{p} {\tilde H} - H) -\frac{1}{2}\,Q\, \big)\, g^{\perp} +\delta Q = 0 .
\end{equation} Using Lemma~\ref{dtQadapted}, Proposition~\ref{propQ} and \eqref{E-PW-Smix-umb} in \eqref{ELQ}, we obtain \begin{eqnarray*} && -\frac{5n-5}{n} H^\flat \otimes H^\flat -\frac12\,\Upsilon_{T,T} + 2\,\widetilde{\cal{T}}^\flat +\big(\,\frac{3p-3}{p} \operatorname{div} {\tilde H} -\frac{2n-1}{n}\langle\operatorname{Tr\,}^\top {\mathfrak{T}},\, H\rangle \\ && -\frac{2p-1}{p}\,\langle\operatorname{Tr\,}^\perp \mathfrak{T},\, {\tilde H}\rangle -\operatorname{div}((\operatorname{Tr\,}^\perp \mathfrak{T} )^\top) \big )\, g^{\perp} -2 \operatorname{div} \phi^\top \\ && +\langle \phi,\, \frac{p+2}{p}\,{\tilde H} -\frac{1}{2} H +\frac{1}{2} \operatorname{Tr\,}^\top \mathfrak{T} \rangle + 7 \chi + \frac{3n+2}{n}\,H^\flat \odot (\operatorname{Tr\,}^\top \mathfrak{T})^{\perp \flat} =0. \end{eqnarray*} By \eqref{critcontorsionlast}, from the above we get \eqref{ELmetric}. \end{proof} \begin{remark}\label{remarkvolpreserving}\rm Note that for volume-preserving variations, the right hand sides of \eqref{ELmetric} and \eqref{ELQ} should be $\lambda\,g^\perp$, with $\lambda\in\mathbb{R}$ being an arbitrary constant \cite{rz-2}. This obviously applies also to the special cases of the Euler-Lagrange equation \eqref{ELmetric} discussed below. If $p>1$ and $n>1$ then \eqref{ELmetric} can be written as \begin{eqnarray}\label{ELmetricnpbig} && \frac{3-8n}{n} H^\flat \otimes H^\flat -\frac12\,\Upsilon_{T,T} +2\,\widetilde{\cal{T}}^\flat -2 \operatorname{div} \phi^\top +\langle \phi, \frac{3}{2} {\tilde H} - H \rangle + 7 \chi \nonumber \\ && +\,\big(\frac{4p-3}{p} \operatorname{div} {\tilde H} +\frac{2n-1}{n}\langle H, H\rangle +\frac{2p-1}{p}\langle{\tilde H}, {\tilde H}\rangle\big)\,g^{\perp} = 0 . 
\end{eqnarray} \end{remark} Taking trace of \eqref{ELmetricnpbig} and using (\ref{trperpIperpH},g--i) and equalities $\operatorname{Tr\,}_g\Upsilon_{T,T} = 2\,\<T, T\rangle$ and $\operatorname{Tr\,}_g\widetilde{\cal{T}}^\flat = -\langle{\tilde T}, {\tilde T}\rangle$, we obtain the following result. \begin{corollary}\label{corELtrace} Let a pair $(g, \mathfrak{T})$, where $g$ is a pseudo-Riemannian metric on $M$ and $\mathfrak{T}$ corresponds to a metric connection, be critical for \eqref{actiongSmix} with respect to $g^\perp$-variations of metric and arbitrary variations of $\,\mathfrak{T}$. Then for $n,p>1$ we have \begin{equation}\label{ELmetrictracenpbig} \frac{(2n-1)(p-5)}{n}\,\langle H,H \rangle -\<T, T\rangle -2\langle{\tilde T}, {\tilde T}\rangle + (4p-1) \operatorname{div} {\tilde H} + 2(p-2)\langle{\tilde H}, {\tilde H}\rangle + 7\operatorname{Tr\,}^\perp\chi = 0, \end{equation} and for $n=1$ and $p>1$ we get \begin{eqnarray*} && ({p-5})\<H,H\rangle -2\langle{\tilde T}, {\tilde T}\rangle + 3(p-1)\operatorname{div}{\tilde H} \nonumber \\ && -(p+4)\operatorname{div}((\operatorname{Tr\,}^\perp \mathfrak{T})^\top) +2(2 - p)\langle\operatorname{Tr\,}^\perp \mathfrak{T},\, {\tilde H} \rangle + 7 \operatorname{Tr\,}^\perp \chi =0. \end{eqnarray*} \end{corollary} Recall that an \textit{adapted connection} to $(\mD,\widetilde\mD)$, see e.g., \cite{bf}, is defined by \[ {\bar \nabla}_Z\, X \in\mathfrak{X}^\bot,\quad {\bar \nabla}_Z\, Y \in \mathfrak{X}^\top,\quad X\in\mathfrak{X}^\bot,\ Y\in\mathfrak{X}^\top,\ Z \in\mathfrak{X}_M, \] and an example is the Schouten-Van Kampen connection with contorsion tensor \[ \mathfrak{T}_{X} Y = -(\nabla_{X^\top}Y^\bot)^\top -(\nabla_{X^\top}Y^\top)^\bot -(\nabla_{X^\bot}Y^\bot)^\top -(\nabla_{X^\bot}Y^\top)^\bot, \quad X,Y \in\mathfrak{X}_M. \] \begin{proposition}\label{L-3-2old} Let $\widetilde{\mD}$ and $\mD$ both be totally umbilical. 
Then contorsion tensor $\mathfrak{T}$ corresponding to an adapted metric connection satisfies {\rm (\ref{ELmetric}-i)} if and only if it satisfies the equations \begin{subequations} \begin{eqnarray}\label{adaptedcritconfirst} && \mathfrak{T}_U^\top = T^\sharp_U, \\ \label{trperpIperpadapted} && (\operatorname{Tr\,}^\perp \mathfrak{T} )^\perp = \frac{n-1}{n} H, \\ && \mathfrak{T}_X^\bot = {\tilde T^\sharp}_X , \\ \label{adaptedcritconlast} && (\operatorname{Tr\,}^\top \mathfrak{T} )^\top = \frac{p-1}{p} {\tilde H},\\ \label{ELmetricadapted} && \frac{3-8n}{n}\,H^\flat \otimes H^\flat - \frac12\,\Upsilon_{T,T} -5\,\widetilde{\cal{T}}^\flat - \langle \phi, H \rangle \nonumber \\ && + \big(\,\frac{4p+1}{p}\, \operatorname{div} {\tilde H} + \frac{2p-4}{p}\, \langle{\tilde H}, {\tilde H}\rangle + \frac{2n-1}{n}\,\<H, H\rangle \big) g^\perp = 0, \end{eqnarray} \end{subequations} for all $X\in\widetilde\mD$ and $U\in\mD$. \end{proposition} \begin{proof} For adapted connection and totally umbilical distribution $\mD$ we have $\phi^\top = - 2 {\tilde h} = - \frac{2}{p} {\tilde H} g^\perp$, see \cite[Section~2.5]{RZconnection}, and \begin{eqnarray}\label{adaptedcontorsion} \mathfrak{T}_X Y \hspace*{-2.mm}&=&\hspace*{-2.mm} -(\nabla_{X^\top}\, Y^\top)^\bot -(\nabla_{X^\bot}\, Y^\bot)^\top +\,({{A}}_{Y^\bot} + {{T}}^\sharp_{Y^\bot}) X^\top + ({\tilde A}_{Y^\top} + {\tilde T}^\sharp_{Y^\top})\, X^\bot \nonumber \\ && +\, (\mathfrak{T}_X \,Y^\top)^\top + (\mathfrak{T}_X \,Y^\bot)^\bot. \end{eqnarray} Moreover, an adapted connection is critical for \eqref{actiongSmix} with fixed $g$ if and only if (\ref{adaptedcritconfirst}-d) hold, see \cite{RZconnection}. 
Note that for an adapted connection, from \eqref{adaptedcontorsion} we obtain $\chi = - \widetilde{\cal T}^\flat$, as for $X,Y \in \mD$ we have \[ 2\chi(X,Y) = \!\sum (2\langle{\tilde T^\sharp}_a{\cal E}_j, X\rangle\langle{\tilde T^\sharp}_a{\cal E}_j, Y\rangle + \langle{\tilde A}_a{\cal E}_j, X\rangle \langle{\tilde T^\sharp}_a {\cal E}_j, Y\rangle + \langle{\tilde A}_a{\cal E}_j, Y\rangle \langle{\tilde T^\sharp}_a{\cal E}_j, X\rangle) {=} -2\sum \langle{\tilde T^\sharp}_a{\tilde T^\sharp}_a X, Y\rangle \] for umbilical distributions. Also, the conditions \eqref{critcontorsionspec1} hold in all dimensions $n,p$. Thus, for a critical adapted connection, \eqref{ELmetric} simplifies to \eqref{ELmetricadapted}. \end{proof} If $p>1$ then $\phi^\perp$ is not determined by $(\operatorname{Tr\,}^\perp \mathfrak{T})^\perp$, and by \eqref{adaptedcontorsion} in Proposition~\ref{L-3-2old} it can be chosen arbitrarily for an adapted metric connection. Using this fact and taking the trace of \eqref{ELmetricadapted} yields the following. \begin{corollary} Let $\widetilde{\mD}$ and $\mD$ both be totally umbilical. If a contorsion tensor $\mathfrak{T}$, corresponding to an adapted metric connection, satisfies {\rm(\ref{ELmetric}-i)} then the metric $g$ satisfies \begin{equation}\label{trELmetricadaptedcrit} \frac{ 5 -10n + 2np -p}{n}\,\langle H,H \rangle - \langle T,T\rangle + 5 \langle \tilde T , \tilde T \rangle + (4p+1) \operatorname{div} {\tilde H} + (2p-4) \langle{\tilde H}, {\tilde H}\rangle =0. \end{equation} If $p>1$ and at every point of $M$ we have $H \ne 0$, then for a given $(M,g)$ satisfying \eqref{trELmetricadaptedcrit} there exists a metric adapted connection such that $(g, \mathfrak{T})$ is critical for the action \eqref{actiongSmix} with respect to all variations of $\,\mathfrak{T}$ and $g^\perp$-variations of metric.
\end{corollary} \begin{corollary} Let $(M,g)$ be a closed Riemannian manifold endowed with an integrable distribution $\mD$ and an integrable, totally geodesic distribution $\widetilde{\mD}$, and let $p \neq 2$. Then there exists a metric-compatible adapted connection such that $(g, \mathfrak{T})$ is critical for the action \eqref{actiongSmix} with respect to all variations of $\,\mathfrak{T}$ and $g^\perp$-variations of metric if and only if $\mD$ is totally geodesic. \end{corollary} \begin{proof} Under these assumptions we obtain that \eqref{ELmetricadapted} holds if and only if \[ (4p+1)\operatorname{div} {\tilde H} + (2p-4) \langle{\tilde H}, {\tilde H}\rangle =0. \] Integrating this equation on a closed $(M,g)$ and using \eqref{E-DivThm} yields ${\tilde H}=0$. \end{proof} \begin{example} \rm In \cite{FriedrichIvanov} it was proved that on a Sasaki manifold $(M, g, \xi , \eta)$ (that is, $M$ with a normal contact metric structure) there exists a unique metric connection with a skew-symmetric, parallel torsion tensor, and its contorsion tensor is given by $\langle\mathfrak{T}_X Y, Z\rangle = \frac{1}{2}\,(\eta \wedge d \eta)(X,Y,Z)$, where $X,Y,Z \in \mathfrak{X}_M$ and $\eta$ is the contact form on $M$. Let $\widetilde{\mD}$ be the one-dimensional distribution spanned by the Reeb field $\xi$. It follows that for this connection we have $\phi =0$ and for $X,Y \in \mD$ \begin{equation*} \chi(X,Y) = -\frac{1}{4}\sum\nolimits_{\,i}\big[\,(\eta\wedge d\eta)(\xi, {\cal E}_i ,X)\cdot\langle\,{\tilde T^\sharp}_\xi {\cal E}_i, Y\rangle + (\eta\wedge d\eta)(\xi, {\cal E}_i, Y)\cdot \langle\,{\tilde T^\sharp}_\xi{\cal E}_i, X\rangle\,\big] = -\,\widetilde{\cal{T}}^\flat(X,Y), \end{equation*} see \eqref{E-chi}, as $d \eta(X,Y) = 2\<X, {\tilde T^\sharp}_\xi\, Y\rangle$.
Since $g$ is a Sasaki metric, both distributions are totally geodesic, and for volume-preserving variations the Euler-Lagrange equation \eqref{ELmetric} gets $\lambda\,g^\perp$ on the right-hand side (see Remark~\ref{remarkvolpreserving}) and becomes \begin{equation}\label{ELforFriedrichIvanovConnection} -5\,\widetilde{\cal{T}}^\flat = \lambda\,g^\perp . \end{equation} As on a Sasakian manifold we have $\widetilde{\cal{T}}^\flat=-\frac{1}{p}\,\langle{\tilde T},{\tilde T}\>g^{\perp}$ and $\langle{\tilde T},{\tilde T}\rangle=p$ (e.g., \cite[Section 3.3]{rz-2}), we see that \eqref{ELforFriedrichIvanovConnection} holds in this case for $\lambda=5$. We can slightly modify this example to obtain a critical metric connection on any contact manifold $(M, \eta)$ with a contact metric structure $g$, by taking $\mathfrak{T}_\xi \xi =0$ and for all $X,Y \in{\cal D}$: \[ \langle\mathfrak{T}_X Y, \xi\rangle = \frac{1}{2}\,(\eta \wedge d \eta)(X,Y,\xi) = -\langle\mathfrak{T}_X \xi, Y\rangle,\quad \langle\mathfrak{T}_\xi X, Y\rangle = -\frac{1}{2}\,(\eta\wedge d\eta)(\xi,X, Y). \] For all $X,Y,Z\in{\cal D}$ we can take $\langle\mathfrak{T}_X Y, Z\rangle$ to be any 3-form. While no longer having parallel torsion, the connection $\nabla + \mathfrak{T}$ will then satisfy all Euler-Lagrange equations (\ref{ELmetric}-i). \end{example} Corollary \ref{corELtrace} can be viewed as an integrability condition for \eqref{ELmetric}. Below we give examples of $\mathfrak{T}$, constructed for metrics $g$ that satisfy \eqref{ELmetrictracenpbig} with a particular form of $\chi$, obtaining pairs $(g,\mathfrak{T})$ that are critical points of \eqref{actiongSmix} with respect to variations of $\,\mathfrak{T}$ and $g^\perp$-variations of metric. \begin{proposition} \label{propexample1} Let $n,p>1$ and $H \neq 0$ everywhere on $M$.
For any $g$ such that $\widetilde{\mD}$ and $\mD$ are totally umbilical and \eqref{ELmetrictracenpbig} holds with $\chi=0$, there exists a contorsion tensor $\mathfrak{T}$ such that $\mathfrak{T}_X Y \in \mathfrak{X}^\bot$ for all $X,Y\in\mathfrak{X}^\bot$ and $(g, \mathfrak{T})$ is critical for the action \eqref{actiongSmix} with respect to $g^\perp$-variations of metric and arbitrary variations of $\,\mathfrak{T}$. \end{proposition} \begin{proof} Suppose that $\mathfrak{T}_X Y \in \mathfrak{X}^\bot$ for all $X,Y \in \mathfrak{X}^\bot$. Then $\phi^\top = 0$ and $\chi =0$, see definitions \eqref{E-chi} (because $\langle\mathfrak{T}_j E_a, {\cal E}_i\rangle=-\langle\mathfrak{T}_j {\cal E}_i, E_a\rangle=0$), and $(\operatorname{Tr\,}^\perp \mathfrak{T} )^\top=0$. From the equations for critical connections it follows that ${\mD}$ is integrable and \eqref{ELmetric} is an algebraic equation for the symmetric $(0,2)$-tensor $\phi$: \begin{equation}\label{ELmetricspec1} -\frac{8n-3}{n}\,H^\flat \otimes H^\flat +\big(\frac{3p-3}{p} \operatorname{div} {\tilde H} + \frac{2n-1}{n}\,\langle H, H\rangle \big)\, g^{\perp} -\frac12\,\Upsilon_{T,T} -\langle \phi, H \rangle =0. \end{equation} For $H \ne 0$, we can always find $\phi$ (and then $\mathfrak{T}$) satisfying \eqref{ELmetricspec1}. Clearly, such $\phi$ is not unique. \end{proof} \begin{proposition} \label{propexample2} Let $n,p>1$ and $H \neq 0$ everywhere on $M$. For any $g$ such that $\widetilde{\mD}$ is totally umbilical and $\mD$ is totally geodesic and \eqref{ELmetrictracenpbig} holds with $\chi=-\widetilde{\cal T}^\flat$, there exists a contorsion tensor $\mathfrak{T}$ such that $(\mathfrak{T}_X\,\xi)^\perp = {\tilde T^\sharp}_\xi X$ for all $X \in \mathfrak{X}^\bot$, $\xi \in \mathfrak{X}^\top$, and a pair $(g,\, \mathfrak{T})$ is critical for the action \eqref{actiongSmix} with respect to $g^\perp$-variations of metric and arbitrary variations of $\,\mathfrak{T}$.
\end{proposition} \begin{proof} For $(\mathfrak{T}_i E_a)^\perp = {\tilde T^\sharp}_a {\cal E}_i$ we have for $X,Y \in \mathfrak{X}^\bot$: \begin{equation*} \chi(X,Y) = \sum\nolimits_{a,j}\langle{\tilde T^\sharp}_a {\cal E}_j, X\rangle\langle{\tilde T^\sharp}_a {\cal E}_j,Y\rangle = -\widetilde{\cal T}^\flat (X,Y). \end{equation*} Then, since $\langle\mathfrak{T}_{i}\,{\cal E}_i, E_a\rangle = -\langle\mathfrak{T}_{i}\,E_a, {\cal E}_i\rangle = -\langle{\tilde T^\sharp}_a {\cal E}_i, {\cal E}_i\rangle =0$, we also get $(\operatorname{Tr\,}^\perp \mathfrak{T} )^\top=0 = {\tilde H}$ and similarly, $\phi^\top=0$. So, \eqref{ELmetric} has the following form: \begin{equation} \label{ELmetricspec2a} -\frac{8n-3}{n}\,H^\flat \otimes H^\flat -\frac12\,\Upsilon_{T,T} - 5\,\widetilde{\cal{T}}^\flat + \frac{2n-1}{n}\,\<H, H\rangle \, g^{\perp} -\langle \phi, H \rangle =0. \end{equation} Again, we get an algebraic equation for a symmetric tensor $\phi$, which admits many solutions. \end{proof} Note that in Propositions \ref{propexample1} and \ref{propexample2}, instead of the condition $H\ne 0$ everywhere on $M$, we can assume that at those points of $M$ where $H=0$, the metric $g$ satisfies \eqref{ELmetricspec1} and \eqref{ELmetricspec2a} with $H=0$ (then these equations do not contain $\phi$). \begin{example}\rm Let $\widetilde{\mD}$ and $\mD$ be totally umbilical, $n,p>1$, ${\cal D}$ integrable and \eqref{ELmetrictracenpbig} hold. Then $\chi=0$ holds, since ${\cal D}$ is integrable, so \eqref{ELmetrictracenpbig} does not contain any components of $\mathfrak{T}$. With these assumptions we can construct a simple example of $\mathfrak{T}$ that satisfies the Euler-Lagrange equations (\ref{critcontorsion1}-i) and \eqref{ELmetricnpbig} in some domain. Let $U$ be a neighborhood of $p \in M$; we choose any local adapted orthonormal frame $(E_a , {\cal E}_i)$ on~$U$.
Then, due to $\phi(X,Y) = \phi(X^\perp, Y^\perp)$, we have \begin{eqnarray*} && (\operatorname{div} \phi^\top)({\cal E}_i, {\cal E}_j) = \sum\nolimits_a\langle\nabla_{E_a}(\phi^\top({\cal E}_i, {\cal E}_j)), E_a\rangle +\sum\nolimits_k \langle\nabla_{ {\cal E}_k } ( \phi^\top ({\cal E}_i, {\cal E}_j) ), {\cal E}_k \rangle \nonumber \\ && - \sum\nolimits_{a,m} \langle\phi^\top( {\cal E}_i , {\cal E}_m ) , E_a \rangle \langle \nabla_{E_a} {\cal E}_j , {\cal E}_m \rangle - \sum\nolimits_{a,m} \langle \phi^\top( {\cal E}_m , {\cal E}_j ) , E_a \rangle \langle \nabla_{E_a} {\cal E}_i , {\cal E}_m \rangle \nonumber \\ && -\sum\nolimits_{k,m} \langle \phi^\top( {\cal E}_i , {\cal E}_m ) , {\cal E}_k \rangle \langle \nabla_{{\cal E}_k} {\cal E}_j , {\cal E}_m \rangle - \sum\nolimits_{k,m} \langle \phi^\top( {\cal E}_m , {\cal E}_j ) , {\cal E}_k \rangle \langle \nabla_{{\cal E}_k} {\cal E}_i , {\cal E}_m \rangle . \end{eqnarray*} We define components of $\mathfrak{T}$ with respect to the adapted frame on $U$. Let $( \mathfrak{T}_i {\cal E}_j - \mathfrak{T}_j {\cal E}_i )^\top =0$ for $i \neq j$ and let $(\mathfrak{T}_i E_a)^\top$, $(\mathfrak{T}_a E_b)^\perp$ and $(\mathfrak{T}_a {\cal E}_i)^\perp$ be such that (\ref{critcontorsiontTab},e,f,h) hold on $U$. For all $( i,j ) \neq (p,p)$, consider \eqref{ELmetricnpbig} evaluated on $({\cal E}_i , {\cal E}_j)$ as a system of linear, non-homogeneous, first-order PDEs for $\{ \phi({\cal E}_i , {\cal E}_j) , ( i,j ) \neq (p,p) \}$, assume in this system that $\phi({\cal E}_p , {\cal E}_p) = \frac{n-1}{n}\, H - {\tilde H} - \sum\nolimits_{\,i=1}^{p-1} \phi({\cal E}_i , {\cal E}_i)$, and let $\{ \phi_{ij} , ( i,j ) \neq (p,p) \}$ be any local solution of this system of PDEs on (a subset of) $U$. Let $\mathfrak{T}_i {\cal E}_j + \mathfrak{T}_j {\cal E}_i = \phi_{ij}$ for $(i,j) \neq (p,p)$ and let $\mathfrak{T}_p {\cal E}_p = \frac{1}{2} (\frac{n-1}{n} H - {\tilde H} - \sum_{i=1}^{p-1} \phi_{ii})$, then (\ref{trperpIperpH},i) hold. 
By the assumption that \eqref{ELmetrictracenpbig} holds and the fact that \eqref{ELmetricnpbig} is a linear, non-homogeneous equation for $\phi$, \eqref{ELmetricnpbig} evaluated on $({\cal E}_p , {\cal E}_p)$ will also be satisfied. Thus, equations (\ref{critcontorsion1}-i) and \eqref{ELmetricnpbig} hold on (a subset of) $U$ for $\mathfrak{T}$ constructed above. \end{example} Note that when we consider adapted variations, we also have the equation dual (with respect to interchanging $\widetilde{\mD}$ and $\mD$) to \eqref{ELmetric}, so we can mix different assumptions from the above examples for different distributions, e.g., conditions $(\mathfrak{T}_i\, E_a)^\perp = {\tilde T^\sharp}_a\,{\cal E}_i$ and $\mathfrak{T}_X Y \in \mathfrak{X}^\top$ for $X,Y \in \mathfrak{X}^\top$. \subsection{Semi-symmetric connections} \label{sec:2-4} The following connections are metric compatible, see \cite{Yano}. Using variations of $\mathfrak{T}$ in this class, we obtain an example with an explicitly given tensor $\overline\operatorname{Ric}_\mD$. \begin{definition} \rm An affine connection $\bar\nabla$ on $M$ is \textit{semi-symmetric} if its torsion tensor $S$ satisfies $S(X,Y)=\omega(Y)X-\omega(X)Y$, where $\omega$ is a one-form on $M$. For $(M,g)$ we have \begin{equation}\label{Uconnection} \bar\nabla_XY=\nabla_XY + \<U , Y\rangle X -\<X,Y\>U, \end{equation} where $U=\omega^\sharp$ is the dual vector field. \end{definition} We find the Euler--Lagrange equations of \eqref{Eq-Smix} as a particular case of (\ref{ELconnection1}-h), using variations of $\mathfrak{T}$ corresponding to semi-symmetric connections. Now we consider variations of a semi-symmetric connection only among connections also satisfying \eqref{Uconnection} for some $U$.
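For completeness, both defining properties of a connection of the form \eqref{Uconnection} can be checked directly; with $\omega(X)=\langle U,X\rangle$ and $\nabla$ the Levi-Civita connection, a short computation gives

```latex
\begin{align*}
S(X,Y) &= \bar\nabla_X Y - \bar\nabla_Y X - [X,Y]
        = \langle U,Y\rangle X - \langle U,X\rangle Y
        = \omega(Y)X - \omega(X)Y, \\
(\bar\nabla_X g)(Y,Z) &= X\langle Y,Z\rangle
   - \langle\bar\nabla_X Y,\, Z\rangle - \langle Y,\, \bar\nabla_X Z\rangle \\
 &= -\langle U,Y\rangle\langle X,Z\rangle + \langle X,Y\rangle\langle U,Z\rangle
    - \langle U,Z\rangle\langle X,Y\rangle + \langle X,Z\rangle\langle U,Y\rangle = 0,
\end{align*}
```

so every connection given by \eqref{Uconnection} is indeed metric compatible and semi-symmetric.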
\begin{proposition}\label{corUcriticalforI} A semi-symmetric connection ${\bar\nabla}$ on $(M,g,\mD)$ satisfying \eqref{Uconnection} is critical for the action \eqref{Eq-Smix} with fixed $g$ among all semi-symmetric connections if and only if \begin{equation}\label{UcriticalforI} 2p(n-1)\,U^\top - (n-p) {\tilde H} = -(\mathfrak{a}/2)\,s^\top,\quad 2n(p-1)\,U^\perp - (p-n) H = -(\mathfrak{a}/2)\,s^\bot, \end{equation} where $s^\top=(s(\cdot\,,\cdot))^\top$ and $s^\bot=(s(\cdot\,,\cdot))^\bot$. In particular, if $\,n=p=1$ and $s=0$ (no spin) then every semi-symmetric connection is critical among all such connections, because $Q=0$ in this case. \end{proposition} \begin{proof} Let $U_t,\ t\in(-\epsilon, \epsilon)$, be a family of compactly supported vector fields on $M$, and let $U=U_0$ and $\dot U = \partial_t U_t |_{t=0}$. Then for a fixed metric $g$, from \eqref{QforUconnection} we obtain \[ \partial_t Q(U_t) |_{t=0} = (p-n)\langle \dot U, {\tilde H}\rangle + 2p(n-1) \langle U^\top , \dot U \rangle + \langle \dot U , H \rangle (n-p) + 2n(p-1) \langle U^\perp , \dot U \rangle. \] Separating parts with $(\dot U)^\top$ and $(\dot U)^\perp$, we get \[ \partial_t Q(U_t) |_{\,t=0} = \langle \dot U,\ (p-n) {\tilde H} + 2p(n-1) U^\top \rangle + \langle \dot U,\ (n-p) H + 2n(p-1) U^\perp \rangle, \] from which \eqref{UcriticalforI} follow. \end{proof} \begin{remark}\rm By Lemma \ref{lemmasemisymmetric}, if a semi-symmetric connection ${\bar\nabla}$ on $(M,g,\mD)$ is critical for the action \eqref{actiongSmix} with fixed $g$, then both $\widetilde{\cal D}$ and ${\cal D}$ are integrable and totally geodesic. Indeed, let ${\bar \nabla}$ be given by \eqref{Uconnection} and satisfy (\ref{critcontorsion1}-g) and conditions \eqref{critcontorsionspec1}, i.e., it is critical for action \eqref{actiongSmix} with fixed $g$. We find from \eqref{UconnectionImixed} that both $\widetilde{\cal D}$ and $\cal D$ are integrable. 
Moreover, if $n=p=1$ then \eqref{UconnectiontrtopI} and its dual with (\ref{critcontorsion1}-g) yield $H = 0 = {\tilde H}$ and $U=0$ (i.e., the connection ${\bar \nabla}$ becomes the Levi-Civita connection). If $n>2$ and $p>2$ we also have $H = 0 = {\tilde H}$ and $U=0$, in this case using also \eqref{critcontorsionspec1}. If $n=1$ and $p>1$ we obtain from \eqref{trperpIperpH} that $U^\perp =0$ and from \eqref{critcontorsionspec1}$_1$ that $H=0$, moreover as both distributions are totally umbilical by Corollary~\ref{T-main1}, it follows that they are totally geodesic. \end{remark} \begin{theorem}\label{propUconnectionEL} A pair $(g, \mathfrak{T})$, where $g\in{\rm Riem}(M,\widetilde{\mD},{\mD})$ and $\mathfrak{T}$ corresponds to a semi-symmetric connection on $M$ defined by \eqref{Uconnection}, is critical for \eqref{actiongSmix} with respect to volume-preserving $g^\pitchfork$-variations of metric and variations of $\,\mathfrak{T}$ corresponding to semi-symmetric connections if and only if the following Euler-Lagrange equations are satisfied: \begin{subequations} \begin{eqnarray}\label{UELD} && {r}_{\mD} -\langle\tilde h,\,\tilde H\rangle +\widetilde{\cal A}^\flat -\widetilde{\cal T}^\flat +\Psi +\widetilde{\cal K}^\flat -{\rm Def}_{\mD}\,H + H^\flat\otimes H^\flat -\frac{1}{2}\,\Upsilon_{h,h} -\frac12\,\Upsilon_{T,T} \\ \nonumber && -\frac12\,\big({\rm S}_{\rm mix} +\operatorname{div}(\tilde H -H)\big)\,g^\perp -\frac14\,(p-n)( \operatorname{div} U^\top)\,g^\perp +\frac12\,n(p-1) U^{\perp \flat} \otimes U^{\perp \flat} =\lambda\,g^\perp, \\ \label{UELmixed} && 4\,\langle\theta,\,{\tilde H}\rangle +2(\operatorname{div}(\alpha -\tilde\theta) )_{\,| {\rm V}} +2\langle{\tilde \theta} - {\tilde\alpha}, H\rangle + 2\,H^{\flat} \odot {\tilde H}^{\flat} -2\,{\tilde \delta}_{H} +4\,\Upsilon_{{\tilde\alpha}, \theta} +2\Upsilon_{\alpha, {\tilde\alpha}} +2\,\Upsilon_{{\tilde \theta}, \theta} \nonumber \\ && +\,\frac12\,(n-p){\tilde \delta}_{U^\perp} + \frac12\,(n-p)\langle 
{\tilde \alpha} - {\tilde \theta} , U^\perp \rangle -(p-n) \langle {\theta} , U^\top \rangle - p(n-1) U^{\top \flat} \otimes U^{\perp \flat} = 0, \end{eqnarray} \end{subequations} and \begin{equation}\label{UcriticalforI-0} 2p(n-1)\,U^\top - (n-p) {\tilde H} = 0,\quad 2n(p-1)\,U^\perp - (p-n) H = 0. \end{equation} \end{theorem} \begin{proof} By Proposition~\ref{L-QQ-first} and \eqref{dtQgforUconnection}, we obtain \begin{eqnarray*} \partial_t \int_M (\bar{{\rm S}}_{\,\rm mix} - {\rm S}_{\,\rm mix})\,{\rm d}\operatorname{vol}_g \hspace*{-2.mm}&=&\hspace*{-2.mm} \int_M \big\langle \frac14\,(p-n)(\operatorname{div} U^\top ) g^\perp -(p-n)\langle {\theta}, U^\top \rangle \nonumber \\ &-& \!\!\frac12\,n(p-1) U^{\perp \flat} \otimes U^{\perp \flat} -p(n-1) U^{\top \flat} \otimes U^{\perp \flat} ,\ B\big\rangle\,{\rm d}\operatorname{vol}_g . \end{eqnarray*} Using (\ref{E-main-0i},b) gives rise to (\ref{UELD},b). Finally, notice that \eqref{UcriticalforI-0} is \eqref{UcriticalforI} for vacuum space-time.
\end{proof} Although generally $\overline\operatorname{Ric}_{\,\mD}$ in \eqref{E-gravity-gen} has a long expression and is not given here, for the particular case of semi-symmetric connections, due to Theorem~\ref{propUconnectionEL}, we present the mixed Ricci tensor explicitly~as \begin{equation}\label{E-Ric-D-semi-sym} \left\{\begin{array}{c} \overline\operatorname{Ric}_{\,\mD\,|\,\mD\times\mD} = \operatorname{Ric}_{\,\mD\,|\,\mD\times\mD} +\frac12\,n(p-1) U^{\perp \flat} \otimes U^{\perp \flat} -\frac14\,(p-n)( \operatorname{div} U^\top)\,g^\perp + \frac{Z}{2-n-p}\,g^\perp, \\ \overline\operatorname{Ric}_{\mD\,|\,V} = \operatorname{Ric}_{\mD\,|\,V} -\frac12\,(n-p)\big({\tilde \delta}_{U^\perp} +\langle {\tilde \alpha} - {\tilde\theta}, U^\perp \rangle\big) +(p-n)\langle\theta, U^\top\rangle + p(n-1) U^{\top\flat}\otimes U^{\perp\flat},\\ \overline\operatorname{Ric}_{\,\mD|\,\widetilde\mD\times\widetilde\mD} = \operatorname{Ric}_{\,\mD|\,\widetilde\mD\times\widetilde\mD} +\frac12\,p(n-1) U^{\top\flat} \otimes U^{\top\flat} -\frac14\,(n-p)(\operatorname{div} U^\bot)\,g^\top + \frac{Z}{2-n-p}\,g^\top, \end{array} \right. \end{equation} also $\overline{\rm S}_{\mD}=\operatorname{Tr\,}_g\overline\operatorname{Ric}_{\,\mD} = {\rm S}_{\mD} + \frac{2}{2-n-p}\,Z$, where $\operatorname{Ric}_{\,\mD}$ and ${\rm S}_{\mD}$ are as in Definition~\ref{D-Ric-D}, $n+p>2$ and \begin{equation*} Z=\frac12\,n(p-1)\|U^{\perp}\|^2 +\frac12\,p(n-1)\|U^{\top}\|^2 -\frac14\,p(p-n)\operatorname{div} U^\top -\frac14\,n(n-p)\operatorname{div} U^\bot. \end{equation*} This is because $\overline\operatorname{Ric}_{\,\mD} - \frac{1}{2} \operatorname{Tr\,} (\overline\operatorname{Ric}_{\,\mD}) g = 0$ is equivalent to all three Euler-Lagrange equations for \eqref{actiongSmix}.
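As a consistency check, substituting $n=1$ into the formula for $Z$ above (the term $\frac12\,p(n-1)\|U^{\top}\|^2$ then vanishes) yields the expression quoted in the space-time example below:

```latex
\[
Z\big|_{n=1} = \frac12\,(p-1)\,\|U^{\perp}\|^2
 -\frac14\,p\,(p-1)\operatorname{div} U^\top
 -\frac14\,(1-p)\operatorname{div} U^\bot
 = \frac14\,(p-1)\big(\,2\,\|U^{\perp}\|^2 - p\operatorname{div} U^\top + \operatorname{div} U^\bot\big).
\]
```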
\begin{example}\rm For a \textit{space-time} $(M^{p+1},g)$ endowed with ${\widetilde\mD}$ spanned by a timelike unit vector field $N$, see Example~\ref{Ex-2-1}, the tensor $\overline\operatorname{Ric}_{\mD}$ has the following particular form (i.e., \eqref{E-Ric-D-semi-sym} with $n=1$): \begin{equation*} \left\{\begin{array}{c} \overline\operatorname{Ric}_{\,\mD\,|\,\mD\times\mD} = \operatorname{Ric}_{\,\mD\,|\,\mD\times\mD} +\frac12\,(p-1) U^{\perp \flat}\otimes U^{\perp\flat} -\frac14\,(p-1)(\operatorname{div} U^\top)\,g^\perp + \frac{Z}{1-p}\,g^\perp, \\ \overline\operatorname{Ric}_{\mD\,|\,V} = \operatorname{Ric}_{\mD\,|\,V} -\frac12\,(1-p)\big({\tilde \delta}_{U^\perp} +\langle {\tilde \alpha} - {\tilde\theta}, U^\perp \rangle\big) ,\\ \overline\operatorname{Ric}_{\,\mD\,|\,\widetilde\mD\times\widetilde\mD} = \operatorname{Ric}_{\,\mD|\,\widetilde\mD\times\widetilde\mD} -\frac14\,\varepsilon_N(1-p)(\operatorname{div} U^\bot) + \varepsilon_N\frac{Z}{1-p}, \end{array} \right. \end{equation*} and $\overline{\rm S}_{\,\mD} = {\rm S}_{\,\mD} +\frac{2\,\varepsilon_N Z}{1-p}$, see \eqref{E-RicD-flow-S}, where $Z=\frac14\,(p-1)\big(\,2\,\|U^{\perp}\|^2 -p\,\operatorname{div} U^\top +\operatorname{div} U^\bot\big)$. Note that $\theta=0$ and $2\,{\tilde\delta}_{U^\perp}(N,\cdot)=(\nabla_N\,(U^\bot))^{\bot\flat}$. \end{example} \begin{remark}\rm By Proposition~\ref{corUcriticalforI}, \eqref{UcriticalforI} also holds, which allows us to simplify the Euler-Lagrange equations of Theorem~\ref{propUconnectionEL} as discussed below. If $n=p=1$ then \eqref{UcriticalforI} does not impose any restrictions on $U$ and all terms containing $U$ vanish in (\ref{UELD},b) -- as expected from the last sentence in Proposition~\ref{corUcriticalforI}. If $n=1$ and $p>1$ then by \eqref{UcriticalforI} we have ${\tilde H}=0$ and $U^\perp=\frac{1}{2}H$, while $U^\top$ can be arbitrary.
We~also have $-\frac{1}{2} \Upsilon_{h,h} = - H^\flat \otimes H^\flat$, and \eqref{UELD} becomes \begin{equation*} -\operatorname{div}{\tilde h} -\widetilde{\cal K}^\flat +2\,\widetilde{\cal{T}}^\flat +\frac{1}{2}\,\big({{\rm S}}_{\,\rm mix} +\operatorname{div}({\tilde H} - H) + \frac{p-1}{2}\,\operatorname{div} U^\top\big)\,g^\perp - \frac{p-1}{4}\,H^{\flat} \otimes H^{\flat} = \lambda\,g^\perp, \end{equation*} where we replaced $r_\mD$ by $\operatorname{div}\tilde h$ (with additional terms) according to \eqref{E-genRicN}, and for \eqref{UELmixed} we have \begin{equation}\label{UELmixedn1} 2\,(\operatorname{div}(\alpha -\tilde\theta) )_{\,| {\rm V}} + \frac{7+p}{4} \langle{\tilde \theta} - {\tilde\alpha}, H\rangle -\frac{7+p}{4} \, {\tilde \delta}_{H} + 2\Upsilon_{\alpha, {\tilde\alpha}} = 0. \end{equation} Let $N \in \widetilde{\cal D}$ and $X \in {\cal D}$. Using results and notation from \cite{rz-2}, we have the following: \begin{eqnarray*} 2 (\operatorname{div} {\tilde\theta})(X,N) \hspace*{-2.mm}&=&\hspace*{-2.mm} (\operatorname{div} {\tilde T^\sharp}_N)(X) + \langle{\tilde T^\sharp}_N H, X\rangle, \\ 2 (\operatorname{div} \alpha)(X, N) \hspace*{-2.mm}&=&\hspace*{-2.mm} \langle\nabla_N H - {\tilde \tau}_1 H, X\rangle, \\ 2 \Upsilon_{\alpha, {\tilde\alpha}}(X,N) \hspace*{-2.mm}&=&\hspace*{-2.mm} \langle{\tilde A}_N H, X\rangle, \\ 2 {\tilde\delta}_H (X,N) \hspace*{-2.mm}&=&\hspace*{-2.mm} \langle\nabla_N H, X\rangle, \\ 2 \langle{\tilde \theta} -{\tilde \alpha}, H \rangle (X,N) \hspace*{-2.mm}&=&\hspace*{-2.mm} -\langle{\tilde T^\sharp}_N H + {\tilde A}_N H, X\rangle , \end{eqnarray*} where ${\tilde \tau}_1 = \operatorname{Tr\,}{\tilde A}_N$. 
Hence, \eqref{UELmixedn1} holds if and only if for unit $N\in\widetilde{\cal D}$ and all $X \in {\cal D}$ we have \begin{eqnarray*} \frac{1-p}{8}\,\langle\nabla_N H, X\rangle - \langle{\tilde \tau}_1 H, X\rangle - (\operatorname{div} {\tilde T^\sharp}_N)(X) - \frac{15+p}{8}\,\langle{\tilde T^\sharp}_N H, X\rangle +\frac{1-p}{8}\,\langle{\tilde A}_N H, X\rangle = 0 . \end{eqnarray*} If $n>1$ and $p>1$, then using \eqref{UcriticalforI} we reduce \eqref{UELD} to the following: \begin{eqnarray*} && -\operatorname{div}{\tilde h} -\widetilde{\cal K}^\flat +\frac{1}{2}\,\Upsilon_{h,h}+\frac12\,\Upsilon_{T,T} +2\,\widetilde{\cal{T}}^\flat +\frac{1}{2}\,\big({{\rm S}}_{\,\rm mix} +\operatorname{div}({\tilde H} - H)\big)\,g^{\perp} \nonumber \\ && - \frac{(p-n)^2}{8p(n-1)}\,(\operatorname{div} {\tilde H} )\,g^\perp - \frac{(p-n)^2 + 8n(p-1)}{8n(p-1)}\,H^\flat \otimes H^\flat = \lambda\,g^\perp, \end{eqnarray*} and we reduce \eqref{UELmixed} to the following: \begin{eqnarray*} && 4\,\Upsilon_{{\tilde\alpha}, \theta} +2(\operatorname{div}(\alpha -\tilde\theta))_{\,|\,{\rm V}} +2\,\Upsilon_{\alpha, {\tilde\alpha}} +2\,\Upsilon_{{\tilde \theta}, \theta} -\frac{(p-n)^2 + 8n(p-1)}{4n(p-1)}\,{\tilde \delta}_H \\ &&\hskip-17mm -\frac{(p-n)^2 + 8n(p-1)}{4n(p-1)}\,\langle{\tilde \alpha} - {\tilde\theta}, H \rangle +\frac{(n-p)^2 + 8p(n-1)}{2p(n-1)}\,\langle \theta, {\tilde H} \rangle + \frac{(p-n)^2}{4n(p-1)}\,H^\flat \odot {\tilde H}^\flat = 0. \end{eqnarray*} Note that for a vacuum space-time the distributions $\widetilde{\cal D}$ and ${\cal D}$ need not be umbilical to admit $(g , \mathfrak{T})$ critical for \eqref{actiongSmix} among all metrics and semi-symmetric connections. \end{remark} \section{Auxiliary lemmas} \label{sec:aux} \begin{lemma}\label{L-divX} For any variation $g_t$ of metric and a $t$-dependent vector field $X$ on $M$, we have \begin{equation*} \partial_t\,(\operatorname{div} X) = \operatorname{div} (\partial_t X) +\frac{1}{2}\,X(\operatorname{Tr\,}_{g} B).
\end{equation*} \end{lemma} \begin{proof} Differentiating the formula \eqref{eq:div} and using \eqref{E-dotvolg}, we get \begin{eqnarray*} && \partial_t\big((\operatorname{div} X)\,{\rm d}\operatorname{vol}_g\big) = \big(\partial_t\,(\operatorname{div} X) +\frac12\,(\operatorname{div} X)\operatorname{Tr\,}_{g} B\big)\,{\rm d}\operatorname{vol}_{g},\\ && \partial_t\big({\cal L}_{X}({\rm d}\operatorname{vol}_g)\big) = \big(\operatorname{div} (\partial_t X) +\frac{1}{2}\,X(\operatorname{Tr\,}_{g} B)+\frac12\,(\operatorname{div} X)\operatorname{Tr\,}_{g} B\big)\,{\rm d}\operatorname{vol}_{g}. \end{eqnarray*} From this the claim follows. \end{proof} Define symmetric $(1,2)$-tensors $L,G,F$, by the following formulas: \begin{eqnarray} \nonumber L(X,Y) \hspace*{-2.mm}&=&\hspace*{-2.mm} \frac{1}{4} (\Theta^*_{X^\perp} Y^\perp +\Theta^{\wedge*}_{X^\perp} Y^\perp + \Theta^*_{Y^\perp} X^\perp +\Theta^{\wedge*}_{Y^\perp} X^\perp) ,\\ \nonumber G(X,Y) \hspace*{-2.mm}&=&\hspace*{-2.mm} \frac{1}{4}({\Theta}^*_{ X^\perp} Y^\top + {\Theta}^{\wedge*}_{ X^\perp} Y^\top + {\Theta}^{\wedge*}_{Y^\perp} X^\top + {\Theta}^*_{ Y^\perp} X^\top ),\\ \nonumber F(X,Y) \hspace*{-2.mm}&=&\hspace*{-2.mm} \frac{1}{4}(\Theta^*_{X^\top} Y^\perp \!+\Theta^{\wedge*}_{X^\top} Y^\perp \!- \Theta_{X^\top} Y^\perp \!- \Theta^\wedge_{X^\top} Y^\perp \! \\ \label{formulaF} && +\Theta^*_{ Y^\top} X^\perp \!+\Theta^{\wedge*}_{Y^\top}X^\perp \!- \Theta_{Y^\top} X^\perp \!- \Theta^\wedge_{Y^\top} X^\perp), \end{eqnarray} where $\Theta = \mathfrak{T} -\mathfrak{T}^* +\mathfrak{T}^\wedge - \mathfrak{T}^{* \wedge }$ and $(\Theta^\wedge)_X Y = \Theta_Y X$ for all $X,Y \in \mathfrak{X}_M$. The~following equalities (and similar formulas for $\Upsilon_{\alpha, {\tilde \alpha}}$, $\Upsilon_{\theta, {\tilde \alpha}}$, etc.) 
will be used (recall Remark~\ref{remarkepsilons} for notational conventions): \begin{eqnarray*} \langle\,\langle\alpha, {\tilde H}\rangle, S\rangle \hspace*{-2.mm}&=&\hspace*{-2.mm} \sum\nolimits_{\,a,i} \<A_{i}(E_a), {\tilde H}\rangle S(E_a,{\cal E}_i),\quad \langle\Upsilon_{\alpha, \theta}, S\rangle = \sum\nolimits_{\,a,i} S(A_{i}(E_a), T^{\sharp}_i(E_a)) ,\\ \Upsilon_{\alpha, {\tilde \theta}}(X,Y) \hspace*{-2.mm}&=&\hspace*{-2.mm} \frac{1}{2}\sum\nolimits_{a,i} \<X, A_{ i} E_a\rangle\, \langle Y, {\tilde T}^\sharp_{a} {\cal E}_i \rangle,\quad X \in \mathfrak{X}^\top,\ \ Y \in \mathfrak{X}^\bot . \end{eqnarray*} The variations of components of $Q$ in \eqref{E-defQ} (used in previous sections) are collected in the following three lemmas; the results for $g^\top$ variations are dual to $g^\bot$-parts in results for $g^\pitchfork$-variations. \begin{lemma}\label{L-dT-2} For any $g^\pitchfork$-variation of metric $g\in{\rm Riem}(M,\,\widetilde{\mD},\,{\mD})$ we have \begin{eqnarray*} && \partial_t\operatorname{Tr\,}^\top\mathfrak{T} = 0,\quad \partial_t\operatorname{Tr\,}^\bot\mathfrak{T} = -\sum\nolimits_{\,i} \big(\frac12\,(\mathfrak{T}_i+\mathfrak{T}^\wedge_i)(B^\sharp{\cal E}_i)^\bot + (\mathfrak{T}_i+\mathfrak{T}^\wedge_i)(B^\sharp{\cal E}_i)^\top\big),\\ && \partial_t\operatorname{Tr\,}^\top\mathfrak{T}^* = \sum\nolimits_{\,a} [\mathfrak{T}^*_a, B^\sharp]\,E_a,\\ && \partial_t\operatorname{Tr\,}^\bot\mathfrak{T}^* = \sum\nolimits_{\,i} \big( [\mathfrak{T}^*_i,B^\sharp]\,{\cal E}_i -\frac12\,(\mathfrak{T}^*_i +\mathfrak{T}^{* \wedge}_i)(B^\sharp{\cal E}_i)^\bot -(\mathfrak{T}^*_i +\mathfrak{T}^{* \wedge}_i)(B^\sharp{\cal E}_i)^\top\big). 
\end{eqnarray*} \end{lemma} \begin{proof} For any variation $g_t$ of metric and $X,Y\in\mathfrak{X}_M$ we have \begin{equation*} (\partial_t\mathfrak{T}^\wedge)_X Y = (\partial_t\mathfrak{T})_Y X =0,\quad (\partial_t\mathfrak{T}^*)_X = [\mathfrak{T}^*_X, B^\sharp]\,, \end{equation*} where the first formula is obvious, the second one follows from \eqref{E-defTT}$_1$, equality $\partial_t\mathfrak{T}=0$ and \begin{eqnarray*} \langle\mathfrak{T}^*_X B^\sharp(Y),Z\rangle \hspace*{-2.mm}&=&\hspace*{-2.mm} \langle\mathfrak{T}_X Z,B^\sharp(Y)\rangle = B(\mathfrak{T}_X Z,Y) =\partial_t \langle\mathfrak{T}_X Z,Y\rangle = \partial_t \langle\mathfrak{T}^*_X Y,Z\rangle \\ \hspace*{-2.mm}&=&\hspace*{-2.mm} B(\mathfrak{T}^*_X Y,Z) +\langle\partial_t\mathfrak{T}^*_X Y,Z\rangle = \<B^\sharp\mathfrak{T}^*_X Y,Z\rangle +\langle\partial_t\mathfrak{T}^*_X Y,Z\rangle. \end{eqnarray*} Using the above and \eqref{E-frameE} completes the proof. \end{proof} Lemma~\ref{L-dT-2} is used in the proof of the following \begin{lemma}\label{L-dT-3} For $g^\pitchfork$-variation $g_t$ of metric on $(M,\widetilde{\mD},g,\bar\nabla=\nabla+\mathfrak{T})$ we have \begin{eqnarray}\label{dtIIproduct} \nonumber &&\partial_t \langle\mathfrak{T}^*, \mathfrak{T}^\wedge\rangle_{\,|\,V} = -\sum B({\cal E}_i, {\cal E}_j) \langle\mathfrak{T}^*_{j} E_a, \mathfrak{T}_a {\cal E}_i\rangle \\ && +\sum B({\cal E}_i, E_b) \big(\langle\mathfrak{T}^*_j {\cal E}_i, \mathfrak{T}_b {\cal E}_j\rangle -\langle\mathfrak{T}^*_a {\cal E}_i, \mathfrak{T}_{ b} E_a\rangle - \langle\mathfrak{T}^*_{b} E_a, \mathfrak{T}_a {\cal E}_i\rangle - \langle\mathfrak{T}^*_i E_a, \mathfrak{T}_a E_b \rangle\big) , \end{eqnarray} \begin{eqnarray} \label{dtThetaA} \nonumber && \partial_t \langle\Theta, A \rangle = -2 \sum B({\cal E}_i, {\cal E}_j) \big(\langle h (E_a, E_b), {\cal E}_i \rangle \langle{\cal E}_j, \mathfrak{T}_a\,E_b \rangle \\ && -\frac{1}{2} \<h(E_a,E_b),{\cal E}_j\rangle (\langle \Theta_a {\cal E}_i +\Theta_i E_a, E_b \rangle 
+\langle\Theta_b {\cal E}_i +\Theta_i E_b, E_a \rangle)\big) \nonumber \\ && +\sum B({\cal E}_i, E_b) \big(\langle\Theta_a {\cal E}_j +\Theta_j E_a, {\cal E}_i \rangle \langle h (E_a, E_b), {\cal E}_j \rangle \nonumber \\ && -\langle \Theta_a E_b +\Theta_b E_a, E_c \rangle \langle h (E_a, E_c), {\cal E}_i \rangle + 2\,\langle h (E_a, E_b), {\cal E}_j \rangle \langle{\cal E}_j, \mathfrak{T}_a\,{\cal E}_i \rangle\nonumber \\ && -\frac{1}{2}\,\langle({\tilde A}- {\tilde T}^\sharp )_a {\cal E}_i, {\cal E}_j \rangle (\langle\Theta_a {\cal E}_j +\Theta_j E_a, E_b \rangle +\langle \Theta_b {\cal E}_j +\Theta_j E_b, E_a \rangle) \nonumber \\ && -2\,\langle h (E_b, E_a), {\cal E}_j \rangle \langle{\cal E}_i, \mathfrak{T}_{j} E_a \rangle + 2\,\langle h (E_a, E_b), {\cal E}_j \rangle \<E_a, \mathfrak{T}_j\,{\cal E}_i \rangle \nonumber\\ && -2\,\langle h (E_a, E_c), {\cal E}_i \rangle \<E_b, \mathfrak{T}_a E_c \rangle \big) +\operatorname{div}^\top \langle B_{| V}, G\rangle -\langle B_{| V}, \operatorname{div}^\top G \rangle , \end{eqnarray} \begin{eqnarray} \label{dtThetaT} \nonumber && \partial_t \langle\Theta, T^\sharp\rangle = -2 \sum B({\cal E}_i, {\cal E}_j) \langle T (E_a, E_b), {\cal E}_i \rangle \langle{\cal E}_j, \mathfrak{T}_a E_b \rangle \\ && +\sum B({\cal E}_i, E_b) \big(\langle \Theta_a {\cal E}_j +\Theta_j E_a, {\cal E}_i \rangle \langle T (E_a, E_b), {\cal E}_j \rangle -2\,\langle T (E_a, E_c), {\cal E}_i \rangle \<E_b, \mathfrak{T}_a E_c \rangle \nonumber \\ && -\langle \Theta_a E_b +\Theta_{b} E_a, E_c \rangle \langle T (E_a, E_c), {\cal E}_i \rangle + 2\langle T (E_a, E_b), {\cal E}_j \rangle \langle{\cal E}_j, \mathfrak{T}_a {\cal E}_i \rangle \nonumber \\ && -2\langle T (E_b, E_a), {\cal E}_j \rangle \langle{\cal E}_i, \mathfrak{T}_{j} E_a \rangle + 2\langle T (E_a, E_b), {\cal E}_j \rangle \<E_a, \mathfrak{T}_j {\cal E}_i \rangle\big), \end{eqnarray} \begin{eqnarray} \label{dtThetatildeT} \nonumber && \partial_t \langle\Theta, {\tilde T}^\sharp\rangle = 
\sum B({\cal E}_i, {\cal E}_j) \big( 2\,\langle{\tilde T}({\cal E}_k, {\cal E}_j), E_a\rangle \langle{\cal E}_k, \mathfrak{T}_a {\cal E}_i\rangle \\ \nonumber && -2\langle{\tilde T}({\cal E}_i, {\cal E}_k), E_a\rangle \langle{\cal E}_j, \mathfrak{T}_a {\cal E}_k\rangle + 2\langle{\tilde T}({\cal E}_k, {\cal E}_j ), E_a\rangle \langle{E}_a, \mathfrak{T}_k {\cal E}_i\rangle\\ && -\frac{1}{2}\,\langle \Theta_a {\cal E}_j +\Theta_{j} E_a, {\cal E}_k \rangle \langle{\tilde T}({\cal E}_i, {\cal E}_k ), E_a \rangle +\frac{1}{2}\,\langle \Theta_a {\cal E}_k +\Theta_k E_a, {\cal E}_i \rangle \langle {\tilde T}({\cal E}_k, {\cal E}_j ), E_a \rangle \nonumber \\ && -\frac{1}{2}\,(\langle \Theta_a {\cal E}_i +\Theta_i E_a, {\cal E}_k \rangle -\langle \Theta_a {\cal E}_k +\Theta_k E_a, {\cal E}_i \rangle) \<E_a, {\tilde T}({\cal E}_j, {\cal E}_k) \rangle \big)\nonumber \\ && +\sum B({\cal E}_i, E_b)\big( \frac{1}{2}\,(\langle \Theta_a {\cal E}_i +\Theta_i E_a, {\cal E}_j \rangle -\langle \Theta_a {\cal E}_j +\Theta_j E_a, {\cal E}_i \rangle) \<E_a,(A + T^\sharp )_j E_b \rangle \nonumber \\ && -2\,\langle{\tilde T}({\cal E}_i, {\cal E}_j ), E_a\rangle \<E_b, \mathfrak{T}_a{\cal E}_j\rangle + 2\,\langle{\tilde T}({\cal E}_j, {\cal E}_i ), E_a\rangle \langle{E}_a, \mathfrak{T}_j E_b\rangle \nonumber \\ && + 2\,\langle{\tilde T}({\cal E}_j, {\cal E}_i ), E_a\rangle \langle{\cal E}_j, \mathfrak{T}_a E_b\rangle -2\,\langle{\tilde T}({\cal E}_k, {\cal E}_j), E_b\rangle \langle{\cal E}_i, \mathfrak{T}_{k}{\cal E}_j\rangle \nonumber \\ && -\langle \Theta_a E_b +\Theta_{b} E_a, {\cal E}_j \rangle \langle {\tilde T}({\cal E}_i, {\cal E}_j ), E_a\rangle \big) +\operatorname{div}^\perp\langle B_{|V}, F \rangle -\langle B_{| V}, \operatorname{div}^\perp F \rangle , \end{eqnarray} \begin{eqnarray} \label{dtThetatildeA} && \partial_t \langle\Theta, {\tilde A} \rangle = \sum B({\cal E}_i, {\cal E}_j) \big(\frac{1}{2}\langle\Theta_k E_a +\Theta_a {\cal E}_k, {\cal E}_i\rangle \langle{\tilde h}({\cal E}_k, {\cal E}_j ), E_a\rangle \nonumber \\ \nonumber
&& -\frac{1}{2}\langle\Theta_{j} E_a +\Theta_a{\cal E}_j, {\cal E}_k\rangle\langle{\tilde h}({\cal E}_i, {\cal E}_k), E_a\rangle -2\langle{\tilde h}({\cal E}_i, {\cal E}_k ), E_a\rangle \langle{\cal E}_j, \mathfrak{T}_a {\cal E}_k\rangle\\ && -(\langle\Theta_i E_a +\Theta_a{\cal E}_i,{\cal E}_k\rangle +\langle\Theta_k E_a +\Theta_a{\cal E}_k, {\cal E}_i\rangle)\langle{\tilde h}({\cal E}_j, {\cal E}_k), E_a\rangle \nonumber \\ && +(\langle\Theta_k E_a +\Theta_a {\cal E}_k, {\cal E}_j\rangle +\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_k \rangle) \langle({\tilde A}_a -{\tilde T}^\sharp_a) {\cal E}_i, {\cal E}_k\rangle \nonumber \\ && + 2\langle{\tilde h}({\cal E}_k, {\cal E}_j), E_a\rangle \langle{\cal E}_k, \mathfrak{T}_a {\cal E}_i\rangle +2\langle{\tilde h}({\cal E}_k, {\cal E}_j ), E_a\rangle\langle{E}_a, \mathfrak{T}_k {\cal E}_i \rangle \big) \nonumber \\ && +\sum B({\cal E}_i, E_b)\big((\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_i\rangle+\langle\Theta_i E_a +\Theta_a{\cal E}_i, {\cal E}_j\rangle) \langle(A_j + T^\sharp_j) E_b, E_a\rangle \nonumber \\ && -(\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle +\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_i \rangle) \langle (A_j - T^\sharp_j) E_b, E_a\rangle \nonumber \\ && -2\langle{\tilde h}({\cal E}_i, {\cal E}_j ), E_a\rangle \<E_b, \mathfrak{T}_a {\cal E}_j\rangle +2\langle{\tilde h}({\cal E}_j, {\cal E}_i ), E_a\rangle \langle{E}_a, \mathfrak{T}_j E_b\rangle \nonumber \\ \nonumber && + 2\langle{\tilde h}({\cal E}_j, {\cal E}_i ), E_a\rangle \langle{\cal E}_j, \mathfrak{T}_a E_b\rangle -2\langle{\tilde h}({\cal E}_k, {\cal E}_j), E_b\rangle \langle{\cal E}_i, \mathfrak{T}_{k}{\cal E}_j\rangle \\ && -\langle\Theta_{b} E_a +\Theta_a E_b, {\cal E}_j\rangle\langle{\tilde h}({\cal E}_i, {\cal E}_j), E_a\rangle \big) -2 \operatorname{div}^\top \langle B, L \rangle + 2 \langle B, \operatorname{div}^\top L \rangle ,
\end{eqnarray} \begin{equation}\label{dttracetopI} \partial_t \langle\operatorname{Tr\,}^\top \mathfrak{T}, \operatorname{Tr\,}^\perp \mathfrak{T}^*\rangle = \frac{1}{2} \sum B({\cal E}_i, {\cal E}_j) \big(\langle\operatorname{Tr\,}^\top\mathfrak{T},\, \mathfrak{T}^*_i {\cal E}_j {-} \mathfrak{T}^*_j {\cal E}_i\rangle\big) -\sum B({\cal E}_i, E_b) \langle\operatorname{Tr\,}^\top\mathfrak{T},\, \mathfrak{T}^*_b {\cal E}_i\rangle, \end{equation} \begin{eqnarray} \label{dttraceperpI} \nonumber && \partial_t \langle\operatorname{Tr\,}^\top \mathfrak{T}^*, \operatorname{Tr\,}^\perp \mathfrak{T}\rangle = -\frac{1}{2} \sum B({\cal E}_i, {\cal E}_j) \langle\mathfrak{T}_j {\cal E}_i + \mathfrak{T}_i {\cal E}_j, \operatorname{Tr\,}^\top\mathfrak{T}^* \rangle \\ && + \sum B({\cal E}_i, E_b)\big(\langle\operatorname{Tr\,}^\bot\mathfrak{T}, \mathfrak{T}^*_b {\cal E}_i\rangle - \langle\mathfrak{T}_b {\cal E}_i + \mathfrak{T}_i E_b, \operatorname{Tr\,}^\top\mathfrak{T}^*\rangle \big), \end{eqnarray} \begin{eqnarray} \label{dtIEaH} && \partial_t\langle\,\operatorname{Tr\,}^\top(\mathfrak{T}^*-\mathfrak{T}),\, {\tilde H} - H\rangle = \sum B({\cal E}_i, {\cal E}_j) \big( -\frac{1}{2}\,\delta_{ij} \operatorname{div}((\operatorname{Tr\,}^\top(\mathfrak{T}^*-\mathfrak{T}) )^\top) \nonumber \\ && \nonumber -\<H, {\cal E}_j\rangle \langle \operatorname{Tr\,}^\top(\mathfrak{T}^*-\mathfrak{T}), {\cal E}_i\rangle -\<H, {\cal E}_j\rangle \langle\operatorname{Tr\,}^\top(\mathfrak{T}^*-\mathfrak{T}), {\cal E}_i\rangle +\langle\operatorname{Tr\,}^\top\mathfrak{T}^*, {\cal E}_j\rangle \langle{\cal E}_i, H\rangle\big) \nonumber \\ && +\sum B({\cal E}_i, E_b) \big(\langle{\tilde H}, E_b\rangle \langle \operatorname{Tr\,}^\top(\mathfrak{T}^*-\mathfrak{T}), {\cal E}_i\rangle -\<H, {\cal E}_i\rangle \langle \operatorname{Tr\,}^\top(\mathfrak{T}^*-\mathfrak{T}), E_b\rangle \nonumber \\ && +\,\langle\mathfrak{T}^*_b {\cal E}_i, {\tilde H} - H\rangle +\langle\operatorname{Tr\,}^\top\mathfrak{T}^*, 
E_b\rangle \langle{\cal E}_i, H\rangle + 2\langle T^\sharp_i E_b , \operatorname{Tr\,}^\top(\mathfrak{T}^*-\mathfrak{T})\rangle \nonumber \\ && +\,\langle{\cal E}_i,(\operatorname{Tr\,}^\top(\mathfrak{T}^*-\mathfrak{T}))^\perp\rangle \langle\tilde H, E_b\rangle -\langle H, {\cal E}_i\rangle \langle\operatorname{Tr\,}^\top(\mathfrak{T}^*-\mathfrak{T}), E_b\rangle \nonumber \\ && -\,\langle\operatorname{Tr\,}^\top\mathfrak{T}^*, {\cal E}_i\rangle \<E_b, {\tilde H}\rangle -\langle\nabla_b((\operatorname{Tr\,}^\top(\mathfrak{T}^*-\mathfrak{T}))^\perp ), {\cal E}_i\rangle -\langle{\tilde A}_b {\cal E}_i -{\tilde T^\sharp}_b {\cal E}_i, \operatorname{Tr\,}^\top(\mathfrak{T}^*-\mathfrak{T})\rangle \big) \nonumber \\ && +\operatorname{div}\big(( B^\sharp((\operatorname{Tr\,}^\top(\mathfrak{T}^*-\mathfrak{T}))^\perp))^\top -\frac12\, (\operatorname{Tr\,}_{\mD}B)(\operatorname{Tr\,}^\top(\mathfrak{T}^*-\mathfrak{T}))^\top\big), \end{eqnarray} \begin{eqnarray} \label{dtIeiH} && \partial_t \langle\,\operatorname{Tr\,}^\bot(\mathfrak{T}^*-\mathfrak{T}), {\tilde H} - H\rangle = \sum B({\cal E}_i, {\cal E}_j) \big(\langle\mathfrak{T}^*_j {\cal E}_i, {\tilde H} - H\rangle +\langle\operatorname{Tr\,}^\bot\mathfrak{T}^*, {\cal E}_i\rangle \langle H, {\cal E}_j\rangle\nonumber \\ && -\<H, {\cal E}_j\rangle \langle \operatorname{Tr\,}^\bot(\mathfrak{T}^*-\mathfrak{T}), {\cal E}_i\rangle -\frac{1}{2}\,\delta_{ij} \operatorname{div}((\operatorname{Tr\,}^\bot(\mathfrak{T}^*-\mathfrak{T}))^\top) -\<H, {\cal E}_j\rangle \langle\operatorname{Tr\,}^\bot(\mathfrak{T}^*-\mathfrak{T}), {\cal E}_i\rangle \big) \nonumber \\ && +\sum B({\cal E}_i, E_b) \big(\langle{\tilde H}, E_b\rangle \langle\operatorname{Tr\,}^\bot(\mathfrak{T}^*-\mathfrak{T}), {\cal E}_i\rangle -\<H, {\cal E}_i\rangle \langle\operatorname{Tr\,}^\bot(\mathfrak{T}^*-\mathfrak{T}), E_b\rangle \nonumber \\ && +\,\langle\operatorname{Tr\,}^\bot\mathfrak{T}^*, E_b\rangle \langle H, {\cal E}_i\rangle +\langle\mathfrak{T}^*_i E_b,
{\tilde H} - H\rangle -\langle\operatorname{Tr\,}^\bot\mathfrak{T}^*, {\cal E}_i\rangle \langle{\tilde H}, E_b\rangle +2\langle T^\sharp_i E_b , \operatorname{Tr\,}^\bot(\mathfrak{T}^*-\mathfrak{T})\rangle \nonumber \\ && +\langle{\cal E}_i, \operatorname{Tr\,}^\bot(\mathfrak{T}^*-\mathfrak{T})\rangle \langle\tilde H, E_b\rangle -\langle\nabla_b((\operatorname{Tr\,}^\bot(\mathfrak{T}^*-\mathfrak{T}))^\perp), {\cal E}_i\rangle \nonumber \\ && -\langle{\tilde A}_b {\cal E}_i - {\tilde T^\sharp}_b {\cal E}_i, \operatorname{Tr\,}^\bot(\mathfrak{T}^*-\mathfrak{T}) \rangle -\langle H, {\cal E}_i\rangle \langle\operatorname{Tr\,}^\bot(\mathfrak{T}^*-\mathfrak{T}), E_b\rangle \big) \nonumber \\ && + \operatorname{div}\big(( B^\sharp((\operatorname{Tr\,}^\bot(\mathfrak{T}^*-\mathfrak{T}))^\perp))^\top -\frac12\,(\operatorname{Tr\,}_{\mD}B)(\operatorname{Tr\,}^\bot(\mathfrak{T}^*-\mathfrak{T}))^\top\big) . \end{eqnarray} \end{lemma} \begin{proof} To obtain $\partial_t \Theta$, we compute for $X,Y,Z \in \mathfrak{X}_M$: \begin{equation*} \partial_t \langle\mathfrak{T}^{* \wedge}_X Y, Z\rangle = B(\mathfrak{T}^{* \wedge}_X Y, Z) +\langle(\partial_t \mathfrak{T}^{* \wedge})_X Y, Z\rangle. \end{equation*} On the other hand, \begin{equation*} \partial_t \langle\mathfrak{T}^{* \wedge}_X Y, Z\rangle = \partial_t \langle\mathfrak{T}^*_Y X, Z\rangle = B(\mathfrak{T}^*_Y X, Z) +\langle\partial_t (\mathfrak{T}^*_Y X), Z\rangle = \langle\mathfrak{T}^*_Y B^\sharp X, Z \rangle, \end{equation*} so \[ (\partial_t \mathfrak{T}^{*\wedge})_X Y = \mathfrak{T}^*_Y B^\sharp X - B^\sharp\,\mathfrak{T}^{*}_Y X. \] From this we obtain \begin{equation}\label{Eq-dt-Theta} (\partial_t \Theta)_X Y = -(\partial_t \mathfrak{T}^*)_X Y -(\partial_t \mathfrak{T}^{*\wedge})_X Y = -\mathfrak{T}^*_X B^\sharp Y + B^\sharp \mathfrak{T}^*_X Y -\mathfrak{T}^{*}_Y\,B^\sharp X + B^\sharp \mathfrak{T}^{*}_Y X .
\end{equation} We shall use Proposition~\ref{prop-Ei-a} and the fact that for $g^\perp$-variations $B(X,Y)=0$ for $X,Y \in \mathfrak{X}^\top$. \textbf{Proof of \eqref{dtIIproduct}}. We have $\langle\mathfrak{T}^*, \mathfrak{T}^\wedge\rangle_{\,|\,V} = \sum \langle\mathfrak{T}^*_a {\cal E}_i, \mathfrak{T}_i E_a\rangle +\sum \langle\mathfrak{T}^*_i E_a, \mathfrak{T}_a {\cal E}_i \rangle$, so \begin{eqnarray*} && \partial_t \langle\mathfrak{T}^*, \mathfrak{T}^\wedge\rangle_{\,|\,V} = \sum\big[ B(\mathfrak{T}^*_a {\cal E}_i, \mathfrak{T}_i E_a) + B(\mathfrak{T}^*_i E_a, \mathfrak{T}_a {\cal E}_i) + \langle\mathfrak{T}^*_a \partial_t {\cal E}_i, \mathfrak{T}_i E_a\rangle \\ &&\hskip-4mm + \langle\mathfrak{T}^*_a {\cal E}_i, \mathfrak{T}_{\partial_t {\cal E}_i} E_a\rangle + \langle\mathfrak{T}^*_{\partial_t {\cal E}_i} E_a, \mathfrak{T}_a {\cal E}_i\rangle + \langle\mathfrak{T}^*_i E_a, \mathfrak{T}_a \partial_t {\cal E}_i\rangle + \langle(\partial_t \mathfrak{T}^*)_a {\cal E}_i, \mathfrak{T}_i E_a\rangle + \langle(\partial_t \mathfrak{T}^*)_i E_a, \mathfrak{T}_a {\cal E}_i\rangle \big]. 
\end{eqnarray*} We compute 8 terms above separately: \begin{eqnarray*} && \sum B(\mathfrak{T}^*_a {\cal E}_i, \mathfrak{T}_i E_a) = \sum \big[ B({\cal E}_j, E_b) \langle\mathfrak{T}^*_a {\cal E}_i, E_b\rangle \langle\mathfrak{T}_i E_a, {\cal E}_j\rangle \\ &&\quad + B({\cal E}_j, E_b) \langle\mathfrak{T}^*_a {\cal E}_i, {\cal E}_j\rangle \langle\mathfrak{T}_i E_a, E_b\rangle + B({\cal E}_j, {\cal E}_k) \langle\mathfrak{T}^*_a {\cal E}_i, {\cal E}_k\rangle \langle\mathfrak{T}_i E_a, {\cal E}_j\rangle \big],\\ && \sum B(\mathfrak{T}^*_i E_a, \mathfrak{T}_a {\cal E}_i) = \sum \big[ B({\cal E}_j, E_b) \langle\mathfrak{T}^*_i E_a, E_b\rangle \langle\mathfrak{T}_a {\cal E}_i, {\cal E}_j\rangle \\ &&\quad + B({\cal E}_j, E_b) \langle\mathfrak{T}^*_i E_a, {\cal E}_j\rangle \langle\mathfrak{T}_a {\cal E}_i, E_b\rangle + B({\cal E}_k, {\cal E}_j) \langle\mathfrak{T}^*_i E_a, {\cal E}_j\rangle \langle\mathfrak{T}_a {\cal E}_i, {\cal E}_k\rangle \big],\\ &&\sum \langle\mathfrak{T}^*_a \partial_t {\cal E}_i, \mathfrak{T}_i E_a\rangle = -\sum \big[ B({\cal E}_i,E_b) \langle\mathfrak{T}^*_a E_b, \mathfrak{T}_i E_a\rangle +\frac{1}{2} B({\cal E}_i, {\cal E}_j) \langle\mathfrak{T}^*_a {\cal E}_j, \mathfrak{T}_i E_a\rangle \big],\\ &&\sum \langle\mathfrak{T}^*_a {\cal E}_i, \mathfrak{T}_{\partial_t {\cal E}_i} E_a\rangle = -\sum \big[ B({\cal E}_i, E_b) \langle\mathfrak{T}^*_a {\cal E}_i, \mathfrak{T}_{ b} E_a\rangle +\frac{1}{2} B({\cal E}_i, {\cal E}_j) \langle\mathfrak{T}^*_a {\cal E}_i, \mathfrak{T}_{ j} E_a\rangle \big],\\ &&\sum \langle\mathfrak{T}^*_{\partial_t {\cal E}_i} E_a, \mathfrak{T}_a {\cal E}_i\rangle = -\sum \big[ B({\cal E}_i, E_b) \langle\mathfrak{T}^*_{b} E_a, \mathfrak{T}_a {\cal E}_i\rangle +\frac{1}{2} B({\cal E}_i, {\cal E}_j) \langle\mathfrak{T}^*_{j} E_a, \mathfrak{T}_a {\cal E}_i\rangle \big],\\ &&\sum \langle\mathfrak{T}^*_i E_a, \mathfrak{T}_a \partial_t {\cal E}_i\rangle = -\sum \big[ B({\cal E}_i, E_b) \langle\mathfrak{T}^*_i E_a, \mathfrak{T}_a 
E_b\rangle +\frac{1}{2} B({\cal E}_i, {\cal E}_j) \langle\mathfrak{T}^*_i E_a, \mathfrak{T}_a {\cal E}_j\rangle\big] ,\\ && \sum \langle (\partial_t \mathfrak{T}^*)_a {\cal E}_i, \mathfrak{T}_i E_a\rangle =\sum \big[ B({\cal E}_i, E_b) \langle\mathfrak{T}^*_a E_b, \mathfrak{T}_i E_a\rangle + B({\cal E}_i, {\cal E}_j) \langle\mathfrak{T}^*_a {\cal E}_j, \mathfrak{T}_i E_a\rangle \\ &&\, -B({\cal E}_j, E_b) \langle\mathfrak{T}^*_a {\cal E}_i, E_b\rangle \langle{\cal E}_j, \mathfrak{T}_i E_a\rangle {-}B({\cal E}_i, {\cal E}_j) \langle\mathfrak{T}^*_a {\cal E}_k, {\cal E}_j\rangle \langle{\cal E}_i, \mathfrak{T}_k E_a\rangle {-}B({\cal E}_j, E_b) \langle\mathfrak{T}^*_a {\cal E}_i, {\cal E}_j\rangle \langle E_b, \mathfrak{T}_i E_a\rangle \big] ,\\ && \sum \langle (\partial_t \mathfrak{T}^*)_i E_a, \mathfrak{T}_a {\cal E}_i\rangle = \sum \big[ B({\cal E}_j, E_a) \langle\mathfrak{T}^*_i {\cal E}_j, \mathfrak{T}_a {\cal E}_i\rangle - B({\cal E}_j, E_b) \langle\mathfrak{T}^*_i E_a, E_b\rangle \langle{\cal E}_j, \mathfrak{T}_a {\cal E}_i\rangle \\ &&\ - B({\cal E}_j, E_b) \langle\mathfrak{T}^*_i E_a, {\cal E}_j\rangle \langle E_b, \mathfrak{T}_a {\cal E}_i\rangle - B({\cal E}_j, {\cal E}_k) \langle\mathfrak{T}^*_i E_a, {\cal E}_j\rangle \langle{\cal E}_k, \mathfrak{T}_a {\cal E}_i\rangle \big]. \end{eqnarray*} Summing the 8 terms computed above and simplifying, we obtain \eqref{dtIIproduct}. \textbf{Proof of \eqref{dtThetaA}}. We have \begin{eqnarray*} && \langle \Theta, A \rangle = \sum \langle\Theta_a {\cal E}_i +\Theta_i E_a, E_b\rangle \langle h (E_a, E_b), {\cal E}_i \rangle. 
\end{eqnarray*} So \begin{eqnarray*} && \partial_t \langle \Theta, A \rangle = \sum \big[ B(\Theta_a {\cal E}_i +\Theta_i E_a, E_b) \langle h (E_a, E_b), {\cal E}_i\rangle + \langle\Theta_a {\cal E}_i +\Theta_i E_a, E_b\rangle B( h (E_a, E_b), {\cal E}_i) \\ && +\, \langle\Theta_a (\partial_t {\cal E}_i) +\Theta_{\partial_t {\cal E}_i} E_a, E_b\rangle \langle h (E_a, E_b), {\cal E}_i\rangle + \langle\Theta_a {\cal E}_i +\Theta_i E_a, E_b\rangle \langle\partial_t h (E_a, E_b), {\cal E}_i\rangle \\ && +\, \langle\Theta_a {\cal E}_i +\Theta_i E_a, E_b\rangle \langle h (E_a, E_b), \partial_t {\cal E}_i\rangle + \langle(\partial_t\Theta)_a {\cal E}_i + (\partial_t \Theta)_i E_a, E_b\rangle \langle h (E_a, E_b), {\cal E}_i \rangle \big]. \end{eqnarray*} We start with the fourth of the six terms above. By \cite{rz-2}, \begin{eqnarray*} &&\sum \langle\Theta_a {\cal E}_i +\Theta_i E_a, E_b\rangle \langle\partial_t h (E_a, E_b), {\cal E}_i\rangle = \sum \frac{1}{2}\big[ \langle\Theta_a {\cal E}_i +\Theta_i E_a, E_b\rangle \\ && +\langle\Theta_b {\cal E}_i +\Theta_i E_b, E_a\rangle\big] \big(\nabla_a B(E_b, {\cal E}_i) - B( h(E_a, E_b ), {\cal E}_i) + B(\nabla_i E_a, E_b) \big). \end{eqnarray*} We have \begin{eqnarray*} \frac{1}{2}\sum \big(\langle\Theta_a {\cal E}_i +\Theta_i E_a, E_b\rangle +\langle\Theta_b {\cal E}_i +\Theta_i E_b, E_a \rangle\big) \nabla_a B(E_b, {\cal E}_i) = \operatorname{div}^\top \langle B_{| V}, G \rangle -\langle B_{| V}, \operatorname{div}^\top G \rangle, \end{eqnarray*} because \[ \langle B_{| V}, \operatorname{div} G \rangle = \frac{1}{2} \sum \big(\langle\nabla_a {\Theta}^{\wedge*}_i E_b, E_a\rangle + \langle\nabla_a {\Theta}^*_i E_b, E_a\rangle\big) B(E_b, {\cal E}_i) .
\] We also have \begin{eqnarray*} && -\sum\frac{1}{2}\big( \langle\Theta_a {\cal E}_i +\Theta_i E_a, E_b\rangle +\langle\Theta_b {\cal E}_i +\Theta_i E_b, E_a \rangle\big) B( h(E_a, E_b ), {\cal E}_i) \\ && = -\frac{1}{2} \sum B({\cal E}_i, {\cal E}_j) \langle{\cal E}_j, h(E_a,E_b)\rangle ( \langle\Theta_a {\cal E}_i +\Theta_i E_a, E_b\rangle +\langle\Theta_b {\cal E}_i +\Theta_i E_b, E_a \rangle),\\ && \sum \frac{1}{2}\big(\langle\Theta_a {\cal E}_i +\Theta_i E_a, E_b\rangle +\langle\Theta_b {\cal E}_i +\Theta_i E_b, E_a\rangle\big) B(\nabla_i E_a, E_b) \\ && = -\frac{1}{2} \sum B({\cal E}_j, E_b) \langle({\tilde A}- {\tilde T}^\sharp )_a {\cal E}_j, {\cal E}_i \rangle (\langle\Theta_a {\cal E}_i +\Theta_i E_a, E_b\rangle +\langle\Theta_b {\cal E}_i +\Theta_i E_b, E_a \rangle). \end{eqnarray*} Now we consider other terms of $\partial_t \langle\Theta, A \rangle$. For the fifth term we have \begin{equation*} \sum \langle\Theta_a {\cal E}_i +\Theta_i E_a, E_b\rangle \langle h (E_a, E_b), \partial_t {\cal E}_i\rangle = -\frac{1}{2} \sum B({\cal E}_i, {\cal E}_j) \langle\Theta_a {\cal E}_i +\Theta_i E_a, E_b\rangle \langle h (E_a, E_b), {\cal E}_j\rangle. \end{equation*} For the first, second and third terms we have \begin{eqnarray*} &&\hskip-7mm \sum B(\Theta_a {\cal E}_i +\Theta_i E_a, E_b) \langle h (E_a, E_b), {\cal E}_i\rangle = \sum B({\cal E}_j, E_b) \langle\Theta_a {\cal E}_i +\Theta_i E_a, {\cal E}_j\rangle \<h (E_a, E_b), {\cal E}_i\rangle ,\\ &&\hskip-7mm \sum \langle\Theta_a {\cal E}_i +\Theta_i E_a, E_b\rangle B( h (E_a, E_b), {\cal E}_i) = \sum B({\cal E}_i, {\cal E}_j) \langle\Theta_a {\cal E}_i +\Theta_i E_a, E_b\rangle \langle h (E_a, E_b), {\cal E}_j\rangle,\\ &&\hskip-7mm \langle\Theta_a (\partial_t {\cal E}_i) {+}\Theta_{\partial_t {\cal E}_i} E_a, E_b\rangle {=} -\frac{1}{2} \sum B({\cal E}_i, {\cal E}_j) \langle\Theta_a {\cal E}_j {+}\Theta_j E_a, E_b\rangle {-}\sum B({\cal E}_i, E_c) \langle\Theta_a E_c {+}\Theta_c E_a, E_b\rangle . 
\end{eqnarray*} Using \eqref{Eq-dt-Theta}, we have \begin{eqnarray*} && \sum \langle(\partial_t \Theta)_a {\cal E}_i + (\partial_t \Theta)_i E_a, E_b\rangle = \sum\big[-2 B({\cal E}_i, E_c) \langle\mathfrak{T}^*_a E_c, E_b\rangle -2 B({\cal E}_i, {\cal E}_j) \langle\mathfrak{T}^*_a {\cal E}_j, E_b\rangle \\ && + 2 B({\cal E}_j, E_b) \langle\mathfrak{T}^*_a {\cal E}_i, {\cal E}_j\rangle -2 B({\cal E}_j, E_a) \langle\mathfrak{T}^{*}_{i} {\cal E}_j,E_b\rangle + 2 B({\cal E}_j, E_b) \langle\mathfrak{T}^{*}_i {E}_a, {\cal E}_j\rangle\big] . \end{eqnarray*} Hence, for the sixth term of $\partial_t \langle \Theta, A \rangle$, we have \begin{eqnarray*} && \sum \langle (\partial_t \Theta)_a {\cal E}_i + (\partial_t \Theta)_i E_a, E_b\rangle \langle h (E_a, E_b), {\cal E}_i\rangle = \sum[ -2 B({\cal E}_i, E_c) \langle h (E_a, E_b), {\cal E}_i\rangle \langle\mathfrak{T}^*_a E_c, E_b\rangle \\ && -2 B({\cal E}_i, {\cal E}_j) \langle h (E_a, E_b), {\cal E}_i\rangle \langle\mathfrak{T}^*_a {\cal E}_j, E_b\rangle + 2 B({\cal E}_j, E_b) \langle h (E_a, E_b), {\cal E}_i\rangle \langle\mathfrak{T}^*_a {\cal E}_i, {\cal E}_j\rangle \\ && -2 B({\cal E}_j, E_a) \langle h (E_a, E_b), {\cal E}_i\rangle \langle\mathfrak{T}^{*}_{i} {\cal E}_j,E_b\rangle + 2 B({\cal E}_j, E_b) \langle h (E_a, E_b), {\cal E}_i\rangle \langle\mathfrak{T}^{*}_i {E}_a, {\cal E}_j\rangle \big]. \end{eqnarray*} So finally we get \eqref{dtThetaA}. \textbf{Proof of \eqref{dtThetaT}}. 
We have \begin{equation*} \langle\Theta, T^\sharp \rangle = \sum \langle\Theta_a {\cal E}_i +\Theta_i E_a, E_b\rangle \langle T (E_a, E_b), {\cal E}_i \rangle, \end{equation*} thus \begin{eqnarray*} && \partial_t \langle \Theta, T^\sharp \rangle = \sum \big[ B(\Theta_a {\cal E}_i +\Theta_i E_a, E_b) \langle T (E_a, E_b), {\cal E}_i\rangle \\ && + \langle\Theta_a {\cal E}_i +\Theta_i E_a, E_b\rangle B( T (E_a, E_b), {\cal E}_i) + \langle\Theta_a (\partial_t {\cal E}_i) +\Theta_{\partial_t {\cal E}_i} E_a, E_b\rangle \<T(E_a, E_b), {\cal E}_i\rangle \\ && + \langle\Theta_a {\cal E}_i +\Theta_i E_a, E_b\rangle \langle T (E_a, E_b), \partial_t {\cal E}_i\rangle + \langle(\partial_t \Theta)_a {\cal E}_i + (\partial_t \Theta)_i E_a, E_b\rangle \langle T (E_a, E_b), {\cal E}_i\rangle \big], \end{eqnarray*} because $\partial_t T =0$. We compute 5 terms above separately: \begin{eqnarray*} && \sum B(\Theta_a {\cal E}_i +\Theta_i E_a, E_b) \langle T (E_a, E_b), {\cal E}_i\rangle = \sum B({\cal E}_j, E_b) \langle\Theta_a {\cal E}_i +\Theta_i E_a, {\cal E}_j\rangle \langle T (E_a, E_b), {\cal E}_i\rangle ,\\ &&\sum \langle\Theta_a {\cal E}_i +\Theta_i E_a, E_b\rangle B( T (E_a, E_b), {\cal E}_i) = \sum B({\cal E}_i, {\cal E}_j) \langle\Theta_a {\cal E}_j +\Theta_j E_a, E_b\rangle \langle T (E_a, E_b), {\cal E}_i\rangle ,\\ && \sum \langle\Theta_a (\partial_t {\cal E}_i) +\Theta_{\partial_t{\cal E}_i} E_a, E_b\rangle \langle T (E_a, E_b), {\cal E}_i\rangle = -\frac{1}{2} \sum B({\cal E}_i, {\cal E}_j) \langle\Theta_a {\cal E}_j +\Theta_j E_a, E_b\rangle \<T (E_a, E_b), {\cal E}_i\rangle \\ && -\sum B({\cal E}_i, E_b) \langle\Theta_a E_b +\Theta_b E_a, E_c\rangle \<T (E_a, E_c), {\cal E}_i\rangle ,\\ && \sum \langle\Theta_a {\cal E}_i +\Theta_i E_a, E_b\rangle \langle T (E_a, E_b), \partial_t {\cal E}_i\rangle = -\frac{1}{2}\sum B({\cal E}_i, {\cal E}_j) \langle\Theta_a {\cal E}_i +\Theta_i E_a, E_b\rangle\langle T (E_a, E_b), {\cal E}_j\rangle ,\\ && \sum \langle (\partial_t \Theta)_a {\cal E}_i + (\partial_t
\Theta)_i E_a, E_b\rangle\langle T (E_a, E_b), {\cal E}_i\rangle = \sum \big[ -2 B({\cal E}_i, E_c) \langle\mathfrak{T}^*_a E_c, E_b\rangle\langle T (E_a, E_b), {\cal E}_i\rangle \\ && -2 B({\cal E}_i, {\cal E}_j) \langle\mathfrak{T}^*_a {\cal E}_j, E_b\rangle\langle T (E_a, E_b), {\cal E}_i\rangle + 2 B({\cal E}_j, E_b) \langle\mathfrak{T}^*_a {\cal E}_i, {\cal E}_j\rangle\langle T (E_a, E_b), {\cal E}_i\rangle \\ && -2 B({\cal E}_j, E_a) \langle\mathfrak{T}^{*}_{i} {\cal E}_j,E_b\rangle\langle T (E_a, E_b), {\cal E}_i\rangle + 2 B({\cal E}_j, E_b) \langle\mathfrak{T}^{*}_i {E}_a, {\cal E}_j\rangle\langle T (E_a, E_b), {\cal E}_i\rangle \big]. \end{eqnarray*} Finally, we get \eqref{dtThetaT}. \textbf{Proof of \eqref{dtThetatildeT}}. We have $\langle \Theta, {\tilde T}^\sharp \rangle = \sum \langle\Theta_a {\cal E}_i +\Theta_i E_a, {\cal E}_j\rangle \langle{\tilde T}({\cal E}_i, {\cal E}_j ), E_a\rangle$. Now we compute \begin{eqnarray}\label{Enew-6terms} \nonumber && \partial_t \langle \Theta, {\tilde T}^\sharp \rangle = \sum \big[ B(\Theta_a {\cal E}_i +\Theta_i E_a, {\cal E}_j) \langle{\tilde T}({\cal E}_i, {\cal E}_j ), E_a\rangle + \langle\Theta_a {\cal E}_i +\Theta_i E_a, {\cal E}_j\rangle B({\tilde T}({\cal E}_i, {\cal E}_j ), E_a) \\ \nonumber && + \langle\Theta_a (\partial_t {\cal E}_i) +\Theta_{\partial_t {\cal E}_i} E_a, {\cal E}_j\rangle \langle{\tilde T}({\cal E}_i, {\cal E}_j ), E_a\rangle + \langle\Theta_a {\cal E}_i +\Theta_i E_a, \partial_t {\cal E}_j\rangle \langle{\tilde T}({\cal E}_i, {\cal E}_j ), E_a\rangle \\ && + \langle\Theta_a {\cal E}_i +\Theta_i E_a, {\cal E}_j\rangle \langle\partial_t {\tilde T}({\cal E}_i, {\cal E}_j ), E_a\rangle + \langle (\partial_t \Theta)_a {\cal E}_i + (\partial_t \Theta)_i E_a, {\cal E}_j\rangle \langle{\tilde T}({\cal E}_i, {\cal E}_j ), E_a\rangle \big]. 
\end{eqnarray} Let $U : \mD \times \mD \rightarrow \widetilde{\mD}$ be a $(1,2)$-tensor, given by $\<U_i {\cal E}_j, E_a\rangle=\frac{1}{2}\,(\langle\Theta_a{\cal E}_i +\Theta_i E_a, {\cal E}_j\rangle-\langle\Theta_a{\cal E}_j +\Theta_j E_a, {\cal E}_i\rangle)$. We compute the fifth term in $\partial_t \langle \Theta, {\tilde T}^\sharp \rangle$: \begin{eqnarray*} &&\hskip-4mm 2 \sum\langle\Theta_a {\cal E}_i +\Theta_i E_a, {\cal E}_j\rangle \langle\partial_t {\tilde T}({\cal E}_i, {\cal E}_j ), E_a\rangle = 2 \sum \<U_i {\cal E}_j, E_a\rangle \langle\partial_t {\tilde T}({\cal E}_i, {\cal E}_j ), E_a\rangle \\ && =\sum \<U_i {\cal E}_j, E_a\rangle \big( 2\langle{\tilde T}(-\frac{1}{2}(B^\sharp {\cal E}_i )^\perp, {\cal E}_j ), E_a\rangle + 2\langle{\tilde T}( {\cal E}_i, -\frac{1}{2}(B^\sharp {\cal E}_j )^\perp ), E_a\rangle \\ && +\langle \nabla_{(B^\sharp {\cal E}_j)^\top} {\cal E}_i -\nabla_{(B^\sharp {\cal E}_i )^\top} {\cal E}_j , E_a \rangle +\langle \nabla_{j}((B^\sharp {\cal E}_i )^\top) -\nabla_{i}((B^\sharp {\cal E}_j )^\top) , E_a \rangle \big),\\ &&\hskip-4mm \sum \<U_i {\cal E}_j, E_a\rangle \langle{\tilde T}(-( B^\sharp {\cal E}_i )^\perp, {\cal E}_j ), E_a\rangle = -\sum B({\cal E}_i, {\cal E}_k) \<U_i {\cal E}_j, {\tilde T}({\cal E}_k, {\cal E}_j )\rangle\\ && = -\frac{1}{2} \sum B({\cal E}_i, {\cal E}_k) \big(\langle\Theta_a {\cal E}_i +\Theta_i E_a, {\cal E}_j\rangle -\langle\Theta_a {\cal E}_j +\Theta_j E_a, {\cal E}_i \rangle\big) \<E_a, {\tilde T}({\cal E}_k, {\cal E}_j )\rangle, \\ &&\hskip-4mm -\sum \<U_i {\cal E}_j, E_a\rangle \langle\nabla_{(B^\sharp {\cal E}_i )^\top} {\cal E}_j, E_a\rangle = \sum B({\cal E}_i, E_b) \langle U_i {\cal E}_j,(A + T^\sharp )_j E_b\rangle \\ && = \frac{1}{2} \sum B({\cal E}_i, E_b)\big(\langle\Theta_a {\cal E}_i +\Theta_i E_a, {\cal E}_j\rangle -\langle\Theta_a {\cal E}_j +\Theta_j E_a, {\cal E}_i\rangle\big)\<E_a,(A + T^\sharp )_j E_b\rangle,\\ &&\hskip-4mm -\sum \<U_i {\cal E}_j, E_a\rangle 
\langle\nabla_{i}((B^\sharp {\cal E}_j )^\top), E_a\rangle = \sum \big[\langle\nabla_i(B({\cal E}_j, E_a ) U^*_j E_a) , {\cal E}_i\rangle - B({\cal E}_j, E_a ) \langle\nabla_i U^*_j E_a, {\cal E}_i \rangle \big], \end{eqnarray*} where $\<U_j {\cal E}_i, E_a\rangle = \<U^*_j E_a, {\cal E}_i \rangle$. Note that \begin{equation*} \<U^*_j E_a, {\cal E}_i\rangle = \frac{1}{2}\,\langle\Theta_a^* {\cal E}_j +\Theta^{\wedge*}_a {\cal E}_j -\Theta_a {\cal E}_j -\Theta^\wedge_a {\cal E}_j, {\cal E}_i\rangle, \end{equation*} thus, using (1,2)-tensor $F$ defined in \eqref{formulaF}, we can write \begin{equation*} -\sum \<U_i {\cal E}_j, E_a\rangle \langle\nabla_{i}((B^\sharp {\cal E}_j )^\top), E_a\rangle = \operatorname{div}^\perp (\langle B_{|V}, F \rangle) -\langle B_{| V}, \operatorname{div}^\perp F \rangle . \end{equation*} For the first four terms of $\partial_t \langle \Theta, {\tilde T}^\sharp \rangle$, see \eqref{Enew-6terms}, we obtain: \begin{eqnarray*} && B(\Theta_a {\cal E}_i +\Theta_i E_a, {\cal E}_j) = B({\cal E}_j, {\cal E}_k)\langle\Theta_a {\cal E}_i +\Theta_i E_a, {\cal E}_k\rangle +B({\cal E}_j, E_b) \langle\Theta_a {\cal E}_i +\Theta_i E_a, E_b\rangle ,\\ && \sum \langle\Theta_a {\cal E}_i +\Theta_i E_a, {\cal E}_j\rangle B({\tilde T}({\cal E}_i, {\cal E}_j ), E_a) = \sum B(E_a, E_b)\langle\Theta_a {\cal E}_i +\Theta_i E_a, {\cal E}_j\rangle \langle{\tilde T}({\cal E}_i, {\cal E}_j ), E_b\rangle = 0,\\ && \langle\Theta_a (\partial_t {\cal E}_i) +\Theta_{\partial_t {\cal E}_i} E_a, {\cal E}_j\rangle = -\frac{1}{2}\,B({\cal E}_i, {\cal E}_k) \langle\Theta_a {\cal E}_k +\Theta_k E_a, {\cal E}_j\rangle -B({\cal E}_i, E_b) \langle\Theta_a E_b +\Theta_b E_a, {\cal E}_j\rangle ,\\ && \langle\Theta_a {\cal E}_i +\Theta_i E_a, \partial_t {\cal E}_j\rangle = -\frac{1}{2}\,B({\cal E}_j, {\cal E}_k) \langle\Theta_a {\cal E}_i +\Theta_i E_a, {\cal E}_k\rangle -B({\cal E}_j, E_b) \langle\Theta_a {\cal E}_i +\Theta_i E_a, E_b\rangle . 
\end{eqnarray*} Using \eqref{Eq-dt-Theta}, we consider \begin{eqnarray*} && \langle (\partial_t \Theta)_a {\cal E}_i + (\partial_t \Theta)_i E_a, {\cal E}_j\rangle =\langle-\mathfrak{T}^*_a B^\sharp{\cal E}_i + B^\sharp\mathfrak{T}^*_a {\cal E}_i -\mathfrak{T}^{*}_i\,B^\sharp E_a + B^\sharp\mathfrak{T}^{*}_i {E}_a, {\cal E}_j\rangle \\ && +\langle-\mathfrak{T}^*_i B^\sharp E_a + B^\sharp\mathfrak{T}^*_i E_a -\mathfrak{T}^{*}_a\,B^\sharp{\cal E}_i + B^\sharp\mathfrak{T}^{*}_a {\cal E}_i, {\cal E}_j \rangle, \end{eqnarray*} which can be simplified to the following: \begin{eqnarray*} && \langle(\partial_t\Theta)_a {\cal E}_i + (\partial_t \Theta)_i E_a, {\cal E}_j\rangle = -2\sum\nolimits_{\,k} B({\cal E}_i, {\cal E}_k ) \langle\mathfrak{T}^*_a {\cal E}_k, {\cal E}_j\rangle \\ && -2\sum\nolimits_{\,b} B({\cal E}_i, E_b ) \langle\mathfrak{T}^*_a E_b, {\cal E}_j\rangle + 2\sum\nolimits_{\,k} B({\cal E}_k , {\cal E}_j)\langle\mathfrak{T}^*_a {\cal E}_i, {\cal E}_k\rangle + 2\sum\nolimits_{\,b} B({\cal E}_j, E_b) \langle\mathfrak{T}^*_a {\cal E}_i, E_b\rangle \\ && -2\sum\nolimits_{\,k} B({\cal E}_k, E_a) \langle\mathfrak{T}^{*}_{i} {\cal E}_k, {\cal E}_j\rangle + 2\sum\nolimits_{\,b} B({\cal E}_j, E_b) \langle\mathfrak{T}^{*}_i {E}_a, E_b\rangle + 2\sum\nolimits_{\,k} B({\cal E}_k, {\cal E}_j) \langle\mathfrak{T}^{*}_i {E}_a, {\cal E}_k\rangle . 
\end{eqnarray*} Hence, the sixth term in $\partial_t \langle \Theta, {\tilde T}^\sharp \rangle$ is: \begin{eqnarray*} && \sum \langle (\partial_t \Theta)_a {\cal E}_i + (\partial_t \Theta)_i E_a, {\cal E}_j\rangle \langle{\tilde T}({\cal E}_i, {\cal E}_j ), E_a\rangle = \sum\big[ -2 B({\cal E}_i, {\cal E}_k ) \langle{\tilde T}({\cal E}_i, {\cal E}_j ), E_a\rangle \langle\mathfrak{T}^*_a {\cal E}_k, {\cal E}_j\rangle \\ && -2 B({\cal E}_i, E_b ) \langle{\tilde T}({\cal E}_i, {\cal E}_j ), E_a\rangle \langle\mathfrak{T}^*_a E_b, {\cal E}_j\rangle + 2 B({\cal E}_k , {\cal E}_j) \langle{\tilde T}({\cal E}_i, {\cal E}_j ), E_a\rangle \langle\mathfrak{T}^*_a {\cal E}_i, {\cal E}_k\rangle \\ && +2 B({\cal E}_j, E_b) \langle{\tilde T}({\cal E}_i, {\cal E}_j ), E_a\rangle \langle\mathfrak{T}^*_a {\cal E}_i, E_b\rangle -2 B({\cal E}_k, E_a) \langle{\tilde T}({\cal E}_i, {\cal E}_j ), E_a\rangle \langle\mathfrak{T}^{*}_{i} {\cal E}_k, {\cal E}_j\rangle \\ && +2 B({\cal E}_j, E_b) \langle{\tilde T}({\cal E}_i, {\cal E}_j ), E_a\rangle \langle\mathfrak{T}^{*}_i {E}_a, E_b\rangle +2 B({\cal E}_k, {\cal E}_j) \langle{\tilde T}({\cal E}_i, {\cal E}_j ), E_a\rangle \langle\mathfrak{T}^{*}_i {E}_a, {\cal E}_k\rangle \big]. \end{eqnarray*} Finally, we get \eqref{dtThetatildeT}. \textbf{Proof of \eqref{dtThetatildeA}}. We have \begin{equation*} \langle \Theta, {\tilde A} \rangle = \sum \langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle \langle{\tilde h}({\cal E}_i, {\cal E}_j ), E_a\rangle. 
\end{equation*} Hence \begin{eqnarray*} && \partial_t \langle\Theta, {\tilde A} \rangle = \sum \big[ B(\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j) \langle{\tilde h}({\cal E}_i, {\cal E}_j ), E_a\rangle +\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle B({\tilde h}({\cal E}_i, {\cal E}_j ), E_a) \\ && +\langle\Theta_{\partial_t {\cal E}_i} E_a +\Theta_a (\partial_t {\cal E}_i ), {\cal E}_j\rangle \langle{\tilde h}({\cal E}_i, {\cal E}_j ), E_a\rangle +\langle\Theta_i E_a +\Theta_a {\cal E}_i, \partial_t {\cal E}_j\rangle \langle{\tilde h}({\cal E}_i, {\cal E}_j ), E_a\rangle \\ && +\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle \langle\partial_t {\tilde h}({\cal E}_i, {\cal E}_j ), E_a\rangle +\langle(\partial_t \Theta)_i E_a + (\partial_t \Theta)_a {\cal E}_i, {\cal E}_j\rangle \langle{\tilde h}({\cal E}_i, {\cal E}_j ), E_a\rangle \big]. \end{eqnarray*} We shall denote by (h) the fifth of the above 6 terms, and write it as sum of seven terms (h1) to (h7): \begin{eqnarray*} && \sum \langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle \langle\partial_t {\tilde h}({\cal E}_i, {\cal E}_j ), E_a\rangle \\ && = \sum\big[ -\frac{1}{2}(\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle +\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_i \rangle)\nabla_{a} B({\cal E}_i, {\cal E}_j)\\ && -\frac{1}{2}(\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle +\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_i \rangle) \langle{\tilde h}( B^\sharp {\cal E}_i, {\cal E}_j) + {\tilde h}({\cal E}_i, B^\sharp {\cal E}_j ), E_a\rangle \\ && -\frac{1}{2}\big(\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle +\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_i \rangle\big) \langle\nabla_{i}((B^\sharp {\cal E}_j )^\top) +\nabla_{j}((B^\sharp {\cal E}_i )^\top ), E_a\rangle \\ && -\frac{1}{2}(\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle +\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_i \rangle) 
\langle\nabla_{( B^\sharp {\cal E}_j )^\top} {\cal E}_i +\nabla_{(B^\sharp {\cal E}_i )^\top} {\cal E}_j, E_a\rangle \\ && +\frac{1}{2}(\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle +\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_i\rangle) (\nabla_{i} B({\cal E}_j,E_a) +\nabla_{j} B({\cal E}_i, E_a) )\\ && -\frac{1}{2}(\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle +\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_i \rangle) (B(\nabla_{i} E_a, {\cal E}_j) + B(\nabla_{j} E_a, {\cal E}_i )) \\ && +\frac{1}{2}(\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle +\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_i \rangle) ( B(\nabla_{a} {\cal E}_i, {\cal E}_j ) + B(\nabla_{a} {\cal E}_j, {\cal E}_i )) \big]. \end{eqnarray*} We have for the term (h1) above: \begin{eqnarray*} && -\frac{1}{2} \sum(\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle +\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_i \rangle) \nabla_{a} B({\cal E}_i, {\cal E}_j )\\ && = \sum B({\cal E}_i, {\cal E}_j) \langle\nabla_a(\Theta^*_i {\cal E}_j +\Theta^{\wedge*}_i {\cal E}_j), E_a \rangle -\sum \langle\nabla_a\big(B({\cal E}_i, {\cal E}_j) (\Theta^*_i {\cal E}_j +\Theta^{\wedge*}_i {\cal E}_j) \big), E_a\rangle, \end{eqnarray*} which can be written as \begin{eqnarray*} && -\frac{1}{2} \sum(\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle +\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_i \rangle) \nabla_{a} B({\cal E}_i, {\cal E}_j )\\ && = -2 \operatorname{div}^\top \langle B, L \rangle + 2 \langle B, \operatorname{div}^\top L \rangle . 
\end{eqnarray*} For (h2): \begin{eqnarray*} && -\frac{1}{2} \sum(\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle +\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_i \rangle) \langle{\tilde h}( B^\sharp {\cal E}_i, {\cal E}_j) + {\tilde h}({\cal E}_i, B^\sharp {\cal E}_j ), E_a\rangle \\ && = -\sum B({\cal E}_i, {\cal E}_k) (\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle +\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_i \rangle) \langle{\tilde h}({\cal E}_k, {\cal E}_j ), E_a\rangle. \end{eqnarray*} Note that for (h3) we can assume $\nabla_X E_a \in \mD$ for all $X \in TM$ at the point where we compute the formula, and hence \begin{eqnarray*} && -\frac{1}{2} \sum(\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle +\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_i \rangle) \langle\nabla_{i}((B^\sharp {\cal E}_j )^\top) +\nabla_{j}((B^\sharp {\cal E}_i )^\top ), E_a\rangle \\ && = -\sum(\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle +\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_i \rangle) \nabla_{i} B( E_a, {\cal E}_j). \end{eqnarray*} For (h5), analogously, \begin{eqnarray*} && \frac{1}{2} \sum(\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle +\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_i \rangle) (\nabla_{i} B({\cal E}_j,E_a) +\nabla_{j} B({\cal E}_i, E_a))\\ && = \sum(\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle +\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_i \rangle) \nabla_{i} B({\cal E}_j, E_a), \end{eqnarray*} so (h3)+(h5)=0. 
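Indeed, the expressions obtained for (h3) and (h5) coincide up to sign: since $B$ is a symmetric $(0,2)$-tensor, $\nabla_{i} B(E_a, {\cal E}_j) = \nabla_{i} B({\cal E}_j, E_a)$, so
\begin{equation*}
\mathrm{(h3)} = -\sum(\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle +\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_i \rangle)\, \nabla_{i} B({\cal E}_j, E_a) = -\,\mathrm{(h5)} .
\end{equation*}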
For (h4) we have \begin{eqnarray*} && -\frac{1}{2} \sum(\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle +\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_i \rangle) \langle\nabla_{( B^\sharp {\cal E}_j )^\top} {\cal E}_i +\nabla_{(B^\sharp {\cal E}_i )^\top} {\cal E}_j, E_a\rangle \\ && = \sum B({\cal E}_j,E_b) (\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle +\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_i \rangle) \langle (A_i + T^\sharp_i) E_b, E_a\rangle. \end{eqnarray*} For (h6) term we have \begin{eqnarray*} && -\frac{1}{2} \sum(\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle +\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_i \rangle) ( B(\nabla_{i} E_a, {\cal E}_j) + B(\nabla_{j} E_a, {\cal E}_i ))\\ && = \sum B({\cal E}_k, {\cal E}_j) (\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle +\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_i \rangle) \langle({\tilde A}_a -{\tilde T}^\sharp_a) {\cal E}_k, {\cal E}_i\rangle, \end{eqnarray*} and (h7) term can be written as \begin{eqnarray*} && \frac{1}{2}\sum(\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle +\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_i \rangle) (B(\nabla_{a} {\cal E}_i, {\cal E}_j ) + B(\nabla_{a} {\cal E}_j, {\cal E}_i )) \\ && = -\sum B(E_b, {\cal E}_i) (\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle +\langle\Theta_j E_a +\Theta_a {\cal E}_j, {\cal E}_i\rangle) \langle (A_j - T^\sharp_j) E_b, E_a\rangle . \end{eqnarray*} Now we compute other terms of $\partial_t \langle \Theta, {\tilde A} \rangle$. 
Recall that those 6 terms are \begin{eqnarray*} && \partial_t\langle\Theta, {\tilde A}\rangle= \sum\big[ B(\Theta_i E_a+\Theta_a {\cal E}_i,{\cal E}_j)\langle{\tilde h}({\cal E}_i,{\cal E}_j),E_a\rangle \\ && +\,\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle B({\tilde h}({\cal E}_i, {\cal E}_j ), E_a) +\langle\Theta_{\partial_t {\cal E}_i} E_a +\Theta_a (\partial_t {\cal E}_i ), {\cal E}_j\rangle \langle{\tilde h}({\cal E}_i, {\cal E}_j ), E_a\rangle \\ && +\,\langle\Theta_i E_a +\Theta_a {\cal E}_i, \partial_t {\cal E}_j\rangle \langle{\tilde h}({\cal E}_i, {\cal E}_j ), E_a\rangle +\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle \langle\partial_t {\tilde h}({\cal E}_i, {\cal E}_j ), E_a\rangle \\ && +\,\langle (\partial_t \Theta)_i E_a + (\partial_t \Theta)_a {\cal E}_i, {\cal E}_j\rangle \langle{\tilde h}({\cal E}_i, {\cal E}_j ), E_a\rangle \big]. \end{eqnarray*} For the first and second terms of the above $\partial_t \langle \Theta, {\tilde A} \rangle$ we have \begin{eqnarray*} && B(\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j) =\sum B({\cal E}_j, {\cal E}_k)\langle\Theta_i E_a +\Theta_a {\cal E}_i,{\cal E}_k\rangle + \sum B({\cal E}_j, E_b) \langle\Theta_i E_a +\Theta_a {\cal E}_i, E_b\rangle ,\\ && \sum \langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_j\rangle B({\tilde h}({\cal E}_i, {\cal E}_j ), E_a) = 0, \end{eqnarray*} because $B=0$ on $\widetilde{\mD} \times \widetilde{\mD}$. 
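For the reader's convenience, the rules used repeatedly for the terms involving $\partial_t {\cal E}_i$ are the product rule $\partial_t \langle X, Y\rangle = B(X,Y) +\langle\partial_t X, Y\rangle +\langle X, \partial_t Y\rangle$ and the evolution of the orthonormal $\mD$-frame, as encoded by the $B$-coefficients in the expansions above:
\begin{equation*}
\partial_t {\cal E}_i = -\frac{1}{2} \sum\nolimits_{\,k} B({\cal E}_i, {\cal E}_k)\, {\cal E}_k -\sum\nolimits_{\,b} B({\cal E}_i, E_b)\, E_b .
\end{equation*}
One checks directly that this evolution preserves $\langle{\cal E}_i, {\cal E}_j\rangle = \delta_{ij}$ and $\langle{\cal E}_i, E_a\rangle = 0$ along the variation.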
For the third and fourth terms we have: \begin{eqnarray*} && \langle\Theta_{\partial_t {\cal E}_i} E_a {+}\Theta_a (\partial_t {\cal E}_i ), {\cal E}_j\rangle = \sum[-\frac{1}{2}\,B({\cal E}_i, {\cal E}_k)\langle\Theta_k E_a +\Theta_a{\cal E}_k, {\cal E}_j\rangle -B({\cal E}_i, E_b) \langle\Theta_b E_a +\Theta_a E_b, {\cal E}_j\rangle ],\\ && \langle\Theta_i E_a +\Theta_a {\cal E}_i, \partial_t {\cal E}_j\rangle = \sum\big[-\frac{1}{2}\,B({\cal E}_j, {\cal E}_k)\langle\Theta_i E_a +\Theta_a {\cal E}_i, {\cal E}_k\rangle -B({\cal E}_j, E_b) \langle\Theta_i E_a +\Theta_a {\cal E}_i, E_b\rangle \big]. \end{eqnarray*} For the sixth term, note that \begin{eqnarray*} && \sum \langle(\partial_t\Theta)_a\, {\cal E}_i + (\partial_t\Theta)_i E_a, {\cal E}_j\rangle \langle{\tilde h}({\cal E}_i, {\cal E}_j), E_a\rangle = \sum\big[ -2 B({\cal E}_i, {\cal E}_k ) \langle{\tilde h}({\cal E}_i, {\cal E}_j ), E_a\rangle \langle\mathfrak{T}^*_a {\cal E}_k, {\cal E}_j\rangle \\ && -\,2 B({\cal E}_i, E_b ) \langle{\tilde h}({\cal E}_i, {\cal E}_j ), E_a\rangle \langle\mathfrak{T}^*_a E_b, {\cal E}_j\rangle + 2 B({\cal E}_k , {\cal E}_j) \langle{\tilde h}({\cal E}_i, {\cal E}_j ), E_a\rangle \langle\mathfrak{T}^*_a {\cal E}_i, {\cal E}_k\rangle \\ && +\,2 B({\cal E}_j, E_b) \langle{\tilde h}({\cal E}_i, {\cal E}_j ), E_a\rangle \langle\mathfrak{T}^*_a {\cal E}_i, E_b\rangle -2 B({\cal E}_k, E_a) \langle{\tilde h}({\cal E}_i, {\cal E}_j ), E_a\rangle \langle\mathfrak{T}^{*}_{i} {\cal E}_k, {\cal E}_j\rangle \\ && +\,2 B({\cal E}_j, E_b) \langle{\tilde h}({\cal E}_i, {\cal E}_j ), E_a\rangle \langle\mathfrak{T}^{*}_i {E}_a, E_b\rangle + 2 B({\cal E}_k, {\cal E}_j) \langle{\tilde h}({\cal E}_i, {\cal E}_j ), E_a\rangle \langle\mathfrak{T}^{*}_i {E}_a, {\cal E}_k\rangle \big]. \end{eqnarray*} Finally, we get \eqref{dtThetatildeA}. \textbf{Proof of \eqref{dttracetopI} and \eqref{dttraceperpI}} is straightforward. \textbf{Proof of \eqref{dtIEaH} and \eqref{dtIeiH}}. 
The variation formulas for these terms appear in the following part of $Q$ in \eqref{E-defQ}: \begin{eqnarray*} && -\langle\operatorname{Tr\,}^\top\mathfrak{T} -\operatorname{Tr\,}^\bot\mathfrak{T} +\operatorname{Tr\,}^\bot\mathfrak{T}^* -\operatorname{Tr\,}^\top\mathfrak{T}^*,\, {\tilde H} - H\rangle \\ && = \langle \operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T}), {\tilde H} - H\rangle +\langle\operatorname{Tr\,}^\bot(\mathfrak{T}^* -\mathfrak{T}), {\tilde H} - H\rangle . \end{eqnarray*} We have \begin{eqnarray*} && \partial_t \langle\, \operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T}), {\tilde H} - H\rangle = B( \operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T}), {\tilde H} - H) \\ && +\sum \langle (\partial_t \mathfrak{T}^*)_a E_a, {\tilde H} - H\rangle +\langle \operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T}), \partial_t {\tilde H}\rangle - \langle \operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T}), \partial_t H\rangle ,\\ && B( \operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T}), {\tilde H} - H) = \sum B({\cal E}_i, E_b) \big(\langle{\tilde H}, E_b\rangle \langle \operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T}), {\cal E}_i\rangle \\ && - \<H, {\cal E}_i\rangle \langle \operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T}), E_b\rangle \big) -\sum B({\cal E}_i, {\cal E}_j) \<H, {\cal E}_j\rangle \langle \operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T}), {\cal E}_i\rangle . 
\end{eqnarray*} Then we have \begin{eqnarray*} && \sum \langle (\partial_t \mathfrak{T}^*)_a E_a, {\tilde H} - H\rangle = \sum B({\cal E}_i, E_b) \big(\langle\mathfrak{T}^*_b {\cal E}_i, {\tilde H} - H\rangle + \langle\operatorname{Tr\,}^\top\mathfrak{T}^*, E_b\rangle \langle{\cal E}_i, H\rangle \\ && - \langle\operatorname{Tr\,}^\top\mathfrak{T}^*, {\cal E}_i\rangle \<E_b, {\tilde H}\rangle\big) +\sum B({\cal E}_i, {\cal E}_j) \langle\operatorname{Tr\,}^\top\mathfrak{T}^*, {\cal E}_j\rangle \langle{\cal E}_i, H\rangle, \\ && \sum \langle (\partial_t \mathfrak{T}^*)_i {\cal E}_i, {\tilde H} - H\rangle = \sum B({\cal E}_i, {\cal E}_j) \big( \langle\mathfrak{T}^*_j {\cal E}_i, {\tilde H} - H\rangle + \langle\operatorname{Tr\,}^\bot\mathfrak{T}^*, {\cal E}_i\rangle \langle H, {\cal E}_j\rangle \big)\\ && +\sum B({\cal E}_i, E_b) \big(\langle\mathfrak{T}^*_i E_b, {\tilde H} - H \rangle + \langle\operatorname{Tr\,}^\bot\mathfrak{T}^*, E_b\rangle \langle H, {\cal E}_i\rangle - \langle\operatorname{Tr\,}^\bot\mathfrak{T}^*, {\cal E}_i\rangle \langle{\tilde H}, E_b\rangle \big) . \end{eqnarray*} Next, we shall use equations (20) and (21) from \cite{rz-2}: \begin{eqnarray*} \langle\partial_t\tilde H, X\rangle \hspace*{-2.mm}&=&\hspace*{-2.mm} \langle\,2\langle\theta, X^\top\rangle,\,B\rangle -\frac12\,X^\top(\operatorname{Tr\,}_{\mD}B),\\ \langle\partial_t H, X\rangle \hspace*{-2.mm}&=&\hspace*{-2.mm} \operatorname{div}(B^\sharp (X^\perp) )^\top +\langle B^\sharp (X^\perp), \tilde H\rangle -\<B^\sharp (X^\perp), H\rangle -\langle {\tilde \delta}_{X^\perp}, B \rangle \nonumber \\ && -\langle \langle {\tilde \alpha} -{\tilde \theta}, X^\perp \rangle, B \rangle - B(H, X^\top). 
\end{eqnarray*} We have \begin{eqnarray*} && \langle \operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T}), \partial_t {\tilde H}\rangle = 2 \sum B({\cal E}_i, E_b) \langle T^\sharp_i E_b , \operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T})\rangle \\ && -\frac{1}{2} \sum B({\cal E}_i, {\cal E}_j) \langle{\cal E}_i, {\cal E}_j\rangle \operatorname{div}((\operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T}))^\top) -\operatorname{div} (\frac12\, (\operatorname{Tr\,}_{\mD}B) (\operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T}) )^\top) . \end{eqnarray*} Finally, \begin{eqnarray*} &&\qquad \langle\partial_t H, \operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T})\rangle =\operatorname{div}(( B^\sharp((\operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T}))^\perp) )^\top) \\ && +\sum \big[ B({\cal E}_i, E_b) \langle{\cal E}_i, \operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T})\rangle \langle\tilde H, E_b\rangle - B({\cal E}_i, {\cal E}_j) \<H, {\cal E}_j\rangle \langle\operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T}), {\cal E}_i\rangle \\ && - B({\cal E}_i, E_b) \<H, {\cal E}_i\rangle \langle\operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T}), E_b\rangle - B({\cal E}_i, E_b) \langle\nabla_b((\operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T}) )^\perp ), {\cal E}_i\rangle \\ && - B({\cal E}_i, E_b) \langle({\tilde A}_b -{\tilde T^\sharp}_b){\cal E}_i, \operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T})\rangle - B({\cal E}_i, E_b) \langle H, {\cal E}_i\rangle \langle\operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T}), E_b\rangle \big],\\ && \qquad \langle\partial_t H, \operatorname{Tr\,}^\bot(\mathfrak{T}^* -\mathfrak{T}) \rangle = \operatorname{div}(( B^\sharp((\operatorname{Tr\,}^\bot(\mathfrak{T}^* -\mathfrak{T}))^\perp) )^\top) \\ && +\sum \big[ B({\cal E}_i, E_b) \langle{\cal E}_i, \operatorname{Tr\,}^\bot(\mathfrak{T}^* -\mathfrak{T})\rangle \langle\tilde H, E_b\rangle - B({\cal E}_i, {\cal E}_j) \<H, {\cal E}_j\rangle 
\langle\operatorname{Tr\,}^\bot(\mathfrak{T}^* -\mathfrak{T}), {\cal E}_i\rangle \\ && - B({\cal E}_i, E_b) \<H, {\cal E}_i\rangle \langle\operatorname{Tr\,}^\bot(\mathfrak{T}^* -\mathfrak{T}) , E_b\rangle - B({\cal E}_i, E_b) \langle\nabla_b((\operatorname{Tr\,}^\bot(\mathfrak{T}^* -\mathfrak{T}))^\perp ), {\cal E}_i\rangle \\ && - B({\cal E}_i, E_b) \langle({\tilde A}_b -{\tilde T^\sharp}_b){\cal E}_i, \operatorname{Tr\,}^\bot(\mathfrak{T}^* -\mathfrak{T})\rangle - B({\cal E}_i, E_b) \langle H, {\cal E}_i\rangle \langle\operatorname{Tr\,}^\bot(\mathfrak{T}^* -\mathfrak{T}), E_b\rangle \big]. \end{eqnarray*} Summing $\partial_t \langle \operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T}), {\tilde H} - H \rangle$ and $\partial_t \langle\operatorname{Tr\,}^\bot(\mathfrak{T}^* -\mathfrak{T}), {\tilde H} - H\rangle$, we obtain \eqref{dtIEaH} and \eqref{dtIeiH}. \end{proof} We have the following results for critical metric connections and $g^\perp$-variations (see Definition \ref{defintionvariationsofg}), which can be considered as a special case of Lemma~\ref{L-dT-3}. \begin{lemma} \label{L-dT-metric} Let $\widetilde{\mD}$ and $\mD$ be both totally umbilical distributions on $(M,g)$. Let $g_t$ be a $g^\perp$-variation of metric $g$ and $\nabla +\mathfrak{T}$ be a metric connection: $\mathfrak{T}^* = -\mathfrak{T}$. 
If $\mathfrak{T}$ is a critical point for \eqref{actiongSmix} with fixed $g$, then, up to divergences of compactly supported vector fields, the following formulas~hold: \begin{subequations} \begin{eqnarray}\label{metriccon1} && \partial_t \langle\operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T}),\ {\tilde H} - H\rangle = \langle B,\ {3}\,H^\flat \odot(\operatorname{Tr\,}^\top \mathfrak{T})^{\perp \flat} +\frac{p-1}{p}\, (\operatorname{div}{\tilde H}) g^\perp \rangle ,\\ \label{metriccon2} && \partial_t \langle\operatorname{Tr\,}^\bot(\mathfrak{T}^* -\mathfrak{T}), {\tilde H}{-}H\rangle =\langle B,\, 3\frac{n{-}1}{n} H^\flat\otimes H^\flat -\frac{1}{2}\langle\phi, {\tilde H}{-}H\rangle +\operatorname{div}((\operatorname{Tr\,}^\perp \mathfrak{T})^\top) g^\perp\rangle , \\ \label{metriccon3} && \partial_t \langle\operatorname{Tr\,}^\top \mathfrak{T}, \operatorname{Tr\,}^\perp \mathfrak{T}^*\rangle = 0 ,\\ \label{metriccon4} && \partial_t \langle\operatorname{Tr\,}^\top \mathfrak{T}^*, \operatorname{Tr\,}^\perp \mathfrak{T}\rangle = \langle B,\ \frac{1}{2}\,\langle\,\phi, \operatorname{Tr\,}^\top \mathfrak{T}\,\rangle\, \rangle , \\ \label{metriccon5} && \partial_t \langle \Theta, A \rangle = \langle B,\ \frac{2}{n}\,H^\flat \odot (\operatorname{Tr\,}^\top \mathfrak{T})^{\perp \flat} \rangle , \\ \label{metriccon6} && \partial_t \langle \Theta, {\tilde A} \rangle = \langle B,\ \frac{1}{p}\, \langle \phi, {\tilde H} \rangle + 2 \operatorname{div} L^\top + 8 \chi + 8 \widetilde{\cal{T}}^\flat \rangle,\\ \label{metriccon7} && \partial_t \langle \Theta, T^\sharp \rangle = \langle B,\, \,\Upsilon_{T,T} \rangle, \\ \label{metriccon8} && \partial_t \langle \Theta, {\tilde T}^\sharp \rangle = \langle B,\ 12\,\widetilde{\cal{T}}^\flat + 2 \chi \rangle ,\\ \label{metriccon9} && \partial_t \langle \mathfrak{T}^*, \mathfrak{T}^\wedge \rangle_{\,|\,V} = \langle B,\, \frac12\,\Upsilon_{T,T} -2 \widetilde{\cal{T}}^\flat -\chi \rangle . 
\end{eqnarray} \end{subequations} \end{lemma} \begin{proof} First we adapt the results of Lemma~\ref{L-dT-3} to the case of $g^\perp$-variation and totally umbilical distributions $\widetilde{\mD}$ and $\mD$. Then we shall use the Euler-Lagrange equations (\ref{ELconnectionNew1}-j), which for a metric connection have the following form: \begin{subequations} \begin{eqnarray} \label{metricconcrit1} && (\mathfrak{T}_V\, U -\mathfrak{T}_U\, V)^\top = 2\,{\tilde T}(U, V), \\ \label{metricconcrit2} && \langle(\mathfrak{T}_U-T^\sharp_U) X,\, Y\rangle = 0, \\ \label{metricconcrit3} && (\operatorname{Tr\,}^\perp \mathfrak{T})^\perp = \frac{n-1}{n} H, \\ \label{metricconcrit4} && (\mathfrak{T}_Y\, X -\mathfrak{T}_X\, Y)^\perp = 2\, T(X, Y), \\ \label{metricconcrit5} && \langle(\mathfrak{T}_X-T^\sharp_X)\,U,\, V\rangle = 0, \\ \label{metricconcrit6} && (\operatorname{Tr\,}^\top \mathfrak{T})^\top = \frac{p-1}{p}\,{\tilde H}, \end{eqnarray} \end{subequations} for all $X,Y\in\widetilde\mD$ and $U,V\in\mD$, and \begin{equation*} (\operatorname{Tr\,}^\perp \mathfrak{T})^\top = -{\tilde H}\quad {\rm for}\quad n>1,\qquad (\operatorname{Tr\,}^\top \mathfrak{T})^\perp = -H\quad {\rm for}\quad p>1. \end{equation*} The last two equations require special assumptions on dimensions of the distributions -- we shall not use them in this proof. For metric connections we also have \[ \Theta = \Theta^\wedge = 2\,(\mathfrak{T} + \mathfrak{T}^\wedge). 
\] For metric connections, $g^\perp$-variations of metric and totally umbilical distributions, using \eqref{metricconcrit6}, we obtain \begin{eqnarray*} && \partial_t \langle\operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T}), {\tilde H} -H\rangle = \sum B({\cal E}_i, {\cal E}_j) \big(\,3\<H, {\cal E}_j\rangle \langle\operatorname{Tr\,}^\top\mathfrak{T}, {\cal E}_i\rangle \\ && +\,\frac{p-1}{p} \delta_{ij} \operatorname{div}{\tilde H}\,\big) +\operatorname{div}\big(\frac{p-1}{p}(\operatorname{Tr\,}_{\mD}B){\tilde H} - 2( B^\sharp (\operatorname{Tr\,}^\top\mathfrak{T})^\perp )^\top\big) . \end{eqnarray*} Writing divergence of compactly supported vector field as $\operatorname{div} Z$, we finally get \begin{equation*} \partial_t \langle\operatorname{Tr\,}^\top(\mathfrak{T}^* -\mathfrak{T}), {\tilde H} -H\rangle = \sum B({\cal E}_i, {\cal E}_j) \big(\,3\,\<H, {\cal E}_j\rangle \langle\operatorname{Tr\,}^\top\mathfrak{T}, {\cal E}_i\rangle +\frac{p-1}{p}\, \delta_{ij} \operatorname{div}{\tilde H} \big) +\operatorname{div} Z . \end{equation*} Without explicitly using the orthonormal frame, we can write the above as \eqref{metriccon1}. For metric connections, $g^\perp$-variations of metric and totally umbilical distributions, using \eqref{metricconcrit3}, we have: \begin{eqnarray*} && \partial_t \langle\operatorname{Tr\,}^\bot(\mathfrak{T}^* -\mathfrak{T}), {\tilde H} -H\rangle =\sum B({\cal E}_i, {\cal E}_j) \big(\,3\,\frac{n-1}{n}\,\<H, {\cal E}_j\rangle \<H, {\cal E}_i\rangle -\frac{1}{2}\,\langle\mathfrak{T}_j {\cal E}_i, {\tilde H} -H\rangle \\ && -\,\frac{1}{2}\,\langle\mathfrak{T}_i {\cal E}_j, {\tilde H} -H\rangle +\delta_{ij} \operatorname{div}((\operatorname{Tr\,}^\bot\mathfrak{T})^\top)\,\big) +\operatorname{div}\big((\operatorname{Tr\,}_{\mD}B) (\operatorname{Tr\,}^\bot\mathfrak{T})^\top -2( B^\sharp (\operatorname{Tr\,}^\bot\mathfrak{T})^\perp )^\top\big) . 
\end{eqnarray*} Writing divergence of compactly supported vector field as $\operatorname{div} Z$, we finally get \begin{eqnarray*} && \partial_t \langle\operatorname{Tr\,}^\bot(\mathfrak{T}^* -\mathfrak{T}), {\tilde H} - H\rangle = \sum B({\cal E}_i, {\cal E}_j) \big(\,3 \frac{n-1}{n}\,\<H, {\cal E}_j\rangle \<H, {\cal E}_i\rangle \\ && -\frac{1}{2}\,\langle\mathfrak{T}_j {\cal E}_i +\mathfrak{T}_i {\cal E}_j, {\tilde H} - H\rangle +\delta_{ij} \operatorname{div}( (\operatorname{Tr\,}^\bot\mathfrak{T})^\top)\big) +\operatorname{div} Z. \end{eqnarray*} Without explicitly using the orthonormal frame, we can write the above as \eqref{metriccon2}. For metric connections, $g^\perp$-variations of metric and totally umbilical distributions: \begin{equation*} \partial_t \langle\operatorname{Tr\,}^\top \mathfrak{T}, \operatorname{Tr\,}^\perp \mathfrak{T}^*\rangle = \frac{1}{2} \sum B({\cal E}_i, {\cal E}_j )\langle\operatorname{Tr\,}^\top\mathfrak{T},\ \mathfrak{T}^*_i {\cal E}_j -\mathfrak{T}^*_j {\cal E}_i\rangle = 0, \end{equation*} as $B({\cal E}_i, {\cal E}_j )$ is symmetric and $\mathfrak{T}^*_i {\cal E}_j -\mathfrak{T}^*_j {\cal E}_i$ is antisymmetric in $i,j$. For metric connections, $g^\perp$-variations of metric and totally umbilical distributions: \begin{equation*} \partial_t \langle\operatorname{Tr\,}^\top \mathfrak{T}^*, \operatorname{Tr\,}^\perp \mathfrak{T}\rangle =\frac{1}{2} \sum B({\cal E}_i, {\cal E}_j) \langle\phi({\cal E}_i, {\cal E}_j ), \operatorname{Tr\,}^\top \mathfrak{T} \rangle . \end{equation*} Without explicitly using the orthonormal frame, we can write the above as \eqref{metriccon4}. 
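The vanishing in the proof of \eqref{metriccon3} is a particular case of the elementary fact that the full contraction of a symmetric array with an antisymmetric one is zero: if $S_{ij} = S_{ji}$ and $A_{ij} = -A_{ji}$, then
\begin{equation*}
\sum\nolimits_{i,j} S_{ij} A_{ij} = \sum\nolimits_{i,j} S_{ji} A_{ij} = \sum\nolimits_{i,j} S_{ij} A_{ji} = -\sum\nolimits_{i,j} S_{ij} A_{ij} ,
\end{equation*}
hence the sum vanishes; here $S_{ij} = B({\cal E}_i, {\cal E}_j)$ and $A_{ij} = \langle\operatorname{Tr\,}^\top\mathfrak{T},\ \mathfrak{T}^*_i {\cal E}_j -\mathfrak{T}^*_j {\cal E}_i\rangle$.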
For metric connections, $g^\perp$-variations of metric and totally umbilical distributions, using \eqref{metricconcrit2}, we have: \begin{eqnarray*} \partial_t \langle\Theta, A \rangle \hspace*{-2.mm}&=&\hspace*{-2.mm} \sum B({\cal E}_i, {\cal E}_j) \big(\,2\,\langle{\cal E}_j, {H}/{n}\rangle \langle\operatorname{Tr\,}^\top\mathfrak{T}, {\cal E}_i\rangle - 4 \langle{\cal E}_j, {H}/{n}\rangle \langle T^\sharp_i E_a, E_a\rangle \,\big)\\ \hspace*{-2.mm}&=&\hspace*{-2.mm} 2 \sum B({\cal E}_i, {\cal E}_j) \langle{\cal E}_j, {H}/{n}\rangle \langle\operatorname{Tr\,}^\top\mathfrak{T}, {\cal E}_i\rangle . \end{eqnarray*} Without explicitly using the orthonormal frame, we can write the above as \eqref{metriccon5}. For metric connections, $g^\perp$-variations of metric and totally umbilical distributions: \begin{eqnarray*} && \partial_t \langle \Theta, {\tilde A} \rangle = -2 \operatorname{div}^\top \langle B, L \rangle + 2 \langle B, \operatorname{div}^\top L \rangle \\ && +\,4 \sum B({\cal E}_i, {\cal E}_j) \big(\, \langle\mathfrak{T}_k {\cal E}_i + \mathfrak{T}_i {\cal E}_k, E_a\rangle \langle{\tilde T}^\sharp_a {\cal E}_j, {\cal E}_k\rangle -2 \langle\mathfrak{T}_j E_a, {\cal E}_i\rangle \langle{{\tilde H}}/{p}, E_a\rangle \big) . \end{eqnarray*} Using the symmetry $B(X,Y) = B(Y,X)$ for $X,Y \in \mathfrak{X}^\bot$, we obtain: \begin{eqnarray*} && \partial_t\langle \Theta, {\tilde A}\rangle = -2 \operatorname{div}^\top \langle B, L \rangle + 2 \langle B, \operatorname{div}^\top L \rangle\\ && + \sum B({\cal E}_i, {\cal E}_j) \big[\,\frac{1}{p}\,\langle\mathfrak{T}_j {\cal E}_i +\mathfrak{T}_i {\cal E}_j, {\tilde H}\rangle + 4 \langle\mathfrak{T}_k {\cal E}_i +\mathfrak{T}_i {\cal E}_k, {\tilde T}({\cal E}_j, {\cal E}_k)\rangle \,\big]. 
\end{eqnarray*} Note that \begin{equation*} \langle\mathfrak{T}_k {\cal E}_i +\mathfrak{T}_i {\cal E}_k, {\tilde T}({\cal E}_j, {\cal E}_k)\rangle = \langle\mathfrak{T}_k E_a, {\cal E}_i\rangle \langle{\tilde T^\sharp}_a {\cal E}_k, {\cal E}_j\rangle -\langle\mathfrak{T}_i E_a, {\tilde T^\sharp}_a {\cal E}_j\rangle . \end{equation*} By the above, we can write $\partial_t \langle \Theta, {\tilde A} \rangle$ as \begin{equation}\label{dttildeAmetricadaptedlemma} \partial_t \langle \Theta, {\tilde A} \rangle = -2 \operatorname{div}^\top \langle B, L \rangle +\langle B,\ \frac{1}{p} \langle \phi, {\tilde H} \rangle + 2 \operatorname{div}^\top L - 4\,\psi + 4\sum\nolimits_{a,j} (\mathfrak{T}_j E_a)^{\perp \flat} \odot ({\tilde T^\sharp}_a {\cal E}_j)^{\perp \flat} \rangle, \end{equation} where \begin{equation*} \psi(X,Y) = \frac{1}{2}\sum\nolimits_{\,a} \big(\langle\mathfrak{T}_{X^\perp}\, E_a,\, {\tilde T^\sharp}_a (Y^\perp)\rangle +\langle\mathfrak{T}_{Y^\perp}\, E_a,\, {\tilde T^\sharp}_a (X^\perp)\rangle \big) . \end{equation*} We claim that $\psi$ can be written in terms of tensor $\chi$ introduced in \eqref{E-chi}. Indeed, for arbitrary symmetric (0,2)-tensor $B : {\cal D} \times {\cal D} \rightarrow \mathbb{R}$ we have \begin{equation*} \langle B, \psi \rangle = \langle B, \ -2 \widetilde{\cal{T}}^\flat - \sum\nolimits_{a,j} (\mathfrak{T}_j E_a)^{\perp \flat} \odot ({\tilde T^\sharp}_a {\cal E}_j)^{\perp \flat} \rangle. \end{equation*} Using \eqref{E-chi}, we obtain \begin{equation}\label{psiandchi} \psi = -2\,\widetilde{\cal{T}}^\flat -\chi. 
\end{equation} Using the following computation: \begin{eqnarray*} && \langle B, \operatorname{div}^\top L \rangle = \langle B, \operatorname{div}^\top L^\top +\operatorname{div}^\top L^\perp \rangle = \langle B, \operatorname{div} L^\top \rangle +\langle B, \langle L^\top, {\tilde H} \rangle \rangle -\langle B, \langle L^\perp, H \rangle \rangle ,\\ && \operatorname{div}^\top \langle B, L \rangle = \operatorname{div} \langle B, L \rangle -\operatorname{div}^\perp \langle B, L \rangle = \operatorname{div} \langle B, L \rangle +\langle B, \langle L^\top, {\tilde H} \rangle \rangle -\operatorname{div}^\perp \langle B, L^\perp \rangle , \end{eqnarray*} we obtain \begin{equation*} -\operatorname{div}^\top \langle B, L \rangle +\langle B, \operatorname{div}^\top L \rangle = -\operatorname{div} \langle B, L^\top \rangle +\langle B, \operatorname{div} L^\top \rangle , \end{equation*} which, together with \eqref{dttildeAmetricadaptedlemma}--\eqref{psiandchi}, up to divergence of a compactly supported vector field, yields~\eqref{metriccon6}. For metric connections, $g^\perp$-variations of metric and totally umbilical distributions we have \begin{equation*} \partial_t \langle \Theta, T^\sharp \rangle = 2 \sum B({\cal E}_i, {\cal E}_j) \langle T (E_a, E_b), {\cal E}_i\rangle \langle\mathfrak{T}_a {\cal E}_j, E_b\rangle . \end{equation*} Using \eqref{metricconcrit4}, we obtain: \begin{equation*} \partial_t \langle \Theta, T^\sharp \rangle = 2 \sum B({\cal E}_i, {\cal E}_j) \langle T (E_a, E_b), {\cal E}_i\rangle \<T(E_a,E_b), {\cal E}_j\rangle . \end{equation*} Without explicitly using the orthonormal frame, we can write the above as \eqref{metriccon7}. 
For metric connections, $g^\perp$-variations of metric and totally umbilical distributions we have \begin{equation*} \partial_t\langle\Theta, {\tilde T}^\sharp\rangle = -\sum B({\cal E}_i, {\cal E}_j)\langle{\tilde T}({\cal E}_i,{\cal E}_k), E_a\rangle \big(\,4\langle\mathfrak{T}_a{\cal E}_j, {\cal E}_k\rangle + 4\langle\mathfrak{T}_j E_a, {\cal E}_k\rangle -2\,\langle\mathfrak{T}_k E_a, {\cal E}_j\rangle \,\big). \end{equation*} Using \eqref{metricconcrit5}, we obtain: \begin{equation*} \partial_t\langle \Theta, {\tilde T}^\sharp\rangle = \sum B({\cal E}_i, {\cal E}_j) \big(\,4\langle({\tilde T^\sharp}_a)^2 {\cal E}_j, {\cal E}_i\rangle - 4 \langle\mathfrak{T}_j E_a, {\tilde T^\sharp}_a {\cal E}_i\rangle -2 \langle{\tilde T^\sharp}_a {\cal E}_j, {\cal E}_i\rangle \langle\mathfrak{T}_j E_a, {\cal E}_k\rangle \,\big). \end{equation*} Next, we have \begin{equation*} \partial_t \langle \Theta, {\tilde T}^\sharp \rangle = \langle B,\ 4 \widetilde{\cal{T}}^\flat - 4 \psi -2\sum\nolimits_{a,j} (\mathfrak{T}_j E_a)^{\perp \flat} \odot ({\tilde T^\sharp}_a {\cal E}_j)^{\perp \flat} \rangle . \end{equation*} For metric connections, $g^\perp$-variations of metric and totally umbilical distributions we have: \[ \partial_t \langle\mathfrak{T}^*, \mathfrak{T}^\wedge\rangle_{\,|\,V} = \sum B({\cal E}_i, {\cal E}_j) \langle\mathfrak{T}_j E_a, \mathfrak{T}_a {\cal E}_i\rangle . \] Using (\ref{metricconcrit2},d,e), we obtain the following: \begin{equation*} \partial_t \langle\mathfrak{T}^*, \mathfrak{T}^\wedge\rangle_{\,|\,V} = \sum B({\cal E}_i, {\cal E}_j)\,\big(\,\langle T(E_a, E_b),{\cal E}_j\rangle \langle T(E_a,E_b), {\cal E}_i\rangle +\langle\mathfrak{T}_j E_a, {\tilde T^\sharp}_a {\cal E}_i\rangle \,\big). \end{equation*} We can write, $\partial_t \langle\mathfrak{T}^*, \mathfrak{T}^\wedge\rangle_{\,|\,V} = \langle B,\, \frac12\,\Upsilon_{T,T} +\psi\rangle$ , which, together with \eqref{psiandchi}, yields \eqref{metriccon9}. 
\end{proof} \begin{lemma} \label{dtQadapted} Let $g_t$ be a $g^\perp$-variation of $g\in{\rm Riem}(M,\widetilde{\mD},{\mD})$, let $\mathfrak{T}$ be the contorsion tensor of a metric connection that is critical for \eqref{actiongSmix} with fixed $g$, and let $\widetilde{\mD}$ and $\mD$ be totally umbilical distributions. Then, up to divergences of compactly supported vector fields, for $Q$ given by \eqref{E-defQ} we have \begin{eqnarray*} && -\partial_t Q = \big\langle \langle \phi,\, \frac{p+2}{2\,p}\,{\tilde H} -\frac{1}{2} H +\frac{1}{2} \operatorname{Tr\,}^\top \mathfrak{T} \rangle -2 \operatorname{div} \phi^\top + 7 \chi +\frac{3n+2}{n}\,H^\flat \odot (\operatorname{Tr\,}^\top \mathfrak{T})^{\perp \flat} \\ &&\quad -\operatorname{div}( (\operatorname{Tr\,}^\perp \mathfrak{T} )^\top)\,g^\perp +\frac{p-1}{p} (\operatorname{div} {\tilde H}) g^\perp - 3 \frac{n-1}{n} H^\flat \otimes H^\flat + 2 \widetilde{\cal{T}}^\flat +\frac{3}{2}\,\Upsilon_{T,T},\ B\big\rangle . \end{eqnarray*} \end{lemma} \begin{proof} Recall that \begin{eqnarray*} \nonumber && L(X,Y) = \frac{1}{4} (\Theta^*_{X^\perp} Y^\perp +\Theta^{\wedge*}_{X^\perp} Y^\perp + \Theta^*_{Y^\perp} X^\perp +\Theta^{\wedge*}_{Y^\perp} X^\perp) , \end{eqnarray*} and let $L^\perp(X,Y) =(L(X,Y) )^\perp$ and $L^\top(X,Y) =(L(X,Y) )^\top$ for $X,Y \in \mathfrak{X}_M$. We have $L = L^\top + L^\perp$. 
Note that $\<L^\perp(X,Y), Z\rangle = \<L^\perp(X^\perp,Y^\perp), Z^\perp\rangle$ and for metric connections \begin{eqnarray*} && \langle\mathfrak{T}^{\wedge*}_X Y, Z\rangle = \langle\mathfrak{T}^\wedge_X Z, Y\rangle = \langle\mathfrak{T}_Z X, Y\rangle = -\langle\mathfrak{T}_Z Y, X\rangle \\ && = -\langle\mathfrak{T}^\wedge_Y Z, X\rangle = -\langle\mathfrak{T}^{\wedge*}_Y X, Z\rangle = -\langle\mathfrak{T}^{\wedge*\wedge}_X\, Y,Z\rangle, \end{eqnarray*} for all $X,Y,Z \in \mathfrak{X}_M$, so \begin{equation*} 4 \<L(X,Y), Z\rangle = \langle Z, \Theta^*_{X} Y +\Theta^{\wedge*}_{X} Y +\Theta^*_{Y} X +\Theta^{\wedge*}_{Y} X\rangle = -4\<Z, \mathfrak{T}_X Y +\mathfrak{T}^\wedge_X Y\rangle. \end{equation*} Hence, $L^\perp = -(\mathfrak{T} +\mathfrak{T}^\wedge)^\perp$ and $L^\top = -(\mathfrak{T} +\mathfrak{T}^\wedge)^\top$ and for metric connections we obtain $L = - \phi$, see \eqref{E-chi}, which together with Lemma~\ref{L-dT-metric} yields the claim. \end{proof} \begin{lemma} \label{lemmasemisymmetric} Let $\bar\nabla$ be a semi-symmetric connection on $(M,g,\mD)$. a)~Then \eqref{E-defQ} reduces to \begin{equation} \label{QforUconnection} Q = (n-p) \langle U , H - {\tilde H} \rangle + np \langle U , U \rangle - n \langle U^\perp , U^\perp \rangle - p \langle U^\top , U^\top \rangle. \end{equation} b)~For any $g^\pitchfork$-variation of metric $g$ and $Q$ given by \eqref{QforUconnection} we have \begin{eqnarray}\label{dtQgforUconnection} \nonumber \partial_t Q(g_t) |_{\,t=0} \hspace*{-2.mm}&=&\hspace*{-2.mm} \langle\, B ,\ -(n-p) {\tilde \delta}_{U^\perp} - (n-p) \langle {\tilde \alpha} - {\tilde \theta} , U^\perp \rangle +2(p-n) \langle {\theta}, U^\top \rangle \nonumber \\ &-& \!\frac12\,(p-n)( \operatorname{div} U^\top ) g^\perp +n(p-1) U^{\perp \flat} \otimes U^{\perp \flat} +2p(n-1) U^{\top \flat} \odot U^{\perp \flat}\, \rangle . 
\end{eqnarray} \end{lemma} \begin{proof} a) From \eqref{Uconnection} we obtain \begin{equation}\label{UconnectiontrtopI} \operatorname{Tr\,}^\top \mathfrak{T} = \sum\nolimits_a \<U, E_a\rangle E_a - \sum\nolimits_a \<E_a , E_a\rangle U = U^\top - n\,U. \end{equation} Similarly, $\operatorname{Tr\,}^\perp \mathfrak{T} = U^\perp - p\,U$. We also have \begin{equation}\label{UconnectionImixed} \mathfrak{T}_a {\cal E}_i = \<U, {\cal E}_i\rangle E_a ,\quad \mathfrak{T}_i E_a = \<U , E_a\rangle {\cal E}_i, \end{equation} so we obtain $\langle \mathfrak{T} , \mathfrak{T}^\wedge \rangle_{| V} =0$. Next, we have \begin{eqnarray*} \langle \operatorname{Tr\,}^\top \mathfrak{T} - \operatorname{Tr\,}^\perp \mathfrak{T} , H - {\tilde H} \rangle \hspace*{-2.mm}&=&\hspace*{-2.mm} (p-n-1) \langle U^\perp , H \rangle + (n-p-1) \langle U^\top , {\tilde H} \rangle. \end{eqnarray*} We have $(\mathfrak{T} + \mathfrak{T}^\wedge)_i E_a = \<U, E_a\rangle{\cal E}_i + \<U, {\cal E}_i\rangle E_a$. Also \begin{eqnarray*} \langle \operatorname{Tr\,}^\top \mathfrak{T} , \operatorname{Tr\,}^\perp \mathfrak{T} \rangle \hspace*{-2.mm}&=&\hspace*{-2.mm} np \langle U , U \rangle - n \langle U^\perp , U^\perp \rangle - p \langle U^\top , U^\top \rangle. \end{eqnarray*} Thus, $\langle \mathfrak{T} + \mathfrak{T}^\wedge, {\tilde A} - {\tilde T^\sharp} + A - T^\sharp \rangle =\langle H + {\tilde H} ,\ U \rangle$. \ b)~By \cite[Lemma~3]{rz-2}, we have: \begin{eqnarray*} \<U^\perp,\ \partial_t U^\perp\rangle \hspace*{-2.mm}&=&\hspace*{-2.mm} \<U^\perp,\ - (B^\sharp (U^\perp) )^\top\rangle =0,\\ \langle U^\top,\ \partial_t U^\top\rangle \hspace*{-2.mm}&=&\hspace*{-2.mm} \<U^\top,\,B^\sharp (U^\perp)\rangle = \langle B,\, U^{\top \flat} \odot U^{\perp \flat}\, \rangle . 
\end{eqnarray*} Similarly, by \cite[Eq.~(20) and Eq.~(21)]{rz-2}, we have \begin{eqnarray*} \langle\partial_t {\tilde H},\, U \rangle \hspace*{-2.mm}&=&\hspace*{-2.mm} \operatorname{div} ( (\operatorname{Tr\,}_{\cal D} B ) U^\top ) + \langle\,B,\, 2 \langle {\theta}, U^\top \rangle - \frac{1}{2}\,(\operatorname{div} U^\top ) g^\perp \rangle,\\ \langle\partial_t H , U \rangle \hspace*{-2.mm}&=&\hspace*{-2.mm} \operatorname{div}((B^\sharp(U^\perp))^\top) + \<B, U^\perp\odot({\tilde H} -H) - U^\top\odot H - {\tilde\delta}_{U^\perp} - \langle{\tilde\alpha} - {\tilde\theta}, U^\perp\rangle\,\rangle. \end{eqnarray*} Omitting divergences of compactly supported vector fields and using $B|_{\widetilde{\cal D} \times \widetilde{\cal D}} =0$, we obtain \begin{eqnarray*} \partial_t Q (g_t) |_{\,t=0} \hspace*{-2.mm}&=&\hspace*{-2.mm} (n-p) B(U, H-{\tilde H}) + (n-p)\<U , \partial_t H\rangle - (n-p)\langle\partial_t {\tilde H} , U\rangle + np\, B(U,U)\\ &-& n B(U^\perp , U^\perp) - 2n \langle\partial_t U^\perp , U\rangle - p B(U^\top , U^\top) - 2p \langle\partial_t U^\top , U^\top\rangle, \end{eqnarray*} that reduces to \eqref{dtQgforUconnection}. \end{proof} \baselineskip=12.7pt \end{document}
\begin{document} \title{EM algorithm and variants: an informal tutorial} \section{Introduction} The expectation-maximization (EM) algorithm introduced by Dempster et al.~\cite{Dempster-77} in 1977 is a very general method to solve maximum likelihood estimation problems. In this informal report, we review the theory behind~EM as well as a number of EM~variants, suggesting that beyond the current state of the art lies an even wider territory still to be discovered. \section{EM background} \label{sec:em} Let~$Y$ be a random variable with probability density function~(pdf) $p(y|\theta)$, where~$\theta$ is an unknown parameter vector. Given an outcome~$y$ of~$Y$, we aim at maximizing the likelihood function ${\cal L}(\theta) \equiv p(y|\theta)$ wrt~$\theta$ over a given search space $\Theta$. This is the very principle of maximum likelihood (ML) estimation. Unfortunately, except in not very exciting situations such as, e.g., estimating the mean and variance of a Gaussian population, an ML~estimation problem generally has no closed-form solution. Numerical routines are then needed to approximate it. \subsection{EM as a likelihood maximizer} The EM~algorithm is a class of optimizers specifically tailored to ML problems, which makes it both general and not so general. Perhaps the most salient feature of EM is that it works iteratively by maximizing successive local approximations of the likelihood function. Therefore, each iteration consists of two steps: one that performs the approximation (the E-step) and one that maximizes it (the M-step). But, let's make it clear, not just any two-step iterative scheme is an EM~algorithm. For instance, Newton and quasi-Newton methods~\cite{Press-92} work in a similar iterative fashion but do not have much to do with~EM. What essentially defines an EM~algorithm is the philosophy underlying the local approximation scheme -- which, in particular, doesn't rely on differential calculus. 
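To give the flavor of such a two-step iteration before formalizing it, here is a minimal Python sketch of EM on a classic textbook example (the genetic-linkage data discussed in Dempster et al.'s paper; the example is not developed in this report, and the variable names are ours): $197$ counts $(125,18,20,34)$ with cell probabilities $(1/2+\theta/4,\,(1-\theta)/4,\,(1-\theta)/4,\,\theta/4)$, where the latent variable splits the first cell in two.

```python
# EM on the classic genetic-linkage data: observed counts (125, 18, 20, 34)
# with cell probabilities (1/2 + t/4, (1-t)/4, (1-t)/4, t/4).  The latent
# variable splits the first cell into parts with probabilities 1/2 and t/4.
y1, y2, y3, y4 = 125.0, 18.0, 20.0, 34.0

def em_step(t):
    # E-step: expected latent subcount of the first cell attributed to t/4
    x12 = y1 * (t / 4.0) / (0.5 + t / 4.0)
    # M-step: closed-form maximizer of the expected complete-data log-likelihood
    return (x12 + y4) / (x12 + y2 + y3 + y4)

t = 0.5  # initial guess
for _ in range(50):
    t = em_step(t)
print(t)  # converges to ~0.6268, the ML estimate
```

Each pass increases the observed-data likelihood, and since the M-step is closed-form here, the whole scheme fits in a few lines.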
The key idea underlying~EM is to introduce a latent variable~$Z$ whose pdf depends on~$\theta$ with the property that maximizing $p(z|\theta)$ is easy or, say, easier than maximizing $p(y|\theta)$. Loosely speaking, we somehow enhance the incomplete data by guessing some useful additional information. Technically, $Z$~can be any variable such that~$\theta\to Z\to Y$ is a Markov chain\footnote{In many presentations of~EM, $Z$~is an aggregate variable~$(X,Y)$, where~$X$ is some ``missing'' data, which corresponds to the special case where the transition~$Z\to Y$ is deterministic. We believe this restriction, although important in practice, is not useful to the global understanding of~EM. By the way, further generalizations will be considered later in this report.}, i.e. we assume that~$p(y|z,\theta)$ is independent from~$\theta$, yielding a Chapman-Kolmogorov equation: \begin{equation} \label{eq:complete_data_space} p(z,y|\theta)=p(z|\theta)p(y|z) \end{equation} Reasons for that definition will arise soon. Conceptually, $Z$~is a complete-data space in the sense that, if it were fully observed, then estimating~$\theta$ would be an easy game. We will emphasize that the convergence speed of~EM is highly dependent upon the complete-data specification, which is largely arbitrary even though some estimation problems may have seemingly ``natural'' hidden variables. But, for the time being, we assume that the complete-data specification step has been accomplished. \subsection{EM as a consequence of Jensen's inequality} \label{sec:jensen} Quite surprisingly, the original EM~formulation stems from a very simple variational argument. 
Under almost no assumption regarding the complete variable~$Z$, except that its pdf doesn't vanish, we can bound the variation of the log-likelihood function $L(\theta)\equiv\log p(y|\theta)$ as follows: \begin{eqnarray} L (\theta) - L (\theta') & = & \log \frac{p(y|\theta)}{p(y|\theta')} \nonumber \\ & = & \log \int \frac{p(z,y|\theta)}{p(y|\theta')} \, dz \nonumber \\ & = & \log \int \frac{p(z,y|\theta)}{p(z,y|\theta')} \, p(z|y,\theta')\, dz \nonumber\\ & = & \log \int \frac{p(z|\theta)}{p(z|\theta')} \, p(z|y,\theta')\, dz \label{eq:markov}\\ & \geq & \underbrace{\int \log \frac{p(z|\theta)}{p(z|\theta')}\,p(z|y,\theta')\,dz}_{ {\rm Call\ this\ }Q(\theta,\theta')} \label{eq:jensen} \end{eqnarray} Step~(\ref{eq:markov}) results from the fact that~$p(y|z,\theta)$ is independent from~$\theta$ owing to~(\ref{eq:complete_data_space}). Step~(\ref{eq:jensen}) follows from Jensen's inequality (see~\cite{Cover-91} and appendix~\ref{app:jensen}) along with the well-known concavity property of the logarithm function. Therefore, $Q(\theta,\theta')$ is an auxiliary function for the log-likelihood, in the sense that: {\em (i)}~the likelihood variation from $\theta'$ to $\theta$ is always at least $Q(\theta,\theta')$, and {\em (ii)}~we have $Q(\theta',\theta')=0$. Hence, starting from an initial guess $\theta'$, we are guaranteed to increase the likelihood value if we can find a~$\theta$ such that $Q(\theta,\theta')>0$. Iterating such a process defines an EM algorithm. There is no general convergence theorem for~EM, but thanks to the above-mentioned monotonicity property, convergence results may be proved under mild regularity conditions. Typically, convergence towards a non-global likelihood maximizer, or a saddle point, is a worst-case scenario. Still, the only trick behind EM is to exploit the concavity of the logarithm function! \subsection{EM as expectation-maximization} \label{sec:classical} Let's introduce some notation. 
Expanding the logarithm in the right-hand side of~(\ref{eq:jensen}), we may interpret our auxiliary function as a difference: $Q(\theta,\theta') = Q(\theta|\theta') - Q(\theta'|\theta')$, with: \begin{equation} \label{eq:auxiliary} Q(\theta|\theta') \equiv \int \log p(z|\theta)\,p(z|y,\theta')\,dz \end{equation} Clearly, for a fixed~$\theta'$, maximizing $Q(\theta,\theta')$ wrt $\theta$ is equivalent to maximizing $Q(\theta|\theta')$. If we consider the residual function: $R(\theta|\theta')\equiv L(\theta)-Q(\theta|\theta')$, the incomplete-data log-likelihood may be written as: $$ L(\theta) = Q(\theta|\theta') + R(\theta|\theta') $$ The EM~algorithm's basic principle is to replace the maximization of~$L(\theta)$ with that of~$Q(\theta|\theta')$, hopefully easier to deal with. We can ignore $R(\theta|\theta')$ because inequality~(\ref{eq:jensen}) implies that $R(\theta|\theta') \geq R(\theta'|\theta')$. In other words, EM~works because the auxiliary function $Q(\theta|\theta')$ always deteriorates as a likelihood approximation when~$\theta$ departs from~$\theta'$. In an ideal world, the approximation error would be constant; then, maximizing~$Q$ would not only increase, but truly maximize the likelihood. Unfortunately, this won't be the case in general. Therefore, unless we decide to give up on maximizing the likelihood, we have to iterate -- which gives rise to quite a popular statistical learning algorithm. Given a current parameter estimate $\theta_n$: \begin{itemize} \item E-step: form the auxiliary function $Q(\theta|\theta_n)$ as defined in~(\ref{eq:auxiliary}), which involves computing the posterior distribution of the unobserved variable, $p(z|y,\theta_n)$. The ``E'' in E-step stands for ``expectation'' for reasons that will arise in section~\ref{sec:conditional_expectation}. 
\item M-step: update the parameter estimate by maximizing the auxiliary function: $$ \theta_{n+1} = \arg\max_\theta Q(\theta|\theta_n) $$ An obvious but important generalization of the M-step is to replace the maximization with a mere increase of $Q(\theta|\theta_n)$. Since, anyway, the likelihood won't be maximized in one iteration, increasing the auxiliary function is enough to ensure that the likelihood will increase in turn, thus preserving the monotonicity property of~EM. This defines generalized EM (GEM) algorithms. More on this later. \end{itemize} \subsection{Some probabilistic interpretations} \label{sec:conditional_expectation} For those familiar with probability theory, $Q(\theta|\theta')$ as defined in~(\ref{eq:auxiliary}) is nothing but the conditional expectation of the complete-data log-likelihood in terms of the observed variable, taken at~$Y=y$, and assuming the true parameter value is~$\theta'$: \begin{equation} \label{eq:cecl} Q(\theta|\theta') \equiv {\rm E} \big[\log p(Z|\theta)|y,\theta'\big] \end{equation} This remark explains the ``E'' in E-step, but also yields some probabilistic insight on the auxiliary function. For all~$\theta$, $Q(\theta|\theta')$ is an estimate of the complete-data log-likelihood that is built upon the knowledge of the incomplete data and under the working assumption that the true parameter value is known. In some way, it is not far from being the ``best'' estimate that we can possibly make without knowing~$Z$, because conditional expectation is, by definition, the estimator that minimizes the conditional mean squared error\footnote{For all $\theta$, we have: $\displaystyle Q(\theta|\theta') = \arg\min_{\mu} \ \int \big[ \log p(z|\theta) - \mu \big]^2 \, p(z|y,\theta')\,dz$.}. Having said that, we might still be a bit suspicious. 
While we can grant that $Q(\theta|\theta')$ is a reasonable estimate of the complete-data log-likelihood, recall that our initial problem is to maximize the {\em incomplete-data} (log) likelihood. How good a fit is $Q(\theta|\theta')$ for $L(\theta)$? To answer that, let's see a bit more how the residual~$R(\theta|\theta')$ may be interpreted. We have: \begin{eqnarray} R(\theta|\theta') & = & \log p(y|\theta) - \int \log p(z|\theta)\,p(z|y,\theta')\,dz \nonumber\\ & = & \int \log \frac{p(y|\theta)}{p(z|\theta)}\,p(z|y,\theta')\,dz \nonumber\\ & = & \int \log \frac{p(y|z,\theta)}{p(z|y,\theta)}\,p(z|y,\theta')\,dz, \end{eqnarray} where the last step relies on Bayes' law. Now, $p(y|z,\theta)=p(y|z)$ is independent from~$\theta$ by the Markov property~(\ref{eq:complete_data_space}). Therefore, using the simplified notations $q_{\theta}(z)\equiv p(z|y,\theta)$ and $q_{\theta'}(z) \equiv p(z|y,\theta')$, we get: \begin{equation} \label{eq:kullback} R(\theta|\theta') - R(\theta'|\theta') = \underbrace{\int \log \frac{q_{\theta'}(z)}{q_{\theta}(z)} \, q_{\theta'}(z)\,dz}_{{\rm Call\ this\ } D(q_{\theta'}\|q_{\theta})} \end{equation} In the language of information theory, this quantity $D(q_{\theta'}\|q_{\theta})$ is known as the Kullback-Leibler distance, a general tool to assess the deviation between two pdfs~\cite{Cover-91}. Although it is not, strictly speaking, a genuine mathematical distance, it is always nonnegative and vanishes iff the pdfs are equal, which, again and not surprisingly, comes as a direct consequence of Jensen's inequality. What does that mean in our case? We noticed earlier that the likelihood approximation~$Q(\theta|\theta')$ cannot get any better as~$\theta$ deviates from~$\theta'$. We now realize from equation~(\ref{eq:kullback}) that this property reflects an implicit strategy of ignoring the variations of~$p(z|y,\theta)$ wrt~$\theta$. Hence, a perfect approximation would be one for which~$p(z|y,\theta)$ is independent from~$\theta$. 
In other words, we would like $\theta\to Y\to Z$ to define a Markov chain... But, look, we already assumed that $\theta\to Z\to Y$ is a Markov chain. Does the Markov property still hold when permuting the roles of~$Y$ and~$Z$? From the fundamental data processing inequality~\cite{Cover-91}, the answer is no in general. Details are unnecessary here. Just remember that the validity domain of~$Q(\theta|\theta')$ as a local likelihood approximation is controlled by the amount of information that both~$y$ and~$\theta$ contain about the complete data. We are now going to study this aspect more carefully. \subsection{EM as a fixed point algorithm and local convergence} \label{sec:fixpoint} Quite clearly, EM is a fixed point algorithm: $$ \theta_{n+1} = \Phi(\theta_n) \qquad {\rm with}\quad \Phi(\theta') = \arg\max_{\theta\in\Theta} Q(\theta,\theta') $$ Assume the sequence $\theta_n$ converges towards some value $\hat{\theta}$ -- hopefully, the maximum likelihood estimate but possibly some other local maximum or saddle point. Under the assumption that $\Phi$ is continuous, $\hat{\theta}$ must be a fixed point for $\Phi$, i.e. $\hat{\theta}=\Phi(\hat{\theta})$. Furthermore, we can approximate the sequence's asymptotic behavior using a first-order Taylor expansion of~$\Phi$ around $\hat{\theta}$, which leads to: $$ \theta_{n+1} \approx S \hat{\theta} + (I-S) \theta_n \qquad {\rm with}\quad S = I - \frac{\partial \Phi}{\partial \theta}\big|_{\hat{\theta}} $$ This expression shows that the rate of convergence is controlled by~$S$, a square matrix that is constant across iterations. Hence, $S$~is called the speed matrix, and its spectral radius\footnote{Let $(\lambda_1,\lambda_2,\ldots,\lambda_m)$ be the complex eigenvalues of~$S$. The spectral radius is $\rho(S)=\max_i |\lambda_i|$.} defines the global speed. Unless the global speed is one, the local convergence of~EM is only linear. 
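This linear behavior is easy to observe numerically. The Python sketch below (a toy setup of our own, not taken from this report) runs EM for the mixing weight of a two-component Gaussian mixture with known component densities, and measures the ratio of successive errors, which settles down to a constant -- the scalar analogue of the spectral radius of $I-S$:

```python
import math
import random

random.seed(0)
# Toy data: mixture 0.4*N(2,1) + 0.6*N(0,1); only the weight w is unknown.
data = [random.gauss(2.0, 1.0) if random.random() < 0.4 else random.gauss(0.0, 1.0)
        for _ in range(400)]

def phi(x, m):
    # N(m, 1) density
    return math.exp(-0.5 * (x - m) ** 2) / math.sqrt(2 * math.pi)

def em_step(w):
    # E-step: posterior responsibility of component 1 for each point;
    # M-step: the new weight is the average responsibility.
    r = [w * phi(x, 2.0) / (w * phi(x, 2.0) + (1 - w) * phi(x, 0.0)) for x in data]
    return sum(r) / len(r)

# Locate the fixed point by iterating long enough...
w_star = 0.9
for _ in range(200):
    w_star = em_step(w_star)

# ...then measure error ratios along a fresh EM sequence
w, errs = 0.9, []
for _ in range(10):
    w = em_step(w)
    errs.append(abs(w - w_star))
ratios = [errs[i + 1] / errs[i] for i in range(len(errs) - 1)]
# ratios[i] is roughly constant in (0,1): linear (geometric) convergence
```

The nearly constant ratio is the empirical convergence rate; with better-separated components (less missing information) it moves closer to zero.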
We may relate~$S$ to the likelihood function by exploiting the fact that, under sufficient smoothness assumptions, the maximization of~$Q$ is characterized by: $$ \frac{\partial Q}{\partial \theta^t} (\theta_{n+1},\theta_n) =0 $$ From the implicit function theorem, we get the gradient of~$\Phi$: $$ \frac{\partial \Phi}{\partial \theta} = - \Big(\frac{\partial^2 Q}{\partial \theta \partial \theta^t}\Big)^{-1} \frac{\partial^2 Q}{\partial \theta' \partial \theta^t} \quad \Rightarrow \quad S = \Big(\frac{\partial^2 Q}{\partial \theta \partial \theta^t}\Big)^{-1} \Big[ \frac{\partial^2 Q}{\partial \theta \partial \theta^t} + \frac{\partial^2 Q}{\partial \theta' \partial \theta^t} \Big] $$ where, after some manipulations: \begin{eqnarray*} \frac{\partial^2 Q} {\partial \theta \partial \theta^t}\big|_{(\hat{\theta},\hat{\theta})} & = & \int p(z|y,\hat{\theta})\, \underbrace{ \frac{\partial^2\log p(z|\theta)} {\partial \theta \partial \theta^t}\big|_{\hat{\theta}} }_{ {\rm Call\ this\ } - J_z(\hat{\theta}) }\, dz \\[1em] \frac{\partial^2 Q} {\partial \theta' \partial \theta^t}\big|_{(\hat{\theta},\hat{\theta})} & = & \underbrace{ \frac{\partial^2\log p(y|\theta)} {\partial \theta \partial \theta^t}\big|_{\hat{\theta}} }_{ {\rm Call\ this\ } - J_y(\hat{\theta}) } \ - \ \frac{\partial^2 Q} {\partial \theta \partial \theta^t}\big|_{(\hat{\theta},\hat{\theta})} \end{eqnarray*} The two quantities $J_y(\hat{\theta})$ and $J_z(\hat{\theta})$ turn out to be respectively the observed-data information matrix and the complete-data information matrix. 
The speed matrix is thus given by: \begin{equation} \label{eq:speed_matrix} S = {\cal J}_z(\hat{\theta})^{-1} J_y(\hat{\theta}) \qquad {\rm with}\quad {\cal J}_z(\hat{\theta}) \equiv {\rm E}\big[J_z(\hat{\theta})|y,\hat{\theta}\big] \end{equation} We easily check that: ${\cal J}_z(\hat{\theta})= J_y(\hat{\theta}) + {\cal F}_{z|y}(\hat{\theta})$, where ${\cal F}_{z|y}(\hat{\theta})$ is the Fisher information matrix corresponding to the posterior pdf~$p(z|y, \hat{\theta})$, which is always symmetric and positive semidefinite. Therefore, we have the alternative expression: $$ S = \big[ J_y(\hat{\theta}) + {\cal F}_{z|y}(\hat{\theta}) \big]^{-1} J_y(\hat{\theta}) $$ For fast convergence, we want~$S$ close to identity, so we better have the posterior Fisher matrix as ``small'' as possible. To interpret this result, let's imagine that~$Z$ is drawn from $p(z|y,\hat{\theta})$, which is not exactly true since $\hat{\theta}$ may be at least slightly different from the actual parameter value. The Fisher information matrix represents the average information that the complete data contains about $\hat{\theta}$ conditional to the observed data. In this context, ${\cal F}_{z|y}(\hat{\theta})$ is a measure of missing information, and $I-S$ represents the fraction of missing information. The conclusion is that the rate of convergence of EM is governed by the fraction of missing information. \subsection{EM as a proximal point algorithm} \label{sec:proximal} Chr\'etien \& Hero \cite{Chretien-00} note that EM~may also be interpreted as a proximal point algorithm, i.e. an iterative scheme of the form: \begin{equation} \label{eq:proximal} \theta_{n+1} = \arg\max_{\theta} \big[ L(\theta) - \lambda_n \Psi( \theta, \theta_n ) \big], \end{equation} where $\Psi$ is some positive regularization function and $\lambda_n$ is a sequence of positive numbers. Let us see where this result comes from. 
In section~\ref{sec:conditional_expectation}, we have established the fundamental log-likelihood decomposition underlying~EM, $L(\theta) = Q(\theta|\theta') + R(\theta|\theta')$, and related the variation of $R(\theta|\theta')$ to a Kullback distance~(\ref{eq:kullback}). Thus, for some current estimate $\theta_n$, we can write: $$ Q(\theta|\theta_n) = L(\theta) - D(q_{\theta_n}\|q_{\theta}) - R(\theta_n|\theta_n), $$ where $q_{\theta}(z)\equiv p(z|y,\theta)$ and $q_{\theta_n}(z) \equiv p(z|y,\theta_n)$ are the posterior pdfs of the complete data, under $\theta$ and $\theta_n$, respectively. From this equation, it becomes clear that maximizing $Q(\theta|\theta_n)$ is equivalent to an update rule of the form~(\ref{eq:proximal}) with: $$ \Psi( \theta, \theta_n ) = D(q_{\theta_n}\|q_{\theta}), \qquad \lambda_n \equiv 1 $$ The proximal interpretation of~EM is very useful to derive general convergence results \cite{Chretien-00}. In particular, the convergence rate may be superlinear if the sequence $\lambda_n$ is chosen so as to converge towards zero. Unfortunately, such generalizations are usually intractable because the objective function may no longer simplify as soon as $\lambda_n\not= 1$. \subsection{EM as maximization-maximization} \label{sec:global} Another powerful way of conceptualizing~EM is to reinterpret the E-step as another maximization. This idea, which was formalized only recently by Neal \& Hinton \cite{Neal-98}, appears as a breakthrough in the general understanding of EM-type procedures. Let us consider the following function: \begin{equation} \label{eq:maxmax} L(\theta,q) \equiv {\rm E}_{q} \big[\log p(Z,y|\theta)\big] + H(q) \end{equation} where $q(z)$ is some pdf (yes, any pdf), and $H(q)$ is its entropy \cite{Cover-91}, i.e. $H(q)\equiv - \int \log q(z)\,q(z)\,dz$. 
We easily obtain an equivalent expression that involves a Kullback-Leibler distance: $$ L(\theta,q) = L(\theta) - D(q\|q_{\theta}), $$ where we still define $q_{\theta}(z)\equiv p(z|y,\theta)$ for notational convenience. The last equation reminds us immediately of the proximal interpretation of~EM which was briefly discussed in section~\ref{sec:proximal}. The main difference here is that we don't impose~$q(z)=p(z|y,\theta)$ for some~$\theta$. This equality holds for {\em any} distribution~$q$! Assume we have an initial guess $\theta_n$ and try to find~$q$ that maximizes $L(\theta_n,q)$. From the above-discussed properties of the Kullback-Leibler distance, the answer is~$q(z)=q_{\theta_n}(z)$. Now, substitute $q_{\theta_n}$ in~(\ref{eq:maxmax}), and maximize over~$\theta$: this is the same as performing a standard M-step\footnote{To see that, just remember that $p(z,y|\theta)=p(z|\theta)p(y|z)$ where $p(y|z)$ is independent from $\theta$ due to the Markov property~(\ref{eq:complete_data_space}).}! Hence, the conventional~EM algorithm boils down to an alternate maximization of $L(\theta,q)$ over a search space $\Theta \times {\cal Q}$, where ${\cal Q}$ is a suitable set of pdfs, i.e. ${\cal Q}$ must include all pdfs from the set $\{q(z)=p(z|y,\theta),\,\theta\in\Theta\}$. It is easy to check that any global maximizer~$(\hat{\theta},\hat{q})$ of~$L(\theta,q)$ is such that $\hat{\theta}$ is also a global maximizer of~$L(\theta)$. By the way, this is also true for local maximizers under weak assumptions~\cite{Neal-98}. The key observation of Neal \& Hinton is that the alternate scheme underlying~EM may be replaced with other maximization strategies without hampering the simplicity of~EM. In the conventional~EM setting, the auxiliary pdf~$q_n(z)$ is always constrained to a specific form. This is to say that~EM authorizes only specific pathways in the expanded search space $\Theta \times {\cal Q}$, yielding some kind of ``labyrinth'' motion. 
Easier techniques for finding one's way in a labyrinth include breaking the walls or escaping through the roof. Similarly, one may consider relaxing the maximization constraint in the E-step. This leads for instance to incremental and sparse EM~variants (see section~\ref{sec:deterministic_variants}). \section{Deterministic EM variants} \label{sec:deterministic_variants} We first present deterministic EM~variants as opposed to stochastic variants. Most of these deterministic variants attempt to speed up the algorithm, either by simplifying computations, or by increasing the rate of convergence (see section~\ref{sec:fixpoint}). \subsection{CEM} \label{sec:cem} Classification~EM \cite{Celeux-92}. The whole~EM story is about introducing a latent variable~$Z$ and performing some inference about its posterior pdf. We might wonder: why not simply estimate~$Z$? This is actually the idea underlying the CEM~algorithm, which is a simple alternate maximization of the functional $p(z,y|\theta)$ wrt both~$\theta$ and~$z$. Given a current parameter estimate $\theta_n$, this leads to: \begin{itemize} \item Classification step: find $\displaystyle z_n = \arg\max_z p(z|y,\theta_n)$. \item Maximization step: find $\displaystyle \theta_{n+1} = \arg\max_\theta p(z_n|\theta)$. \end{itemize} Notice that a special instantiation of~CEM is the well-known $k$-means algorithm. In practice, CEM~has several advantages over~EM, like being easier to implement and typically faster to converge. However, CEM~doesn't maximize the incomplete-data likelihood and, therefore, the monotonicity property of~EM is lost. While CEM~estimates the complete data explicitly, EM~estimates only sufficient statistics for the complete data. In this regard, EM~may be understood as a fuzzy classifier that avoids the statistical efficiency problems inherent to the~CEM approach. Yet, CEM~is often useful in practice. 
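As an illustration of the classification/maximization alternation, here is the $k$-means special case in a few lines of Python (toy 1-D data of our own making):

```python
# k-means as Classification EM: hard-assign each point to the nearest
# center (classification step), then recompute each center as the mean
# of its class (maximization step).  Toy 1-D data, two separated groups.
data = [0.9, 1.1, 1.0, 0.8, 5.0, 5.2, 4.9, 5.1]
centers = [0.0, 10.0]  # deliberately poor initial guess

for _ in range(10):
    # Classification step: z_n = argmin_k |x - c_k|
    labels = [min(range(2), key=lambda k: abs(x - centers[k])) for x in data]
    # Maximization step: update each center to the mean of its class
    for k in range(2):
        pts = [x for x, l in zip(data, labels) if l == k]
        if pts:
            centers[k] = sum(pts) / len(pts)
print(sorted(centers))  # converges to approximately [0.95, 5.05]
```

Despite the poor initialization, the alternation locks onto the two groups within a couple of iterations; as noted above, though, nothing guarantees it increases the incomplete-data likelihood.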
\subsection{Aitken's acceleration} \label{sec:aitken} An early EM~extension \cite{Dempster-77,Louis-82,Meilijson-89}. Aitken's acceleration is a general purpose technique to speed up the convergence of a fixed point recursion with asymptotic linear behavior. Section~\ref{sec:fixpoint} established that, under appropriate smoothness assumptions, EM~may be approximated by a recursion of the form: $$ \theta_{n+1} \approx S \hat{\theta} + (I-S) \theta_n, $$ where $\hat{\theta}$ is the unknown limit and~$S$ is the speed matrix given by~(\ref{eq:speed_matrix}) which depends on this limit. Aitken's acceleration stems from the remark that, if~$S$ were known, then the limit could be computed explicitly in a single iteration, namely: $ \hat{\theta} \approx \theta_0 + S^{-1} (\theta_1-\theta_0)$ for some starting value~$\theta_0$. Although~$S$ is unknown and the sequence is not strictly linear, we are tempted to consider the following modified EM~scheme. Given a current parameter estimate~$\theta_n$, \begin{itemize} \item E-step: compute $Q(\theta|\theta_n)$ and approximate the inverse speed matrix: $ \displaystyle S_n^{-1} = J_y(\theta_n)^{-1} \, {\cal J}_z (\theta_n)$. \item M-step: unchanged, get an intermediate value $\displaystyle \theta^* = \arg\max_\theta Q(\theta|\theta_n)$. \item Acceleration step: update the parameter using $\displaystyle \theta_{n+1} = \theta_n + S_n^{-1}(\theta^* - \theta_n)$. \end{itemize} It turns out that this scheme is nothing but the Newton-Raphson method to find a zero of $\theta \mapsto \Phi(\theta)-\theta$, where~$\Phi$ is the map defined by the EM~sequence, i.e. $\Phi(\theta')= \arg\max_\theta Q(\theta|\theta')$. Since the standard~EM sets $S_n=I$ on each iteration, it may be viewed as a first-order approach to the same zero-crossing problem, hence avoiding the expense of computing~$S_n$. 
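Before discussing its drawbacks, here is a scalar sanity check of the acceleration step (our own toy numbers): when the EM map $\Phi$ is exactly affine, the update $\theta_{n+1}=\theta_n+S^{-1}(\Phi(\theta_n)-\theta_n)$ recovers the limit in a single iteration, whereas the plain fixed-point step only covers a fraction~$S$ of the remaining distance.

```python
# Scalar toy: Phi(t) = s*t_hat + (1-s)*t, where s plays the role of the
# speed matrix S and t_hat of the unknown limit (illustrative values).
s, t_hat = 0.1, 4.0

def phi(t):
    return s * t_hat + (1.0 - s) * t

t0 = 0.0
# Plain fixed-point (EM-like) step: geometric convergence with rate 1-s.
t_plain = phi(t0)
# Aitken step: jumps straight to the limit since the map is affine here.
t_aitken = t0 + (phi(t0) - t0) / s
print(t_plain, t_aitken)  # 0.4 4.0
```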
Besides this important implementation issue, convergence is problematic with Aitken's acceleration as the monotonicity property of~EM is generally lost. \subsection{AEM} \label{sec:aem} Accelerated~EM \cite{Jamshidian-93}. To trade off between~EM and its Aitken-accelerated version (see section~\ref{sec:aitken}), Jamshidian and Jennrich propose a conjugate gradient approach. Don't be misled: this is not a traditional gradient-based method (otherwise there would be no point in discussing it in this report). No gradient computation is actually involved here. The ``gradient'' is the function $\Phi(\theta)-\theta$, which may be viewed as a generalized gradient for the incomplete-data log-likelihood, hence justifying the use of the generalized conjugate gradient method (see e.g.~\cite{Press-92}). Compared to Aitken's accelerated~EM, the resulting AEM~algorithm doesn't require computing the speed matrix. Instead, the parameter update rule is of the form: $$ \theta_{n+1} = \theta_n + \lambda_n d_n, $$ where~$d_n$ is a direction composed from the current direction $\Phi(\theta_n)-\theta_n$ and the previous directions (the essence of conjugate gradient), and~$\lambda_n$ is a step size typically computed from a line maximization of the complete-data likelihood (which may or may not be cumbersome). As an advantage of line maximizations, the monotonicity property of~EM is preserved. Also, from this generalized gradient perspective, it is straightforward to devise EM~extensions that make use of other gradient descent techniques such as steepest descent or quasi-Newton methods~\cite{Press-92}. \subsection{ECM} \label{sec:ecm} Expectation Conditional Maximization \cite{Meng-93}. This variant (not to be confused with CEM, see above) was introduced to cope with situations where the standard M-step is intractable. It is the first in a list of coordinate ascent-based EM~extensions. 
In~ECM, the M-step is replaced with a number of lower-dimensional maximization problems called CM-steps. This implies decomposing the parameter space as a sum of subspaces, which, up to some possible reparameterization, is the same as splitting the parameter vector into several blocks, $\theta = (t_1, t_2, \ldots, t_s)$. Starting from a current estimate $\theta_n$, the CM-steps update one coordinate block after another by partially maximizing the auxiliary $Q$-function, yielding a scheme similar in essence to Powell's multidimensional optimization method \cite{Press-92}. This produces a sequence $\theta_n = \theta_{n,0} \to \theta_{n,1} \to \theta_{n,2} \to \ldots \to \theta_{n,s-1} \to \theta_{n,s} = \theta_{n+1}$, such that: $$ Q(\theta_n|\theta_n) \leq Q(\theta_{n,1}|\theta_n) \leq Q(\theta_{n,2}|\theta_n) \leq \ldots \leq Q(\theta_{n,s-1}|\theta_n) \leq Q(\theta_{n+1}|\theta_n) $$ Therefore, the auxiliary function is guaranteed to increase at each CM-step, hence over the whole M-step, and the incomplete-data likelihood increases with it. Hence, ECM is a special case of GEM (see section~\ref{sec:classical}). \subsection{ECME} \label{sec:ecme} ECM either \cite{Liu-94}. This is an extension of ECM where some CM-steps are replaced with steps that maximize, or increase, the incomplete-data log-likelihood $L(\theta)$ rather than the auxiliary $Q$-function. To make sure that the likelihood function increases globally in the M-step, the only requirement is that the CM-steps that act on the actual log-likelihood be performed after the usual $Q$-maximizations. 
This is because increasing the $Q$-function only increases likelihood from the starting point, namely $\theta_n$, which is held fixed during the M-step (at least, this is what we assume)\footnote{For example, if one chooses~$\theta^*$ such that $L(\theta^*)\geq L(\theta_n)$ and, then, $\theta_{n+1}$ such that $Q(\theta_{n+1}|\theta_n)\geq Q(\theta^*|\theta_n)$, the only conclusion is that the likelihood increases from~$\theta_n$ to~$\theta^*$, but may actually decrease from~$\theta^*$ to~$\theta_{n+1}$ because $\theta^*$ is not the starting point of~$Q$. Permuting the $L$-maximization and the $Q$-maximization, we have $Q(\theta^*|\theta_n)\geq Q(\theta_n|\theta_n)$, thus $L(\theta^*)\geq L(\theta_n)$, and therefore $L(\theta_{n+1})\geq L(\theta_n)$ since we have assumed $L(\theta_{n+1})\geq L(\theta^*)$. This argument generalizes easily to any intermediate sequence using the same cascade inequalities as in the derivation of~ECM (see section~\ref{sec:ecm}).}. Starting with $Q$-maximizations is guaranteed to increase the likelihood, and of course subsequent likelihood maximizations can only improve the situation. With the correct setting, ECME is even more general than~GEM as defined in section~\ref{sec:classical} while inheriting its fundamental monotonicity property. An example application of ECME is in mixture models, where typically mixing proportions are updated using a one-step Newton-Raphson gradient descent on the incomplete-data likelihood, leading to a simple additive correction to the usual EM~update rule \cite{Liu-94}. At least in this case, ECME has proved to converge faster than standard~EM. \subsection{SAGE} \label{sec:sage} Space-Alternating Generalized EM \cite{Fessler-94,Fessler-95}. Following up on ECM and ECME (see sections~\ref{sec:ecm} and~\ref{sec:ecme}), one can imagine defining an auxiliary function specific to each coordinate block of the parameter vector. 
More technically, using a block decomposition $\theta=(t_1,t_2,\ldots,t_s)$, we assume that, for each block~$i=1,\ldots,s$, there exists a function $Q_i(\theta|\theta')$ such that, for all~$\theta$ and~$\theta'$ with identical block coordinates except (maybe) for the $i$-th block, we have: $L(\theta) - L(\theta') \geq Q_i(\theta|\theta') - Q_i(\theta'|\theta')$. This idea has two important implications. First, the usual ECM scheme needs to be rephrased, because changing the auxiliary function across CM-steps may well result in decreasing the likelihood, a problem worked around in~ECME with an appropriate ordering of the CM-steps. In this more general framework, though, there may be no such fix to save the day. In order to ensure monotonicity, at least some CM-steps should start with ``reinitializing'' their corresponding auxiliary function, which means... performing an E-step. It is important to realize that, because the auxiliary function is coordinate-specific, so is the E-step. Hence, each ``CM-step'' becomes an EM~algorithm in itself, sometimes called a ``cycle''. We end up with a nested algorithm where cycles are embedded in a higher-level iterative scheme. Second, how should the $Q_i$'s be defined? From section~\ref{sec:em}, we know that the standard~EM auxiliary function $Q(\theta|\theta')$ is built from the complete-data space~$Z$; see in particular equation~(\ref{eq:cecl}). Fessler \& Hero introduce {\em hidden-data spaces}, a concept that generalizes complete-data spaces in the sense that hidden-data spaces may be coordinate-specific, i.e. there is a hidden variable~$Z_i$ for each block~$t_i$. Formally, given a block decomposition $\theta = (t_1, t_2, \ldots, t_s)$, $Z_i$ is a hidden-data space for $t_i$ if: $$ p(z_i,y|\theta) = p(y|z_i,\{t_{j\not=i}\}) \, p(z_i|\theta) $$ This definition's main feature is that the conditional probability of~$Y$ given~$Z_i$ is allowed to be dependent on every parameter block but~$t_i$. 
Let us check that the resulting auxiliary function fulfills the monotonicity condition. We define: $$ Q_i(\theta|\theta') \equiv E\big[\log p(Z_i|\theta)|y,\theta'\big] $$ Then, applying Jensen's inequality~(\ref{eq:jensen}) as in section~\ref{sec:jensen}, we get: $$ L (\theta) - L (\theta') \geq Q_i(\theta|\theta') - Q_i(\theta'|\theta') + \int \log \frac{p(y|z_i,\theta)}{p(y|z_i,\theta')}\,p(z_i|y,\theta')\,dz_i $$ When~$\theta$ and~$\theta'$ differ only by~$t_i$, the integral vanishes because the conditional pdf~$p(y|z_i,\theta)$ is independent of~$t_i$ by the above definition. Consequently, maximizing $Q_i(\theta|\theta')$ with respect to $t_i$ only (the other parameters being held fixed) is guaranteed to increase the incomplete-data likelihood. Specific applications of~SAGE to the Poisson imaging model or penalized least-squares regression were reported to converge much faster than standard~EM. \subsection{CEMM} \label{sec:cemm} Component-wise EM for Mixtures \cite{Celeux-01}. Celeux et al.\ extend the SAGE methodology to the case of constrained likelihood maximization, which arises typically in mixture problems where the mixing proportions must sum to one. Using a Lagrangian dualization approach, they recast the initial problem into unconstrained maximization by defining an appropriate penalized log-likelihood function. The~CEMM algorithm they derive is a natural coordinatewise variant of~EM whose convergence to a stationary point of the likelihood is established under mild regularity conditions. \subsection{AECM} \label{sec:aecm} Alternating ECM \cite{Meng-97,Meng-99}. In an attempt to summarize earlier contributions, Meng \& van Dyk propose to cast a number of EM~extensions into a unified framework, the so-called~AECM algorithm. Essentially, AECM~is a SAGE~algorithm (itself a generalization of both ECM and ECME) that includes another data augmentation trick. 
The idea is to consider a family of complete-data spaces indexed by a working parameter~$\alpha$. More formally, we define a joint~pdf $q(z,y|\theta,\alpha)$ as depending on both~$\theta$ and~$\alpha$, yet imposing the constraint that the corresponding marginal incomplete-data pdf be preserved: $$ p(y|\theta) = \int q(z,y|\theta,\alpha)\,dz, $$ and thus independent of~$\alpha$. In other words, $\alpha$~is identifiable only given the complete data. A simple way of achieving such data augmentation is to define~$Z=f_{\theta,\alpha}(Z_0)$, where~$Z_0$ is some reference complete-data space and~$f_{\theta,\alpha}$ is a one-to-one mapping for any~$(\theta,\alpha)$. Interestingly, it can be seen that~$\alpha$ has no effect if~$f_{\theta,\alpha}$ is insensitive to~$\theta$. In~AECM, $\alpha$ is tuned beforehand so as to minimize the fraction of missing data~(\ref{eq:speed_matrix}), thereby maximizing the algorithm's global speed. In general, however, this initial minimization cannot be performed exactly since the global speed may depend on the unknown maximum likelihood parameter. \subsection{PX-EM} \label{sec:px-em} Parameter-Expanded EM \cite{Liu-98,Liu-03}. Liu et al.\ revisit the working parameter method suggested by Meng and van Dyk~\cite{Meng-97} (see section~\ref{sec:aecm}) from a slightly different angle. In their strategy, the joint pdf~$q(z,y|\theta, \alpha)$ is defined so as to meet the following two requirements. First, the baseline model is embedded in the expanded model in the sense that $q(z,y|\theta, \alpha_0)=p(z,y|\theta)$ for some null value~$\alpha_0$. Second, which is the main difference with~AECM, the observed-data marginals are consistent up to a many-to-one reduction function~$r(\theta,\alpha)$, $$ p\big(y|r(\theta,\alpha)\big) = \int q(z,y|\theta,\alpha)\,dz, $$ for all~$(\theta,\alpha)$. From there, the trick is to ``pretend'' to estimate~$\alpha$ iteratively rather than fixing its value beforehand. 
The PX-EM algorithm is simply an~EM on the expanded model with additional instructions after the M-step to apply the reduction function and reset~$\alpha$ to its null value. Thus, given a current estimate~$\theta_n$, the E-step forms the auxiliary function corresponding to the expanded model from $(\theta_n,\alpha_0)$, which amounts to the standard E-step because~$\alpha=\alpha_0$. The M-step then provides $(\theta^*,\alpha^*)$ such that $q(y|\theta^*,\alpha^*)\geq q(y|\theta_n,\alpha_0)$, and the additional reduction step updates $\theta_n$ according to $\theta_{n+1}=r(\theta^*,\alpha^*)$, implying $p(y|\theta_{n+1})=q(y|\theta^*,\alpha^*)$. Because $q(y|\theta_n,\alpha_0)=p(y|\theta_n)$ by construction of the expanded model, we conclude that $p(y|\theta_{n+1})\geq p(y|\theta_n)$, which shows that PX-EM preserves the monotonicity property of~EM. In some way, PX-EM capitalizes on the fact that a large deviation between the estimate of~$\alpha$ and its known value~$\alpha_0$ is an indication that the parameter of interest~$\theta$ is poorly estimated. Hence, PX-EM adjusts the M-step for this deviation via the reduction function. A variety of examples where PX-EM converges much faster than~EM is reported in \cite{Liu-98}. Possible variants of PX-EM include the coordinatewise extensions underlying~SAGE. \subsection{Incremental EM} Following the maximization-maximization approach discussed in section~\ref{sec:global}, Neal \& Hinton~\cite{Neal-98} address the common case where observations are i.i.d. Then, we have $p(y|\theta)=\prod_i p(y_i|\theta)$ and, similarly, the global EM~objective function~(\ref{eq:maxmax}) reduces to: $$ L(\theta,q) = \sum_i \big\{ {\rm E}_{q_i} \big[\log p(Z_i,y_i|\theta)\big] + H(q_i) \big\}, $$ where we can search for~$q$ under the factored form $q(z)=\prod_i q_i(z)$. 
Therefore, for a given~$\theta$, maximizing $L(\theta,q)$ wrt~$q$ is equivalent to maximizing the contribution of each data item wrt~$q_i$, hence splitting the global maximization problem into a number of simpler maximizations. Incremental EM~variants are justified by this remark, the general idea being to update~$\theta$ by visiting the data items sequentially rather than from a global E-step. Neal \& Hinton demonstrate an incremental~EM variant for mixtures that converges twice as fast as standard~EM. \subsection{Sparse EM} Another suggestion of Neal \& Hinton~\cite{Neal-98} is to track the auxiliary distribution $q(z)$ in a subspace of the original search space~${\cal Q}$ (at least for a certain number of iterations). This general strategy includes sparse EM~variants where $q$~is updated only at pre-defined plausible unobserved values. Alternatively, ``winner-take-all'' EM~variants such as the CEM~algorithm~\cite{Celeux-92} (see section~\ref{sec:cem}) may be seen in this light. Such procedures may have strong computational advantages but, in return, are prone to estimation bias. In the maximization-maximization interpretation of~EM, this comes as no surprise since these approaches ``project'' the estimate on a reduced search space that may not contain the maximum likelihood solution. \section{Stochastic EM variants} \label{sec:stochastic_variants} While deterministic EM~variants were mainly motivated by convergence speed considerations, stochastic variants are more concerned with other limitations of standard~EM. One is that the EM auxiliary function~(\ref{eq:auxiliary}) involves computing an integral that may not be tractable in some situations. The idea is then to replace this tedious computation with a stochastic simulation. As a typical side effect of such an approach, the modified algorithm is less prone to getting trapped in local maxima, yielding improved global convergence properties. \subsection{SEM} Stochastic EM \cite{Celeux-85}. 
As noted in section~\ref{sec:conditional_expectation}, the standard EM auxiliary function is the best estimate of the complete-data log-likelihood in the sense of the conditional mean squared error. The idea underlying SEM, like other stochastic EM variants, is that there might be no need to ask for such a ``good'' estimate. Therefore, SEM replaces the standard auxiliary function with: $$ \hat{Q}(\theta|\theta') = \log p(z'|\theta), $$ where $z'$ is a random sample drawn from the posterior distribution of the unobserved variable\footnote{Notice that when $Z$ is defined as $Z=(X,Y)$, this simulation reduces to a random draw of the missing data~$X$.}, $p(z|y,\theta')$. This leads to the following modified iteration; given a current estimate $\theta_n$: \begin{itemize} \item Simulation step: compute $p(z|y,\theta_n)$ and draw an unobserved sample $z_n$ from $p(z|y,\theta_n)$. \item Maximization step: find $\displaystyle \theta_{n+1} = \arg\max_\theta p(z_n|\theta)$. \end{itemize} By construction, the resulting sequence $\theta_n$ is a homogeneous Markov chain\footnote{The draws need to be mutually independent conditional on $(\theta_1,\theta_2,\ldots,\theta_n)$, i.e. $p(z_1,z_2,\ldots,z_n|\theta_1,\theta_2,\ldots,\theta_n)=\prod_i p(z_i|\theta_i)$.} which, under mild regularity conditions, converges to a stationary pdf. This means in particular that $\theta_n$ doesn't converge to a unique value! Various schemes can be used to derive a pointwise limit, such as averaging the estimates over iterations once stationarity has been reached (see also SAEM regarding this issue). It was established in some specific cases that the stationary pdf concentrates around the likelihood maximizer with a variance inversely proportional to the sample size. However, in cases where several local maximizers exist, one may expect a multimodal behavior. \subsection{DA} Data Augmentation algorithm \cite{Tanner-87}. 
Going further into the world of random samples, one may consider replacing the M-step in SEM with yet another random draw. In a Bayesian context, maximizing $p(z_n|\theta)$ wrt~$\theta$ may be thought of as computing the mode of the posterior distribution $p(\theta|z_n)$, given by: $$ p(\theta|z_n) = \frac{p(z_n|\theta)p(\theta)}{\int p(z_n|\theta')p(\theta')\, d\theta'} $$ where we can assume a flat (or non-informative) prior distribution for~$\theta$. In DA, this maximization is replaced with a random draw $\theta_{n+1} \sim p(\theta|z_n)$. From equation~(\ref{eq:complete_data_space}), we easily check that $p(\theta|z_n)=p(\theta|z_n,y)$. Therefore, DA alternates conditional draws $z_n|(\theta_n,y)$ and $\theta_{n+1}|(z_n,y)$, which is the very principle of a Gibbs sampler. Results from Gibbs sampling theory apply, and it is shown under general conditions that the sequence~$\theta_n$ is a Markov chain that converges in distribution towards~$p(\theta|y)$. Once the sequence has reached stationarity, averaging~$\theta_n$ over iterations yields a random variable that converges to the conditional mean~${\rm E}(\theta|y)$, which is an estimator of~$\theta$ generally different from the maximum likelihood but not necessarily worse. Interestingly enough, several variants of~DA have been proposed recently \cite{Liu-99,Liu-03} following the parameter expansion strategy underlying the PX-EM algorithm described in section~\ref{sec:px-em}. \subsection{SAEM} \label{sec:saem} Stochastic Approximation type EM \cite{Celeux-95}. The SAEM algorithm is a simple hybridization of EM and SEM that provides pointwise convergence as opposed to the erratic behavior of SEM. Given a current estimate $\theta_n$, SAEM performs a standard EM iteration in addition to the SEM iteration. The parameter is then updated as a weighted mean of both contributions, yielding: $$ \theta_{n+1} = (1-\gamma_{n+1})\theta^{EM}_{n+1} + \gamma_{n+1}\theta^{SEM}_{n+1}, $$ where $0\leq\gamma_n\leq 1$. 
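To illustrate the weighted-mean update, the sketch below runs SAEM on a toy problem: estimating the mixing proportion~$p$ of a two-component Gaussian mixture whose means ($\pm 2$) and unit variances are known, so that both the EM and SEM updates are available in closed form. The model, the schedule $\gamma_n = 1/n$, and all numerical values are our own illustrative assumptions.

```python
import numpy as np

# Toy data: mixture 0.7*N(+2,1) + 0.3*N(-2,1).
rng = np.random.default_rng(1)
n = 2000
z_true = rng.random(n) < 0.7
y = rng.normal(np.where(z_true, 2.0, -2.0), 1.0)

def responsibilities(p):
    # Posterior probability that each point comes from the +2 component.
    a = p * np.exp(-0.5 * (y - 2.0) ** 2)
    b = (1.0 - p) * np.exp(-0.5 * (y + 2.0) ** 2)
    return a / (a + b)

p = 0.5
for it in range(1, 51):
    r = responsibilities(p)
    p_em = r.mean()                # standard EM update
    z = rng.random(n) < r          # SEM simulation step
    p_sem = z.mean()               # SEM update
    gamma = 1.0 / it               # decreases from 1 towards 0
    p = (1.0 - gamma) * p_em + gamma * p_sem
print(p)  # close to the true proportion 0.7
```

With $\gamma_1=1$ the first iteration is pure SEM; as $\gamma_n\to 0$ the recursion settles into plain EM, as described above.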
Of course, to apply SAEM, the standard EM needs to be tractable. The sequence $\gamma_n$ is typically chosen so as to decrease from~$1$ to~$0$, in such a way that the algorithm is equivalent to SEM in the early iterations, and then becomes more similar to EM. It is established that SAEM converges almost surely towards a local likelihood maximizer (thus avoiding saddle points) under the assumption that $\gamma_n$ decreases to~$0$ with $\lim_{n\to\infty} (\gamma_n/\gamma_{n+1})=1$ and $\sum_n \gamma_n = \infty$. \subsection{MCEM} Monte Carlo EM \cite{Wei-90}. At least formally, MCEM turns out to be a generalization of SEM. In the SEM simulation step, draw $m$~independent samples $z_n^{(1)}, z_n^{(2)}, \ldots, z_n^{(m)}$ instead of just one, and then maximize the following function: $$ \hat{Q} (\theta|\theta_n) = \frac{1}{m} \sum_{j=1}^m \log p(z_n^{(j)}|\theta), $$ which converges almost surely, as $m\to\infty$, to the standard~EM auxiliary function thanks to the law of large numbers. Choosing a large value for~$m$ justifies the ``Monte Carlo'' label. In this case, $\hat{Q}$~may be seen as an empirical approximation of the standard~EM auxiliary function, and the algorithm is expected to behave similarly to~EM. On the other hand, choosing a small value for~$m$ is not forbidden, and may even be advisable (in particular, for computational reasons). We notice that, for~$m=1$, MCEM reduces to SEM. A possible strategy consists of progressively increasing the parameter~$m$, yielding a ``simulated annealing'' MCEM which is close in spirit to~SAEM. \subsection{SAEM2} Stochastic Approximation EM \cite{Delyon-99}. Delyon et al.\ propose a generalization of~MCEM called SAEM, not to be confused with the earlier~SAEM algorithm presented in section~\ref{sec:saem}, although both algorithms promote a similar simulated annealing philosophy. 
In this version, the auxiliary function is defined recursively by averaging a Monte Carlo approximation with the auxiliary function computed in the previous step: $$ \hat{Q}_n(\theta) = (1-\gamma_n)\hat{Q}_{n-1}(\theta) + \frac{\gamma_n}{m_n} \sum_{j=1}^{m_n} \log p(z_n^{(j)}|\theta), $$ where $z_n^{(1)}, z_n^{(2)}, \ldots, z_n^{(m_n)}$ are drawn independently from~$p(z|y,\theta_n)$. The weights~$\gamma_n$ are typically decreased across iterations in such a way that~$\hat{Q}_n(\theta)$ eventually stabilizes at some point. One may either increase the number of random draws~$m_n$, or set a constant value~$m_n\equiv 1$ when simulations have heavy computational cost compared to the maximization step. The convergence of~SAEM2 towards a local likelihood maximizer is proved in \cite{Delyon-99} under quite general conditions. Kuhn et al.~\cite{Kuhn-02} further extend the technique to make it possible to perform the simulation under a distribution~$\Pi_{\theta_n}(z)$ simpler to deal with than the posterior pdf~$p(z|y,\theta_n)$. Such a distribution may be defined as the transition probability of a Markov chain generated by a Metropolis-Hastings algorithm. If~$\Pi_\theta(z)$ is such that its associated Markov chain converges to~$p(z|y,\theta)$, then the convergence properties of~SAEM2 generalize under mild additional assumptions. \section{Conclusion} This report's primary goal is to give a flavor of the current state of the art on EM-type statistical learning procedures. We also hope it will help researchers and developers in finding the literature relevant to their current existential questions. For a more comprehensive overview, we recommend some good tutorials available on the internet~\cite{Couvreur-96,Bilmes-98,BergerA-98,Dellaert-02,vanDyk-00,Liu-03}. \appendix \section{Appendix} \subsection{Maximum likelihood quickie} \label{app:ml} Let~$Y$ be a random variable with pdf $p(y|\theta)$, where $\theta$ is an unknown parameter vector. 
Given an outcome~$y$ of~$Y$, maximum likelihood estimation consists of finding the value of~$\theta$ that maximizes the probability~$p(y|\theta)$ over a given search space~$\Theta$. In this context, $p(y|\theta)$ is seen as a function of~$\theta$ and called the likelihood function. Since it is often more convenient to manipulate the logarithm of this expression, we will focus on the equivalent problem of maximizing the log-likelihood function: $$ \hat{\theta}(y) = \arg\max_{\theta\in\Theta} L(y,\theta) $$ where the log-likelihood $L(y,\theta) \equiv \log p(y|\theta)$ is written with an explicit~$y$ to emphasize the dependence on~$y$, contrary to the notation $L(\theta)$ usually employed throughout this report. Whenever the log-likelihood is differentiable wrt $\theta$, we also define the score function as the log-likelihood gradient: $$ S(y,\theta) = \frac{\partial L}{\partial \theta}(y,\theta) $$ In this case, a fundamental result is that, for any vector $U(y,\theta)$, we have: $$ {\rm E}(SU^t) = \frac{\partial}{\partial \theta} {\rm E}(U^t) - {\rm E} \Big( \frac{\partial U^t}{\partial \theta} \Big) $$ where the expectation is taken wrt the distribution $p(y|\theta)$. This equality is easily obtained from the logarithm differentiation formula and some additional manipulations. Substituting the ``true'' value of $\theta$ in this expression leads to the following: \begin{itemize} \item ${\rm E}(S)=0$ \item $\displaystyle {\rm Cov}(S,S)=- {\rm E} \Big( \frac{\partial S^t}{\partial \theta} \Big)$ (Fisher information) \item If $U(y)$ is an unbiased estimator of $\theta$, then ${\rm Cov}(S,U)={\rm Id}$. \item In the case of a single parameter, the above result implies ${\rm Var}(U) \geq \frac{1}{{\rm Var}(S)}$ from the Cauchy-Schwarz inequality, i.e. the inverse of the Fisher information is a lower bound for the variance of~$U$. Equality occurs iff $U$ is an affine function of $S$, which imposes a specific form on $p(y|\theta)$ (Darmois theorem). 
\end{itemize} \subsection{Jensen's inequality} \label{app:jensen} For any random variable~$X$ and any real continuous concave function~$f$, we have: $$ f\big[{\rm E}(X)\big] \geq {\rm E}\big[f(X)\big]. $$ If $f$ is strictly concave, equality occurs iff $X$ is deterministic. \input{em.biblio} \end{document}
\begin{document} \title{The Generalized Recurrent Set and Strong Chain Recurrence} \author{Jim Wiseman} \address{Agnes Scott College \\ Decatur, GA 30030} \email{[email protected]} \thanks{This work was supported by a grant from the Simons Foundation (282398, JW)} \begin{abstract} Fathi and Pageault have recently shown a connection between Auslander's generalized recurrent set $\gr(f)$ and Easton's strong chain recurrent set. We study $\gr(f)$ by examining that connection in more detail, as well as connections with other notions of recurrence. We give equivalent definitions that do not refer to a metric. In particular, we show that $\gr(f^k)=\gr(f)$ for any $k>0$, and give a characterization of maps for which the generalized recurrent set is different from the ordinary chain recurrent set. \end{abstract} \maketitle \section{Introduction} Auslander's generalized recurrent set $\gr(f)$ (defined originally for flows (see \cite{Auslander}), and extended to maps (see \cites{A,AA})) is an important object of study in dynamical systems. (See, for example, \cites{Nit,Nit2,Peix1,Peix2,ST,ST2,AusNC,KK,Garay}.) Fathi and Pageault have recently shown (\cite{FP}) that $\gr(f)$ can be defined in terms of Easton's strong chain recurrent set (\cite{E}) (although they did not use the strong chain recurrent terminology). (See \cites{ABC,Y} for more on the literature on the strong chain recurrent set.) In this paper we study the generalized recurrent set by examining that connection in more detail, as well as connections with other notions of recurrence. In particular, we show that $\gr(f^k)=\gr(f)$ for any $k>0$, and give a characterization of maps for which the generalized recurrent set is different from the ordinary chain recurrent set. The strong chain recurrent set depends on the choice of metric, and thus Fathi and Pageault's description of $\gr(f)$ involves metrics. 
Since the generalized recurrent set itself is a topological invariant, it is useful to be able to describe it in terms of strong chain recurrence without referring to a metric (especially in the noncompact case, as in \cites{AA}). We give definitions with topological versions of strong $\ep$-chains that do not involve a metric. The paper is organized as follows. We give definitions and examples in Section~\ref{sect:defns}, and discuss Fathi and Pageault's Ma\~n\'e set in Section~\ref{sect:mane}. In Section~\ref{sect:GR} we turn to the generalized recurrent set, giving a topological definition and showing, in particular, that there exists a metric for which the strong chain recurrent set equals $\gr(f)$. In Section~\ref{sect:powers} we show that $\gr(f^k)=\gr(f)$ for any $k>0$. Finally, in Section~\ref{sect:relation} we consider the relationship between the generalized recurrent set and the ordinary chain recurrent set. Thanks to Todd Fisher and David Richeson for useful conversations on these topics, and to the anonymous referee for very prompt and helpful comments and perspective. Among other things, the referee provided a greatly improved proof of Theorem~\ref{thm:mwequal}. \section{Definitions and examples} \label{sect:defns} Throughout this paper, let $(X,d)$ be a compact metric space and $f:X\to X$ a continuous map. Recurrence on noncompact spaces is more complicated and will be the subject of future work. \begin{defn} An {\em $(\ep,f,d)$-chain} (or {\em $(\ep,d)$-chain}, if it is clear what the map is, or \emph{$\ep$-chain}, if the metric is also clear) of length $n$ from $x$ to $y$ is a sequence $(x=x_0, x_1, \dots, x_n=y)$ such that $d(f(x_{i-1}),x_i)\le\ep$ for $i=1,\dots,n$. A point $x$ is {\em chain recurrent} if for every $\ep>0$, there is an $\ep$-chain from $x$ to itself. We denote the set of chain recurrent points by $\crec(f)$. 
Two points $x$ and $y$ in $\crec(f)$ are \emph{chain equivalent} if there are $\ep$-chains from $x$ to $y$ and from $y$ to $x$ for any $\ep>0$. The map $f$ is \emph{chain transitive} on a subset $N$ of $X$ if for every $x,y\in N$ and every $\ep>0$, there is an $\ep$-chain from $x$ to $y$; the chain equivalence classes are called the \emph{chain transitive components}. \end{defn} \begin{remark} Chain recurrence depends only on the topology, not on the choice of metric (see, for example, \cite{Franks}). \end{remark} The following definitions are due to Easton~\cite{E}. \begin{defn} A {\em strong $(\ep,f,d)$-chain} (or {\em strong $(\ep,d)$-chain} or \emph{strong $\ep$-chain}) from $x$ to $y$ is a sequence $(x=x_0, x_1, \dots, x_n=y)$ such that $\sum_{i=1}^n d(f(x_{i-1}),x_i)\le\ep$. A point $x$ is {\em $d$-strong chain recurrent} (or \emph{strong chain recurrent}) if for every $\ep>0$, there is a strong $(\ep,d)$-chain from $x$ to itself. We denote the set of strong chain recurrent points by $\scrwo_d(f)$. Two points $x$ and $y$ in $\scrwo_d(f)$ are \emph{$d$-strong chain equivalent} (or \emph{strong chain equivalent}) if there are strong $(\ep,d)$-chains from $x$ to $y$ and from $y$ to $x$ for any $\ep>0$. A subset $N$ of $X$ is \emph{$d$-strong chain transitive} (or \emph{strong chain transitive}) if every $x$ and $y$ in $N$ are $d$-strong chain equivalent; the strong chain equivalence classes are called the \emph{strong chain transitive components}. \end{defn} \begin{ex} \label{ex:halfcirc} Let $X_1$ be the circle with the usual topology, and let $f_1:X_1\to X_1$ be a homeomorphism that fixes every point on the left semicircle $C_1$ and moves points on the right semicircle clockwise (see Figure~\ref{fig:halfcirc}). Then for any choice of metric $d$, we have $\scrwo_d(f_1)=C_1$, and each point in $C_1$ is a strong chain transitive component. 
\end{ex} \begin{figure} \caption{$f_1:X_1\to X_1$} \label{fig:halfcirc} \end{figure} \begin{remark} In general, strong chain recurrence does depend on the choice of metric. See Example 3.1 in \cite{Y}, or the following example from \cite{FP}. \end{remark} \begin{ex}[\cite{FP}] \label{ex:cantor} Consider the circle with the usual topology, and a map that fixes a Cantor set and moves all other points clockwise (see Figure~\ref{fig:cantor}). Choose a metric $d_2$ for which the Cantor set has Lebesgue measure 0; call the resulting metric space $X_2$, the map $f_2$, and the Cantor set $K_2$. Then $\operatorname{SCR}_{d_2}(f_2) = X_2$. Or we can choose a metric $d_3$ for which the Cantor set has positive Lebesgue measure, and call the resulting metric space $X_3$, with map $f_3$ and Cantor set $K_3$. Then $\operatorname{SCR}_{d_3}(f_3) = K_3$. \end{ex} \begin{figure} \caption{$f_2:X_2\to X_2$ and $f_3:X_3\to X_3$} \label{fig:cantor} \end{figure} \begin{remark} Fathi and Pageault~\cite{FP} define a function $L_d:X\times X\to [0,\infty]$, which they call the {\em $d$-Mather barrier}, by $L_d(x,y)= \inf \sum_{i=1}^n d(f(x_{i-1}),x_i)$, where the infimum is over all sequences $(x=x_0, x_1, \dots, x_n=y)$ from $x$ to $y$. (Zheng used a similar function in \cite{Zheng1}.) They then define the {\em $d$-Aubry set} to be $\{x\in X : L_d(x,x)=0\}$. Thus their $d$-Aubry set is identical to Easton's strong chain recurrent set. Similarly, they define an equivalence relation on the $d$-Aubry set by setting $x$ and $y$ equivalent if $L_d(x,y) = L_d(y,x)=0$, and call the equivalence classes $d$-Mather classes. Thus the $d$-Mather classes are exactly the $d$-strong chain transitive components. \end{remark} To eliminate the dependence on the metric in $\scrwo_d$, we can take either the intersection or the union over all metrics, giving us two different sets. 
\begin{defn}[\cite{FP}] The \emph{Ma\~n\'e set $\mane(f)$} is $\bigcup_{d'} \operatorname{SCR}_{d'}(f)$ and the \emph{generalized recurrent set $\gr(f)$} is $\bigcap_{d'} \operatorname{SCR}_{d'}(f)$, where the union and the intersection are both over all metrics $d'$ compatible with the topology of $X$. (Fathi and Pageault show (\cite{FP}) that this definition of the generalized recurrent set is equivalent to the usual definitions; see Section~\ref{sect:GR}.) \end{defn} Thus we have $\gr(f) \subset \scrwo_d(f) \subset \mane(f) \subset \crec(f)$; all of the inclusions can be strict, as the following example shows. \begin{ex} Let $X$ be the disjoint union of the spaces $X_1$, $X_2$, and $X_3$ from Examples~\ref{ex:halfcirc} and \ref{ex:cantor}, with the induced metric $d$. Define the map $f:X\to X$ by $f(x)=f_i(x)$ for $x\in X_i$. Then we have $\gr(f) =C_1 \cup K_2 \cup K_3$, $\scrwo_d(f) = C_1 \cup X_2 \cup K_3$, $\mane(f) = C_1 \cup X_2 \cup X_3$, and $\crec(f) = X_1 \cup X_2 \cup X_3$. \end{ex} \section{The Ma\~n\'e set $\mane(f)$} \label{sect:mane} We give an equivalent definition of the Ma\~n\'e set $\mane(f)$ based on strong $\ep$-chains, but using a topological definition of chains that does not depend on the metric (Corollary~\ref{cor:newmane}). We begin with some notation. Let $X\times X$ be the product space, and let $\Delta_X$ be the diagonal, $\Delta_X=\{(x,x):x \in X\}$. To avoid confusion, we will use calligraphic letters like $\prodset{N}$ for other subsets of $X\times X$, and reserve italic letters like $N$ for subsets of $X$. Let $B_d(x;\ep)$ (or $B(x;\ep)$ if the metric is clear) be the closed $\ep$-ball around $x$, $B_d(x;\ep) =\{y\in X : d(x,y)\le\ep\}$. 
Let $\prodset{V}_d(\ep)$ (or $\prodset{V}(\ep)$) be the closed $\ep$-neighborhood of the diagonal $\Delta_X$ in $X\times X$, $\prodset{V}_d(\ep) = \{(x_1,x_2) : d(x_1,x_2)\le\ep\}$, and $\prodset{V}^\circ_d(\ep)$ (or $\prodset{V}^\circ(\ep)$) the open $\ep$-neighborhood, $\prodset{V}^\circ_d(\ep) = \{(x_1,x_2) : d(x_1,x_2)<\ep\}$. For $\prodset{N}\subset X\times X$, we denote by $\prodset{N}^n$ the $n$-fold composition of $\prodset{N}$ with itself, $\prodset{N}\circ\prodset{N}\circ\cdots\circ\prodset{N}$, that is, \begin{align*} \prodset{N}^n = & \{(x,y) : \text{there exist $z_0=x,z_1,\ldots,z_n=y\in X$} \\ & \text{ such that $(z_{i-1},z_i)\in\prodset{N}$ for $i=1,\ldots,n$}\}. \end{align*} \begin{defn}\label{defn:nchain} Let $\prodset{N}$ be a neighborhood of $\Delta_X$. An \emph{$(\prodset{N},f)$-chain} (or simply \emph{$\prodset{N}$-chain} if the map is clear) from $x$ to $y$ is a sequence of points $(x=x_0, x_1, \dots, x_n=y)$ in $X$ such that $(f(x_{i-1}),x_i) \in \prodset{N}$ for $i=1,\ldots,n$. \end{defn} Thus $(x,y)\in \prodset{N}^n$ exactly when there is an $(\prodset{N},\id)$-chain of length $n$ from $x$ to $y$, where $\id$ is the identity map. \begin{defn}\label{defn:mrelations} We now define three relations on $X$. We write $y>_{d'}z$ if for any $\ep>0$, there is a strong $(\ep,f,d')$-chain from $y$ to $z$. We write $y >_\mrel z$ if $y>_{d'}z$ for some compatible metric $d'$; set $\prodset{M}= \{(y,z)\in X \times X: y >_\mrel z\}$. We write $y >_\wrel z$ if for any closed neighborhood $\prodset{D}$ of the diagonal in $X\times X$, there exist a closed symmetric neighborhood $\prodset{N}$ of the diagonal and an integer $n>0$ such that $\prodset{N}^{3^n} \subset \prodset{D}$ and there is an $(\prodset{N},f)$-chain of length $n$ from $y$ to $z$; set $\prodset{W}= \{(y,z)\in X \times X: y >_\wrel z\}$. \end{defn} \begin{thm}\label{thm:mwequal} The relations $\mrel$ and $\wrel$ are equal.
\end{thm} \begin{proof} We will show that $\prodset{M}\subset \prodset{W}\subset \overline{\prodset{W}}\subset \prodset{M}$ (where $\overline{\prodset{W}}$ is the closure of $\prodset{W}$ in $X\times X$), and so they are all equal. We first show that $\prodset{M}\subset \prodset{W}$. Let $(y,z)$ be a point in $\prodset{M}$; then there is a metric $d'$ such that for any $\ep>0$, there is a strong $(\ep,f,d')$-chain from $y$ to $z$. Given $\prodset D$, choose $\ep$ such that $\prodset{V}_{d'}(\ep) \subset \prodset D$. (Such an $\ep$ exists since $X\times X$ is compact.) Let $(x_0=y,x_1,\ldots,x_n=z)$ be a strong $(\ep/4,d')$-chain from $y$ to $z$. For $1\le i \le n$, define $\ep_i = d'(f(x_{i-1}),x_i)$, and let $B_i = B_{d'}(x_i;\ep_i)$ (note that $B_i$ is the single point $\{x_i\}$ if $\ep_i =0$). Finally, define $\prodset N$ by $\prodset{N}=\prodset{V}_{d'}(\frac{\ep}{2\cdot3^n}) \bigcup (\bigcup_{i=1}^n B_i\times B_i)$. Since $(f(x_{i-1}),x_i)$ is in $B_i\times B_i$, $(x_0=y,x_1,\ldots,x_n=z)$ is an $(\prodset{N},f)$-chain. To see that $\prodset{N}^{3^n} \subset \prodset{D}$, let $z_0, z_1,\ldots,z_{3^n}$ be a sequence with $(z_{j-1},z_j)\in\prodset{N}$ for $1\le j\le 3^n$; we want to show that $d'(z_0,z_{3^n}) \le \ep$. Observe that if $z_j$ and $z_k$ are both in $B_i$ for some $i$ and some $j<k$, then $z_0, z_1,\ldots,z_{j-1},z_j,z_k,z_{k+1},\ldots,z_{3^n}$ is also an $(\prodset{N},\id)$-chain from $z_0$ to $z_{3^n}$, possibly of shorter length. Thus we may assume that for each $B_i$, the chain contains at most one pair of points in $B_i$ and that any two such points are adjacent in the chain; each such pair of points is within $2\ep_i$ of each other (the diameter of $B_i$), and two adjacent points that are not in the same $B_i$ must be within $\frac{\ep}{2\cdot3^n}$ of each other. Therefore $d'(z_0,z_{3^n}) \le 3^n\cdot \frac{\ep}{2\cdot 3^n} + \sum 2\ep_i \le \frac{\ep}{2}+\frac{\ep}{2}$. To show that $\overline{\prodset{W}}\subset \prodset{M}$, we need the following metrization lemma.
\begin{lemma}[\cite{Kelley}*{Lemma~6.12}] Let $\{\prodset U_n\}_{n=0}^\infty$ be a sequence of symmetric subsets of $X\times X$ with $\prodset U_0 = X\times X$ and $\bigcap_{n=0}^\infty \prodset U_n = \Delta_X$. If for every $n\ge1$, $\prodset U_n^3 \subset \prodset U_{n-1}$, then there exists a metric $d'$ on $X$ such that $\prodset U_n \subset \prodset{V}^\circ_{d'}(2^{-n})\subset \prodset U_{n-1}$. \end{lemma} (The lemma actually gives a pseudo-metric, but the condition $\bigcap_{n=0}^\infty \prodset U_n = \Delta_X$ guarantees that it separates points, and hence is a metric.) Let $(y,z)$ be a point in $\overline{\prodset{W}}$; we will construct a metric $d'$, depending on $(y,z)$, such that $y >_{d'}z$ (and so $y >_\mrel z$). We construct the sequence for the metrization lemma by induction. Let $\prodset A_0 = X\times X$. Then assume that a closed, symmetric neighborhood of the diagonal $\prodset A_k$ has been constructed. Let $\prodset A_k'$ be a closed, symmetric neighborhood of the diagonal such that $(\prodset A_k')^3 \subset \prodset A_k$ and $(f\times f)(\prodset A_k') \subset \prodset A_k$ (this is possible by compactness and uniform continuity). We can choose $\prodset A_k'$ inside $\prodset{V}_d(\frac{1}{k+1})$ to guarantee that the $\prodset A_k$'s will shrink to $\Delta_X$. Since $(y,z) \in \overline{\prodset{W}}$, there exists a point $(y_k,z_k) \in \prodset{W}$ with $(y,y_k) \in \prodset A_k'$ and $(z,z_k) \in \prodset A_k'$. Then there exist a closed symmetric neighborhood $\prodset A_{k+1}$ of the diagonal and an integer $n_k$ such that there is an $\prodset A_{k+1}$-chain of length $n_k$ from $y_k$ to $z_k$ and $(\prodset A_{k+1})^{3^{n_k}} \subset \prodset A_k'$. Then we can apply the metrization lemma (after renumbering) to the sequence $$ \prodset A_0, (\prodset A_1)^{3^{n_0}}, (\prodset A_1)^{3^{n_0-1}}, \dots, (\prodset A_1)^{3^{2}}, (\prodset A_1)^{3}, \prodset A_1,(\prodset A_2)^{3^{n_1}}, (\prodset A_2)^{3^{n_1-1}}, \dots $$ to obtain the compatible metric $d'$.
For any $\ep>0$, choose $k$ so that $\prodset A_k \subset \prodset{V}^\circ_{d'}(\ep/3)$; then $\prodset A_{k+1} \subset \prodset{V}^\circ_{d'}(2^{-n_k}\ep/3)$. If we take our $\prodset A_{k+1}$-chain of length $n_k$ from $y_k$ to $z_k$, $(y_k,x_1,\dots,x_{n_k-1},z_k)$, and change the beginning and ending points to get a chain $(x_0=y,x_1,\dots,x_{n_k-1},x_{n_k}=z)$ from $y$ to $z$, we have that $\sum_{i=1}^{n_k} d'(f(x_{i-1}),x_i) \le n_k\cdot(2^{-n_k}\ep/3) + d'(f(y),f(y_k)) + d'(z, z_k) \le \ep/3 + \ep/3 + \ep/3 = \ep$. \end{proof} \begin{cor}\label{cor:newmane} A point $x\in X$ is in $\mane(f)$ if and only if for any closed neighborhood $\prodset{D}$ of the diagonal in $X\times X$, there exist a closed symmetric neighborhood $\prodset{N}$ of the diagonal and an integer $n>0$ such that $\prodset{N}^{3^n} \subset \prodset{D}$ and there is an $(\prodset{N},f)$-chain of length $n$ from $x$ to itself. \end{cor} \begin{proof} Clearly $x\in\mane(f)$ if and only if $x >_\mrel x$. \end{proof} In particular, $\mane(f)$ is closed, since we saw that $\prodset{M}$ is closed. \begin{prop} \label{prop:mfnotmfk} In general, $\mane(f|_{\mane(f)}) \ne \mane(f)$. \end{prop} \begin{proof} See \cite{Y} (Example~3.1 and the examples constructed in Theorem~4.2), or the following example. \end{proof} \begin{ex}\label{ex:spiralhalf} Let $X_4$ be the disk with the usual topology, and let $f_4:X_4\to X_4$ be a map that fixes the center point $(0,0)$ and the left outer semicircle $C_4$, moves points on the right outer semicircle clockwise, and moves interior points other than the center in a clockwise spiral out toward the outer circle $S_4$ (see Figure~\ref{fig:spiralhalf}). Then $\mane(f_4) = \{(0,0)\} \cup S_4$, but $\mane(f_4|_{\mane(f_4)}) = \{(0,0)\} \cup C_4$. \end{ex} \begin{figure} \caption{$f_4:X_4\to X_4$} \label{fig:spiralhalf} \end{figure} Fathi and Pageault show (\cite{FP}*{Thm.~3.5}) that for homeomorphisms, $\mane(f)= \Fix(f) \cup \crec(f|_{X\backslash\Int(\Fix(f))})$.
Thus $\mane(f)$ depends strongly on the set of fixed points, but not on the other periodic points. This can lead to counterintuitive results, as the following example shows. \begin{ex} \label{ex:twocircs} Let $f_1:X_1\to X_1$ be the homeomorphism from Example~\ref{ex:halfcirc}. Define the space $X = X_1\times \Z_2$ and the homeomorphism $f:X\to X$ by $f(x,0) = (f_1(x), 1)$ and $f(x,1) = (f_1(x), 0)$. Then $f$ has no fixed points, and so we have $\mane(f) = \crec(f)= X$, which is somewhat counterintuitive since $f$ is just two copies of $f_1$ and $\mane(f_1) = C_1$, the left semicircle. By the definition of $\mane(f)$, for every point $x\in X$, there must be a metric $d$ such that $x \in \scrwo_d(f)$. One can show that if we give $X_1\times\{0\}$ the usual Euclidean metric, and $X_1\times\{1\}$ the usual metric on the left semicircle and the metric induced by the Minkowski question mark function (\cite{Mink}) on the right semicircle, we get $\scrwo_d(f)=X$. \end{ex} Thus $\mane(f)$ occupies a middle ground between $\crec(f)$ and $\gr(f)$, and is perhaps of less dynamical interest than either, so we now turn to $\gr(f)$. \section{The generalized recurrent set $\gr(f)$} \label{sect:GR} Part of the usefulness of the generalized recurrent set $\gr(f)$ stems from the fact that it can be defined in terms of several different dynamical concepts. As we have seen, Fathi and Pageault give a definition in terms of the strong chain recurrent set, and we will give one using a topological version of strong $\ep$-chains. We begin by reviewing existing results. Following the notation in \cite{FP}, let $\theta:X\to\mathbb R$ be a Lyapunov function for $f$ (that is, $\theta(f(x))\le \theta(x)$ for all $x$), and let $N(\theta)$ be the set of neutral points, that is, $N(\theta)=\{x\in X: \theta(f(x))=\theta(x)\}$. Denote by $L(f)$ the set of continuous Lyapunov functions for $f$, and by $L_{d'}(f)$ the set of Lipschitz (with respect to the metric $d'$) Lyapunov functions for $f$.
\begin{prop}[\cites{AA,FP,A}] \label{prop:gre} The following definitions for the generalized recurrent set $\gr(f)$ are equivalent. \begin{enumerate} \item (\cite{FP}) \label{item:fp} $\bigcap_{d'} \operatorname{SCR}_{d'}(f)$, where the intersection is over all metrics $d'$ compatible with the topology of $X$. \item (\cite{FP}) $\bigcap_{d'}\bigcap_{\theta\in L_{d'}(f)} N(\theta)$, where the outer intersection is over all metrics $d'$ compatible with the topology of $X$. \item (\cites{A,AA}) $\bigcap_{\theta\in L(f)} N(\theta)$. \item (\cites{A,AA}) \label{item:smallest} The set of points $x$ such that $(x, x)$ is an element of the smallest closed, transitive relation containing the graph of $f$. \item (\cites{A,AA}) \label{item:tf} The set of points $x$ such that $(x, x)$ is an element of $\mathcal Gf$, where $\mathcal Gf$ is as defined below. \end{enumerate} \end{prop} \begin{defn}[\cites{A,AA}] \label{defn:gf} $\mathcal Gf$ is defined using transfinite recursion. For any subset $\prodset{R}$ of $X\times X$, define its orbit $\prodset{O(R)}$ by $\prodset{O(R)}= \bigcup_{i\ge1}\prodset{R}^i$, and define $\prodset{NW}(\prodset{R})$ to be $\overline{\prodset{O(R)}}$ (the closure, in $X\times X$, of $\prodset{O(R)}$). Let $\prodset{NW}_0(f)$ be the graph of $f$, that is, $\prodset{NW}_0(f)=\{(x,f(x)): x \in X\}$, and define inductively $\prodset{NW}_{\alpha+1}(f) = \prodset{NW}(\prodset{NW}_\alpha(f))$ for $\alpha$ an ordinal number and $\prodset{NW}_\beta(f)=\overline{\bigcup_{\alpha<\beta}\prodset{NW}_\alpha(f)}$ for $\beta$ a limit ordinal. This will stabilize at some countable ordinal $\gamma$, and we define $\mathcal Gf$ to be $\prodset{NW}_\gamma(f)$. Note that $\mathcal Gf$ is the smallest closed, transitive relation containing the graph of $f$ referred to in Proposition~\ref{prop:gre}(\ref{item:smallest}). \end{defn} Again, we give a definition based on strong $\ep$-chains, but using a topological definition of chains that does not depend on the metric.
\begin{defn} Let $\Sigma = \{\prodset{N}_i\}_{i=1}^\infty$ be a sequence of neighborhoods of the diagonal $\Delta_X$. A \emph{$(\Sigma,f)$-chain} (or simply \emph{$\Sigma$-chain}) is a finite sequence of points $(x=x_0, x_1, \dots, x_n=y)$ in $X$ such that $(f(x_{i-1}),x_i) \in \prodset{N}_{\sigma(i)}$ ($i=1,\ldots,n$) for some injection $\sigma: \{1,\ldots,n\}\to\mathbb{N}$. (The injection $\sigma$ is the same for all $i$.) Note that since $\sigma$ is one-to-one, each neighborhood $\prodset{N}_i$ can be used at most once in any $\Sigma$-chain. \end{defn} \begin{thm} \label{thm:newgrdef} A point $x\in X$ is in $\gr(f)$ if and only if for any sequence $\Sigma$ of neighborhoods of the diagonal $\Delta_X$, there exists a $(\Sigma,f)$-chain from $x$ to $x$. \end{thm} \begin{proof} We prove a slightly stronger result, in terms of relations. As in Definition~\ref{defn:mrelations}, we write $y>_{d'}z$ if for any $\ep>0$, there is a strong $(\ep,f,d')$-chain from $y$ to $z$. We write $y >_\arel z$ if $y>_{d'}z$ for all compatible metrics $d'$, and set $\prodset{A}= \{(y,z)\in X \times X: y >_\arel z\}$. We write $y>_\prodset{C} z$ if there is a $\Sigma$-chain from $y$ to $z$ for any sequence $\Sigma$ of neighborhoods of $\Delta_X$, and set $\prodset{C} = \{(y,z)\in X\times X: y >_\prodset{C} z\}$. We will show that $\mathcal Gf = \prodset C = \prodset{A}$, by proving that $\mathcal Gf \subset \prodset C \subset \prodset{A}\subset \mathcal Gf$. We begin with the following lemma. \begin{lemma} \label{lem:closed} The set $\prodset{C}$ is closed in $X\times X$. \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:closed}] Let $\{(y_j,z_j)\}_{j=1}^{\infty}$ be a sequence of points in $\prodset{C}$ with $\displaystyle\lim_{j\to\infty}(y_j,z_j)=(y,z)$; we must show that $(y,z)\in\prodset{C}$. First, observe that if $\Sigma'$ is a subsequence of $\Sigma$, then any $\Sigma'$-chain is also a $\Sigma$-chain.
Similarly, if $\prodset{N}_i' \subset \prodset{N}_i$ for all $i$, then any $\{\prodset{N}_i' \}_{i=1}^\infty$-chain is also a $\Sigma$-chain. Let $\Sigma = \{\prodset{N}_i\}_{i=1}^\infty$ be any sequence of neighborhoods of $\Delta_X$. For $i=1$ and $2$, choose $\widetilde{\prodset{N}_i}$ to be a neighborhood of the diagonal small enough that $\widetilde{\prodset{N}_i}^2 \subset \prodset{N}_i$. Choose a $K$ large enough that $(f(y), f(y_K))\in \widetilde{\prodset{N}_1}$ and $(z_K, z) \in \widetilde{\prodset{N}_2}$. Define a new sequence $\Sigma' = \{\prodset{N}_i \cap \widetilde{\prodset{N}_1} \cap \widetilde{\prodset{N}_2}\}_{i=3}^\infty$. Since $(y_K,z_K)\in \prodset{C}$, there is a $\Sigma'$-chain $(x_0=y_K, x_1,\ldots, x_n=z_K)$ from $y_K$ to $z_K$. Thus $(f(y), f(y_K))\in \widetilde{\prodset{N}_1}$ and $(f(y_K),x_1)\in \widetilde{\prodset{N}_1}$, so $(f(y),x_1) \in \widetilde{\prodset{N}_1}^2 \subset \prodset{N}_1$. Similarly, $(f(x_{n-1}), z_K) \in \widetilde{\prodset{N}_2}$ and $(z_K, z) \in \widetilde{\prodset{N}_2}$, so $(f(x_{n-1}),z) \in \widetilde{\prodset{N}_2}^2 \subset \prodset{N}_2$. Therefore $(y, x_1,\ldots,x_{n-1},z)$ is a $\Sigma$-chain from $y$ to $z$. Since $\Sigma$ was arbitrary, we have $(y,z)\in\prodset{C}$. \end{proof} The relation $\prodset C$ clearly contains the graph of $f$ and is transitive, so $\mathcal Gf \subset \prodset C$ by Proposition~\ref{prop:gre}(\ref{item:smallest}). Next we show that $\prodset C \subset \prodset{A}$. Take $y>_{\prodset C} z$, and let $d'$ be any compatible metric and $\ep$ any positive number. Define the sequence $\Sigma=\{\prodset{N}_i\}_{i=1}^\infty$ by $\prodset{N}_i=\prodset{V}_{d'}(\ep/2^i)$. Then any $\Sigma$-chain is a strong $(\ep,d')$-chain. Since $\ep$ was arbitrary, we have $y>_{d'}z$; since $d'$ was arbitrary, we have $y >_\arel z$, as desired. Finally, we show that $\prodset{A}\subset \mathcal Gf$. Let $(y,z)$ be a point in $\prodset{A}$.
We first consider $(y,z) \in \arel$ with $y\ne z$, and let $\theta$ be a continuous Lyapunov function for $f$. Define a metric $d'$ by $d'(x_1,x_2) = d(x_1,x_2) + |\theta(x_2)-\theta(x_1)|$; as in the proof of \cite{FP}*{Thm.~3.1}, $\theta$ is Lipschitz with respect to $d'$. Since $(y,z) \in \arel$, we have that $y >_{d'} z$, and so $\theta(y)\ge\theta(z)$ by \cite{FP}*{Lemma~2.5}. Since $\theta$ was arbitrary, we have $(y,z) \in \mathcal Gf$ by \cite{AA}*{p.~51} (note that the opposite inequality convention is used in the definition of Lyapunov function in \cite{AA}, that is, $\theta(f(x))\ge \theta(x)$). For $y = z$, we show that if $(y,y) \not\in \mathcal Gf$, then $(y,y) \not\in \arel$. The fact that $(y,y) \not\in \mathcal Gf$ means exactly that $y\not\in \gr(f)$, and so there exists a continuous Lyapunov function $\theta$ with $\theta(f(y)) < \theta(y)$ (\cite{AA}*{Theorem~5}). Then $y \not\in N(\theta)$, and since $\theta$ is Lipschitz with respect to a compatible metric, we have that $y \not>_\arel y$ by \cite{FP}*{Theorem~2.6}. \end{proof} The next theorem, which follows from results in \cite{FP}, shows that we can obtain the generalized recurrent set as the strong chain recurrent set for a particular metric, which is much easier to work with than the intersection over all compatible metrics. \begin{thm} \label{thm:grisscr} There exists a metric $d^\ast$ compatible with the topology such that $\gr(f) = \operatorname{SCR}_{d^\ast}(f)$. \end{thm} \begin{proof} By \cite{FP}*{Thm.~3.1}, there exists a continuous Lyapunov function $\theta$ for $f$ such that $N(\theta) = \gr(f)$. Define $d^\ast$ by $d^\ast(x,y) = d(x,y) + |\theta(y)-\theta(x)|$; as in the proof of \cite{FP}*{Thm.~3.1}, $\theta$ is Lipschitz with respect to this metric. Then, by \cite{FP}*{Thm.~2.6}, $\operatorname{SCR}_{d^\ast}(f) \subset N(\theta) = \gr(f)$. 
Since $\gr(f) \subset \operatorname{SCR}_{d^\ast}(f)$ by Proposition~\ref{prop:gre}(\ref{item:fp}), we have $\gr(f) = \operatorname{SCR}_{d^\ast}(f)$. \end{proof} \begin{prop}\label{prop:grnotrestrict} In general, $\gr(f|_{\gr(f)})\subsetneq\gr(f)$. \end{prop} \begin{proof} See Example~\ref{ex:spiralhalf}, or the examples in Theorem~4.2 of \cite{Y}. \end{proof} By analogy with Birkhoff's center depth (\cite{Birk}), which involves the nonwandering set, or Yokoi's $*$-depth (\cite{Y}), which involves the strong chain recurrent set, we can define the generalized recurrence depth of $f$ as follows. \begin{defn} Let $\gr^0(f) = X$ and $\gr^1(f)=\gr(f)$. For any ordinal number $\alpha$, define $\gr^{\alpha+1}(f)=\gr(f|_{\gr^\alpha(f)})$, and for a limit ordinal $\beta$, define $\gr^\beta(f)=\bigcap_{\alpha<\beta}\gr^\alpha(f)$. This will stabilize at some countable ordinal $\gamma$, and we define the \emph{generalized recurrence depth} of $f$ to be $\gamma$. \end{defn} The following result follows immediately from work in \cite{Y}. \begin{prop} For any countable ordinal $\gamma$, there exists a compact metric space $X_\gamma$ and a continuous map $f_\gamma:X_\gamma\to X_\gamma$ such that the generalized recurrence depth of $f_\gamma$ is $\gamma$. \end{prop} \begin{proof} Yokoi defines $*$-depth as the ordinal at which the sequence $\scrwo_d^0(f)=X$, $\scrwo_d^1(f)=\scrwo_d(f|_{\scrwo_d^0(f)})=\scrwo_d(f)$, $\scrwo_d^2(f)=\scrwo_d(f|_{\scrwo_d^1(f)}),\dots$ stabilizes, and constructs a series of examples to prove that any countable ordinal is realizable as the $*$-depth of some map (\cite{Y}*{Thm.~4.2}). It is clear that in the examples, $\gr^\alpha(f)=\scrwo_d^\alpha(f)$ for all $\alpha$, so these examples also give our result. \end{proof} We discuss maps for which the generalized recurrence depth is greater than one (that is, $\gr(f|_{\gr(f)})\subsetneq\gr(f)$) in Section~\ref{sect:relation}.
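As a concrete illustration of generalized recurrence depth (a sketch: the computation of $\gr$ for the map $f_4$ of Example~\ref{ex:spiralhalf} parallels the computation of $\mane(f_4)$ given there, as used in Proposition~\ref{prop:grnotrestrict}), we have
\[
\gr^0(f_4)=X_4,\qquad \gr^1(f_4)=\{(0,0)\}\cup S_4,\qquad \gr^2(f_4)=\{(0,0)\}\cup C_4,
\]
and $\gr^3(f_4)=\gr^2(f_4)$, since $f_4$ restricted to $\{(0,0)\}\cup C_4$ is the identity; thus the generalized recurrence depth of $f_4$ is $2$.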
\section{Generalized recurrence for powers of $f$} \label{sect:powers} It is well known that $\crec(f^k)=\crec(f)$ for any $k>0$ (see, for example, \cite{AHK}*{Prop.~1.1}). The corresponding statement is not true in general for $\scrwo_d(f)$, $\mane(f)$, or the nonwandering set. (See \cite{Y}*{Ex.~3.4}, or consider Example~\ref{ex:twocircs}: $\mane(f^2) = \scrwo_d(f^2) = C_1\times \Z_2$, while $\mane(f)$ and $\scrwo_d(f)$ both equal the entire space $X$. See \cites{CN,Saw} for examples for the nonwandering set.) We now show that it is true for the generalized recurrent set: \begin{thm}\label{thm:grpower} For any $k\ge1$, $\gr(f^k)=\gr(f)$. \end{thm} \begin{proof} It is clear that $\gr(f^k)\subset\gr(f)$, so we will prove the opposite inclusion. We use Theorem~\ref{thm:newgrdef}. Given any $x\in\gr(f)$ and any sequence $\Sigma = \{\prodset{N}_i\}_{i=1}^\infty$ of neighborhoods of $\Delta_X$, we will find a $(\Sigma,f^k)$-chain from $x$ to $x$. Without loss of generality, we can assume that $\prodset{N}_1\supset\prodset{N}_2\supset\dots$ (if not, replace each $\prodset{N}_i$ by $\bigcap_{j\le i}\prodset{N}_j$). Define the sets $\prodset{N}_i'$, $i\ge1$, by choosing each $\prodset{N}_i'$ small enough that $\prodset{N}_i' \subset \prodset{N}_{i-1}'$ ($i>1$) and for any $(\prodset{N}_i',f)$-chain (Defn.~\ref{defn:nchain}) of length $k$ from a point $y$ to a point $z$, we have $(f^k(y),z)\in\prodset{N}_{i}$. Define new sequences $\Sigma'$ and $\Sigma_j'$, $0\le j <k$, by $\Sigma'=\{\prodset{N}_{i}'\}_{i=1}^\infty$ and $\Sigma_j' =\{\prodset{N}_{ki-j}'\}_{i=1}^\infty$, and note that any $(\Sigma_0',f)$-chain is also a $(\Sigma_j',f)$-chain for $0<j<k$, as well as a $(\Sigma',f)$-chain. Since $x\in\gr(f)$, there is a $(\Sigma_0',f)$-chain $(x_0=x, x_1,\ldots,x_n=x)$ from $x$ to itself, with $(f(x_{i-1}),x_i) \in \prodset{N}_{k\sigma(i)}'$ ($i=1,\ldots,n$) for some injection $\sigma:\{1,\ldots,n\} \to\mathbb{N}$. 
We may assume that the length $n$ of this chain is a multiple of $k$. (If not, concatenate it with itself $k$ times, considering the $(j+1)$st copy $(0\le j< k)$ as a $(\Sigma_j',f)$-chain; this will be a $(\Sigma',f)$-chain.) For $i=0,k, 2k,\ldots$, define $m_i=\min\{k\sigma(i+1),k\sigma(i+2),\ldots,k\sigma(i+k)\}$. Then $(x_i,x_{i+1},\ldots,x_{i+k})$ is an $(\prodset{N}_{m_i}',f)$-chain, so $(f^k(x_i),x_{i+k})\in\prodset{N}_{m_i}$, and $(x_0=x, x_k,\allowbreak x_{2k},\ldots,x_n=x)$ is a $(\Sigma,f^k)$-chain from $x$ to $x$. \end{proof} \section{Relation to ordinary chain recurrence and chain transitivity} \label{sect:relation} In many cases the generalized recurrent set equals the chain recurrent set. In this section we give conditions for the two sets to be equal, and discuss what it means for the dynamics if they are not equal. Yokoi (\cite{Y}) defines a Lyapunov function $\theta$ to be \emph{pseudo-complete} if \begin{enumerate} \item $\theta(f(x))=\theta(x)$ if and only if $x\in\scrwo_d(f)$, and \item $\theta$ is constant on each $d$-strong chain transitive component. \end{enumerate} \begin{thm}[{\cite{Y}*{Thm.~5.3}}] $\scrwo_d(f) = \crec(f)$ if and only if there exists a pseudo-complete Lyapunov function $\theta$ for $f$ such that the image $\theta(\scrwo_d(f))$ is totally disconnected. \end{thm} We obtain a similar statement for $\gr(f)$ using results from \cite{FP}. \begin{prop}\label{prop:griscr} $\gr(f) = \crec(f)$ if and only if there exists a Lyapunov function $\theta$ for $f$ such that \begin{enumerate} \item $\theta(f(x))=\theta(x)$ if and only if $x\in\gr(f)$, \item the image $\theta(\gr(f))$ is totally disconnected. \end{enumerate} \end{prop} \begin{proof} The ``only if'' direction follows from the existence of a Lyapunov function $\theta$ for $f$ that is strictly decreasing off of $\crec(f)$ and such that $\theta(\crec(f))$ is nowhere dense (\cite{Franks}). We prove the ``if'' direction. By hypothesis, $N(\theta) = \gr(f)$. 
So $\theta(N(\theta))$ is totally disconnected, and Corollary~1.9 of \cite{FP} implies that $\crec(f)\subset N(\theta)= \gr(f)$. Since it is always true that $\gr(f)\subset\crec(f)$, we have $\gr(f) = \crec(f)$. \end{proof} The following result shows that if the upper box dimension of $\crec(f)$ is small enough, then $\scrwo_d(f)=\crec(f)$. (See \cite{Pesin}*{\S 6} for the definition of upper box dimension, which depends on the choice of metric.) \begin{thm} If the upper box dimension of the space $(\crec(f),d)$ is less than one, then two points $x$ and $y$ are chain equivalent if and only if they are $d$-strong chain equivalent. In particular, $\scrwo_d(f)=\crec(f)$. \end{thm} Note that the theorem applies in the case that the space $X$ itself has upper box dimension less than one. \begin{proof} If $x$ and $y$ are $d$-strong chain equivalent, they are \emph{a fortiori} chain equivalent, so we will prove the opposite implication. Let $X_x \subset \crec(f)$ be the chain transitive component containing $x$ and $y$. Let $D$ be the upper box dimension of $(X_x,d)$. Define $t_\ep(x,y)$ to be the smallest $n$ such that there is an $\ep$-chain of length $n$ from $x$ to $y$. It follows from Proposition~22 of \cite{RWcrr} (more precisely, from the discussion in the proof of that result) that there exists a constant $C>0$ (independent of $x$ and $y$) such that for small enough $\ep$, $t_\ep(x,y) \le C/\ep^D$. Thus, if $(x=x_0, x_1, \dots, x_n=y)$ is the shortest $\ep$-chain from $x$ to $y$, then $\sum_{i=1}^n d(f(x_{i-1}),x_i)\le (C/\ep^D)\ep = C\ep^{1-D}$, which goes to zero as $\ep\to0$, and so there is a strong $\ep'$-chain from $x$ to $y$ for any $\ep'$. \end{proof} \begin{cor} Let $d^\ast$ be the metric from Theorem~\ref{thm:grisscr} (so $\operatorname{SCR}_{d^\ast}(f) = \gr(f)$). If the upper box dimension of the space $(\crec(f),d^\ast)$ is less than one, then $\gr(f)=\crec(f)$. 
\end{cor} We will use the following equivalence relation on $\gr(f)$ to help classify maps for which $\gr(f)\ne\crec(f)$. \begin{defn} Since the three relations $>_{\mathcal Gf}$, $>_\arel$, and $>_{\prodset C}$ from Theorem~\ref{thm:newgrdef} are identical, they all induce the same equivalence relation on $\gr(f)$, which we will denote by $\sim_f$. \end{defn} The quotient space $\gr(f)/\sim_f$ first appears, to the best of my knowledge, in \cite{A}*{Exercise~3.17}. In \cite{FP}, the equivalence relation $\sim_\arel$ is referred to as ``Mather equivalence.'' \begin{remark} While $\sim_f$ is an equivalence relation on $\gr(f)$, the chains in the definition(s) are not required to remain in $\gr(f)$. As we saw in Prop.~\ref{prop:grnotrestrict}, $\gr(f|_{\gr(f)})$ is not necessarily equal to $\gr(f)$. And even if the two sets are equal, the equivalence relations $\sim_f$ and $\sim_{f|_{\gr(f)}}$ may be different, as the following example shows. \end{remark} \begin{ex} Let $X_5$ be the disk with the usual topology, and $f_5$ a map that fixes the center point $(0,0)$ and the boundary circle $S_5$ and moves other points in a spiral toward the boundary (see Figure~\ref{fig:spiralcirc}). Then $\gr(f_5) = \{(0,0)\} \cup S_5$ and $\gr(f_5|_{\gr(f_5)}) = \gr(f_5)$. There are two $\sim_{f_5}$ equivalence classes, $\{(0,0)\}$ and $S_5$, but each point is its own $\sim_{f_5|_{\gr(f_5)}}$ equivalence class. \end{ex} \begin{figure} \caption{$f_5:X_5\to X_5$} \label{fig:spiralcirc} \end{figure} However, we do have the following result from \cite{AA}. \begin{thm}\label{thm:gfct} The map $f$ restricted to a $\sim_f$ equivalence class is chain transitive. \end{thm} \begin{proof} This follows from applying the second part of \cite{AA}*{Lemma 12} to the $\sim_f$ equivalence class. \end{proof} Under what circumstances is $\sim_f$ equivalence different from chain equivalence?
We have a partial answer: \begin{prop}\label{prop:identity} Let $f$ be chain transitive on an invariant subset $N$ of $\gr(f)$, and assume that $x \not\sim_f y$ for some pair of points $x$ and $y$ in $N$. Then $N/\sim_f$ is a nontrivial connected set, and the factor map $N/\sim_f \to N/\sim_f$ is the identity. \end{prop} \begin{proof} Let $M$ be the quotient space $N/\sim_f$, and $\pi:N\to M$ the projection. By hypothesis, $M$ contains more than one point. Since the $\sim_f$ equivalence classes are $f$-invariant, $f|_N$ induces the identity map on $M$. Assume that $M$ is not connected, and let $U$, $V$ be a separation of $M$. Then $\pi^{-1}(U)$, $\pi^{-1}(V)$ is a separation of $N$. Since $f(\pi^{-1}(U))\subset \pi^{-1}(U)$ and $f(\pi^{-1}(V))\subset \pi^{-1}(V)$, there is no $\ep$-chain from any point in $\pi^{-1}(U)$ to any point in $\pi^{-1}(V)$ for any $\ep < d(\pi^{-1}(U), \pi^{-1}(V))$, contradicting chain transitivity. \end{proof} In the examples that we have seen where the chain recurrent set is strictly larger than the generalized recurrent set, the difference was in some sense caused by the presence of a large set of fixed points (either an interval or a Cantor set). However, the two sets can be different even if there are no fixed points, as the following example shows. \begin{ex} Consider the map $f=f_1\times \rho$ on the torus $S^1\times S^1$, where $f_1$ is the map from Example~\ref{ex:halfcirc} and $\rho$ is an irrational rotation. Then $\crec(f) =S^1\times S^1$, while $\gr(f) = C_1 \times S^1$. \end{ex} However, the map in this example factors, by projection onto the first coordinate, onto a map with many fixed points. This observation leads to the following characterization of maps for which the generalized recurrent set is strictly contained in the chain recurrent set. \begin{thm}\label{thm:unctble} If $\gr(f) \ne \crec(f)$, then $f$ factors onto a map with uncountably many fixed points. 
\end{thm} \begin{proof} Theorem~3.1 of \cite{FP} tells us that there is a Lyapunov function $\theta:X\to\mathbb R$ for $f$ such that $\theta(f(x))=\theta(x)$ if and only if $x\in\gr(f)$, so, by Proposition~\ref{prop:griscr}, we must have that the image $\theta(\gr(f))$ contains an interval. Proposition~3.2 of \cite{FP} says that $\theta$ is constant on each $\sim_f$ equivalence class, so $\theta$ induces a map $\bar\theta$ on the quotient $\gr(f)/\sim_f$. Since the image $\bar\theta (\gr(f)/\sim_f) =\theta(\gr(f))$ contains an interval, we must have that $\gr(f)/\sim_f$ is uncountable. If, as in \cite{AA}, we extend the equivalence relation $\sim_f$ from $\gr(f)$ to an equivalence relation $\sim$ on all of $X$ by setting $\sim\ =\ \sim_f \cup \Delta_X$ (that is, $x\sim y$ if $x=y$ or $x\in\gr(f)$, $y\in\gr(f)$, and $x\sim_f y$), then $f$ factors onto the map $\bar f: X/\sim \to X/\sim$, with fixed points $\gr(f)/\sim_f=\gr(f)/\sim$. \end{proof} \begin{cor} If $\gr(f) \ne \crec(f)$, then either \begin{enumerate} \item $\gr(f)/\sim_f$ contains a nontrivial connected set, or \item $\gr(f)/\sim_f$ is homeomorphic to the disjoint union of a Cantor set and a countable set. \end{enumerate} \end{cor} \begin{proof} The Cantor-Bendixson theorem (\cite{Sierp}*{Thm.~47}) says that $\gr(f)/\sim_f$ can be written as the disjoint union of a perfect set $P$ and a countable set. Since $\gr(f)/\sim_f$ is uncountable, the set $P$ must be nonempty. If $\gr(f)/\sim_f$ does not contain a nontrivial connected set, then it is totally disconnected, and so $P$ is a nonempty, totally disconnected, compact, perfect set, that is, a Cantor set. \end{proof} \begin{cor} If the generalized recurrence depth of $f$ is greater than one (that is, if $\gr(f|_{\gr(f)})\subsetneq\gr(f)$), then $f$ factors onto a map with uncountably many fixed points. \end{cor} \begin{proof} It follows from Theorem~\ref{thm:gfct} that $\crec(f|_{\gr(f)})= \gr(f)$. 
So we can apply the reasoning in Theorem~\ref{thm:unctble} to the map $f|_{\gr(f)}:\gr(f)\to\gr(f)$. We extend the equivalence relation $\sim_{f|_{\gr(f)}}$ from $\gr(f|_{\gr(f)})$ to all of $X$ by setting $\sim = \sim_{f|_{\gr(f)}} \cup \Delta_X$; the induced map on $X/\sim$ will have the uncountable set $\gr(f|_{\gr(f)})/\sim$ as the fixed point set. \end{proof} \begin{bibdiv} \begin{biblist} \bib{ABC}{article}{ author={Abbondandolo, Alberto}, author={Bernardi, Olga}, author={Cardin, Franco}, title={Chain recurrence, chain transitivity, {L}yapunov functions and rigidity of {L}agrangian submanifolds of optical hypersurfaces}, date={2015}, journal={preprint}, } \bib{A}{book}{ author={Akin, Ethan}, title={The general topology of dynamical systems}, series={Graduate Studies in Mathematics}, publisher={American Mathematical Society, Providence, RI}, date={1993}, volume={1}, ISBN={0-8218-3800-8}, review={\MR{1219737 (94f:58041)}}, } \bib{AA}{article}{ author={Akin, Ethan}, author={Auslander, Joseph}, title={Generalized recurrence, compactifications, and the {L}yapunov topology}, date={2010}, ISSN={0039-3223}, journal={Studia Math.}, volume={201}, number={1}, pages={49\ndash 63}, url={http://dx.doi.org/10.4064/sm201-1-4}, review={\MR{2733274 (2012a:37013)}}, } \bib{AHK}{article}{ author={Akin, Ethan}, author={Hurley, Mike}, author={Kennedy, Judy~A.}, title={Dynamics of topologically generic homeomorphisms}, date={2003}, ISSN={0065-9266}, journal={Mem. Amer. Math. 
Soc.}, volume={164}, number={783}, pages={viii+130}, url={http://dx.doi.org/10.1090/memo/0783}, review={\MR{1980335 (2004j:37024)}}, } \bib{Auslander}{article}{ author={Auslander, Joseph}, title={Generalized recurrence in dynamical systems}, date={1964}, journal={Contributions to Differential Equations}, volume={3}, pages={65\ndash 74}, review={\MR{0162238 (28 \#5437)}}, } \bib{AusNC}{incollection}{ author={Auslander, Joseph}, title={Non-compact dynamical systems}, date={1973}, booktitle={Recent advances in topological dynamics ({P}roc. {C}onf., {Y}ale {U}niv., {N}ew {H}aven, {C}onn., 1972; in honor of {G}ustav {A}rnold {H}edlund)}, publisher={Springer, Berlin}, pages={6\ndash 11. Lecture Notes in Math., Vol. 318}, review={\MR{0394613 (52 \#15414)}}, } \bib{Birk}{book}{ author={Birkhoff, George~D.}, title={Dynamical systems}, series={With an addendum by Jurgen Moser. American Mathematical Society Colloquium Publications, Vol. IX}, publisher={American Mathematical Society, Providence, R.I.}, date={1966}, review={\MR{0209095 (35 \#1)}}, } \bib{CN}{article}{ author={Coven, Ethan~M.}, author={Nitecki, Zbigniew}, title={Nonwandering sets of the powers of maps of the interval}, date={1981}, ISSN={0143-3857}, journal={Ergodic Theory Dynamical Systems}, volume={1}, number={1}, pages={9\ndash 31}, review={\MR{627784 (82m:58043)}}, } \bib{E}{incollection}{ author={Easton, Robert}, title={Chain transitivity and the domain of influence of an invariant set}, date={1978}, booktitle={The structure of attractors in dynamical systems ({P}roc. {C}onf., {N}orth {D}akota {S}tate {U}niv., {F}argo, {N}.{D}., 1977)}, series={Lecture Notes in Math.}, volume={668}, publisher={Springer, Berlin}, pages={95\ndash 102}, review={\MR{518550 (80j:58051)}}, } \bib{FP}{article}{ author={Fathi, Albert}, author={Pageault, Pierre}, title={Aubry-{M}ather theory for homeomorphisms}, date={2015}, ISSN={0143-3857}, journal={Ergodic Theory Dynam. 
Systems}, volume={35}, number={4}, pages={1187\ndash 1207}, url={http://dx.doi.org/10.1017/etds.2013.107}, review={\MR{3345168}}, } \bib{Franks}{incollection}{ author={Franks, John}, title={A variation on the {P}oincar\'e-{B}irkhoff theorem}, date={1988}, booktitle={Hamiltonian dynamical systems ({B}oulder, {CO}, 1987)}, series={Contemp. Math.}, volume={81}, publisher={Amer. Math. Soc., Providence, RI}, pages={111\ndash 117}, url={http://dx.doi.org/10.1090/conm/081/986260}, review={\MR{986260 (90e:58095)}}, } \bib{Garay}{article}{ author={Garay, B.~M.}, title={Auslander recurrence and metrization via {L}iapunov functions}, date={1985}, ISSN={0532-8721}, journal={Funkcial. Ekvac.}, volume={28}, number={3}, pages={299\ndash 308}, url={http://www.math.kobe-u.ac.jp/~fe/xml/mr0852116.xml}, review={\MR{852116 (87g:54087)}}, } \bib{Kelley}{book}{ author={Kelley, John~L.}, title={General topology}, publisher={D. Van Nostrand Company, Inc., Toronto-New York-London}, date={1955}, review={\MR{0070144 (16,1136c)}}, } \bib{KK}{article}{ author={Kotus, Janina}, author={Klok, Fopke}, title={A sufficient condition for {$\Omega$}-stability of vector fields on open manifolds}, date={1988}, ISSN={0010-437X}, journal={Compositio Math.}, volume={65}, number={2}, pages={171\ndash 176}, url={http://www.numdam.org/item?id=CM_1988__65_2_171_0}, review={\MR{932642 (89m:58114)}}, } \bib{Nit}{article}{ author={Nitecki, Zbigniew}, title={Explosions in completely unstable flows. {I}. {P}reventing explosions}, date={1978}, ISSN={0002-9947}, journal={Trans. Amer. Math. Soc.}, volume={245}, pages={43\ndash 61}, url={http://dx.doi.org/10.2307/1998856}, review={\MR{511399 (81e:58030)}}, } \bib{Nit2}{article}{ author={Nitecki, Zbigniew}, title={Recurrent structure of completely unstable flows on surfaces of finite {E}uler characteristic}, date={1981}, ISSN={0002-9327}, journal={Amer. J. 
Math.}, volume={103}, number={1}, pages={143\ndash 180}, url={http://dx.doi.org/10.2307/2374191}, review={\MR{601464 (82d:58059)}}, } \bib{Mink}{article}{ author={Parad{\'{\i}}s, J.}, author={Viader, P.}, author={Bibiloni, L.}, title={The derivative of {M}inkowski's {$?(x)$} function}, date={2001}, ISSN={0022-247X}, journal={J. Math. Anal. Appl.}, volume={253}, number={1}, pages={107\ndash 125}, url={http://dx.doi.org/10.1006/jmaa.2000.7064}, review={\MR{1804596 (2002c:11092)}}, } \bib{Peix2}{article}{ author={Peixoto, Maria L{\'u}cia~Alvarenga}, title={Characterizing {$\Omega$}-stability for flows in the plane}, date={1988}, ISSN={0002-9939}, journal={Proc. Amer. Math. Soc.}, volume={104}, number={3}, pages={981\ndash 984}, url={http://dx.doi.org/10.2307/2046825}, review={\MR{964882 (89j:58061)}}, } \bib{Peix1}{article}{ author={Peixoto, Maria L{\'u}cia~Alvarenga}, title={The closing lemma for generalized recurrence in the plane}, date={1988}, ISSN={0002-9947}, journal={Trans. Amer. Math. 
Soc.}, volume={308}, number={1}, pages={143\ndash 158}, url={http://dx.doi.org/10.2307/2000955}, review={\MR{946436 (90b:58137)}}, } \bib{Pesin}{book}{ author={Pesin, Yakov~B.}, title={Dimension theory in dynamical systems}, series={Chicago Lectures in Mathematics}, publisher={University of Chicago Press, Chicago, IL}, date={1997}, ISBN={0-226-66221-7; 0-226-66222-5}, url={http://dx.doi.org/10.7208/chicago/9780226662237.001.0001}, note={Contemporary views and applications}, review={\MR{1489237 (99b:58003)}}, } \bib{RWcrr}{article}{ author={Richeson, David}, author={Wiseman, Jim}, title={Chain recurrence rates and topological entropy}, date={2008}, ISSN={0166-8641}, journal={Topology Appl.}, volume={156}, number={2}, pages={251\ndash 261}, url={http://dx.doi.org/10.1016/j.topol.2008.07.005}, review={\MR{2475112 (2010c:37027)}}, } \bib{Saw}{article}{ author={Sawada, Ken}, title={On the iterations of diffeomorphisms without {$C^{0}-\Omega $}-explosions: an example}, date={1980}, ISSN={0002-9939}, journal={Proc. Amer. Math. Soc.}, volume={79}, number={1}, pages={110\ndash 112}, url={http://dx.doi.org/10.2307/2042398}, review={\MR{560595 (81h:58055)}}, } \bib{Sierp}{book}{ author={Sierpinski, Waclaw}, title={General topology}, series={Mathematical Expositions, No. 7}, publisher={University of Toronto Press, Toronto}, date={1952}, note={Translated by C. Cecilia Krieger}, review={\MR{0050870 (14,394f)}}, } \bib{ST}{article}{ author={Souza, Josiney~A.}, author={Tozatti, H{\'e}lio V.~M.}, title={Prolongational limit sets of control systems}, date={2013}, ISSN={0022-0396}, journal={J. Differential Equations}, volume={254}, number={5}, pages={2183\ndash 2195}, url={http://dx.doi.org/10.1016/j.jde.2012.11.020}, review={\MR{3007108}}, } \bib{ST2}{article}{ author={Souza, Josiney~A.}, author={Tozatti, H{\'e}lio V.~M.}, title={Some aspects of stability for semigroup actions and control systems}, date={2014}, ISSN={1040-7294}, journal={J. Dynam. 
Differential Equations}, volume={26}, number={3}, pages={631\ndash 654}, url={http://dx.doi.org/10.1007/s10884-014-9379-9}, review={\MR{3274435}}, } \bib{Y}{article}{ author={Yokoi, Katsuya}, title={On strong chain recurrence for maps}, date={2015}, journal={Annales Polonici Mathematici}, volume={114}, pages={165\ndash 177}, } \bib{Zheng1}{article}{ author={Zheng, Zuo-Huan}, title={Chain transitivity and {L}ipschitz ergodicity}, date={1998}, ISSN={0362-546X}, journal={Nonlinear Anal.}, volume={34}, number={5}, pages={733\ndash 744}, url={http://dx.doi.org/10.1016/S0362-546X(97)00581-6}, review={\MR{1634815 (99i:54056)}}, } \end{biblist} \end{bibdiv} \end{document}
Overhead Rate

By Will Kenton

What Is the Overhead Rate?

The overhead rate is a cost allocated to the production of a product or service. Overhead costs are expenses that are not directly tied to production, such as the cost of the corporate office. To allocate overhead costs, an overhead rate is applied to the direct costs tied to production by spreading or allocating the overhead costs based on specific measures. For example, overhead costs may be applied at a set rate based on the number of machine hours or labor hours required for the product.

Overhead Rate Formula and Calculation

Although there are multiple ways to calculate an overhead rate, below is the basis for any calculation:

    Overhead rate = Indirect costs / Allocation measure

Note that:

Indirect costs are the overhead costs, or costs that are not directly tied to the production of a product or service.

Allocation measure is any type of measurement that's necessary to make the product or service. It could be the number of direct labor hours or machine hours for a particular product or a period.

The calculation of the overhead rate is based on a specific period. So, if you wanted to determine the indirect costs for a week, you would total up your weekly indirect or overhead costs. You would then take the measurement of what goes into production for the same period. So, if you were to measure the total direct labor cost for the week, the denominator would be the total weekly cost of direct labor for production that week. Finally, you would divide the indirect costs by the allocation measure to arrive at the overhead cost for every dollar spent on direct labor for the week. 
Using the Overhead Rate

By analyzing how much it costs in overhead for every hour the machine is producing the company's goods, management can properly price the product to make sure there's enough profit margin to compensate for its indirect costs. A company that excels at monitoring and improving its overhead rate can improve its bottom line or profitability.

The overhead rate is a cost added on to the direct costs of production in order to more accurately assess the profitability of each product. In more complicated cases, a combination of several cost drivers may be used to approximate overhead costs.

Overhead expenses are generally fixed costs, meaning they're incurred whether or not a factory produces a single item or a retail store sells a single product. Fixed costs would include building or office space rent, utilities, insurance, supplies, maintenance, and repair. Overhead costs also include administrative salaries and some professional and miscellaneous fees that are tucked under selling, general, and administrative (SG&A) within a firm's operating expenses on the income statement. Unless a cost can be directly attributed to a specific revenue-generating product or service, it will be classified as overhead, or as an indirect expense.

It is often difficult to assess precisely the amount of overhead costs that should be attributed to each production process. Costs must thus be estimated based on an overhead rate for each cost driver or activity. It is important to include indirect costs that are based on this overhead rate in order to price a product or service appropriately. If a company's prices do not cover its overhead costs, the business will be unprofitable.

Direct Costs vs. 
the Overhead Rate

Direct costs are costs directly tied to a product or service that a company produces. Direct costs can be easily traced to their cost objects. Cost objects can include goods, services, departments, or projects. Direct costs include direct labor, direct materials, manufacturing supplies, and wages tied to production.

The overhead rate allocates indirect costs to the direct costs tied to production by spreading or allocating the overhead costs based on the dollar amount for direct costs, total labor hours, or even machine hours.

Limitations of the Overhead Rate

The overhead rate has limitations when applying it to companies that have few overhead costs or when their costs are mostly tied to production. Also, it's important to compare the overhead rate to companies within the same industry. A large company with a corporate office, a benefits department, and a human resources division will have a higher overhead rate than a far smaller company with fewer indirect costs.

Examples of Overhead Rates

The equation for the overhead rate is overhead (or indirect) costs divided by direct costs or whatever you're measuring. Direct costs typically are direct labor, direct machine costs, or direct material costs—all expressed in dollar amounts. Each one of these is also known as an "activity driver" or "allocation measure."

Example 1: Costs in Dollars

Let's assume a company has overhead expenses that total $20 million for the period. The company wants to know how much overhead relates to direct labor costs. The company has direct labor expenses totaling $5 million for the same period.

To calculate the overhead rate, divide $20 million (indirect costs) by $5 million (direct labor costs). Overhead rate = $4 or ($20/$5), meaning that it costs the company $4 in overhead costs for every dollar in direct labor expenses.

Example 2: Cost per Hour

The overhead rate can also be expressed in terms of the number of hours. Let's say a company has overhead expenses totaling $500,000 for one month. 
During that same month, the company logs 30,000 machine hours to produce its goods.

Divide $500,000 (indirect costs) by 30,000 (machine hours). Overhead rate = $16.66, meaning that it costs the company $16.66 in overhead costs for every hour the machine is in production.

By analyzing how much it costs in overhead for every hour the machine is producing the company's goods, management can properly price the product to make sure there's enough profit margin to compensate for the $16.66 per hour in indirect costs. Of course, management also has to price the product to cover the direct costs involved in the production, including direct labor, electricity, and raw materials. 
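The arithmetic in the two examples above can be reproduced with a short script. This is a minimal sketch; the `overhead_rate` helper name is chosen here for illustration, while the figures come straight from the examples:

```python
def overhead_rate(indirect_costs, allocation_measure):
    """Overhead rate = indirect costs / allocation measure."""
    return indirect_costs / allocation_measure

# Example 1: $20M of overhead against $5M of direct labor cost.
rate_per_labor_dollar = overhead_rate(20_000_000, 5_000_000)
print(rate_per_labor_dollar)  # 4.0 -> $4 of overhead per direct-labor dollar

# Example 2: $500,000 of overhead against 30,000 machine hours.
rate_per_machine_hour = overhead_rate(500_000, 30_000)
print(round(rate_per_machine_hour, 2))  # 16.67 -> about $16.66 per machine hour
```

The same helper works for any allocation measure (labor dollars, labor hours, machine hours); only the denominator changes.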
Variational bicomplex

In mathematics, the Lagrangian theory on fiber bundles is globally formulated in algebraic terms of the variational bicomplex, without appealing to the calculus of variations. For instance, this is the case of classical field theory on fiber bundles (covariant classical field theory). The variational bicomplex is a cochain complex of the differential graded algebra of exterior forms on jet manifolds of sections of a fiber bundle. Lagrangians and Euler–Lagrange operators on a fiber bundle are defined as elements of this bicomplex. Cohomology of the variational bicomplex leads to the global first variational formula and the first Noether's theorem. Extended to the Lagrangian theory of even and odd fields on graded manifolds, the variational bicomplex provides a strict mathematical formulation of classical field theory in the general case of reducible degenerate Lagrangians and of Lagrangian BRST theory.

See also
• Calculus of variations
• Lagrangian system
• Jet bundle

References
• Takens, Floris (1979), "A global version of the inverse problem of the calculus of variations", Journal of Differential Geometry, 14 (4): 543–562, doi:10.4310/jdg/1214435235, ISSN 0022-040X, MR 0600611, S2CID 118169017
• Anderson, I., "Introduction to the variational bicomplex", Contemp. Math. 132 (1992) 51.
• Barnich, G., Brandt, F., Henneaux, M., "Local BRST cohomology", Phys. Rep. 338 (2000) 439.
• Giachetta, G., Mangiarotti, L., Sardanashvily, G., Advanced Classical Field Theory, World Scientific, 2009, ISBN 978-981-283-895-7.

External links
• Dragon, N., BRS symmetry and cohomology, arXiv:hep-th/9602163
• Sardanashvily, G., Graded infinite-order jet manifolds, Int. J. Geom. Methods Mod. Phys. 4 (2007) 1335; arXiv:0708.2434
\begin{document} \title{Local Wiener's Theorem and Coherent Sets of Frequencies} \author{S.Yu.Favorov} \address{Serhii Favorov, \newline\hphantom{iii} Karazin's Kharkiv National University \newline\hphantom{iii} Svobody sq., 4, \newline\hphantom{iii} 61022, Kharkiv, Ukraine} \email{[email protected]} \maketitle {\small \begin{quote} \noindent{\bf Abstract.} Using a local analog of the Wiener-Levi theorem, we investigate the class of measures on Euclidean space with discrete support and spectrum. Also, we find new sufficient conditions for a discrete set in Euclidean space to be a coherent set of frequencies. AMS Mathematics Subject Classification: 52C23, 42B35, 42A75 \noindent{\bf Keywords: Wiener's Theorem, absolutely convergent Dirichlet series, tempered distributions, Fourier transform, measure with discrete support, lattice, coherent set of frequencies} \end{quote} } {\bf 1. The Wiener--Levi Theorem}. The following theorem is known as the Wiener--Levi Theorem (see, for example, \cite{Z}, Ch.VI): \begin{Th} Let $$ F(t)=\sum_{n\in{\mathbb Z}}c_ne^{2\pi int} $$ be an absolutely convergent Fourier series, and $h(z)$ be a holomorphic function on a neighborhood of the closure of the set $\{F(t):\,t\in [0,1]\}$. Then the function $h(F(t))$ admits an absolutely convergent Fourier series expansion as well. \end{Th} For $h(z)=1/z$ the theorem is well-known as Wiener's Theorem. Denote by $W$ the algebra of absolutely convergent series $$ f(x)=\sum_n c_ne^{2\pi i\langle x,\lambda_n\rangle},\quad \lambda_n\in{\mathbb R}^d,\ x\in{\mathbb R}^d, $$ with the norm $\|f\|_W=\sum_n|c_n|$. In \cite{F0} we proved \begin{Th}\label{T} Let $K\subset{\mathbb C}$ be an arbitrary compact set, $h(z)$ be a holomorphic function on a neighborhood of $K$, and $f\in W$. Then there is a function $g\in W$ such that if $f(x)\in K$ then $h(f(x))=g(x)$. \end{Th} For $K=\overline{f({\mathbb R}^d)}$ we obtain the global Wiener--Levi Theorem for functions from the class $W$. 
Note that exponents of $g$ belong to the span over ${\mathbb Z}$ of exponents of $f$. The main consequence of Theorem \ref{T} is the following result: \begin{Th}[\cite{F0}]\label{1} For every $f\in W$ and $\varepsilon>0$ there is a function $g\in W$ such that $f(x)g(x)=1$ for all $x\in{\mathbb R}^d$ such that $|f(x)|\ge\varepsilon$ and $g(x)=0$ for all $x\in{\mathbb R}^d$ such that $|f(x)|\le\varepsilon/2$. \end{Th} {\bf 2. Tempered distributions and measures and their Fourier transform.} To show applications of Theorem \ref{1}, we recall some definitions (see, for example, \cite{R}). Denote by $S({\mathbb R}^d)$ the Schwartz space of test functions $\varphi\in C^\infty({\mathbb R}^d)$ with finite norms $$ p_m(\varphi)=\sup_{{\mathbb R}^d}(\max\{1,|x|\})^m\max_{k_1+\dots+k_d\le m} |D^k(\varphi(x))|,\quad m=0,1,2,\dots, $$ $k=(k_1,\dots,k_d)\in({\mathbb N}\cup\{0\})^d,\ D^k=\partial^{k_1}_{x_1}\dots\partial^{k_d}_{x_d}$. These norms generate a topology on $S({\mathbb R}^d)$, and elements of the space $S^*({\mathbb R}^d)$ of continuous linear functionals on $S({\mathbb R}^d)$ are called tempered distributions. For every tempered distribution $f$ there exist $C>0$ and $m\in{\mathbb N}\cup\{0\}$ such that for all $\varphi\in S({\mathbb R}^d)$ $$ |f(\varphi)|\le Cp_m(\varphi). $$ The Fourier transform of a tempered distribution $f$ is defined by the equality \begin{equation}\label{f} \hat f(\varphi)=f(\hat\varphi)\quad\mbox{for all}\quad\varphi\in S({\mathbb R}^d), \end{equation} where $$ \hat\varphi(y)=\int_{{\mathbb R}^d}\varphi(x) e^{-2\pi i\langle x,y\rangle}dx $$ is the Fourier transform of the function $\varphi$. Note that the Fourier transform of every tempered distribution is also a tempered distribution. But here we consider only the case when $f$ and $\hat f$ are complex Radon measures on ${\mathbb R}^d$. 
For example, if $$ f(x)=\sum_n c_ne^{2\pi i\langle x,\gamma_n\rangle}\in W, $$ then the Fourier transform of the measure $f(x)dx$ is equal to $\sum_n c_n\delta_{\gamma_n}$, where $\delta_\gamma$ means the unit mass at $\gamma\in{\mathbb R}^d$. Also, if $\mu^0=\sum_{k\in{\mathbb Z}^d}\delta_k$, then by the Poisson formula, $$ \sum_{n\in{\mathbb Z}^d}f(n)=\sum_{n\in{\mathbb Z}^d}\hat f(n), \quad f\in S({\mathbb R}^d), $$ and we have $\hat\mu^0=\mu^0$. Therefore, if $L$ is a full-rank lattice, i.e., $L=A({\mathbb Z}^d)$ for some nonsingular linear operator $A$ in ${\mathbb R}^d$, and $\mu^1=\sum_{\lambda\in L+a}\delta_\lambda$ for some $a\in{\mathbb R}^d$, then \begin{equation}\label{p} \hat\mu^1(dy)=(\det A)^{-1}\sum_{\lambda\in L^*} e^{-2\pi i\langle y,a\rangle}\delta_\lambda(dy), \end{equation} where $L^*=\{y\in{\mathbb R}^d: \langle\lambda,y\rangle\in{\mathbb Z}\quad \forall \lambda\in L\}$ is the conjugate lattice. Let $\nu$ be a Radon measure on ${\mathbb R}^d$. Denote by $\nu_t$ its shift by $t\in{\mathbb R}^d$, i.e., $$ \int g(x)\nu_t(dx)=\int g(x+t)\nu(dx). $$ A measure $\nu$ is {\it translation bounded} if variations of its translations $|\nu_t|$ are bounded in the unit ball uniformly in $t\in{\mathbb R}^d$. Note that every translation bounded measure on ${\mathbb R}^d$ satisfies the condition \begin{equation}\label{m} |\nu|(B(0,r))=O(r^d),\quad r\to\infty, \end{equation} therefore it belongs to $S^*({\mathbb R}^d)$. Here $B(x,r)=\{t\in{\mathbb R}^d:\,|t-x|<r\}$. The measure $\nu$ on ${\mathbb R}^d$ is {\it atomic}, if it has the form $$ \nu=\sum_{\lambda\in\Lambda} c_\lambda\delta_\lambda,\quad c_\lambda\in{\mathbb C},\ \mbox{ with countable }\Lambda\subset{\mathbb R}^d. $$ If this is the case, we will write $\nu(\lambda):=c_\lambda$. Also, we shall say that $\Lambda$ is a {\it support} of $\nu$. The measure $\nu$ on ${\mathbb R}^d$ has {\it a uniformly discrete support $\Lambda$}, if $$ \inf\{|x-x'|:\,x,\,x'\in\Lambda, x\neq x'\}>0. 
$$ Such measures are the main object in the theory of Fourier quasicrystals (see \cite{C}-\cite{F2}, \cite{F0}-\cite{La1}, \cite{LO1}-\cite{Mo1}). This theory was developed in connection with the experimental discovery of non-periodic atomic structures with diffraction patterns consisting of spots, which was made in the mid '80s. Remark also that some properties of tempered distributions with discrete and closed or atomic supports were considered in \cite{F3} - \cite{F0}, \cite{LA}, \cite{P}. In particular, the following statements are implicitly contained in \cite{F0}. But for completeness we present proofs of them at the end of this article. \begin{Pro}\label{P3} If $\nu$ is a translation bounded measure and $\psi\in S({\mathbb R}^d)$, then the total mass of the variation $|\psi\nu_t|$ of the measure $\psi\nu_t$ is bounded uniformly in $t\in{\mathbb R}^d$. Moreover, for every $\varepsilon>0$ there is $r(\varepsilon)<\infty$ such that the mass of the restriction of each measure $|\psi\nu_t|$ to the set $\{x\in{\mathbb R}^d:\,|x|>r(\varepsilon)\}$ is less than $\varepsilon$. \end{Pro} \begin{Pro}[also, see \cite{KL} and \cite{LA}]\label{P0} If $\nu$ is a measure from $S^*({\mathbb R}^d)$ with uniformly discrete support $\Lambda$, and $\hat\nu$ is a measure satisfying (\ref{m}), then $\sup_{\lambda\in\Lambda}|\nu(\lambda)|<\infty$, hence the measure $\nu$ is translation bounded. \end{Pro} \begin{Pro}\label{P1} If $\nu$ is a measure from $S^*({\mathbb R}^d)$, $\hat\nu$ is an atomic measure satisfying (\ref{m}), and $\psi\in S({\mathbb R}^d)$, then the convolution $(\psi\star\nu)(t)=\int\psi(t-x)\nu(dx)$ belongs to $W$, and its Fourier transform equals $\hat\psi\hat\nu$. \end{Pro} \begin{Pro}\label{P2} If $\nu$ is a translation bounded measure, $\hat\nu$ is a translation bounded atomic measure, and $g\in W$, then the Fourier transform $\widehat{g\nu}$ of the product $g\nu$ is a translation bounded atomic measure. \end{Pro} {\bf 3. 
Properties of measures with uniformly discrete support.} The following theorem belongs to Y.Meyer \cite{M1}. \begin{Th} Let $\mu=\sum_{\lambda\in\Lambda} a_\lambda\delta_\lambda$ be a measure on ${\mathbb R}$ with a discrete and closed support $\Lambda$ such that the set $\{a_\lambda:\,\lambda\in\Lambda\}$ is finite. If $\mu\in S^*({\mathbb R})$ and its Fourier transform $\hat\mu$ is a translation bounded measure on ${\mathbb R}$, then $$ \Lambda=E\triangle\bigcup_{j=1}^N(\alpha_j{\mathbb Z}+\beta_j),\quad \alpha_j>0,\ \beta_j\in{\mathbb R}, $$ where the set $E$ is finite. \end{Th} Here $A\triangle B$ means the symmetric difference between $A$ and $B$. In \cite{K} M.Kolountzakis extended the above theorem to measures on ${\mathbb R}^d$. Also, he replaced the condition "the measure $\hat\mu$ is translation bounded" with the weaker one "the measure $\hat\mu$ satisfies condition (\ref{m})". He also found a condition for the support of $\mu$ to be a finite union of translations of several full-rank lattices. His result is very close to Cordoba's: \begin{Th}[\cite{C}] Let $\mu=\sum_{\lambda\in\Lambda} a_\lambda\delta_\lambda$ be a measure on ${\mathbb R}^d$ with a uniformly discrete support $\Lambda$ and a finite set $\{a_\lambda:\,\lambda\in\Lambda\}$. If $\hat\mu$ is an atomic and translation bounded measure, then $\Lambda$ is a finite union of translations of several, possibly incommensurable, full-rank lattices. \end{Th} \noindent In papers \cite{F2} and \cite{F3} we gave a small elaboration of Cordoba's result. In particular, we replaced the condition "$a_\lambda$ from a finite set" by "$|a_\lambda|$ from a finite set". But Cordoba-type theorems are not true for some uniformly discrete measures $\mu=\sum_{\lambda\in\Lambda} a_\lambda\delta_\lambda$ with translation bounded $\hat\mu$ and a countable set $\{a_\lambda\}_{\lambda\in\Lambda}$ (\cite{LO2}). 
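The Poisson summation formula recalled in Section 2, $\sum_n f(n)=\sum_n \hat f(n)$ with the convention $\hat f(y)=\int f(x)e^{-2\pi ixy}dx$, is easy to check numerically in dimension one. The sketch below is only an illustration, not part of the argument; the Gaussian test function $f(x)=e^{-\pi a x^2}$, whose transform under this convention is $\hat f(y)=a^{-1/2}e^{-\pi y^2/a}$, and the truncation level are choices made here.

```python
import math

def poisson_sides(a, N=20):
    """Return both sides of Poisson summation for f(x) = exp(-pi*a*x^2).

    With the convention fhat(y) = integral of f(x)*exp(-2*pi*i*x*y) dx,
    fhat(y) = a**-0.5 * exp(-pi*y**2/a), so sum_n f(n) = sum_n fhat(n).
    The sums over Z are truncated to |n| <= N; the Gaussian tails are
    negligible for moderate N."""
    lhs = sum(math.exp(-math.pi * a * n * n) for n in range(-N, N + 1))
    rhs = sum(math.exp(-math.pi * n * n / a) for n in range(-N, N + 1)) / math.sqrt(a)
    return lhs, rhs

lhs, rhs = poisson_sides(2.0)
print(abs(lhs - rhs))  # difference at machine-precision level
```

Running the same check for several values of $a$ shows the two truncated sums agreeing to roundoff, as the formula predicts.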
Theorem \ref{1} makes it possible to obtain the following result: \begin{Th}[\cite{F0}]\label{2} Let $\mu=\sum_{\lambda\in\Lambda} a_\lambda\delta_\lambda$ be a measure on ${\mathbb R}^d$ with a uniformly discrete support $\Lambda$ such that $\inf_\Lambda|a_\lambda|>0$, and $\hat\mu$ be an atomic measure satisfying (\ref{m}). Then $\Lambda$ is a finite union of translations of several disjoint full-rank lattices. \end{Th} Note that even if both $\supp\mu$ and $\supp\hat\mu$ are uniformly discrete, they can be finite unions of translations of incommensurate lattices (\cite{F2}). We supplement Theorem \ref{2} with a description of the measure $\mu$. \begin{Th}\label{3} Under the conditions of Theorem \ref{2}, \begin{equation}\label{r1} \mu=\sum_{j=1}^N F_j(y)\Delta^j, \end{equation} where $\Delta^j$ are sums of unit masses at the points of some lattices $L_j$ or their translations, and $F_j(y)=\sum_s b_s e^{2\pi i\langle y,\alpha_s^j\rangle}\in W$ with a bounded set of $\alpha_s^j\in{\mathbb R}^d$. Moreover, \begin{equation}\label{r2} \hat\mu=\sum_{j=1}^N e^{2\pi i\langle x,\lambda_j\rangle}\nu^j, \end{equation} where $\nu^j$ are $d$-periodic atomic measures with full-rank lattices $L_j^*$ of periods, and $\lambda_j\in\Lambda$. \end{Th} {\bf Proof of Theorem \ref{3}}. Let $\eta<(1/2)\inf\{|\lambda-\lambda'|:\, \lambda,\lambda'\in\Lambda\}$, and $\psi$ be an even $C^\infty$ function such that $\psi(0)=1$ and $\supp\psi\subset B(0,\eta)$. Clearly, $(\psi\star\mu)(\lambda)=\mu(\lambda)$ for $\lambda\in\Lambda$\ and, by Proposition \ref{P1}, the function $g=\psi\star\mu\in W$. By Theorem \ref{2}, the support $\Lambda$ of the measure $\mu$ is a finite union of translations of full-rank lattices $\lambda_j+L_j, \ j=1,\dots,N$. Set $$ \Delta^j=\sum_{\lambda\in L_j+\lambda_j}\delta_\lambda. $$ We have \begin{equation}\label{h} \mu=\sum_{j=1}^N g\Delta^j, \end{equation} with $g(x)=\sum_n c_ne^{2\pi i\langle x,\gamma_n\rangle}\in W$. 
For every fixed $j$ and each $\gamma\in{\mathbb R}^d$ there is $\alpha$ in the fundamental parallelepiped of the corresponding lattice $L_j^*$ such that $\gamma-\alpha\in L_j^*$; therefore, $e^{2\pi i\langle x,\gamma\rangle}=e^{2\pi i\langle x,\alpha\rangle}$ for $x\in L_j$. Collecting similar terms for this $j$, we obtain (\ref{r1}). Next, by (\ref{p}), $\hat\Delta^j$ is a uniformly discrete and translation bounded measure. Furthermore, the measure $g\Delta^j$ coincides with the restriction of the measure $\mu$ to $\lambda_j+L_j$ and, by Proposition \ref{P2}, its Fourier transform $\widehat{g\Delta^j}$ is an atomic translation bounded measure. Set $$ \nu^j=e^{-2\pi i\langle\lambda_j,y\rangle}\widehat{g\Delta^j}. $$ The inverse Fourier transform of $\nu^j$ is equal to $$ (\nu^j)\check\ =\sum_{x\in L_j+\lambda_j}\mu(x)\delta_{x-\lambda_j}=\sum_{x\in L_j}\mu(x+\lambda_j)\delta_x, $$ and the inverse Fourier transform of the measure $\nu^j_a$ for $a\in{\mathbb R}^d$ is equal to $$ e^{2\pi i\langle a,x\rangle}\sum_{x\in L_j}\mu(x+\lambda_j)\delta_x, $$ which coincides with $(\nu^j)\check\ $ for each $a\in L_j^*$. Therefore, $\nu^j_a=\nu^j$. So, $\nu^j$ is $d$-periodic with the lattice $L_j^*$ of periods and, by (\ref{h}), $$ \hat\mu=\sum_{j=1}^N\widehat{g\Delta^j}=\sum_{j=1}^N e^{2\pi i\langle\lambda_j,y\rangle}\nu^j. $$ ~ \rule{7pt}{7pt} {\bf 4. Coherent sets of frequencies.} Recall that a uniformly discrete set $\Upsilon\subset{\mathbb R}^d$ is {\it a coherent set of frequencies} (or satisfies Kahane's property) if every limit of a sequence of finite sums $$ \sum c_\lambda e^{2\pi i\langle x,\lambda \rangle},\quad\lambda\in\Upsilon,\quad c_\lambda\in{\mathbb C}, $$ with respect to the topology of uniform convergence on every compact subset of ${\mathbb R}^d$ is almost periodic in the sense of H. Bohr.
\begin{Th}[Y. Meyer, \cite{M2}]\label{3a} Let $\mu=\sum_{\lambda\in\Lambda} a_\lambda\delta_\lambda$ be a Radon measure from $S^*({\mathbb R}^d)$ with a uniformly discrete support $\Lambda$ and $\hat\mu$ be a translation bounded Radon measure. Then the set $\Upsilon=\{\lambda:\, a_\lambda=1\}$ is a coherent set of frequencies. \end{Th} Using Theorem \ref{1}, we obtain the following result. \begin{Th}\label{4} Let the measures $\mu=\sum_{\lambda\in\Lambda} \mu(\lambda)\delta_\lambda$ and $\hat\mu$ be atomic translation bounded measures from $S^*({\mathbb R}^d)$, and let $\Upsilon\subset\Lambda$ be such that for all $\lambda\in\Upsilon$ and some $\varepsilon>0$ i) $|\mu(\lambda)|\ge\varepsilon$, ii) $|\lambda-\lambda'|\ge\varepsilon$ for all $\lambda'\in\Lambda\setminus\{\lambda\}$. \noindent Then the set $\Upsilon$ is a coherent set of frequencies. \end{Th} \begin{Cor} Let $\mu=\sum_{\lambda\in\Lambda} \mu(\lambda)\delta_\lambda$ be a measure from $S^*({\mathbb R}^d)$ with a uniformly discrete support $\Lambda$ and $\hat\mu$ be an atomic translation bounded measure. Then for every $\varepsilon>0$ the set $\Upsilon=\{\lambda:\, |\mu(\lambda)|\ge\varepsilon\}$ is a coherent set of frequencies. \end{Cor} Indeed, by Proposition \ref{P0}, the measure $\mu$ is translation bounded; therefore, all the conditions of Theorem \ref{4} are met. {\bf Proof of Theorem \ref{4}}. Let $\eta<\varepsilon/2$, and $\psi$ be the same as in the proof of Theorem \ref{3}. By Proposition \ref{P1}, the function $g=\psi\star\mu\in W$. Then $g(\lambda)=(\psi\star\mu)(\lambda)=\mu(\lambda)$ for $\lambda\in\Upsilon$. By Theorem \ref{1}, there is $h\in W$ such that $g(\lambda)h(\lambda)=1$ whenever $|g(\lambda)|\ge\varepsilon$, in particular, for all $\lambda\in\Upsilon$. Fix a parameter $t\in{\mathbb R}^d$.
Let $F(x)$ be the convolution of the function $\psi$ and the measure $e^{2\pi i\langle x, t\rangle}h(x)\mu(x)$, i.e., $$ F(x)=\sum_{\lambda\in\Lambda}\psi(x-\lambda)e^{2\pi i\langle\lambda, t\rangle}h(\lambda)\mu(\lambda). $$ By Proposition \ref{P2}, the measure $\widehat{h\mu}$ is translation bounded. Using Proposition \ref{P1}, we see that $\hat F(y)=\hat\psi(\widehat{h\mu})_t(y)$. Applying Proposition \ref{P3}, we get that the total mass of the measure $\hat F(y)$ is bounded by some constant $C<\infty$, and the mass of its restriction to the set $\{x\in{\mathbb R}^d:\,|x|>r\}$ is less than $1/2$ for a suitable $r<\infty$. Note that $C$ and $r$ are independent of $t$. Taking into account that $F(x)$ is the inverse Fourier transform of the measure $\hat F(y)$ and the obvious equality $F(\lambda)=e^{2\pi i\langle\lambda, t\rangle}$ for all $\lambda\in\Upsilon$, we get $$ e^{2\pi i\langle\lambda, t\rangle}=\int_{{\mathbb R}^d} e^{2\pi i\langle y,\lambda\rangle}\hat F(dy). $$ Now, let $\sum c_\lambda e^{2\pi i\langle t,\lambda\rangle}$ be any finite sum of exponents with $\lambda\in\Upsilon$. We have $$ \left|\sum c_\lambda e^{2\pi i\langle t,\lambda\rangle}\right|= \left|\int_{{\mathbb R}^d}\sum c_\lambda e^{2\pi i\langle y,\lambda\rangle}\hat F(dy)\right| $$ $$ \le \left|\int_{|y|\le r}\sum c_\lambda e^{2\pi i\langle y,\lambda\rangle}\hat F(dy)\right|+\left|\int_{|y|>r}\sum c_\lambda e^{2\pi i\langle y,\lambda\rangle}\hat F(dy)\right|. $$ The first integral on the right-hand side does not exceed $$ C\sup_{|y|\le r}\left|\sum c_\lambda e^{2\pi i\langle y,\lambda\rangle}\right|, $$ and the second one is bounded by $$ \frac{1}{2}\sup_{t\in{\mathbb R}^d}\left|\sum c_\lambda e^{2\pi i\langle t,\lambda\rangle}\right|. $$ Thus, $$ \sup_{t\in{\mathbb R}^d}\left|\sum c_\lambda e^{2\pi i\langle t,\lambda\rangle}\right| \le 2C\sup_{|y|\le r}\left|\sum c_\lambda e^{2\pi i\langle y,\lambda\rangle}\right|.
$$ Therefore, the uniform convergence of the sequence of exponential sums on the ball $B(0,r)$ implies the uniform convergence on ${\mathbb R}^d$, and the limit of the sequence is an almost periodic function in the sense of H. Bohr. ~ \rule{7pt}{7pt} {\bf 5. Proofs of Propositions \ref{P3}--\ref{P2}}. To prove Proposition \ref{P3}, fix $t\in{\mathbb R}^d$. Denote by $N(r)$ the variation $|\nu_t|(B(0,r))$. Since the measure $\nu$ is translation bounded, we see that $N(r)\le C(1+r)^d$ with a constant $C$ independent of $t$. Also, $|\psi(x)|\le C'(1+|x|)^{-d-1}$ for all $x\in{\mathbb R}^d$ with a constant $C'$. Therefore, integrating by parts, we obtain the estimate $$ \int_{{\mathbb R}^d}|\psi(x)||\nu_t|(dx)\le C'\int_0^\infty(1+r)^{-d-1}dN(r)\le CC'(d+1)\int_0^\infty(1+r)^{-2}dr. $$ Also, if $r(\varepsilon)$ is sufficiently large, we get $$ \int_{|x|>r(\varepsilon)}|\psi(x)||\nu_t|(dx)\le C'\int_{r(\varepsilon)}^\infty(1+r)^{-d-1}dN(r)\le CC'(d+1)\int_{r(\varepsilon)}^\infty(1+r)^{-2}dr<\varepsilon. $$ To prove Proposition \ref{P0}, note that (\ref{f}) implies the equality $$ \nu(\lambda)=\int\hat\psi(x-\lambda)\nu(dx)=\int\psi(y)e^{2\pi i\langle\lambda,y\rangle}\hat\nu(dy), $$ where $\psi$ is the same function as in the proof of Theorem \ref{3}. Arguing as in the proof of Proposition \ref{P3}, we see that the modulus of the latter integral is bounded uniformly in $\lambda\in\Lambda$. To prove Proposition \ref{P1}, we can repeat the arguments from the proof of Proposition \ref{P3} and get that the total variation of the measure $\hat\psi\hat\nu$ is finite. Since the measure $\hat\psi\hat\nu$ is atomic, we see that it has the form $$ \sum_{n=1}^\infty b_n\delta_{\gamma_n},\qquad \sum_n|b_n|<\infty, $$ and its inverse Fourier transform equals $$ \sum_{n=1}^\infty b_n e^{2\pi i\langle \gamma_n,x\rangle}\in W.
$$ On the other hand, the inverse Fourier transform of the function $\varphi(x):=\psi(x-t)\in S({\mathbb R}^d)$ equals $\hat\psi(y)e^{2\pi i\langle y,t\rangle}$. Since $\psi$ is an even function, we get by formula (\ref{f}) $$ (\psi\star\nu)(t)=\int\psi(t-x)\nu(dx)=\int e^{2\pi i\langle t,y\rangle}\hat\psi(y)\hat\nu(dy). $$ Therefore, the inverse Fourier transform of the measure $\hat\psi\hat\nu$ equals $\psi\star\nu$. To prove Proposition \ref{P2}, note that the function $g$ is bounded, hence the measure $g\nu$ is translation bounded. Set $$ \nu^\gamma(x)=e^{2\pi i\langle x,\gamma\rangle}\nu(x),\quad \gamma\in{\mathbb R}^d. $$ Suppose $g(x)=\sum_n c_ne^{2\pi i\langle x,\gamma_n\rangle}\in W$. Then $$ g\nu=\sum\nolimits_n c_n\nu^{\gamma_n},\qquad \widehat{g\nu} =\sum\nolimits_n c_n\widehat{\nu^{\gamma_n}}. $$ Note that $\sum_n|c_n|<\infty$, and $\widehat{\nu^{\gamma_n}}$ are atomic measures, hence $\widehat{g\nu}$ is an atomic measure too. Then for each $y\in{\mathbb R}^d$ we get $|\widehat{\nu^\gamma}|(B(y,1))=|\hat\nu|(B(y-\gamma,1))$. Therefore, $$ |\widehat{g\nu}|(B(y,1))\le \sum_n |c_n||\widehat{\nu^{\gamma_n}}|(B(y,1))\le \sup_{t\in{\mathbb R}^d}|\hat\nu|(B(t,1))\sum_n |c_n|. $$ So, $\widehat{g\nu}$ is a translation bounded atomic measure. ~ \rule{7pt}{7pt} \end{document}
\begin{document} \title[Strong subadditivity condition for qudit state] {Quantum strong subadditivity condition for systems without subsystems} \author{Margarita A Man'ko and Vladimir I Man'ko} \address{P N Lebedev Physical Institute, Leninskii Prospect 53, Moscow 119991, Russia} \ead{[email protected]} \ead{[email protected]} \begin{abstract} The strong subadditivity condition for the density matrix of a quantum system, which does not contain subsystems, is derived using the qudit-portrait method. An example of the qudit state in the seven-dimensional Hilbert space corresponding to spin $j=3$ is presented in detail. New entropic inequalities in the form of the subadditivity condition and the strong subadditivity condition for spin tomograms determining the qudit states are obtained and illustrated by examples with $j=2$ and $j=3$. \end{abstract} \pacs{03.65.-w, 03.65.Ta, 02.50.Cw, 03.67.-a} \section{Introduction} The quantum correlations between the subsystems of composite systems provide specific entropic inequalities relating the von Neumann entropies of the system and its subsystems. For example, the quantum correlations are responsible for the violation of the classical entropic inequality for bipartite classical systems $H(1)\leq H(1,2)$, where $H(1,2)$ is the Shannon entropy of a bipartite classical system and $H(1)$ is the Shannon entropy~\cite{Shannon} of its subsystem. This inequality, which has the intuitively clear interpretation that the disorder in the total system is at least as large as the disorder in its subsystems, is not true for quantum bipartite systems. It is known that, for the two-qubit pure maximally entangled state with density matrix $\rho(1,2)$, the von Neumann entropy $S(1,2)=-\mbox{Tr}\,\rho(1,2)\ln\rho(1,2)=0$, but the von Neumann entropy of the one-qubit state $S(1)=-\mbox{Tr}\,\rho(1)\ln\rho(1)$ with $\rho(1)=\mbox{Tr}_2\,\rho(1,2)$ takes the maximum value possible for a qubit, i.e., $S(1)= \ln 2$.
Thus, in this state $S(1)>S(1,2)$, i.e., the quantum correlations between the two qubits in the composite system (consisting of two qubits) are responsible not only for the violation of the Bell inequalities~\cite{Bell,HornClauser} but also for the violation of the classical entropic inequality. For bipartite systems, both classical and quantum, there exist entropic inequalities, called the subadditivity conditions, which are the same for Shannon entropies and von Neumann entropies. The quantum subadditivity condition can be proved, e.g., by using the tomographic-probability description of spin states~\cite{Mendes}. A recent review of the tomographic representation of classical and quantum mechanics can be found in \cite{NuovoCim,VovaJETP}. For tripartite systems, both classical and quantum, there also exist entropic inequalities, called the strong subadditivity conditions, which have the same form for Shannon entropies in the classical case and for von Neumann entropies in the quantum case. Lieb and Ruskai were the first to prove the quantum strong subadditivity condition~\cite{LiebRuskai}. The tomographic-probability approach to the strong subadditivity condition was discussed in \cite{Mendes}. Various aspects of entropic inequalities and the quantum strong subadditivity condition for three-partite systems can be found in \cite{Ruskai,Lieb,6,4,5,8,RitaFP}. Recently, it was shown that the subadditivity condition exists not only in bipartite quantum systems but also in systems which do not contain subsystems, e.g., for one qutrit~\cite{VovJRLR2013}. The approach to deriving the subadditivity condition for the qutrit state is based on the method called the qubit portrait of qudit states~\cite{portJRLR}, later used in \cite{LupoJPA} to study the entanglement in two-qudit systems. The aim of this paper is to show that the strong subadditivity condition can be obtained for quantum systems which do not have subsystems.
For this, we apply the qudit-portrait method (a generalization of the qubit-portrait method), which, in fact, acts as a specific positive map on the density matrix. The map is described by the action on a vector of a matrix with matrix elements equal either to zero or unity. Such matrices are used to get the density matrices of subsystems by partial tracing of the density matrices of the composite-system states. In this case, the system density matrix is first mapped onto a vector, then the map matrix acts on this vector, and the vector obtained is mapped again onto a new density matrix. For composite systems, the portrait method is identical to taking the partial trace of the system density matrix. Since the $N$-dimensional density matrix of a composite-system state and the state of a qudit in the $N$-dimensional Hilbert space have identical properties, there exists a possibility to obtain and apply the strong subadditivity condition available for the composite system to a system without subsystems. Our aim is to present the strong subadditivity condition for an arbitrary probability $N$-vector describing a classical system without subsystems. In the quantum case, we present the strong subadditivity condition for an arbitrary density $N$$\times$$N$-matrix describing the state of a system without subsystems. This paper is organized as follows. In section~2, we consider the classical system described by the probability $N$-vector and show the example of $N=7$ in detail. In section~3, we consider the quantum system state associated with the density $N$$\times$$N$-matrix and present the example of $N=7$. We give our conclusions and prospects in section~4, where we also discuss the possible consequences for systems of qudits and quantum correlations in these systems in the context of the strong subadditivity conditions obtained. In the Appendix, the entropic inequalities for tomograms of some qudit states are presented.
\section{Classical strong subadditivity condition} We consider a classical system for which one has a random variable. The probabilities of obtaining the values of this random variable are described by a probability vector $\vec p=(p_1,p_2,\ldots,p_N)$, where $p_k\geq 0$ and $\sum_{k=1}^Np_k=1$. The system has no subsystems, and the order in this system is described by the Shannon entropy \begin{equation}\label{1} H=-\sum_{k=1}^Np_k\ln p_k, \end{equation} which satisfies the inequality $H\geq 0$ and takes its maximum value $H_{\rm max}=\ln N$ for the vector $\vec p$ with components $p_k=N^{-1}$. If the classical system has two subsystems 1 and 2 and two random variables, the probabilities of obtaining the pairs of values of these random variables are described by nonnegative numbers ${\cal P}_{kj}$, $k=1,2,\ldots, N_1$, and $j=1,2,\ldots, N_2$. The probabilities satisfy the normalization condition $\sum_{k=1}^{N_1}\sum_{j=1}^{N_2}{\cal P}_{kj}=1$, and the Shannon entropy of the system state reads \begin{equation}\label{2} H(1,2)=-\sum_{k=1}^{N_1}\sum_{j=1}^{N_2}{\cal P}_{kj}\ln{\cal P}_{kj}. \end{equation} The joint probability distribution ${\cal P}_{kj}$ provides the marginal distributions for systems 1 and 2 as follows: \begin{equation}\label{3} P_{1k}=\sum_{j=1}^{N_2}{\cal P}_{kj},\quad P_{2j}=\sum_{k=1}^{N_1}{\cal P}_{kj}. \end{equation} Thus, we have two Shannon entropies associated with marginal distributions~(\ref{3}), and they read \begin{equation}\label{4} H(1)=-\sum_{k=1}^{N_1}P_{1k}\ln P_{1k},\quad H(2)=-\sum_{j=1}^{N_2} P_{2j}\ln P_{2j}. \end{equation} It is known that these entropies satisfy the subadditivity condition written in the form of the inequality \begin{equation}\label{5} H(1)+H(2)\geq H(1,2), \end{equation} and the Shannon information is defined as the difference \begin{equation}\label{6} I=H(1)+H(2)-H(1,2).
\end{equation} If the classical system has three subsystems (1, 2, and 3) with three random variables, the joint probability distribution describing the results of measurement of the random variables is given by nonnegative numbers $\Pi_{kjl}$ $(k=1,2,\ldots N_1$, $j=1,2,\ldots N_2$, and $l=1,2,\ldots N_3)$. These nonnegative numbers determine the marginal probability distributions \begin{equation}\label{7} {\cal P}_{kj}^{(12)}=\sum_{l=1}^{N_3}\Pi_{kjl},\quad{\cal P}_{jl}^{(23)}=\sum_{k=1}^{N_1}\Pi_{kjl},\quad P_{j}^{(2)}=\sum_{k=1}^{N_1}\sum_{l=1}^{N_3}\Pi_{kjl}. \end{equation} The Shannon entropies associated with these probability distributions satisfy the strong subadditivity condition \begin{equation}\label{9} H(1,2)+H(2,3)\geq H(1,2,3)+H(2), \end{equation} where \begin{equation}\label{10} H(1,2,3)=-\sum_{k=1}^{N_1}\sum_{j=1}^{N_2}\sum_{l=1}^{N_3}\Pi_{kjl}\ln\Pi_{kjl}, \end{equation} and the entropies $H(1,2)$, $H(2,3)$, and $H(2)$ associated with the distributions ${\cal P}_{kj}^{(12)}$, ${\cal P}_{jl}^{(23)}$, and $P_{j}^{(2)}$ are given by (\ref{2}) and (\ref{4}) with obvious substitutions. In \cite{VovJRLR2013}, it was suggested to obtain an analog of the subadditivity condition~(\ref{5}) for a system without subsystems. The general scheme to get such an inequality is to write the probability vector $\vec p$ with components $p_k$ $(k=1,2,\ldots,N)$ in matrix form with matrix elements ${\cal P}_{kj}$. Then inequality~(\ref{5}) can be obtained by the above procedure. Here, we apply this method to map the probability vector $\vec p$ onto a table of numbers with three indices $\Pi_{kjl}$. As a result, we can obtain the strong subadditivity condition for a system without subsystems. We demonstrate this procedure using the example of a vector $\vec p$ with eight components.
Let us define a map given by the equalities \begin{eqnarray} &&p_1=\Pi_{111},\quad p_2=\Pi_{112},\quad p_3=\Pi_{121},\quad p_4=\Pi_{122},\nonumber\\[-2mm] &&\label{11}\\[-2mm] &&p_5=\Pi_{211},\quad p_6=\Pi_{212},\quad p_7=\Pi_{221},\quad p_8=\Pi_{222}.\nonumber \end{eqnarray} The map introduced provides an inequality, which is the strong subadditivity condition associated with the table $\Pi_{kjl}$. To point out a peculiarity of the strong subadditivity condition, we consider the case of $N=7$. Since 7 is a prime number, the system with this probability vector has no subsystems. Thus, we have 7 nonnegative numbers $p_1,p_2,\ldots,p_7$ and the normalization condition $p_1+p_2+\cdots+p_7=1$. We also add an extra component $p_8=0$ to the probability vector. We add this zero component because of the mismatch between the numbers $2^3=8$ and $7$: map~(\ref{11}) requires eight components. This means that, in the above picture of the 8-dimensional probability vector, we consider the probability distribution with the constraint $p_8=0$, which provides the constraint $\Pi_{222}=0$ in map~(\ref{11}). Applying inequality~(\ref{9}) and formula~(\ref{10}), we obtain the strong subadditivity condition in the case of the probability 7-vector, which we express in an explicit form in terms of the vector components \begin{eqnarray} \fl\left(-\sum_{k=1}^{7}p_{k}\ln p_{k}\right)-(p_1+p_2+p_5+p_6)\ln(p_1+p_2+p_5+p_6)\nonumber\\ \fl-(p_3+p_4+p_7)\ln(p_3+p_4+p_7) \leq -(p_1+p_2)\ln(p_1+p_2)-(p_3+p_4)\ln(p_3+p_4)\nonumber\\ \fl-(p_5+p_6)\ln(p_5+p_6)-p_7\ln p_7-(p_1+p_5)\ln(p_1+p_5)-(p_2+p_6)\ln(p_2+p_6)\nonumber\\ \fl -(p_3+p_7)\ln(p_3+p_7)-p_4\ln p_4.\label{12} \end{eqnarray} This inequality remains valid under any of the $7!$ permutations of the components of the probability vector $\vec p$.
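A minimal numerical sketch of this construction (variable names are ours): embed a probability 7-vector via map (11) with $p_8=0$ and check the strong subadditivity condition (9) directly on random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def H(p):
    """Shannon entropy of a nonnegative array, with 0 ln 0 := 0."""
    p = np.asarray(p).ravel()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

for _ in range(100):
    p = rng.random(7)
    p /= p.sum()                              # random probability 7-vector
    Pi = np.append(p, 0.0).reshape(2, 2, 2)   # the map (11), with p_8 = 0
    H123 = H(Pi)                              # H(1,2,3)
    H12 = H(Pi.sum(axis=2))                   # H(1,2): marginal over l
    H23 = H(Pi.sum(axis=0))                   # H(2,3): marginal over k
    H2 = H(Pi.sum(axis=(0, 2)))               # H(2):   marginal over k and l
    # strong subadditivity: H(1,2) + H(2,3) >= H(1,2,3) + H(2)
    assert H12 + H23 + 1e-12 >= H123 + H2
```

The C-order `reshape(2, 2, 2)` reproduces exactly the index assignment of map (11): $p_1\to\Pi_{111}$, $p_2\to\Pi_{112}$, and so on.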
Inequality~(\ref{12}) can be presented in a form where the terms $-p_4\ln p_4$ and $-p_7\ln p_7$ are removed from both sides of the inequality. Analogously, we can write the subadditivity condition following the approach of \cite{VovJRLR2013}. For example, we have \begin{eqnarray} &&-\sum_{k=1}^{7}p_{k}\ln p_{k}\leq -(p_1+p_2+p_5+p_6)\ln(p_1+p_2+p_5+p_6)\nonumber\\ &&-(p_3+p_4+p_7)\ln(p_3+p_4+p_7)-(p_1+p_3)\ln(p_1+p_3)\nonumber\\ &&-(p_2+p_4)\ln(p_2+p_4)-(p_5+p_7)\ln(p_5+p_7)-p_4\ln p_4.\label{13} \end{eqnarray} We can also rewrite this inequality removing the term $-p_4\ln p_4$ from both sides. We see that this inequality is valid for a system without subsystems. For example, in the case of a quantum particle with spin $j=3$, the state of this particle is determined by the spin tomogram $w(m,\vec n)$~\cite{DodPLA,OlgaJETP}, where the spin projection $m=-3,-2,-1,0,1,2,3$, and the unit vector $\vec n$ determines the quantization axis. The tomographic-probability distribution (spin tomogram) of any qudit state with the density matrix $\rho$ is determined by the diagonal matrix elements of the rotated density matrix as $w(m,\vec n)=\langle m\mid u\rho u^\dagger\mid m\rangle$, where the unitary matrix $u$ is the matrix of an irreducible representation of the rotation group, and it depends on the Euler angles determining the unit vector $\vec n$. Thus, the tomogram is the probability distribution of the spin projection $m$ on the direction $\vec n$. We can identify the components of the probability vector $\vec p$ with the tomographic probabilities.
Then we have the inequality -- the subadditivity condition for the spin tomographic probabilities: \begin{eqnarray}\label{sub1}\fl -\sum_{ m=-3}^3w(m,\vec n)\ln w(m,\vec n)\leq -\big[w(-3,\vec n)+w(-2,\vec n)+w(1,\vec n)+w(2,\vec n)\big]\nonumber\\ \hspace{30mm}\times\ln\big[w(-3,\vec n)+w(-2,\vec n)+w(1,\vec n)+w(2,\vec n)\big]\nonumber\\ -\big[w(-1,\vec n)+w(0,\vec n)+w(3,\vec n)\big]\ln\big[w(-1,\vec n)+w(0,\vec n)+w(3,\vec n)\big]\nonumber\\ -\big[w(-3,\vec n)+w(1,\vec n)\big]\ln\big[w(-3,\vec n)+w(1,\vec n)\big]\nonumber\\ -\big[w(-2,\vec n)+w(2,\vec n)\big]\ln\big[w(-2,\vec n)+w(2,\vec n)\big]\nonumber\\ -\big[w(-1,\vec n)+w(3,\vec n)\big]\ln\big[w(-1,\vec n)+w(3,\vec n)\big]-w(0,\vec n)\ln w(0,\vec n). \end{eqnarray} This inequality describes some properties of quantum correlations in the spin system with $j=3$. In spite of the fact that this system does not have subsystems, inequality~(\ref{sub1}), which corresponds to the subadditivity condition, is valid for any direction of the vector $\vec n$. Other examples of tomographic inequalities are given in the Appendix. \section{Strong subadditivity condition for one qudit state} In this section, we obtain the strong subadditivity condition for a system without subsystems written in the form of an inequality for von Neumann entropies associated with the initial density matrix of the spin-$j$ state and its qubit (or qudit) portraits. The qubit (or qudit) portrait~\cite{portJRLR,MVJRLR} of the initial density matrix is a specific positive map of this matrix obtained by the following procedure: Any $N$$\times$$N$-matrix $\rho_{jk}$ is considered as the column vector $\vec\rho$ with components $(\rho_{11},\rho_{12},\ldots,\rho_{1N}, \rho_{21},\rho_{22},\ldots,\rho_{2N},\ldots,\rho_{N1},\rho_{N2},\ldots, \rho_{NN})$.
We multiply this vector by a matrix $M$ whose matrix elements are only units and zeros, arranged so that the new vector $\vec\rho_M$ obtained, considered again as a matrix $(\rho_M)_{jk}$, is a density matrix. It is easy to prove that, for any density matrix of a multi-qudit system $\rho(1,2,\ldots, M)$, one can calculate the density matrix of an arbitrary subsystem of qudits $\rho(1,2,\ldots, M')$ by means of a portrait of the initial density matrix. In view of this observation, we extend the entropic inequalities available for composite systems of qudits to arbitrary density matrices, including the density matrices of a single qudit. We show the result of such an approach on the example of the strong subadditivity condition known for three-partite quantum systems~\cite{LiebRuskai}. For the qudit state with $j=3$ and the density matrix $\rho$ with matrix elements $\rho_{kj}$, $k,j=1,2,\ldots,7$, the strong subadditivity condition found takes the form \begin{equation}\label{SSC1} -\mbox{Tr}\left(\rho\ln\rho\right)-\mbox{Tr}\left(R_2\ln R_2\right)\leq -\mbox{Tr}\left(R_{12}\ln R_{12}\right)-\mbox{Tr}\left(R_{23}\ln R_{23}\right), \end{equation} where the density matrix $R_{12}$ has matrix elements expressed in terms of the density matrix $\rho_{jk}$ as follows: \begin{equation}\label{SSC12} R_{12}= \pmatrix{\rho_{11}+\rho_{22}&\rho_{13}+\rho_{24}&\rho_{15}+\rho_{26}&\rho_{17}\cr \rho_{31}+\rho_{42}&\rho_{33}+\rho_{44}&\rho_{35}+\rho_{46}&\rho_{37}\cr \rho_{51}+\rho_{62}&\rho_{53}+\rho_{64}&\rho_{55}+\rho_{66}&\rho_{57}\cr \rho_{71}&\rho_{73}&\rho_{75}&\rho_{77}\cr}.\end{equation} The density matrix $R_{23}$ reads \begin{equation}\label{SSC23} R_{23}= \pmatrix{\rho_{11}+\rho_{55}&\rho_{12}+\rho_{56}&\rho_{13}+\rho_{57}&\rho_{14}\cr \rho_{21}+\rho_{65}&\rho_{22}+\rho_{66}&\rho_{23}+\rho_{67}&\rho_{24}\cr \rho_{31}+\rho_{75}&\rho_{32}+\rho_{76}&\rho_{33}+\rho_{77}&\rho_{34}\cr
\rho_{41}&\rho_{42}&\rho_{43}&\rho_{44}\cr},\end{equation} while the matrix $R_{2}$ is \begin{equation}\label{SSC2} R_2= \pmatrix{\rho_{11}+\rho_{22}+\rho_{55}+\rho_{66}&\rho_{13}+\rho_{24}+\rho_{57}\cr \rho_{31}+\rho_{42}+\rho_{75}&\rho_{33}+\rho_{44}+\rho_{77}\cr}.\end{equation} The inequality for von Neumann entropies associated with the matrices $\rho$, $R_{12}$, $R_{23}$, and $R_{2}$ has the form of the strong subadditivity condition for a three-partite system with the density matrix $\rho(1,2,3)$ obtained in \cite{LiebRuskai}. The other entropic inequality, for the spin-2 state with the density matrix $\rho_{jk}$, $j,k=1,2,\ldots,5$, has the form~(\ref{SSC1}) with the matrices $R_{12}$, $R_{23}$, and $R_{2}$ as follows: \begin{eqnarray}\fl R_{12}= \pmatrix{\rho_{11}+\rho_{22}&\rho_{13}+\rho_{24}&\rho_{15}\cr \rho_{31}+\rho_{42}&\rho_{33}+\rho_{44}&\rho_{35}\cr \rho_{51}&\rho_{53}&\rho_{55}\cr},\qquad R_{23}= \pmatrix{\rho_{11}+\rho_{55}&\rho_{12}&\rho_{13}&\rho_{14}\cr \rho_{21}&\rho_{22}&\rho_{23}&\rho_{24}\cr \rho_{31}&\rho_{32}&\rho_{33}&\rho_{34}\cr \rho_{41}&\rho_{42}&\rho_{43}&\rho_{44}\cr},\label{SSC12-2}\\ R_2= \pmatrix{\rho_{11}+\rho_{22}+\rho_{55}&\rho_{13}+\rho_{24}\cr \rho_{31}+\rho_{42}&\rho_{33}+\rho_{44}\cr}.\label{SSC2-2} \end{eqnarray} \section{Conclusions} To conclude, we list our main results. We proved matrix inequalities for arbitrary nonnegative Hermitian $N$$\times $$N$-matrices with trace equal to unity. If the matrix is identified with the density matrix of a qudit state, the matrix inequalities obtained are entropic inequalities characterizing quantum correlations in the system. Employing the positive map of an arbitrary density matrix corresponding to the qubit (or qudit) portrait of the density matrix of a multiqudit state, identified with the calculation of the subsystem-state density matrices, we obtained an analog of the strong subadditivity condition for the state of a system which does not contain any subsystems.
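The one-qudit strong subadditivity condition described above can be verified numerically. In the sketch below (all names are ours), a random $7\times7$ density matrix is padded with a zero eighth row and column, read as a three-qubit state, and $R_{12}$, $R_{23}$, $R_2$ are obtained as partial traces; the inequality is then the usual tripartite strong subadditivity applied to the embedded state.

```python
import numpy as np

rng = np.random.default_rng(1)

def S(rho):
    """von Neumann entropy -Tr(rho ln rho)."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]          # drop zero eigenvalues (0 ln 0 := 0)
    return float(-(w * np.log(w)).sum())

# random 7x7 density matrix
A = rng.normal(size=(7, 7)) + 1j * rng.normal(size=(7, 7))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# pad to 8x8 (the extra row and column are zero) and read the result
# as a three-qubit state with indices (k, j, l; k', j', l')
R = np.zeros((8, 8), dtype=complex)
R[:7, :7] = rho
T = R.reshape(2, 2, 2, 2, 2, 2)

R12 = np.einsum('kjlKJl->kjKJ', T).reshape(4, 4)   # trace out the 3rd qubit
R23 = np.einsum('kjlkJL->jlJL', T).reshape(4, 4)   # trace out the 1st qubit
R2 = np.einsum('kjlkJl->jJ', T)                    # keep only the 2nd qubit

# e.g. the (1,1) entry of R12 is rho_11 + rho_22, as in the text
assert abs(R12[0, 0] - (rho[0, 0] + rho[1, 1])) < 1e-12

# strong subadditivity: S(rho) + S(R_2) <= S(R_12) + S(R_23)
assert S(rho) + S(R2) <= S(R12) + S(R23) + 1e-9
```

Padding with zeros does not change the entropy of $\rho$, which is why the tripartite inequality transfers to the 7-dimensional qudit.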
This result is an extension of the approach of \cite{VovJRLR2013}, where the subadditivity condition was obtained for quantum systems without subsystems. We derived the entropic inequalities for the qudit-state tomograms and showed examples of the subadditivity condition and the strong subadditivity condition for the spin states with $j=2$ and $j=3$, respectively. We presented the entropic inequalities for density matrices --- analogs of the strong subadditivity condition for $j=3$ --- in the form of an explicit matrix inequality. We formulated the approach to find new entropic inequalities for both cases: (i)~the probability distributions and related Shannon entropies and (ii)~the density matrices and related von Neumann entropies. For a given integer $N$, one can construct many integers $N'=N+K$ such that $N'=n_1n_2$, where $n_1$ and $n_2$ are integers. If there exists a probability vector with $N$ components, a new probability vector with $N'$ components can be constructed, with the $K$ extra components set equal to zero. Then the numbers $1,2,\ldots,N'$ can be mapped onto pairs of integers $(1,1),(1,2),\ldots,(1,n_2),(2,1),(2,2),\ldots,(2,n_2),\ldots,(n_1,1),(n_1,2),\ldots,(n_1,n_2)$. This means that the probability vector constructed is mapped onto a matrix with matrix elements analogous to the joint probability distribution of two random variables. In view of the known subadditivity condition for this joint probability distribution, one has an entropic inequality, which can be expressed in terms of the components of the initial probability $N$-vector. We used such a procedure to obtain both the classical and quantum strong subadditivity conditions. The physical interpretation of the obtained strong subadditivity condition needs extra clarification.
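The counting recipe just described admits a direct sketch (the function and all names are ours): pad the $N$-vector with $K$ zeros, reshape it into an $n_1\times n_2$ table, and the subadditivity condition guarantees that the resulting Shannon-information-like quantity is nonnegative.

```python
import numpy as np

def portrait_information(p, n1, n2):
    """Map an N-vector onto an n1 x n2 'joint distribution' by padding
    with zeros, and return I = H(1) + H(2) - H(1,2)."""
    p = np.asarray(p, dtype=float)
    assert p.size <= n1 * n2          # need K = n1*n2 - N >= 0 zeros
    P = np.zeros(n1 * n2)
    P[:p.size] = p
    P = P.reshape(n1, n2)

    def H(q):
        q = q.ravel()
        q = q[q > 0]
        return float(-(q * np.log(q)).sum())

    # subadditivity H(1) + H(2) >= H(1,2) makes this nonnegative
    return H(P.sum(axis=1)) + H(P.sum(axis=0)) - H(P)

p = np.full(7, 1 / 7)                 # N = 7, N' = 8 = 2 * 4, K = 1
assert portrait_information(p, 2, 4) >= -1e-12
```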
There is a possibility to connect the new entropic inequalities with such state characteristics as purity or such parameters as Tr$\,\hat\rho^n$, as well as with correlations between different groups of measurable quantities. The entropic interpretation can be given to the correlations between groups of the tomographic-probability values. The tomographic distributions and their relations to different quasidistributions obtained in \cite{JPA-Rui} can be used to derive entropic inequalities associated with analytic signals. New relations for $q$-entropies obtained for multipartite systems in \cite{MVJRLR} and associated with entropic inequalities discussed in \cite{7} can also be considered for systems without subsystems, in view of the approach developed. We will apply this procedure to find new equalities and inequalities for probability distributions and density matrices of quantum states in a future publication. \section*{Appendix} We present the entropic inequalities -- the subadditivity conditions for the spin tomographic probability distributions $w(m,\vec n)$ for one qudit with spin $j=2$ and $j=3$ -- as follows: $j=2$, $~m=-2,-1,0,1,2$, and $~\vec n=(\sin\theta\cos\varphi,\sin\theta\sin\varphi,\cos\theta)$, \begin{eqnarray*} \fl -\big[w(-2,\vec n)+w(-1,\vec n)+w(0,\vec n)\big]\ln\big[w(-2,\vec n)+w(-1,\vec n)+w(0,\vec n)\big]\nonumber\\ \fl-\big[w(1,\vec n)+w(2,\vec n)\big]\ln\big[w(1,\vec n)+w(2,\vec n)\big]-\big[w(-2,\vec n)+w(1,\vec n)\big]\ln\big[w(-2,\vec n)+w(1,\vec n)\big]\nonumber\\ \fl -\big[w(0,\vec n)+w(2,\vec n)\big]\ln\big[w(0,\vec n)+w(2,\vec n)\big] \nonumber\\ \fl \geq -\big[w(-2,\vec n)\ln w(-2,\vec n)+w(0,\vec n)\ln w(0,\vec n)+w(1,\vec n)\ln w(1,\vec n)+w(2,\vec n)\ln w(2,\vec n)\big],\nonumber\\ \end{eqnarray*} $j=3$, $~m=-3,-2,-1,0,1,2,3$, and $~\vec n=(\sin\theta\cos\varphi,\sin\theta\sin\varphi,\cos\theta)$, \begin{eqnarray*} \fl -\big[w(-3,\vec n)\ln w(-3,\vec n)+w(-2,\vec n)\ln w(-2,\vec n) +w(-1,\vec n)\ln w(-1,\vec n)\nonumber\\ \fl +w(1,\vec
n)\ln w(1,\vec n)+w(2,\vec n)\ln w(2,\vec n)\big]\nonumber\\ \fl -\big[w(-1,\vec n)+w(0,\vec n)+w(3,\vec n)\big]\ln\big[w(-1,\vec n)+w(0,\vec n)+w(3,\vec n)\big]\nonumber\\ \fl -\big[w(-3,\vec n)+w(-2,\vec n)+w(1,\vec n)+w(2,\vec n)\big]\ln\big[w(-3,\vec n)+w(-2,\vec n)+w(1,\vec n)+w(2,\vec n)\big] \nonumber\\ \fl \leq -\big[w(-3,\vec n)+w(-2,\vec n)\big]\ln\big[w(-3,\vec n)+w(-2,\vec n)\big]\nonumber\\ \fl -\big[w(-1,\vec n)+w(0,\vec n)\big]\ln\big[w(-1,\vec n)+w(0,\vec n)\big] -\big[w(1,\vec n)+w(2,\vec n)\big]\ln\big[w(1,\vec n)+w(2,\vec n)\big]\nonumber\\ \fl -\big[w(-3,\vec n)+w(1,\vec n)\big]\ln\big[w(-3,\vec n)+w(1,\vec n)\big]\nonumber\\ \fl -\big[w(-2,\vec n)+w(2,\vec n)\big]\ln\big[w(-2,\vec n)+w(2,\vec n)\big]\nonumber\\ \fl -\big[w(-1,\vec n)+w(3,\vec n)\big]\ln\big[w(-1,\vec n)+w(3,\vec n)\big]. \nonumber \end{eqnarray*} One can easily obtain other inequalities by replacing the set of all spin projections $m$ by an arbitrary permutation of its values. These inequalities can also be checked experimentally. \section*{References} \end{document}
Understand the difference between clinical measured ultrafiltration and real ultrafiltration in peritoneal dialysis Zanzhe Yu1, Zhuqing Wang1, Qin Wang1, Minfang Zhang1, Haijiao Jin1, Li Ding1, Hao Yan1, Jiaying Huang1, Yan Jin1, Simon Davies2, Wei Fang1 & Zhaohui Ni1 BMC Nephrology volume 22, Article number: 382 (2021) It has been noticed for years that ultrafiltration (UF) is important for survival in peritoneal dialysis. On the other hand, a precise and convenient UF measurement suitable for patients' daily practice is not as straightforward as measuring UF in the lab. Both overfill and flush-before-fill used to be sources of measurement error in clinical practice. However, controversial findings around UF in peritoneal dialysis still exist in some situations. The current study aimed to understand the difference between clinically measured UF and real UF. The effects of evaporation and specific gravity on clinical UF measurement were tested in the study. Four different brands of dialysate were purchased from the market. The freshest dialysate available on the market was intentionally picked. The bags were all 2 L, 2.5% dextrose, traditional lactate-buffered PD solution. They were stored in four different conditions with controlled temperature and humidity. The bags were weighed at baseline and after 6 and 12 months of storage. Specific gravity was measured in the mixed 24 h drainage dialysate of 261 CAPD patients when they came for their routine solute clearance test. There was a significant difference in dialysate bag weight at baseline between brands. The weight declined significantly after 12 months' storage. The weight loss was greater at higher temperature and lower humidity. Dialysate in non-PVC packaging lost less weight than that in PVC packaging. The specific gravity of the dialysate drainage was significantly higher than that of pure water, and it was related to the dialysate protein concentration.
Storage condition and duration, as well as the type of dialysate package, have a significant impact on dialysate bag weight before use. Evaporation is likely the reason behind this. The fact that the specific gravity of dialysate drainage is higher than 1 g/ml overestimates UF in manual exchanges, which contributes to a systematic measurement error of ultrafiltration in CAPD. Trial registration ClinicalTrials.gov ID: NCT03864120 (March 8, 2019) (Understand the Difference Between Clinical Measured Ultrafiltration and Real Ultrafiltration). It has been noticed for years that ultrafiltration (UF) is important for survival in peritoneal dialysis. Adequate UF has been part of the guideline targets [1,2,3]. It is also an important parameter of peritoneal membrane function. Incorrect UF measurement may mislead the diagnosis of ultrafiltration failure [4]. Precise measurement of UF is also the basis for correct estimation of other solute removal, such as sodium removal and urea and creatinine clearance [5, 6]. Clinically measured UF is different from measuring fluid volume in the lab: both convenience and a minimal risk of exposure to body fluid need to be considered. For these reasons, weighing the bags is preferable to measuring volume for manual exchanges. In the early days, it was common to neglect overfill, which contributed to systematic UF measurement error [4, 7, 8]. As an example, it is widely accepted that CAPD is as good as APD in terms of preserving residual renal function, if not better. In clinical practice, it is also common to switch patients from CAPD to APD because of unsatisfactory fluid status. Meanwhile, a favorable 24 h UF in CAPD was noticed in several studies at that time [6, 9, 10]. Neglecting overfill used to be the reason for overestimating UF in CAPD [4, 7, 11]. The currently suggested clinical UF measurement is to weigh the "whole" drained bag and subtract the weight of the empty bag and the expected input volume, i.e., the labeled volume plus the overfill volume [5, 8].
It minimizes the workload and the risk of exposure to body fluids for patients and medical staff, and it is a reasonable measurement method for daily practice. However, controversial findings around UF in peritoneal dialysis still exist to some degree. For example, studies that had clearly accounted for overfill still found a favorable UF in CAPD compared to APD [12]. The question is whether there is any other issue around clinical UF measurement that has not been clarified. The current study aimed to understand the difference between clinically measured UF and real UF. The effects of evaporation and specific gravity on clinical UF measurement were tested in the study. The study adheres to the CONSORT 2010 reporting guideline. Study design and material Four different brands of dialysate were purchased from the market. The dialysate bags were all 2 L, 2.5% dextrose, traditional lactate-buffered PD solution. Brands A and B were in PVC packaging; brands C and D were in non-PVC packaging (Table 1). The freshest dialysate available on the market was intentionally picked. The time from manufacture date to baseline measurement ranged from 43 to 105 days. Table 1 Features of the four brands of bags used in the study At baseline, the bags were weighed whole. The outer packages of 4 bags of each brand were removed and weighed separately. The other intact bags were then stored in four different conditions with controlled temperature and humidity. The intact bags were weighed at baseline, 6 months and 12 months of storage. The detailed temperature and humidity of each condition were as follows. Condition 1: 5 °C and uncontrolled humidity. N = 5 for brands A, B, D; N = 4 for brand C. Condition 2: 25 °C and 40% humidity. N = 5 for brands A, B, D; N = 3 for brand C. Condition 4: 40 °C and 20% humidity. N = 5 for brands A, B, D; N = 4 for brand C. Sample size calculation According to a preliminary measurement, a 2 L dialysate bag weighed around 2200 ± 5 g.
n = 4 should be big enough to detect a 10 g difference between different brands (type I error 0.05, power 0.8). N = 5 should be big enough to detect a 12 g (SD = 5) change before and after storage. N = 3 should be enough to detect a 20 g (SD = 5) change before and after storage. Two hundred sixty-one CAPD patients followed up in our center were enrolled in the study. The study was performed in accordance with the Declaration of Helsinki and received ethics approval from the Shanghai Jiaotong University School of Medicine, Renji Hospital Ethics Committee (2018)078. Written informed consent was obtained from each participant. All patients were on lactate-buffered, dextrose-only solution and were going through their routine dialysis adequacy test. The specific gravity of the drained dialysate was measured by weighing 1 ml of the mixed 24 h drained dialysate, the same sample as used for their dialysis adequacy test. The specific gravity of pure water was also measured by the same method to serve as a control. Dialysate sodium, potassium, protein and glucose were also measured in the mixed 24 h drained dialysate. One-way ANOVA was used to measure the difference between brands. A general linear model was used for repeated measurements of dialysate weight. A one-sample t test was used to clarify the difference between dialysate specific gravity and water. Correlations between specific gravity and other parameters were identified by Pearson correlation. IBM SPSS Statistics 20 was the software used for the study. Dialysate bags of different brands weighed differently at baseline There was a significant difference in weight between the four brands, ranging from 2221.9 ± 1.9 g to 2261 ± 3.7 g for the whole 2 L bag with outer package (P < 0.01). The outer package itself also differed in weight, from 19.1 ± 0.4 g to 21.6 ± 0.3 g (P < 0.01). However, the large weight difference of the whole dialysate bag could not be explained by the weight difference in the outer package (Table 2).
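The sample-size figures quoted in the methods above follow from the standard two-sample normal-approximation formula. A minimal sketch (the exact software and rounding used by the authors are not stated, so this is only a plausible reconstruction):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.8):
    """Two-sample sample size per group, normal approximation:
    n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / delta)^2, rounded up."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)  # two-sided type I error
    z_b = z.inv_cdf(power)          # power = 1 - type II error
    return ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

# 10 g difference between brands, SD = 5 g, alpha = 0.05, power = 0.8
print(n_per_group(10, 5))  # -> 4, matching the "n = 4" used in the study
```

The N = 5 and N = 3 figures for before/after changes involve a paired design and appear to have been chosen conservatively; the formula above is for the between-brand comparison only.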
Table 2 Dialysate weight at baseline in different brands Dialysate bags lost weight over 12 months of storage Storage duration and condition had an impact on weight loss Over the 12 months of storage, all dialysate bags lost weight, and they lost more weight at 12 months than at 6 months (P < 0.01, Table 2). Higher temperature and lower humidity storage conditions were related to more significant weight loss (Table 3). Table 3 Dialysate weight at baseline, 6 months and 12 months in different conditions Dialysate in PVC packaging lost more weight than in non-PVC packaging The weight loss of each brand over 12 months' storage is shown in Fig. 1 and Additional file 1: Table 1. The weight of a dialysate bag at 12 months depended on its baseline weight, the storage condition and the package type (PVC or non-PVC). PVC packaging was related to greater weight loss over 12 months' storage. The generalized linear model is displayed in Table 4. Weight loss in each brand over 12 months' storage in different conditions. Over the 12 months' storage, dialysate bags lost significant weight. The weight loss was greater at higher temperature and lower humidity. PVC packaging was related to more significant weight loss over 12 months' storage. Brands A and B were in PVC packaging.
Brands C and D were in non-PVC packaging Table 4 Generalized linear model of dialysate weight at 12 months of storage The equation of dialysate bag weight according to the generalized linear model can be expressed as follows: weight at 12 months [condition(i), PVC(j)] = 0.871 × baseline weight + condition(i) + PVC(j) + 220.28 (intercept). The specific gravity of dialysate drainage was significantly higher than water and was related to dialysate protein concentration The specific gravity of dialysate drainage was 1.0136 ± 0.009 g/ml, significantly higher than that of pure water (n = 261, P < 0.01). All patients enrolled were on manual exchanges and on traditional lactate-buffered dextrose solution. The correlation between specific gravity and dialysate protein concentration was significant (r = 0.139, P = 0.024) (Table 5). Table 5 Correlation between specific gravity and other solute concentrations The size of the potential measurement error of weighing the drained bag to estimate UF in clinical practice Taking the average specific gravity from our dextrose-only cohort (1.0136 g/ml), the potential overestimation of UF in a CAPD patient with 8 L input volume and 1 L UF was calculated as follows: reported UF (in L, as read from kg) = [8 L (input volume) + 1 L (UF)] × 1.0136 (g/ml) − 8 L (input volume) = 1.122 L, i.e., an overestimate of 0.122 L. For icodextrin, the specific gravity is even higher than that of dextrose solution.
Icodextrin is not available in Shanghai; Prof. Simon Davies shared the data from Stoke-on-Trent. The mean specific gravity of the icodextrin long dwell was 1.026 ± 0.006 g/ml. In other words, the potential overestimation of UF in CAPD for a single icodextrin dwell (2 L) with 0.4 L UF is calculated as [2 L (input volume) + 0.4 L (UF)] × 1.026 (g/ml) − 2 L (input volume). UF is clearly important for patient survival in peritoneal dialysis. It is also an important parameter of peritoneal membrane function. Precise measurement of UF is also the basis for correct estimation of other solute removal, such as sodium removal and urea and creatinine clearance. Clinical UF measurement is different from measuring fluid volume in the lab: it should be as simple as possible for the patient to perform several times per day, and it should carry minimal risk of exposing the patient or caregiver to body fluid. The clinical UF measurement currently suggested by Bernardini and Mahon is to weigh the "whole" drained bag and subtract the weight of the empty bag and the expected input volume, i.e., the labeled volume plus the overfill volume [5, 8]. Some carefully designed clinical trials weighed dialysate bags before and after use, which solves most, but not all, of the problems of uncertain UF measurement; it also means more treatment load for patients. The current study aimed to understand the difference between clinically measured UF and real UF. The effects of evaporation and specific gravity on clinical UF measurement were tested in the study. Overfill existed in all brands, but differed between brands We knew that overfill existed in all brands, but how big the difference was has not been clear to the public. Theoretically, overfill may differ between brands, types of bags and even manufacturing batches.
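The overestimation arithmetic for the dextrose and icodextrin examples above is a one-line calculation; a sketch using the reported specific gravities (volumes and UF values are the worked examples from the text):

```python
def reported_uf_litres(input_l, true_uf_l, specific_gravity):
    """Drained weight in kg read off as litres, minus the expected input volume."""
    drained_weight_kg = (input_l + true_uf_l) * specific_gravity
    return drained_weight_kg - input_l

# Dextrose CAPD day: 8 L input, 1 L true UF, SG 1.0136 g/ml
dex = reported_uf_litres(8, 1, 1.0136)
print(round(dex, 3))  # -> 1.122 (overestimate of about 0.122 L)

# Single icodextrin dwell: 2 L input, 0.4 L true UF, SG 1.026 g/ml
ico = reported_uf_litres(2, 0.4, 1.026)
print(round(ico, 3))  # -> 0.462 (overestimate of about 0.062 L)
```

With specific gravity set to exactly 1 g/ml the function returns the true UF, which is the implicit assumption behind weighing-based measurement.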
We picked the 2 L, lactate-buffered, 2.5% dextrose dialysate from four different brands just to get a rough idea of how big the difference is. Ideally, manufacturers should be encouraged to publish regular audits of overfill for each type of dialysate. Storage condition made a difference over a long storage duration As a general rule, storage close to room temperature (25 °C) is suggested for any medication without specific instructions. In real life, however, dialysate stored at a patient's home is likely to be kept in a non-air-conditioned room, so a wide range of storage conditions is possible worldwide. According to the storage instructions of most commercial dialysates, temperatures below 0 °C should be avoided. As for the upper limit of storage conditions, the instructions in some countries state that more than brief exposure to temperatures up to 40 °C should be avoided and recommend that the product be stored at room temperature (25 °C); in some countries the dialysate instructions do not mention it. Based on the current study, we definitely suggest storing dialysate bags in cool conditions as far as possible, and we give strong evidence for why more than brief exposure to temperatures up to 40 °C should be avoided. This could also be a problem in clinical trials, for example in studies meant to test new PD solutions: all the new solutions for the whole study may be produced in one batch and stored for further use throughout the study, which may last for a year. The control group, in most cases using the commercially available solution, is likely to use relatively fresh bags, as they are continuously produced. This difference may cause systematic error. For clinical trials, weighing dialysate bags before and after use is strongly suggested. The fact that temperature and humidity affect dialysate volume may also contribute to the center effect on ultrafiltration and sodium removal in multicenter observational studies or national registry studies.
PVC and non-PVC packaging showed differences in evaporation We also noticed the different evaporation characteristics of PVC and non-PVC packaging. So far, there are no clinical data on UF comparing PVC and non-PVC packaging; an ongoing clinical trial from China may give us some useful information [13]. The problems of storage duration and of differences in evaporation characteristics should be treated carefully. Neglecting the effect of specific gravity led to overestimation of UF in CAPD It is not surprising that the specific gravity of dialysate is slightly higher than that of pure water; however, how big this effect is had never been estimated. UF in manual exchanges was measured by weight and transformed to volume by dividing by 1 g/ml (the specific gravity of pure water), while in APD, UF was directly measured in volume by the APD machine. The study clearly demonstrated that the gap between weight and volume is big enough to give a systematic error when comparing UF between CAPD and APD. However, measuring dialysate volume manually is not feasible: it may cause even bigger measurement errors and also increase the risk of body fluid exposure. Weighing instead of volume measurement is still a reasonable way for daily practice. Mobile volume-measuring tools such as flowmeters may help with this problem in clinical trial scenarios. In the current study, we got the dialysate bags from the market. We tried our best to get the freshest dialysate available on the market for the study; the time from manufacture to baseline measurement still differed slightly between brands (from 43 to 105 days). In principle, the evaporation process starts at manufacture, but the effect should be small, and this is what patients actually get in real life. Secondly, overfill should theoretically differ between brands, types and even batches, and only one type of dialysate bag from one batch of each brand was picked in the current study.
However, the study was designed to establish that a significant difference does exist, rather than to focus on the exact size of the difference. One could argue that in real life dialysate is not likely to be stored in conditions as extreme as those in the current study. However, given that the dialysate bags may be stored in a patient's home rather than in a special medical storage facility, the bags are likely to be stored in a room without air conditioning, and in many regions of the world high room temperatures (over 30-35 °C) are not infrequent for long periods of the year. In conclusion, precise UF measurement in peritoneal dialysis is much more complicated than we thought. Storage condition and duration, as well as the type of dialysate package, have a significant impact on dialysate bag weight before use. Evaporation is likely the reason behind this. The fact that the specific gravity of dialysate drainage is higher than 1 g/ml overestimates UF in manual exchanges, which contributes to a systematic measurement error of UF in CAPD. The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Dombros N, Dratwa M, Feriani M, Gokal R, Heimburger O, Krediet R, et al. European best practice guidelines for peritoneal dialysis. 7 Adequacy of peritoneal dialysis. Nephrol Dial Transplant. 2005;20(Suppl 9):ix24–7. Woodrow G, Davies S. Renal association clinical practice guideline on peritoneal dialysis. Nephron Clin Pract. 2011;118(Suppl 1):c287–310. Woodrow G, Fan SL, Reid C, Denning J, Pyrah AN. Renal association clinical practice guideline on peritoneal dialysis in adults and children. BMC Nephrol. 2017;18:333. La Milia V, Pozzoni P, Crepaldi M, Locatelli F. Overfill of peritoneal dialysis bags as a cause of underestimation of ultrafiltration failure. Perit Dial Int. 2006;26:503–5. Bernardini J, Florio T, Bender F, Fried L, Piraino B. Methods to determine drain volume for peritoneal dialysis clearances. Perit Dial Int. 2004;24:182–5.
Rodriguez-Carmona A, Fontan MP. Sodium removal in patients undergoing CAPD and automated peritoneal dialysis. Perit Dial Int. 2002;22:705–13. Davies SJ. Overfill or ultrafiltration? We need to be clear. Perit Dial Int. 2006;26:449–51. Mahon A, Fan SL. Accuracy of ultrafiltration volume measurements for patients on peritoneal dialysis. Perit Dial Int. 2005;25:92–3. Rodriguez-Carmona A, Perez-Fontan M, Garca-Naveiro R, Villaverde P, Peteiro J. Compared time profiles of ultrafiltration, sodium removal, and renal function in incident CAPD and automated peritoneal dialysis patients. Am J Kidney Dis. 2004;44:132–45. Ortega O, Gallar P, Carreno A, Gutierrez M, Rodriguez I, Oliet A, et al. Peritoneal sodium mass removal in continuous ambulatory peritoneal dialysis and automated peritoneal dialysis: influence on blood pressure control. Am J Nephrol. 2001;21:189–93. McCafferty K, Fan SL. Are we underestimating the problem of ultrafiltration in peritoneal dialysis patients? Perit Dial Int. 2006;26:349–52. Maharjan SRS, Davenport A. Comparison of sodium removal in peritoneal dialysis patients treated by continuous ambulatory and automated peritoneal dialysis. J Nephrol. 2019;32:1011–9. Zhou J, Cao X, Lin H, Ni Z, He Y, Chen M, et al. Safety and effectiveness evaluation of a domestic peritoneal dialysis fluid packed in non-PVC bags: study protocol for a randomized controlled trial. Trials. 2015;16:592. I am grateful to Ms Fenglun Chen and Mr Yuntao Wang for their technical support and Dr Qiang Yao for her general support for the study. This is an investigator-initiated study, sponsored by Renal Care IIR from Baxter, China (NCT03864120).
Department of Nephrology, Renji Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai, China: Zanzhe Yu, Zhuqing Wang, Qin Wang, Minfang Zhang, Haijiao Jin, Li Ding, Hao Yan, Jiaying Huang, Yan Jin, Wei Fang & Zhaohui Ni. Faculty of Medicine and Health Sciences, Keele University, Keele, UK. Zanzhe Yu made a substantial contribution to the concept and design of the work. Zhuqing Wang and Jiaying Huang analysed the data. Qin Wang, Minfang Zhang, Simon J Davies, Wei Fang and Zhaohui Ni contributed to the acquisition and interpretation of data. Haijiao Jin, Li Ding, Hao Yan and Yan Jin approved the version to be published. The authors read and approved the final manuscript. Correspondence to Zhaohui Ni. The study was performed in accordance with the Declaration of Helsinki and received ethics approval from the Shanghai Jiaotong University School of Medicine, Renji Hospital Ethics Committee (2018)078. Written informed consent was obtained from each participant. Consent to publish: not applicable. The authors declare no competing interests. Additional file 1: Table 1. Whole dialysate bag weight according to brand and storage condition at different time points. Yu, Z., Wang, Z., Wang, Q. et al. Understand the difference between clinical measured ultrafiltration and real ultrafiltration in peritoneal dialysis. BMC Nephrol 22, 382 (2021). https://doi.org/10.1186/s12882-021-02589-3
\begin{definition}[Definition:Algorithm/Formal Specification] An '''algorithm''' can be implemented formally as a computational method $\left({Q, I, \Omega, f}\right)$ as follows: Let $A$ be a finite set of symbols. Let $A^*$ be the set of all collations on $A$: :$\left\{ {x_1 x_2 \cdots x_n: n \ge 0, \forall j: 1 \le j \le n: x_j \in A}\right\}$ The states of the computation are encoded so as to be represented by elements of $A^*$. Let $N \in \Z_{\ge 0}$. Let $Q$ be the set of all ordered pairs $\left({\sigma, j}\right)$ where $\sigma \in A^*, j \in \Z: 0 \le j \le N$. Let $I \subseteq Q$ such that $j = 0$. Let $\Omega \subseteq Q$ such that $j = N$. Let $\theta, \sigma \in A^*$. Then $\theta$ '''occurs in $\sigma$''' {{iff}} $\sigma$ has the form: :$\alpha \theta \omega$ where $\alpha, \omega \in A^*$. For each $j: 0 \le j < N$, let $\theta_j, \phi_j \in A^*$ and let $a_j, b_j \in \Z: 0 \le a_j, b_j \le N$. Let $f$ be a mapping of the following type: :$f \left({\left({\sigma, j}\right)}\right) = \begin{cases}\left({\sigma, a_j}\right) : & \theta_j \text { does not occur in } \sigma \\ \left({\alpha \phi_j \omega, b_j}\right) : & \alpha \text { is the shortest element of $A^*$ such that } \sigma = \alpha \theta_j \omega\end{cases}$ :$f \left({\left({\sigma, N}\right)}\right) = \left({\sigma, N}\right)$ \end{definition}
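Read operationally, the computational method above is a string-rewriting machine: in state $(\sigma, j)$, either $\theta_j$ is absent from $\sigma$ and control jumps to index $a_j$, or its leftmost occurrence is replaced by $\phi_j$ and control jumps to $b_j$, halting when $j = N$. A minimal Python sketch (the rule table and the unary-addition example are illustrative, not part of the definition):

```python
def run(sigma, rules, max_steps=10_000):
    """Execute the computational method (Q, I, Omega, f).

    rules[j] = (theta_j, phi_j, a_j, b_j); states are pairs (sigma, j);
    j == N == len(rules) is the terminal index.
    """
    N, j = len(rules), 0
    for _ in range(max_steps):
        if j == N:                  # f((sigma, N)) = (sigma, N): halt
            return sigma
        theta, phi, a, b = rules[j]
        pos = sigma.find(theta)     # leftmost occurrence = shortest alpha
        if pos < 0:
            j = a                   # theta_j does not occur in sigma
        else:
            sigma = sigma[:pos] + phi + sigma[pos + len(theta):]
            j = b
    raise RuntimeError("no terminal state reached within max_steps")

# Illustrative rule set: erase every '+', computing unary addition.
rules = [("+", "", 1, 0)]           # N = 1, so index 1 is terminal
print(run("111+11+1", rules))       # prints 111111  (3 + 2 + 1 in unary)
```

The `max_steps` bound is only a practical guard: the definition itself does not require that an $\Omega$-state is ever reached.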
\begin{document} \title[IBVP of the KdV]{Initial boundary value problem for Korteweg-de Vries equation: a review and open problems} \author[Capistrano--Filho]{Roberto A. Capistrano--Filho} \address{Departamento de Matem\'atica, Universidade Federal de Pernambuco 50740-545, Recife (PE), Brazil.} \email{[email protected]} \author[Sun]{Shu-Ming Sun} \address{Department of Mathematics, Virginia Tech, Blacksburg, VA 24061, United States} \email{[email protected]} \author[Zhang]{Bing-Yu Zhang} \address{Department of Mathematical Sciences, University of Cincinnati, Ohio 45221-0025, United States} \email{[email protected]} \subjclass[2010]{35Q53, 35Q35, 53C35} \keywords{KdV equation, Well-posedness, Non-homogeneous boundary value problem, Boundary integral operators, Initial boundary value problem} \begin{abstract} In the last 40 years the study of the initial boundary value problem for the Korteweg-de Vries equation has attracted the attention of researchers from various research fields. In this note we present a review of the main results on this topic and also introduce interesting open problems which still require attention from the mathematical point of view. \end{abstract} \maketitle \section{Introduction} In 1834 John Scott Russell, a Scottish naval engineer, was observing the Union Canal in Scotland when he unexpectedly witnessed a very special physical phenomenon that he called a wave of translation \cite{Russel1844}. He saw a particular wave traveling through this channel without losing its shape or velocity, and was so captivated by this event that he focused his attention on these waves for several years, not only building water wave tanks at his home and conducting practical and theoretical research into these types of waves, but also challenging the mathematical community to prove theoretically the existence of his solitary waves and to give an a priori demonstration a posteriori. A number of researchers took up Russell's challenge.
Boussinesq was the first to explain the existence of Scott Russell's solitary wave mathematically. He employed a variety of asymptotically equivalent equations to describe water waves in the small-amplitude, long-wave regime. In fact, in several works presented to the Paris Academy of Sciences in 1871 and 1872, Boussinesq addressed the problem of the persistence of solitary waves of permanent form on a fluid interface \cite{Boussinesq,Boussinesq1,Boussinesq2,Boussinesq3}. It is important to mention that in 1876, the English physicist Lord Rayleigh obtained a different result \cite{Rayleigh}. After Boussinesq's theory, the Dutch mathematician D. J. Korteweg and his student G. de Vries derived a nonlinear partial differential equation in 1895 that possesses a solution describing the phenomenon discovered by Russell, \begin{equation}\label{kdv}\frac{\partial\eta}{\partial{t}}=\frac{3}{2}\sqrt{\frac{g}{l}}\frac{\partial}{\partial{x}}\left(\frac{1}{2}\eta^2+\frac{2}{3}\alpha\eta+\frac{1}{3}\beta\frac{\partial^2\eta}{\partial{x^2}}\right), \end{equation} in which $\eta$ is the surface elevation above the equilibrium level, $l$ is an arbitrary constant related to the motion of the liquid, $g$ is the gravitational constant, and $\beta=\frac{l^3}{3}-\frac{Tl}{\rho g}$ with surface capillary tension $T$ and density $\rho$. The equation (\ref{kdv}) is called the Korteweg-de Vries equation in the literature, often abbreviated as the KdV equation, although it had appeared explicitly in Boussinesq's massive 1877 Memoir \cite{Boussinesq3}, as equation (283bis) in a footnote on page 360\footnote{The interested readers are referred to \cite{jager2006, pego1998} for the history and origins of the Korteweg-de Vries equation.}.
Eliminating the physical constants by the following change of variables $$t\to\frac{1}{2}\sqrt{\frac{g}{l\beta}}t, \quad x\to-\frac{x}{\beta}, \quad u\to-\left(\frac{1}{2}\eta+\frac{1}{3}\alpha\right)$$ one obtains the standard Korteweg-de Vries equation $$u_t + 6uu_x + u_{xxx}= 0,$$ which is now commonly accepted as a mathematical model for the unidirectional propagation of small-amplitude long waves in nonlinear dispersive systems. This note is concerned with the main results already obtained for the initial-boundary value problem (IBVP) of the KdV equation posed on a finite interval $(0,L)$. This problem was first treated by Bubnov in 1979 \cite{Bubnov79}, who considered the IBVP of the KdV equation on the finite interval $(0,1)$ with general boundary conditions. Since then, many authors have worked over the last 30 years on improving the existing results and presenting new ones. Our intention here is to present the main results in this field. We also give some further comments and, at the end, discuss open problems related to the IBVP of the KdV equation in a bounded domain. \section{A review of IBVP for KdV} Consider the IBVP of the KdV equation posed on a finite interval $(0,L)$ \begin{equation}\label{1.1} u_t+u_x+u_{xxx}+uu_x=0,\qquad u(x,0)=\phi(x), \qquad 0<x<L, \ t>0 \end{equation} with general non-homogeneous boundary conditions posed on the two ends of the interval $(0,L)$, \begin{equation} \label{1.2} B_1u=h_1(t), \qquad B_2 u= h_2 (t), \qquad B_3 u= h_3 (t) \qquad t>0, \end{equation} where \[ B_i u =\sum _{j=0}^2 \left(a_{ij} \partial ^j_x u(0,t) + b_{ij} \partial ^j_x u(L,t)\right), \qquad i=1,2,3,\] and $a_{ij}, \ b_{ij}$, $ j=0, 1,2, \ i=1,2,3,$ are real constants.
The following natural question arises: \emph{Under what assumptions on the coefficients $a_{ij}, \ b_{ij} $ in (\ref{1.2}), is the IBVP (\ref{1.1})-(\ref{1.2}) well-posed in the classical Sobolev space $H^s (0,L)$?} As mentioned before, Bubnov \cite{Bubnov79} studied the following IBVP of the KdV equation on the finite interval $(0,1)$: \begin{equation}\label{1.3} \begin{cases} u_t +uu_x+u_{xxx}=f, \quad u(x,0)=0, \quad x\in (0,1), \ t\in (0,T), \\ \alpha _1 u_{xx}(0,t)+\alpha _2 u_x (0,t)+\alpha _3 u(0,t)=0, \\ \beta_1 u_{xx} (1,t)+\beta _2 u_x (1,t)+ \beta _3 u(1,t) =0, \\ \chi _1 u_x (1,t)+ \chi _2 u(1,t)=0 \end{cases} \end{equation} and obtained the following result. \noindent {\bf Theorem $\mathcal{A}$ }\cite{Bubnov79}: \emph{Assume that \begin{equation}\label{1.4} \begin{cases} if \ \alpha _1 \beta_1 \chi _1 \ne 0, \ then \ F_1>0, \ F_2 >0, \\ if \ \beta _1\ne 0, \ \chi _1 \ne 0, \ \alpha _1 =0, \ then \ \alpha _2=0, \ F_2 >0, \ \alpha _3 \ne 0, \\ if \ \beta _1 =0, \ \chi _1 \ne 0, \ \alpha _1 \ne 0, \ then \ F_1 >0, \ F_3 \ne 0, \\ if \ \alpha _1=\beta _1 =0, \ \chi _1 \ne 0, \ then \ F_3\ne 0, \ \alpha _2 =0, \ \alpha _3 \ne 0, \\ if \ \beta _1 =0, \ \alpha _1 \ne 0, \ \chi _1 =0, \ then \ F_1 >0, \ F_3 \ne 0, \\ if \ \alpha _1=\beta _1 =\chi _1 =0, \ then \ \alpha _2 =0, \ \alpha _3 \ne 0, \ F_3 \ne 0, \end{cases} \end{equation} where \[ F_1=\frac{\alpha _3}{ \alpha _1} -\frac{\alpha _2^2}{2\alpha _1^2}, \ F_2 =\frac{\beta_2 \chi _2}{\beta _1 \chi _1} -\frac{\beta _3}{\beta _1} -\frac{\chi _2^2}{2\chi _1^2}, \ F_3 =\beta _2 \chi_2-\beta _1\chi _1 . 
\] For any given \[ f\in H^1_{loc}(0, \infty ; L^2 (0,1)) \ with \ \ f(x,0)=0, \] there exists a $T>0$ such that (\ref{1.3}) admits a unique solution \[ u\in L^2 (0, T; H^3 (0,1)) \ with \ u_t \in L^{\infty} (0,T; L^2 (0,1))\cap L^2 (0,T; H^1 (0,1)) .\] } The main tool used by Bubnov to prove his theorem is the following Kato-type smoothing property for the solution $u$ of the linear system associated with the IBVP (\ref{1.3}), \begin{equation}\label{1.5} \begin{cases} u_t +u_{xxx}=f, \quad u(x,0)=0, \quad x\in (0,1), \ t\in (0,T), \\ \alpha _1 u_{xx}(0,t)+\alpha _2 u_x (0,t)+\alpha _3 u(0,t)=0, \\ \beta_1 u_{xx} (1,t)+\beta _2 u_x (1,t)+ \beta _3 u(1,t) =0, \\ \chi _1 u_x (1,t)+ \chi _2 u(1,t)=0. \end{cases} \end{equation} Under the assumptions (\ref{1.4}): \begin{equation*}f\in L^2(0,T; L^2 (0,1))\implies u\in L^2 (0,T; H^1 (0,1))\cap L^{\infty} (0,T; L^2 (0,1)) \end{equation*} and \[ \|u\|_{L^2 (0,T; H^1 (0,1))}+ \|u\|_{L^{\infty} (0,T; L^2 (0,1))} \leq C\|f\|_{L^2 (0,T; L^2 (0,1))} \] where $C>0$ is a constant independent of $f$. In the past thirty years since the work of Bubnov, various boundary-value problems of the KdV equation have been studied.
In particular, the following three classes of IBVPs of the KdV equation on the finite interval $(0,L)$, \begin{equation}\label{1.7} \begin{cases} u_t +u_x +uu_x +u_{xxx}=0, \ u(x,0)=\phi (x), \quad x\in (0,L), \ t>0, \\ u(0,t)= h_1(t), \quad u(L,t) = h_2 (t), \quad u_x (L,t) =h_3 (t), \end{cases} \end{equation} \begin{equation}\label{1.8} \begin{cases} u_t +u_x +uu_x +u_{xxx}=0, \ u(x,0)=\phi (x), \quad x\in (0,L), \ t>0, \\ u(0,t)= h_1(t), \quad u(L,t) = h_2 (t), \quad u_{xx} (L,t) =h_3 (t), \end{cases} \end{equation} and \begin{equation}\label{1.8a} \begin{cases} u_t +u_x +uu_x +u_{xxx}=0, \ u(x,0)=\phi (x), \quad x\in (0,L), \ t>0, \\ u_{xx}(0,t)= h_1(t), \quad u_x(L,t) = h_2 (t), \quad u_{xx} (L,t) =h_3 (t), \end{cases} \end{equation} as well as the IBVPs of the KdV equation posed in a quarter plane have been intensively studied in the past twenty years (cf. \cite{BSZ03FiniteDomain,bsz-finite,ColGhi97,Fam83,Fam89, faminskii2004, faminskii2007, Holmer06,KrZh,KrIvZh,RiUsZh} and the references therein) following the rapid advances of the study of the pure initial value problem of the KdV equation posed on the whole line $\mathbb{R}$ or on the periodic domain $\mathbb{T}$ (cf. \cite{BS76,BS78,Bourgain93a,Bourgain93b,Bourgain97,ColKeel03,Fam83,Fam89,Fam99,KPV89,KPV91,KPV91-1,KPV93,KPV93b,KPV96} and the references therein). The nonhomogeneous IBVP (\ref{1.7}) was first studied by Faminskii in \cite{Fam83,Fam89} and was shown to be well-posed in the spaces $L^2 (0,L)$ and $H^3 (0,L)$. \noindent {\bf Theorem $\mathcal{B}$} \cite{Fam83,Fam89} \emph{Let $T>0$ be given. 
For any $\phi \in L^2 (0,L)$ and $\vec{h}= (h_1, h_2, h_3)$ belonging to \[ W^{\frac13, 1}(0,T)\cap L^{6+\epsilon} (0,T)\cap H^{\frac16} (0,T)\times W^{\frac56 +\epsilon, 1} (0,T)\cap H^{\frac13} (0,T)\times L^2 (0,T),\] the IBVP (\ref{1.7}) admits a unique solution $$u\in C([0,T]; L^2 (0,L))\cap L^2 (0,T; H^1 (0,L)).$$ Moreover, the solution map is continuous in the corresponding spaces.} \emph{In addition, if $\phi \in H^3 (0,L)$, $ h_1' \in W^{\frac13, 1}(0,T)\cap L^{6+\epsilon} (0,T)\cap H^{\frac16} (0,T)$, $h_2'\in W^{\frac56 +\epsilon, 1} (0,T)\cap H^{\frac13 } (0,T)$ and $ h_3' \in L^2 (0,T)$ with $$ \phi (0)=h_1 (0), \ \phi (L)=h_2 (0), \ \phi' (L) = h_3 (0),$$ then the solution $u\in C^1([0,T]; H^3 (0,L))\cap L^2 (0,T; H^4(0,L))$.} Bona \textit{et al.} in \cite{BSZ03FiniteDomain} showed that the IBVP (\ref{1.7}) is locally well-posed in the space $H^s (0,L)$ for any $s\geq 0$: \noindent {\bf Theorem $\mathcal{C}$} \cite{BSZ03FiniteDomain}: \emph{Let $s\geq 0$, $r>0$ and $T>0$ be given.} \emph{There exists a $T^*\in (0, T]$ such that for any $s$-compatible $\phi \in H^s (0,L)$ and \[ \vec{h}= (h_1, h_2, h_3) \in H^{\frac{s+1}{3}}(0,T) \times H^{\frac{s+1}{3}}(0,T)\times H^{\frac{s}{3}} (0,T) \] satisfying \[ \| \phi \| _{H^s (0,L)} + \| \vec{h}\|_{H^{\frac{s+1}{3}}(0,T) \times H^{\frac{s+1}{3}}(0,T)\times H^{\frac{s}{3}} (0,T)} \leq r,\] the IBVP (\ref{1.7}) admits a unique solution \[ u\in C([0,T^*]; H^s (0,L))\cap L^2 (0,T^*; H^{s+1} (0,L)).\] Moreover, the corresponding solution map is analytic in the corresponding spaces.} Holmer \cite{Holmer06} proved that the IBVP (\ref{1.7}) is locally well-posed in the space $H^s (0,L)$ for any $-\frac34 <s< \frac12$, and Bona \textit{et al.} in \cite{bsz-finite} showed that the IBVP (\ref{1.7}) is locally well-posed in $H^s (0,L)$ for any $s>-1$. As for the IBVP (\ref{1.8}), its study began with the work of Colin and Ghidaglia in the late 1990s \cite{ColGhi97,ColGhi97a,ColGhi01}. 
They obtained in \cite{ColGhi01} the following results. \begin{itemize} \item[(i)] \emph{Given $h_j\in C^1([0, \infty)), \ j=1,2,3$, and $\phi \in H^1 (0,L)$ satisfying $h_1(0)=\phi (0)$, there exists a $T>0$ such that the IBVP (\ref{1.8}) admits a solution (in the sense of distributions)} \[ u\in L^{\infty}(0,T; H^1(0,L))\cap C([0,T]; L^2 (0,L)) .\] \item[(ii)] \emph{The solution $u$ of the IBVP (\ref{1.8}) exists globally in $H^1(0,L)$ if its initial value $\phi \in H^1 (0,L)$ and its boundary values $h_j\in C^1([0, \infty )), \ j=1,2,3$, are all small.} \end{itemize} In addition, they showed that the associated linear IBVP \begin{equation}\label{1.9} \begin{cases} u_t+u_x+u_{xxx}=0,\qquad u(x,0)=\phi(x) & x\in (0,L), \ t\in \mathbb{R}^+ \\ u(0,t)=0,\ u_x(L,t)=0,\ u_{xx}(L,t)=0 \end{cases} \end{equation} possesses a strong smoothing property: \emph{ For any $\phi \in L^2 (0,L)$, the linear IBVP (\ref{1.9}) admits a unique solution $$u\in C(\mathbb{R}^+; L^2 (0,L))\cap L^2 _{loc} (\mathbb{R}^+; H^1 (0,L)).$$} Aided by this smoothing property, Colin and Ghidaglia showed that the homogeneous IBVP (\ref{1.8}) is locally well-posed in the space $L^2 (0,L)$. 
\noindent {\bf Theorem $\mathcal{D}$} \cite{ColGhi01} \emph{Assume $h_1=h_2=h_3\equiv 0$. Then for any given $\phi \in L^2 (0,L)$, there exists a $T>0$ such that the IBVP (\ref{1.8}) admits a unique weak solution $u\in C([0,T]; L^2 (0,L))\cap L^2 (0,T; H^1 (0,L))$.} Returning to the IBVP \eqref{1.8}, Rivas \textit{et al.} in \cite{RiUsZh} showed that its solutions exist globally as long as their initial values and the associated boundary data are small; they proved the following result: \noindent {\bf Theorem $\mathcal{E}$} \cite{RiUsZh} \emph{Let $s\geq 0$ with $s\neq\frac{2j-1}{2}$, $j=1,2,3,\dots$ There exist positive constants $\delta$ and $T$ such that for any $s$-compatible $\phi \in H^s (0,L)$ and $\vec{h}= (h_1, h_2, h_3)$ in the class $$B^s_{(t,t+T)}:=H^{\frac{s+1}{3}}(t,t+T) \times H^{\frac{s}{3}}(t,t+T)\times H^{\frac{s-1}{3}} (t,t+T) $$ with $ \|\phi \|_{H^s (0,L)} + \|\vec{h}\|_{B^s_{(t,t+T)}} \leq \delta$ and $\sup_{t\geq0}\|\vec{h}\|_{B^s_{(t,t+T)}}<\infty,$ the IBVP (\ref{1.8}) admits a unique solution \[ u\in Y^s_{(t,t+T)}:=C([t,t+T]; H^s (0,L))\cap L^2 (t,t+T; H^{s+1}(0,L))\] such that $\sup_{t\geq0}\|u\|_{Y^s_{(t,t+T)}}<\infty.$} More recently, Kramer \textit{et al.} in \cite{KrIvZh} showed that the IBVP \eqref{1.8} is locally well-posed in the classical Sobolev space $H^s(0,L)$ for $s>-\frac{3}{4}$, which provides a positive answer to one of the open questions of Colin and Ghidaglia \cite{ColGhi01}. Kramer and Zhang in \cite{KrZh} studied the following non-homogeneous boundary value problem, \begin{equation}\label{1.3-g} \begin{cases} u_t +uu_x+u_{xxx}=0, \quad u(x,0)=\phi (x), \quad x\in (0,1), \ t\in (0,T), \\ \alpha _1 u_{xx}(0,t)+\alpha _2 u_x (0,t)+\alpha _3 u(0,t)=h_1(t), \\ \beta_1 u_{xx} (1,t)+\beta _2 u_x (1,t)+ \beta _3 u(1,t) =h_2(t), \\ \chi _1 u_x (1,t)+ \chi _2 u(1,t)=h_3 (t). 
\end{cases} \end{equation} They showed that the IBVP (\ref{1.3-g}) is locally well-posed in the space $H^s (0,1)$ for any $s\geq 0$ under the assumption (\ref{1.4}). \noindent {\bf Theorem $\mathcal{F}$} \cite{KrZh} \emph{Let $s\geq 0$ and $T>0$ be given and assume that (\ref{1.4}) holds. For any $r>0$, there exists a $T^*\in (0,T]$ such that for any $s$-compatible $\phi \in H^s (0,1)$ and $h_j\in H^{\frac{s+1}{3}}(0,T), \ j=1,2,3$, with \[ \|\phi \|_{H^s (0,1)} + \|h_1\|_{H^{\frac{s+1}{3}}(0,T)} +\|h_2\|_{H^{\frac{s+1}{3}}(0,T)}+\|h_3\|_{H^{\frac{s+1}{3}}(0,T)} \leq r,\] the IBVP (\ref{1.3-g}) admits a unique solution \[ u\in C([0,T^*]; H^s (0,1))\cap L^2 (0,T^*; H^{s+1}(0,1)) .\] Moreover, the solution $u$ depends continuously on its initial data $\phi $ and the boundary values $h_j, \ j=1,2,3$, in the respective spaces.} Recently, Capistrano--Filho \textit{et al.} \cite{CCFZh} studied the IBVP \eqref{1.8a} and proved local well-posedness for this system. More precisely: \noindent {\bf Theorem $\mathcal{G}$} \cite{CCFZh} \emph{ Let $T>0$ and $s\geq0$. There exists a $T^*\in(0,T]$ such that for any $(\phi , \vec{h}) \in X_{ T}$, where \[ X_{T}:= H^s (0,L)\times H^{\frac{s-1}{3}}(0,T)\times H^{\frac{s}{3}}(0,T)\times H^{\frac{s-1}{3}}(0,T),\] the IBVP \eqref{1.8a} admits a unique solution $$ u\in C([0,T^*];H^s (0,L))\cap L^2(0,T^*;H^{s+1}(0,L)). $$ In addition, the solution $u$ possesses the hidden regularities $$\partial_x^lu\in L^{\infty}(0,L;H^{\frac{s+1-l}{3}}(0,T^*)) \quad \text{ for }\quad l=0,1,2,$$ and, moreover, the corresponding solution map is Lipschitz continuous.} Finally, in a recent work, Capistrano--Filho \textit{et al.} in \cite{CaSunZha2018} studied the well-posedness of the IBVP (\ref{1.1})-(\ref{1.2}). 
The authors proposed the following hypotheses on the coefficients $a_{ij}, \ b_{ij}$, $i=1,2,3$, $j=0,1,2$: \begin{itemize} \item[(A1)] $ a_{12}=a_{11}=0, \ a_{10}\ne0, \ b_{12}=b_{11}=b_{10}=0$; \item[(A2)] $a_{12}\ne0, \ b_{12}=0$; \item[(B1)] $b_{22}=b_{21}=0,\ b_{20}\ne0, \ a_{22}=a_{21}=a_{20} =0$; \item[(B2)] $b_{22}\ne 0, \ a_{22}=0 $; \item[(C)] $b_{32}=0, \ b_{31}\ne 0, \ a_{32}=a_{31}=0.$ \end{itemize} For $s\geq 0$, consider the sets $$H^s_0(0,L):=\{\phi(x)\in H^s(0,L): \phi^{(k)}(0)=\phi^{(k)}(L)=0\},$$ with $k=0,1,2, \dots , [s]$, and $$H^s_0(0,T]:=\{h(t)\in H^s(0,T):h^{(j)}(0)=0\},$$ for $j=0,1,\dots, [s]$. In addition, letting \begin{equation*} \begin{cases} {\mathcal{ H}}_1^s (0,T) := H_0^{\frac{s+1}{3}}(0,T]\times H_0^{\frac{s+1}{3}}(0,T]\times H_0^{\frac{s}{3}}(0,T], \\ {\mathcal{ H}}^s_2 (0,T):= H_0^{\frac{s+1}{3}}(0,T]\times H_0^{\frac{s-1}{3}}(0,T]\times H_0^{\frac{s}{3}}(0,T],\\ {\mathcal{ H}}^s_3 (0,T):= H_0^{\frac{s-1}{3}}(0,T]\times H_0^{\frac{s+1}{3}}(0,T]\times H_0^{\frac{s}{3}}(0,T], \\ {\mathcal{ H}}^s_4 (0,T):= H_0^{\frac{s-1}{3}}(0,T]\times H_0^{\frac{s-1}{3}}(0,T]\times H_0^{\frac{s}{3}}(0,T] \end{cases} \end{equation*} and \begin{equation*} \begin{cases} {\mathcal{ W}}_1^s (0,T) := H^{\frac{s+1}{3}}(0,T)\times H^{\frac{s+1}{3}}(0,T)\times H^{\frac{s}{3}}(0,T), \\ {\mathcal{ W}}^s_2 (0,T):= H^{\frac{s+1}{3}}(0,T)\times H^{\frac{s-1}{3}}(0,T)\times H^{\frac{s}{3}}(0,T),\\ {\mathcal{ W}}^s_3 (0,T):= H^{\frac{s-1}{3}}(0,T)\times H^{\frac{s+1}{3}}(0,T)\times H^{\frac{s}{3}}(0,T), \\ {\mathcal{ W} }^s_4 (0,T):= H^{\frac{s-1}{3}}(0,T)\times H^{\frac{s-1}{3}}(0,T)\times H^{\frac{s}{3}}(0,T), \end{cases} \end{equation*} they proved the following well-posedness results for the IBVP (\ref{1.1})-(\ref{1.2}): \noindent {\bf Theorem $\mathcal{H}$} \cite{CaSunZha2018} \emph{ Let $s\geq 0$ with $s\neq\frac{2j-1}{2}$, $j=1,2,3,\dots$, and $T>0$ be given. 
If one of the assumptions below is satisfied, \begin{itemize} \item[(i)] (A1), (B1) and (C) hold, \item[(ii)] (A1), (B2) and (C) hold, \item[(iii)] (A2), (B1) and (C) hold, \item[(iv)] (A2), (B2) and (C) hold, \end{itemize} then, for any $r>0$, there exists a $T^*\in (0, T]$ such that for any $$(\phi , \vec{h})\in H^s_0 (0,L)\times{\mathcal H}^s_1(0,T)$$ satisfying $\|(\phi, \vec{h})\|_{L^2 (0,L)\times{\mathcal H}^0_1(0,T)} \leq r$, the IBVP (\ref{1.1})-(\ref{1.2}) admits a solution $$u\in C([0,T^*]; H^s (0,L))\cap L^2 (0,T^*;H^{s+1}(0,L))$$ possessing the hidden regularity (the sharp Kato smoothing properties) $$\partial_x^lu\in L^{\infty}(0,L;H^{\frac{s+1-l}{3}}(0,T^*)) \quad \text{ for }\quad l=0,1,2.$$ Moreover, the corresponding solution map is analytic.} \section{Further comments} Before presenting the main ideas of the proof of Theorem $\mathcal{H}$, let us introduce the following boundary operators $\mathcal{B}_k, \ k=1,2,3,4$, defined as $\mathcal{B}_k = \mathcal{B}_{k,0}+ \mathcal{B}_{k,1}$ with \[ \mathcal{B}_{1,0}v:= ( v(0,t), v (L,t), v_{x}(L,t)), \quad \mathcal{B}_{2,0}v:=( v(0,t), v_{x}(L,t), v_{xx}(L,t) ),\] \[ \mathcal{B}_{3,0}v:=( v_{xx}(0,t), v(L,t), v_{x}(L,t)), \quad \mathcal{B}_{4,0}v:=( v_{xx}(0,t), v_{x}(L,t), v_{xx}(L,t))\] and \begin{align*} & \mathcal{B}_{1,1}v:=\left (0,\ 0, \ 0\right ),\\ &\mathcal{B}_{2,1}v:=\left (0,\ b_{30}v(L,t),\ a_{21} v_x (0,t) + b_{20} v(L,t)\right),\\ &\mathcal{B}_{3,1}v:=\left(a_{10} v(0,t) +a_{11} v_x (0,t),\ 0,\ a_{30}v(0,t) \right),\\ & \mathcal{B}_{4,1}v:=\left( \sum _{j=0}^1 a_{1j} \partial ^j_x v(0,t)+ b_{10} v(L,t),\ a_{30}v(0,t)+b_{30}v(L,t), \ \sum _{j=0}^1 a_{2j} \partial ^j_x v(0,t) + b_{20} v(L,t)\right). 
\end{align*} Thus, the assumptions imposed on the boundary conditions in Theorem $\mathcal{H}$ can be reformulated as follows: \begin{itemize} \item[(i)] $((A1), (B1), (C)) \Leftrightarrow \mathcal{B}_{1}v= \vec{h},$ \item[(ii)] $((A1), (B2), (C)) \Leftrightarrow \mathcal{B}_{2}v= \vec{h},$ \item[(iii)] $((A2), (B1), (C)) \Leftrightarrow \mathcal{B}_{3}v= \vec{h},$ \item[(iv)] $((A2), (B2), (C)) \Leftrightarrow \mathcal{B}_{4}v= \vec{h}.$ \end{itemize} In \cite{CaSunZha2018}, to prove Theorem $\mathcal{H}$, the authors first studied the linear IBVP \begin{equation}\label{y-1} \begin{cases} u_t +u_{xxx} +\delta_k u=f, \quad x\in (0,L), \quad t >0,\\ u(x,0)= \phi (x), \\ \mathcal{B}_{k,0} u= \vec{h}, \end{cases} \end{equation} for $k=1,2,3,4$, to establish all the linear estimates needed for dealing with the nonlinear IBVP (\ref{1.1})-(\ref{1.2}). Here $\delta _k=0$ for $k=1,2,3$ and $\delta _4=1$. After that, they considered the nonlinear map $\Gamma $ defined by the following IBVP: \begin{equation}\label{y-2} \begin{cases} u_t +u_{xxx} +\delta_ku= -v_x -vv_x +\delta_kv , \quad x\in (0,L), \quad t >0,\\u(x,0)= \phi (x), \\ \mathcal{B}_{k,0} u= \vec{h}-\mathcal{B}_{k,1} v, \end{cases} \end{equation} and showed that $\Gamma$ is a contraction in an appropriate space, whose fixed point is the desired solution of the nonlinear IBVP (\ref{1.1})-(\ref{1.2}), by using the sharp Kato smoothing property of the solution of the IBVP (\ref{y-1}). The main point here is to demonstrate the smoothing properties for solutions of the IBVP (\ref{y-1}). To overcome this difficulty, Capistrano--Filho \textit{et al.} in \cite{CaSunZha2018} needed to study the following IBVP \begin{equation}\label{y-3} \begin{cases} u_t +u_{xxx}+\delta _ku=0, \quad x\in (0,L), \quad t >0,\\ u(x,0)= 0, \\ \mathcal{B}_{k,0} u= \vec{h}. 
\end{cases} \end{equation} The corresponding solution map $\vec{h} \to u$ is called the \textit{boundary integral operator}, denoted by ${\mathcal W}_{bdr} ^{(k)}$. An explicit representation formula is given for this boundary integral operator, which plays an important role in showing that the solution of the IBVP (\ref{y-3}) possesses the smoothing properties. The needed smoothing properties for solutions of the IBVP (\ref{y-1}) then follow from the smoothing properties for solutions of the IBVP (\ref{y-3}) and the well-known sharp Kato smoothing properties for solutions of the Cauchy problem \[ u_t +u_{xxx} +\delta _ku=0, \quad u(x,0)=\psi (x), \quad x, \ t\in \mathbb{R}.\] Finally, the following comments are in order: \begin{remark} The temporal regularity conditions imposed on the boundary values $\vec{h}$ in Theorem $\mathcal{H}$ are optimal (cf. \cite{BSZ02,BSZ04,BSZ06}). \end{remark} \begin{remark} As a comparison, note that the assumptions of Theorem $\mathcal{A}$ are equivalent to one of the following boundary conditions imposed on the equation in (\ref{1.3}): a) $$u(0,t)=0, \quad u(1,t)=0, \quad u_x (1,t)=0;$$ b) $$u_{xx}(0,t)+au_x(0,t)+bu(0,t)=0, \quad u_x(1,t)=0, \quad u(1,t)=0$$ with \begin{equation}\label{z-1} a>b^2/2;\end{equation} c) $$u(0,t)=0, \quad u_{xx}(1,t)+au_x(1,t)+bu(1,t)=0, \quad u_x(1,t)+cu(1,t)=0,$$ with \begin{equation}\label{z-2} ac>b-c^2/2;\end{equation} d) $$u_{xx}(0,t)+a_1u_x(0,t)+a_2 u(0,t)=0, $$ $$ u_{xx}(1,t)+b_1u_x(1,t)+b_2 u(1,t)=0,$$ and $$u_x(1,t)+cu(1,t)=0,$$ with \begin{equation} \label{z-3} a_2 > a_1^2/2, \quad b_1c > b_2 -c^2/2 .\end{equation} It follows from Theorem $\mathcal{H}$ that conditions (\ref{z-1}), (\ref{z-2}) and (\ref{z-3}) in Theorem $\mathcal{A}$ can be removed. 
\end{remark} \section{Open problems} While the results reported in this paper provide a significant improvement in the theory of initial boundary value problems of the KdV equation on a finite interval, there are still many questions to be addressed for the following IBVP: \begin{equation}\label{4.1} \begin{cases} u_t+u_x+u_{xxx}+uu_x=0, \qquad 0<x<L, \ t>0,\\ u(x,0)=\phi(x),\\ {\mathcal B}_ku=\vec{h}. \end{cases} \end{equation} Here we list a few of them which are most interesting to us. \noindent$\bullet$ {\em Is the IBVP (\ref{4.1}) globally well-posed in the space $H^s (0,L)$ for some $s\geq 0$ or, equivalently, does any solution of the IBVP (\ref{4.1}) blow up in some space $H^s (0,L)$ in finite time?} It is not clear whether the IBVP (\ref{4.1}) is globally well-posed, even in the case $\vec{h}\equiv 0 $. It follows from Theorem $\mathcal{H}$ (see \cite{CaSunZha2018}) that a solution $u$ of the IBVP (\ref{4.1}) blows up in the space $H^s (0,L)$ for some $s\geq 0$ at a finite time $T>0$ if and only if \[ \lim_{t\to T^-}\| u(\cdot, t) \|_{L^2 (0,L)} =+\infty .\] Consequently, it suffices to establish a global a priori $L^2 (0,L)$ estimate \begin{equation}\label{priori} \sup _{0\leq t< \infty} \|u(\cdot, t)\|_{L^2 (0,L)} < +\infty \end{equation} for solutions of the IBVP (\ref{4.1}) in order to obtain the global well-posedness of the IBVP (\ref{4.1}) in the space $H^s (0,L)$ for any $s\geq 0$. However, estimate (\ref{priori}) is known to hold only in one case, \[ \begin{cases} u_t+u_x +uu_x + u_{xxx}=f, \qquad 0<x<L, \ t>0,\\ u(x,0)=\phi(x),\\ u(0,t)=h_1 (t), \ u(L,t)= h_2 (t), \ u_x (L,t) =h_3 (t). \end{cases} \] \noindent$\bullet$ {\em Is the IBVP (\ref{4.1}) well-posed in the space $H^s (0,L)$ for some $s\leq -1$?} Theorem $\mathcal{H}$ ensures that the IBVP (\ref{4.1}) is locally well-posed in the space $H^s (0,L)$ for any $s\geq 0$. Theorem $\mathcal{H}$ can also be extended to the case $-1< s\leq 0$ using the same approach developed in \cite{bsz-finite}. 
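To see why the bound (\ref{priori}) holds in the one case mentioned above, one can carry out a standard energy computation; the following sketch assumes homogeneous data ($f=0$, $h_1=h_2=h_3=0$) and smooth solutions. Multiplying the equation by $u$ and integrating by parts over $(0,L)$ gives
\[
\frac{d}{dt}\int_0^L \frac{u^2}{2}\,dx
=-\Big[\frac{u^2}{2}+\frac{u^3}{3}+u\,u_{xx}-\frac{u_x^2}{2}\Big]_0^L
=-\frac{1}{2}\,u_x^2(0,t)\le 0,
\]
since the boundary conditions $u(0,t)=u(L,t)=u_x(L,t)=0$ kill every boundary term except $-\tfrac12 u_x^2(0,t)$; hence $\|u(\cdot,t)\|_{L^2(0,L)}$ is nonincreasing in time.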
For the pure initial value problem (IVP) of the KdV equation posed on the whole line $\mathbb{R}$ or on the torus $\mathbb{T}$, \begin{equation} \label{p-1} u_t +uu_x +u_{xxx}=0, \quad u(x,0)= \phi (x), \quad x, \ t\in \mathbb{R} \end{equation} and \begin{equation} \label{p-2} u_t +uu_x +u_{xxx}=0, \quad u(x,0)= \phi (x), \quad x \in \mathbb{T}, \ t\in \mathbb{R} , \end{equation} it is well known that the IVP (\ref{p-1}) is well-posed in the space $H^s (\mathbb{R})$ for any $s\geq -\frac34$ and is (conditionally) ill-posed in the space $H^s (\mathbb{R}) $ for any $s< -\frac34$, in the sense that the corresponding solution map cannot be uniformly continuous. As for the IVP (\ref{p-2}), it is well-posed in the space $H^s (\mathbb{T}) $ for any $s\geq -1$. The solution map corresponding to the IVP (\ref{p-2}) is real analytic when $s>-\frac12$, but only continuous (not even locally uniformly continuous) when $-1\leq s<-\frac12$. Whether the IVP (\ref{p-1}) is well-posed in the space $H^s(\mathbb{R})$ for some $s<-\frac34$, or the IVP (\ref{p-2}) is well-posed in the space $H^s (\mathbb{T})$ for some $s< -1$, is still an open question. By contrast, the IVP of the KdV-Burgers equation \[ u_t +uu_x +u_{xxx}-u_{xx}=0, \quad u(x,0)=\phi (x), \quad x\in \mathbb{R}, \ t>0, \] is known to be well-posed in the space $H^s(\mathbb{R}) $ for any $s\geq -1$ and to be ill-posed for any $s<-1$. We conjecture that the IBVP (\ref{4.1}) is ill-posed in the space $H^s (0,L)$ for any $s<-1$. 
Finally, still concerning the well-posedness problem: while the approach developed recently in \cite{CaSunZha2018} handles nonhomogeneous boundary value problems of the KdV equation on $(0,L)$ with quite general boundary conditions, there are still some boundary value problems of the KdV equation for which the approach does not work, for example \begin{equation} \label{p-3} \begin{cases} u_t +uu_x +u_{xxx}=0, \quad x\in (0,L),\\ u(x,0)= \phi (x), \\ u(0,t)=u(L,t), \ u_x (0,t)=u_x (L,t), \ u_{xx} (0,t)= u_{xx}(L,t), \end{cases} \end{equation} and \begin{equation} \label{p-4} \begin{cases} u_t +uu_x +u_{xxx}=0, \quad x\in (0,L),\\ u(x,0)= \phi (x), \\ u(0,t)=0, \ u (L,t)=0, \ u_{x} (0,t)= u_{x}(L,t) . \end{cases} \end{equation} A common feature of these two boundary value problems is that the $L^2$-norm of their solutions is conserved: \[ \int ^L_0 u^2 (x,t)\, dx =\int ^L_0 \phi ^2 (x)\, dx \qquad \mbox{for any $t\in \mathbb{R}$}.\] The IBVP (\ref{p-3}) is equivalent to the IVP (\ref{p-2}), which was shown by Kato \cite{Kato79,Kato83} to be well-posed in the space $H^s (\mathbb{T})$ when $s>\frac32$ as early as the late 1970s. Its well-posedness in the space $H^s (\mathbb{T})$ when $s\leq \frac32$, however, was established only in 1993 in the celebrated work of Bourgain \cite{Bourgain93a,Bourgain93b}. As for the IBVP (\ref{p-4}), its associated linear problem \begin{equation*} \begin{cases} u_t +u_{xxx}=0, \quad x\in (0,L),\\ u(x,0)= \phi (x), \\ u(0,t)=0, \ u (L,t)=0, \ u_{x} (0,t)= u_{x}(L,t), \end{cases} \end{equation*} has been shown by Cerpa (see, for instance, \cite{cerpatut}) to be well-posed in the space $H^s (0,L)$ forward and backward in time. However, the following problem is still open: \noindent$\bullet$ {\em Is the nonlinear IBVP (\ref{p-4}) well-posed in the space $H^s (0,L)$ for some $s$?} 
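The $L^2$-conservation claimed above for (\ref{p-3}) and (\ref{p-4}) follows from a short computation; as a sketch (for smooth solutions), multiplying the equation by $u$ and integrating over $(0,L)$ gives
\[
\frac{d}{dt}\int_0^L \frac{u^2}{2}\,dx
=-\Big[\frac{u^3}{3}+u\,u_{xx}-\frac{u_x^2}{2}\Big]_0^L ,
\]
and the bracket vanishes: for (\ref{p-3}) by periodicity of $u$, $u_x$ and $u_{xx}$, and for (\ref{p-4}) because $u(0,t)=u(L,t)=0$ kills the first two terms while $u_x(0,t)=u_x(L,t)$ kills the last one.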
\subsection{Control theory} Control theory for the KdV equation has been extensively studied in the past two decades, and the interested reader is referred to \cite{cerpatut} for an overview of the subject. Several authors have addressed the control theory of the IBVP (see, e.g., \cite{CCFZh,CerRivZha,Rosier97}), working on the following four problems related to the IBVP \eqref{4.1}: \begin{equation*}\label{4.4} \mathcal{B}_{1,0}v:=\begin{cases} u(0,t)=h_{1,1}(t),& t\geq 0,\\ u (L,t)=h_{2,1}(t), & t\geq 0, \\ u_{x}(L,t)=h_{3,1}(t),& t\geq 0, \end{cases} \quad \mathcal{B}_{2,0}v:=\begin{cases} u(0,t)=h_{1,2}(t),& t\geq 0, \\ u_{x}(L,t)=h_{2,2}(t),& t\geq 0, \\ u_{xx}(L,t)=h_{3,2}(t),& t\geq 0, \end{cases} \end{equation*} \begin{equation*}\label{4.6} \mathcal{B}_{3,0}u:=\begin{cases} u_{xx}(0,t)=h_{1,3}(t),&t\geq 0,\\ u(L,t)=h_{2,3}(t), &t\geq 0,\\ u_{x}(L,t)=h_{3,3}(t),&t\geq 0, \end{cases} \quad \mathcal{B}_{4,0}u:=\begin{cases} u_{xx}(0,t)=h_{1,4}(t), & t\geq 0,\\ u_{x}(L,t)=h_{2,4}(t), & t\geq 0, \\ u_{xx}(L,t)=h_{3,4}(t),& t\geq 0. \end{cases} \end{equation*} The first class of problems, \eqref{4.1}--$\mathcal{B}_{1,0}v$, was studied by Rosier \cite{Rosier97} considering only the control input $h_{3,1}$ (i.e. $h_{1,1}=h_{2,1}=0$). It was shown in \cite{Rosier97} that the exact controllability of the linearized system holds in $L^2(0,L)$ if and only if $L$ does not belong to the following countable set of critical lengths: \begin{equation*} \mathcal{N}:=\left\{ \frac{2\pi}{\sqrt{3}}\sqrt{k^{2}+kl+l^{2}} \,:k,\,l\,\in\mathbb{N}^{\ast}\right\}. \end{equation*} The analysis developed in \cite{Rosier97} shows that when the linearized system is controllable, the same is true for the nonlinear system. Note that the converse is false: as proved in \cite{cerpa,cerpa1,coron}, the (nonlinear) KdV equation is controllable even when $L$ is a critical length and the linearized system is not controllable. 
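For intuition about the critical set $\mathcal{N}$, its smallest elements are easy to enumerate; the short script below (illustrative only, not part of any cited work) lists the first few critical lengths.

```python
import math

def critical_lengths(max_kl):
    """Critical lengths L = (2*pi/sqrt(3)) * sqrt(k^2 + k*l + l^2) for 1 <= k, l <= max_kl."""
    lengths = {
        2 * math.pi / math.sqrt(3) * math.sqrt(k * k + k * l + l * l)
        for k in range(1, max_kl + 1)
        for l in range(1, max_kl + 1)
    }
    return sorted(lengths)

# The smallest critical length is 2*pi, attained at k = l = 1.
print([round(L, 4) for L in critical_lengths(3)])
```

Note that at the smallest critical length $L=2\pi$ the formula reduces to $\frac{2\pi}{\sqrt3}\sqrt{3}=2\pi$, which is a quick sanity check on the enumeration.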
The existence of a discrete set of critical lengths for which the exact controllability of the linearized equation fails was also noticed by Glass and Guerrero in \cite{GG1} when $h_{2,1}$ is taken as control input (i.e. $h_{1,1}=h_{3,1}=0$). It is also worth mentioning the results by Rosier \cite{Rosier2} and Glass and Guerrero \cite{GG}, in which $h_{1,1}$ is taken as control input (i.e. $h_{2,1}=h_{3,1}=0$). They proved that system \eqref{4.1} with boundary conditions $\mathcal{B}_{1,0}v$ is then null controllable, but not exactly controllable, because of the strong smoothing effect. Recently, Cerpa \textit{et al.} in \cite{CerRivZha} proved results similar to those obtained by Rosier \cite{Rosier97} for the system \eqref{4.1} with boundary conditions $\mathcal{B}_{2,0}v$. More precisely, the authors considered the system with one, two or three controls. In addition, using the well-posedness properties proved by Kramer \textit{et al.} in \cite{KrIvZh}, they showed that the controls $h_{i,2}$, $i=1,2,3$, belong to sharp spaces, and that the exact controllability of the linear system associated to \eqref{4.1} holds if and only if $L$ does not belong to the following countable set of critical lengths: \begin{equation} \mathcal{F}:=\left\{ L\in\mathbb{R}^+: L^2=-(a^2+ab+b^2) \text{ with } a,b\in\mathbb{C} \text{ satisfying } \frac{e^a}{a^2}=\frac{e^b}{b^2}=\frac{e^{-(a+b)}}{(a+b)^2}\right\}. \label{critical1} \end{equation} Moreover, they showed that the nonlinear system \eqref{4.1} with boundary conditions $\mathcal{B}_{2,0}v$ is locally exactly controllable \textit{via} the contraction mapping principle. Caicedo \textit{et al.} in \cite{CCFZh} proved controllability results for system \eqref{1.8a}, that is, system \eqref{4.1} with boundary conditions $\mathcal{B}_{4,0}v$. Naturally, they used the same approaches that have worked effectively for system \eqref{4.1} with boundary conditions $\mathcal{B}_{1,0}v$ and $\mathcal{B}_{2,0}v$. 
In particular, when only $h_{2,4}(t)$ is used as a control input, they showed that the system \eqref{4.1} with boundary conditions $\mathcal{B}_{4,0}v$ is \textit{locally exactly controllable} as long as \begin{equation} L\notin\mathcal{R}:=\mathcal{N}\cup\left\{k\pi:k\in\mathbb{N}^{\ast}\right\}. \label{critical_new} \end{equation} Thus, with respect to the control issue, a natural and interesting open problem arises here: \noindent$\bullet$ {\em Is the IBVP \eqref{4.1}, with general boundary conditions, controllable?} \end{document}
How do you evaluate $\int_{0}^{1} \frac{3x^3-x^2+2x-4}{\sqrt{x^2-3x+2}} \, dx$? Saw this problem on a Facebook meme that said the PIN code to his ATM debit card is the solution to the following problem: $$\int_{0}^{1} \frac{3x^3-x^2+2x-4}{\sqrt{x^2-3x+2}} \, dx$$ I was trying to see how we could break this up into easier integrals, but nothing comes to mind at first glance. Perhaps complex integration is possible? Tags: calculus, integration, complex-analysis, definite-integrals $\begingroup$ Since $x-1$ is a common factor of the numerator and of $x^2-3x+2=(x-1)(x-2)$, one can choose the substitution $t=\dfrac{x-1}{x-2}$. $\endgroup$ – lab bhattacharjee $\begingroup$ To express the rest of the numerator in terms of $t$: from $$t=\dfrac{x-1}{x-2}=1+\frac{1}{x-2}$$ one gets $t-1=\frac{1}{x-2}$, i.e. $x-2=\frac{1}{t-1}$ and $x =\frac{2t-1}{t-1}$. $\endgroup$ – N. S. As an alternative to the substitution described in the comments, the anti-derivative of expressions of the form $P(x)/\sqrt{ax^2+bx+c}$, $(a\ne 0)$, where $P(x)$ is a non-constant polynomial, is: $$\int \frac{P(x)}{\sqrt{ax^2+bx+c}}\mathrm{d}x=Q(x)\sqrt{ax^2+bx+c}+\lambda\int\frac{1}{\sqrt{ax^2+bx+c}}\mathrm{d}x $$ where $Q(x)$ is a polynomial with undetermined coefficients of degree one less than $P(x)$ and $\lambda$ is an unknown number. To find the coefficients, differentiate both sides, get rid of the square root, and equate coefficients of the powers of $x$. 
In this case: $$\int \frac{3x^3-x^2+2x-4}{\sqrt{x^2-3x+2}}\mathrm{d}x=\left(x^2+\frac{13}{4}x+\frac{101}{8}\right)\sqrt{x^2-3x+2}+\frac{135}{16}\int \frac{1}{\sqrt{x^2-3x+2}}\mathrm{d}x$$ and $$\int \frac{1}{\sqrt{x^2-3x+2}}\mathrm{d}x=\int \frac{1}{\sqrt{\left(x-\frac{3}{2}\right)^2-\frac{1}{4}}}\mathrm{d}x=\ln\left|x-\frac{3}{2}+\sqrt{x^2-3x+2}\right|+C $$ Update: In your case, $P(x)$, the polynomial in the numerator, has degree $3$, so $Q(x)$ has degree $2$: $Q(x)=Ax^2+Bx+C$. So you have $$\int \frac{3x^3-x^2+2x-4}{\sqrt{x^2-3x+2}}\mathrm{d}x=\left(Ax^2+Bx+C\right)\sqrt{x^2-3x+2}+\lambda\int \frac{1}{\sqrt{x^2-3x+2}}\mathrm{d}x$$ and after differentiation: $$\frac{3x^3-x^2+2x-4}{\sqrt{x^2-3x+2}}=(2Ax+B)\sqrt{x^2-3x+2}+(Ax^2+Bx+C)\frac{2x-3}{2\sqrt{x^2-3x+2}}+\frac{\lambda}{\sqrt{x^2-3x+2}} $$ Now, multiply both sides by the square root to remove it, and equate coefficients of the powers of $x$. – bjorn93 Sage solves the integral in no time. The indefinite integral is $$ \sqrt{x^2 - 3x + 2}\left(x^2 + \frac{13}4 x + \frac{101}8\right) + \frac{135}{16}\log\left(3 - 2x - 2\sqrt{x^2 - 3x + 2}\right).$$ And the definite integral is $\frac{135}{16}\log(3 + 2\sqrt 2)-\frac{101}{8}\sqrt 2\approx -2.981267$. What kind of PIN code is that? – WhatsUp $\begingroup$ As I said in the answer, I solved it with Sage. 
And I didn't expect this many upvotes, as I really didn't do much work myself, other than simplifying and formatting the output... $\endgroup$ – WhatsUp Substitute $2x-3=-\cosh t$, i.e. $x=\dfrac{3-\cosh t}2$, so that $\mathrm{d}x=-\dfrac{\sinh t}2\,\mathrm{d}t$ and $\sqrt{x^2-3x+2}=\dfrac{\sinh t}2$ on $(0,1)$. Swapping the integration limits to absorb the minus sign, $$\int_0^1\frac{3x^3-x^2+2x-4}{\sqrt{x^2-3x+2}}dx=\int_0^{\text{arcosh } 3}(3x^3-x^2+2x-4)\frac{\dfrac{\sinh t}2}{\dfrac{\sinh t}2}dt.$$ $$3x^3-x^2+2x-4=\frac{-3\cosh^3t+25\cosh^2t-77\cosh t+55}8\\ =-\frac{3}{32}\cosh 3t+\frac{25}{16}\cosh 2t-\frac{317}{32}\cosh t+\frac{135}{16}.$$ The rest is routine work. – Yves Daoust 
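The closed-form value above can be cross-checked numerically. The substitution $x=1-s^2$ removes the inverse-square-root singularity at $x=1$ (since $x^2-3x+2=(1-x)(2-x)$), after which a plain composite Simpson rule converges quickly; this is only a verification sketch:

```python
import math

def integrand(s):
    # After x = 1 - s^2: dx = -2s ds and sqrt((1-x)(2-x)) = s*sqrt(1+s^2),
    # so the integrand becomes 2*P(1-s^2)/sqrt(1+s^2), smooth on [0, 1].
    x = 1 - s * s
    p = 3 * x**3 - x**2 + 2 * x - 4
    return 2 * p / math.sqrt(1 + s * s)

def simpson(g, a, b, n=2000):
    # Composite Simpson rule; n must be even.
    h = (b - a) / n
    total = g(a) + g(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * g(a + i * h)
    return total * h / 3

numeric = simpson(integrand, 0.0, 1.0)
closed_form = 135 / 16 * math.log(3 + 2 * math.sqrt(2)) - 101 / 8 * math.sqrt(2)
print(numeric, closed_form)  # both approximately -2.981267
```

The two printed values agree to many digits, consistent with the Sage answer and the corrected hyperbolic expansion.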
Differentially mutated subnetworks discovery. Morteza Chalabi Hajkarim, Eli Upfal & Fabio Vandin (ORCID: orcid.org/0000-0003-2244-2320). We study the problem of identifying differentially mutated subnetworks of a large gene–gene interaction network, that is, subnetworks that display a significant difference in mutation frequency in two sets of cancer samples. We formally define the associated computational problem and show that the problem is NP-hard. We propose a novel and efficient algorithm, called DAMOKLE, to identify differentially mutated subnetworks given genome-wide mutation data for two sets of cancer samples. We prove that DAMOKLE identifies subnetworks with statistically significant difference in mutation frequency when the data comes from a reasonable generative model, provided enough samples are available. We test DAMOKLE on simulated and real data, showing that DAMOKLE does indeed find subnetworks with significant differences in mutation frequency and that it provides novel insights into the molecular mechanisms of the disease not revealed by standard methods. The analysis of molecular measurements from large collections of cancer samples has revolutionized our understanding of the processes leading to a tumour through somatic mutations, changes to the DNA that appear during the lifetime of an individual [1]. One of the most important aspects of cancer revealed by recent large cancer studies is inter-tumour genetic heterogeneity: each tumour presents hundreds to thousands of mutations, and no two tumours harbour the same set of DNA mutations [2]. One of the fundamental problems in the analysis of somatic mutations is the identification of the handful of driver mutations (i.e., mutations related to the disease) of each tumour, detecting them among the thousands or tens of thousands that are present in each tumour genome [3]. 
Inter-tumour heterogeneity renders the identification of driver mutations, or of driver genes (genes containing driver mutations), extremely difficult, since only a few genes are mutated in a relatively large fraction of samples while most genes are mutated in a low fraction of samples in a cancer cohort [4]. Recently, several analyses (e.g., [5, 6]) have shown that interaction networks provide useful information to discover driver genes by identifying groups of interacting genes, called pathways, in which each gene is mutated at relatively low frequency while the entire group has one or more mutations in a significantly large fraction of all samples. Several network-based methods have been developed to identify groups of interacting genes mutated in a significant fraction of tumours of a given type, and they have been shown to improve the detection of driver genes compared to methods that analyze genes in isolation [5, 7,8,9]. The availability of molecular measurements in a large number of samples for different cancer types has also allowed comparative analyses of mutations in cancer [5, 10, 11]. Such analyses usually analyze large cohorts of different cancer types as a whole, employing methods to find genes or subnetworks mutated in a significant fraction of tumours in one cohort, and also analyze each cancer type individually, with the goal of identifying: pathways that are common to various cancer types; pathways that are specific to a given cancer type. For example, [5] analyzed 12 cancer types and identified subnetworks (e.g., a TP53 subnetwork) mutated in most cancer types as well as subnetworks (e.g., a MHC subnetwork) enriched for mutations in one cancer type. In addition, comparative analyses may also be used for the identification of mutations of clinical relevance [12]. 
For example: comparing mutations in patients who responded to a given therapy with mutations in patients (of the same cancer type) who did not respond to the same therapy may identify genes and subnetworks associated with response to therapy; comparing mutations in patients whose tumours metastasized with mutations in patients whose tumours did not metastasize may identify mutations associated with the insurgence of metastases. Pathways that are significantly mutated only in a specific cancer type may not be identified by analyzing one cancer type at a time or all samples together (Fig. 1), but, interestingly, to the best of our knowledge no method has been designed to directly identify sets of interacting genes that are significantly more mutated in one set of samples compared to another. The task of finding such sets is more complex than the identification of subnetworks significantly mutated in a single set of samples, since subnetworks that have a significant difference in mutations between two sets may display a relatively modest frequency of mutation in both sets of samples, whose difference can be assessed as significant only by the joint analysis of both sets. Fig. 1: Identification of subnetworks with a significant difference in mutation frequency in two sets of samples \({\mathcal {C}}, {\mathcal {D}}\). The blue subnetwork is significantly more mutated in \({\mathcal {D}}\) than in \({\mathcal {C}}\), but it is not detected by methods that look for the most significantly mutated subnetworks in \({\mathcal {C}}\) or in \({\mathcal {D}}\) or in \({\mathcal {C}}\cup {\mathcal {D}}\), since the orange subnetwork is in each case mutated at much higher frequency. Several methods have been designed to analyze different aspects of somatic mutations in a large cohort of cancer samples in the context of networks. Some methods analyze mutations in the context of known pathways to identify the ones significantly enriched in mutations (e.g., [13]).
Other methods combine mutations and large interaction networks to identify cancer subnetworks [5, 14, 15]. Networks and somatic mutations have also been used to prioritize mutated genes in cancer [7, 8, 16,17,18] and for patient stratification [6, 19]. Some of these methods have been used for the identification of common mutation patterns or subnetworks in several cancer types [5, 10], but to the best of our knowledge no method has been designed to identify mutated subnetworks with a significant difference between two cohorts of cancer samples. A few methods have studied the problem of identifying subnetworks with significant differences in two sets of cancer samples using data other than mutations. [20] studied the problem of identifying optimally discriminative subnetworks of a large interaction network using gene expression data. Mall et al. [21] developed a procedure to identify statistically significant changes in the topology of biological networks. Such methods cannot be readily applied to find subnetworks with a significant difference in mutation frequency in two sets of samples. Other related works use gene expression to characterize different cancer types: [22] defined a pathway-based score that clusters samples by cancer type, while [23] defined pathway-based features used for classification in various settings, and several methods [24,25,26,27,28] have been designed for finding subnetworks with differential gene expression. In this work we study the problem of finding subnetworks whose frequency of mutation is significantly different in two sets of samples. In particular, our contributions are fourfold. First, we propose a combinatorial formulation for the problem of finding subnetworks significantly more mutated in one set of samples than in another and prove that such problem is NP-hard.
Second, we propose DifferentiAlly Mutated subnetwOrKs anaLysis in cancEr (DAMOKLE), a simple and efficient algorithm for the identification of subnetworks with a significant difference of mutation in two sets of samples, and analyze DAMOKLE, proving that it identifies subnetworks significantly more mutated in one of two sets of samples under reasonable assumptions for the data. Third, we test DAMOKLE on simulated data, verifying experimentally that DAMOKLE correctly identifies subnetworks significantly more mutated in a set of samples when enough samples are provided in input. Fourth, we test DAMOKLE on large cancer datasets comprising two cancer types, and show that DAMOKLE identifies subnetworks significantly associated with one of the two types which cannot be identified by state-of-the-art methods designed for the analysis of one set of samples. Methods and algorithms This section presents the problem we study, the algorithm we propose for its solution, and the analysis of our algorithm. In particular, "Computational problem" section formalizes the computational problem we consider; "Algorithm" section presents DifferentiAlly Mutated subnetwOrKs anaLysis in cancEr (DAMOKLE), our algorithm for the solution of the computational problem; "Analysis of DAMOKLE" section describes the analysis of our algorithm under a reasonable generative model for mutations; "Statistical significance of the results" section presents a formal analysis of the statistical significance of subnetworks obtained by DAMOKLE; and "Permutation testing" section describes two permutation tests to assess the significance of the results of DAMOKLE for limited sample sizes. Computational problem We are given measurements on mutations in m genes \(\mathcal {G}=\{1,\dots ,m\}\) on two sets \({\mathcal {C}}=\{c_1,\dots ,c_{n_C}\},{\mathcal {D}}=\{d_1,\dots ,d_{n_D}\}\) of samples.
Such measurements are represented by two matrices C and D, of dimension \(m \times n_C\) and \(m \times n_D\), respectively, where \(n_C\) (resp., \(n_D\)) is the number of samples in \({\mathcal {C}}\) (resp., \({\mathcal {D}}\)). \(C(i,j)=1\) (resp., \(D(i,j)=1\)) if gene i is mutated in the j-th sample of \({\mathcal {C}}\) (resp., \({\mathcal {D}}\)) and \(C(i,j)=0\) (resp., \(D(i,j)=0\)) otherwise. We are also given an (undirected) graph \(G=(V,E)\), where vertices \(V = \{1,\dots ,m \}\) are genes and \((i,j) \in E\) if gene i interacts with gene j (e.g., the corresponding proteins interact). Given a set of genes \(S \subset \mathcal {G}\), we define the indicator function \(c_{S}(c_i)\) with \(c_{S}(c_i)=1\) if at least one of the genes of S is mutated in sample \(c_i\), and \(c_{S}(c_i)=0\) otherwise. We define \(c_{S}(d_i)\) analogously. We define the coverage \(c_{S}({\mathcal {C}})\) of S in \({\mathcal {C}}\) as the fraction of samples in \({\mathcal {C}}\) for which at least one of the genes in S is mutated in the sample, that is $$\begin{aligned} c_{S}({\mathcal {C}}) = \frac{\sum _{i=1}^{n_C} c_{S}(c_i)}{n_C} \end{aligned}$$ and, analogously, define the coverage \(c_{S}({\mathcal {D}})\) of S in \({\mathcal {D}}\) as \(c_{S}({\mathcal {D}}) = \frac{\sum _{i=1}^{n_D} c_{S}(d_i)}{n_D}.\) We are interested in identifying sets of genes S, with \(|S|\le k\), corresponding to connected subgraphs in G and displaying a significant difference in coverage between \({\mathcal {C}}\) and \({\mathcal {D}}\), i.e., with a high value of \(|c_{S}({\mathcal {C}})-c_{S}({\mathcal {D}})|\). We define the differential coverage \(dc_{S}({\mathcal {C}},{\mathcal {D}})\) as \(dc_{S}({\mathcal {C}},{\mathcal {D}}) = c_{S}({\mathcal {C}})-c_{S}({\mathcal {D}}).\) In particular, we study the following computational problem. 
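The quantities just defined can be computed directly from the binary mutation matrices; a minimal Python sketch (the matrices and gene set below are toy values introduced here for illustration, not data from the paper):

```python
def coverage(M, genes):
    """c_S: fraction of samples (columns of M) with at least one
    mutated gene among `genes` (rows of the binary mutation matrix)."""
    n_samples = len(M[0])
    covered = sum(1 for j in range(n_samples) if any(M[g][j] for g in genes))
    return covered / n_samples

def differential_coverage(C, D, genes):
    """dc_S(C, D) = c_S(C) - c_S(D) for the gene set `genes`."""
    return coverage(C, genes) - coverage(D, genes)

# Toy example: 3 genes (rows), 4 samples (columns) per class.
C = [[1, 0, 0, 1],
     [0, 1, 0, 0],
     [0, 0, 0, 0]]
D = [[0, 0, 0, 0],
     [1, 0, 0, 0],
     [0, 0, 0, 0]]
print(differential_coverage(C, D, genes=[0, 1]))  # 3/4 - 1/4 = 0.5
```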
The differentially mutated subnetworks discovery problem: given a value \(\theta\) with \(\theta \in [0,1]\), find all connected subgraphs S of G of size \(\le k\) such that \(dc_{S}({\mathcal {C}},{\mathcal {D}}) \ge \theta\). Note that by finding sets that maximize \(dc_{S}({\mathcal {C}},{\mathcal {D}})\) we identify sets with significantly more mutations in \({\mathcal {C}}\) than in \({\mathcal {D}}\), while to identify sets with significantly more mutations in \({\mathcal {D}}\) than in \({\mathcal {C}}\) we need to find sets maximizing \(dc_{S}({\mathcal {D}},{\mathcal {C}})\). In addition, note that a subgraph S in the solution may contain genes that are not mutated in \({\mathcal {C}}\cup {\mathcal {D}}\) but that are needed for the connectivity of S. We have the following. The differentially mutated subnetworks discovery problem is NP-hard. The proof is by reduction from the connected maximum coverage problem [14]. In the connected maximum coverage problem we are given a graph G defined on a set \(V=\{v_1,\dots ,v_n\}\) of n vertices, a family \(\mathcal {P} = \{P_1,\dots ,P_n\}\) of subsets of a universe I (i.e., \(P_i \in 2^{I}\)), with \(P_i\) being the subset of I covered by \(v_i \in V\), and a value k, and we want to find the subgraph \(C^* = \{v_{i_1},\dots , v_{i_k}\}\) with k nodes of G that maximizes \(|\cup _{j=1}^k P_{i_j}|\). Given an instance of the connected maximum coverage problem, we define an instance of the differentially mutated subnetworks discovery problem as follows: the set \(\mathcal {G}\) of genes corresponds to the set V of vertices of G in the connected maximum coverage problem, and the graph G is the same as in the connected maximum coverage instance; the set \({\mathcal {C}}\) is given by the set I and the matrix C is defined as \(C_{i,j}=1\) if \(i \in P_j\), while \({\mathcal {D}}=\emptyset\).
Note that for any subgraph S of G, the differential coverage \(dc_{S}({\mathcal {C}},{\mathcal {D}})= c_{S}({\mathcal {C}}) - c_{S}({\mathcal {D}}) = c_{S}({\mathcal {C}})\) and \(c_{S}({\mathcal {C}}) = |\cup _{g \in S} P_{g}|/|I|\). Since |I| is the same for all solutions, the optimal solution of the differentially mutated subnetworks discovery instance corresponds to the optimal solution to the connected maximum coverage instance, and vice versa. \(\square\) We now describe DifferentiAlly Mutated subnetwOrKs anaLysis in cancEr (DAMOKLE), an algorithm to solve the differentially mutated subnetworks discovery problem. DAMOKLE takes as input mutation matrices C and D for two sets \({\mathcal {C}}\), \({\mathcal {D}}\) of samples, a (gene–gene) interaction graph G, an integer \(k>0\), and a real value \(\theta \in [0,1]\), and returns subnetworks S of G with \(\le k\) vertices and differential coverage \(dc_{S}({\mathcal {C}},{\mathcal {D}}) \ge \theta\). Subnetworks reported by DAMOKLE are also maximal (no vertex can be added to S while maintaining the connectivity of the subnetwork, \(|S| \le k\), and \(dc_{S}({\mathcal {C}},{\mathcal {D}}) \ge \theta\)). DAMOKLE is described in Algorithm 1. DAMOKLE starts by considering each edge \(e=\{u,v\} \in E\) of G with differential coverage \(dc_{\{u,v\}}({\mathcal {C}},{\mathcal {D}})\ge \theta /(k-1)\), and for each such e identifies subnetworks including e to be reported in output using Algorithm 2. GetSolutions, described in Algorithm 2, is a recursive algorithm that, given a current subgraph S, identifies all maximal connected subgraphs \(S', |S'| \le k\), containing S and with \(dc_{S'}({\mathcal {C}},{\mathcal {D}}) \ge \theta\). This is obtained by expanding S one edge at a time and stopping when the number of vertices in the current solution is k or when the addition of no vertex leads to an increase in the differential coverage \(dc_{S}({\mathcal {C}},{\mathcal {D}})\) of the current solution S.
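The search just described can be sketched compactly as follows. This is an illustrative re-implementation, not the authors' released code; `adj` (adjacency sets), `edges` (edge list), and `dc` (a differential-coverage oracle over gene sets) are hypothetical inputs introduced here:

```python
def get_solutions(adj, S, dc, k, theta, results):
    """Recursively expand S one vertex at a time (sketch of Algorithm 2)."""
    # Vertices reachable via edges with exactly one endpoint in S.
    frontier = {v for u in S for v in adj[u]} - S
    # Keep only expansions that strictly increase the differential coverage.
    gains = [v for v in frontier if dc(S | {v}) > dc(S)]
    if len(S) == k or not gains:
        # Stop: report S if its differential coverage passes the threshold.
        if dc(S) >= theta:
            results.add(frozenset(S))
        return
    for v in gains:
        get_solutions(adj, S | {v}, dc, k, theta, results)

def damokle(adj, edges, dc, k, theta):
    """Main loop (sketch of Algorithm 1): seed only from promising edges."""
    results = set()
    for u, v in edges:
        if dc({u, v}) >= theta / (k - 1):
            get_solutions(adj, {u, v}, dc, k, theta, results)
    return results
```

The seeding threshold \(\theta /(k-1)\) mirrors the first theoretical result below: edges of a good solution have, with high probability, differential coverage at least close to that value.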
In Algorithm 2, N(S) refers to the set of edges with exactly one vertex in the set S. The motivation for the design choices of DAMOKLE is provided by the results in the next section. Analysis of DAMOKLE The design and analysis of DAMOKLE are based on the following generative model for the underlying biological process. For each gene \(i \in \mathcal {G}=\{1,2,...,m\}\) there is an a priori probability \(p_i\) of observing a mutation in gene i. Let \(H\subset \mathcal {G}\) be the connected subnetwork of up to k genes that is differentially mutated in samples of \({\mathcal {C}}\) w.r.t. samples of \({\mathcal {D}}\). Mutations in our samples are taken from two related distributions. In the "control" distribution F a mutation in gene i is observed with probability \(p_i\) independently of other genes' mutations. The second distribution \(F_H\) is analogous to the distribution F, but we condition on the event \(E(H)=\) "at least one gene in H is mutated in the sample". For genes not in H, all mutations come from distribution F. For genes in H, in a perfect experiment with no noise we would assume that samples in \({\mathcal {C}}\) are taken from \(F_H\) and samples from \({\mathcal {D}}\) are taken from F. However, to model realistic, noisy data we assume that with some probability q the "true" signal for a sample is lost, that is, the sample from \({\mathcal {C}}\) is taken from F. In particular, samples in \({\mathcal {C}}\) are taken with probability \(1-q\) from \(F_H\) and with probability q from F. Let p be the probability that H has at least one mutation in samples from the control model F, \(p= 1-\prod _{j\in H} (1-p_j)\approx \sum _{j\in H} p_j.\) Clearly, we are only interested in sets \(H\subset \mathcal {G}\) with \(p\ll 1\).
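Under this model, the probability of the event \(E(H)\) in each class can be computed explicitly; a small numeric illustration with arbitrary (hypothetical) parameter values:

```python
import math

# Hypothetical per-gene mutation probabilities for a set H of 5 genes.
p_genes = [0.02, 0.03, 0.01, 0.02, 0.02]
q = 0.3  # probability that a sample in C loses the "true" signal

# p = probability that H has at least one mutation under the control model F.
p = 1 - math.prod(1 - pj for pj in p_genes)

# Probability of E(H) in a sample from C (mixture of F_H and F) vs from D.
prob_C = (1 - q) + q * p
prob_D = p
print(p, prob_C - prob_D)  # the gap equals (1-q)(1-p), large when p << 1
```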
If we focus on individual genes, the probability that gene i is mutated in a sample from \({\mathcal {D}}\) is \(p_i\), while the probability that it is mutated in a sample from \({\mathcal {C}}\) is \(\frac{(1-q)p_i}{1-\prod _{j\in H} (1-p_j)}+qp_i.\) Such a gap may be hard to detect with a small number of samples. On the other hand, the probability of E(H) (i.e., of at least one mutation in the set H) in a sample from \({\mathcal {C}}\) is \((1-q) +q(1-\prod _{j\in H} (1-p_j)) = 1-q + qp\), while the probability of E(H) in a sample from \({\mathcal {D}}\) is \(1-\prod _{j\in H} (1-p_j) = p\), which is a much more significant gap when \(p \ll 1.\) The efficiency of DAMOKLE is based on two fundamental results. First, we show that it is sufficient to start the search only from edges with relatively high differential coverage. If \(dc_{S}({\mathcal {C}},{\mathcal {D}}) \ge \theta,\) then, in the above generative model, with high probability (asymptotic in \(n_C\) and \(n_D\)) there exists an edge \(e \in S\) such that \(dc_{\{e\}}({\mathcal {C}},{\mathcal {D}}) \ge (\theta -\epsilon )/(k-1),\) for any \(\epsilon >0.\) For a set of genes \(S'\subset \mathcal {G}\) and a sample \(z\in {\mathcal {C}} \cup {\mathcal {D}}\), let \(Count(S',z)\) be the number of genes in \(S'\) mutated in sample z. Clearly, if for all \(z\in {\mathcal {C}} \cup {\mathcal {D}}\) we have \(Count(S,z)\le 1\), i.e., each sample has no more than one mutation in S, then $$\begin{aligned} dc_{S}({\mathcal {C}},{\mathcal {D}})=\, & {} c_{S}({\mathcal {C}})-c_{S}({\mathcal {D}}) =\,\frac{\sum _{i=1}^{n_C} c_{S}(c_i)}{n_C} - \frac{\sum _{i=1}^{n_D} c_{S}(d_i)}{n_D} \\=\, & {} \frac{\sum _{i=1}^{n_C} \sum _{j\in S} Count (\{j\}, c_i)}{n_C} - \frac{\sum _{i=1}^{n_D} \sum _{j\in S} Count(\{j\},d_i)}{n_D} \\= \,& {} \sum _{j\in S} \left( \frac{\sum _{i=1}^{n_C} Count (\{j\}, c_i)}{n_C} - \frac{\sum _{i=1}^{n_D} Count(\{j\},d_i)}{n_D} \right) \\\ge & {} \theta .
\end{aligned}$$ Thus, there is a vertex \(j^*=\arg \max _{j\in S} \left( \frac{\sum _{i=1}^{n_C} Count (\{j\}, c_i)}{n_C} - \frac{\sum _{i=1}^{n_D} Count(\{j\},d_i)}{n_D} \right)\) such that \(dc_{\{j^*\}}({\mathcal {C}},{\mathcal {D}}) =\frac{\sum _{i=1}^{n_C} Count (\{j^*\}, c_i)}{n_C} - \frac{\sum _{i=1}^{n_D} Count(\{j^*\},d_i)}{n_D} \ge \theta /k.\) Since the set of genes S is connected, there is an edge \(e=(j^*, \ell )\) for some \(\ell \in S\). For that edge, $$\begin{aligned} dc_{\{e \}}({\mathcal {C}},{\mathcal {D}}) \ge \frac{\theta -dc_{\{\ell \}}({\mathcal {C}},{\mathcal {D}})}{k-1} +dc_{\{\ell \}}({\mathcal {C}},{\mathcal {D}}) \ge \frac{\theta }{k-1}. \end{aligned}$$ For the case when the assumption \(Count(S,z)\le 1\) for all \(z \in {\mathcal {C}}\cup {\mathcal {D}}\) does not hold, let $$\begin{aligned} Mul(S, {\mathcal {C}},{\mathcal {D}})= & {} \frac{\sum _{i=1}^{n_C} \sum _{j\in S} Count (\{j\}, c_i)}{n_C} - \frac{\sum _{i=1}^{n_C} c_{S}(c_i)}{n_C} \\&- \frac{\sum _{i=1}^{n_D} \sum _{j\in S} Count(\{j\},d_i)}{n_D} +\frac{\sum _{i=1}^{n_D} c_{S}(d_i)}{n_D} \end{aligned}$$ be the difference between the overcounts of multiple mutations in S in \({\mathcal {C}}\) and in \({\mathcal {D}}\). Then \(dc_{S}({\mathcal {C}},{\mathcal {D}}) \ge \theta\) implies $$\begin{aligned} \sum _{j\in S} \left( \frac{\sum _{i=1}^{n_C} Count (\{j\}, c_i)}{n_C} - \frac{\sum _{i=1}^{n_D} Count(\{j\},d_i)}{n_D} \right) - Mul(S, {\mathcal {C}},{\mathcal {D}}) \ge \theta , \end{aligned}$$ and, arguing as above, $$\begin{aligned} dc_{\{e \}}({\mathcal {C}},{\mathcal {D}})\ge \frac{\theta +Mul(S, {\mathcal {C}},{\mathcal {D}}) }{k-1}. \end{aligned}$$ Since the probability of having more than one mutation in S in a sample from \({\mathcal {C}}\) is at least as high as in a sample from \({\mathcal {D}}\), we can normalize (similarly to the proof of Theorem 2 below) and apply the Hoeffding bound (Theorem 4.14 in [29]) to prove that $$\begin{aligned} Prob(Mul(S, {\mathcal {C}},{\mathcal {D}}) < -\epsilon )\le 2e^{-2\epsilon ^2 n_C n_D/(n_C+n_D)}.
\end{aligned}$$ \(\square\) The second result motivates the choice, in Algorithm 2, of adding only edges that increase the score of the current solution (and of stopping if there is no such edge). If subgraph S can be partitioned as \(S= S' \cup \{j\} \cup S'',\) and \(dc_{\mathcal {S'}\cup \{j\}}({\mathcal {C}},{\mathcal {D}})< dc_{\mathcal {S'}}({\mathcal {C}},{\mathcal {D}})- p p_j,\) then with high probability (asymptotic in \(n_{{\mathcal {D}}}\)) \(dc_{S \setminus \{j\}}({\mathcal {C}},{\mathcal {D}}) > dc_{S}({\mathcal {C}},{\mathcal {D}}).\) We first observe that if each sample in \({\mathcal {D}}\) has no more than one mutation in S, then \(dc_{\mathcal {S'}\cup \{j\}}({\mathcal {C}},{\mathcal {D}})< dc_{\mathcal {S'}}({\mathcal {C}},{\mathcal {D}})\) implies that \(dc_{\{j\}}({\mathcal {C}},{\mathcal {D}})<0\), and therefore, under this assumption, \(dc_{S \setminus \{j\}}({\mathcal {C}},{\mathcal {D}}) > dc_{S}({\mathcal {C}},{\mathcal {D}})\). To remove the assumption that a sample has no more than one mutation in S, we need to correct for the fraction of samples in \({\mathcal {D}}\) with mutations both in j and in \(S''\). With high probability (asymptotic in \(n_D\)) this fraction is bounded by \(pp_j +\epsilon\) for any \(\epsilon >0\). \(\square\) Statistical significance of the results To compute a threshold that guarantees statistical confidence for our findings, we first compute a bound on the gap in a non-significant set. If S is not a significant set, i.e., \({\mathcal {C}}\) and \({\mathcal {D}}\) have the same distribution on S, then $$\begin{aligned} Prob( dc_{S}({\mathcal {C}},{\mathcal {D}}) > \epsilon )\le 2e^{-2 \epsilon ^2 n_{{\mathcal {C}}}n_{{\mathcal {D}}}/(n_{{\mathcal {C}}}+n_{{\mathcal {D}}})}. \end{aligned}$$ Let \(X_1,\dots , X_{n_C}\) be independent random variables such that \(X_i=1/n_C\) if sample \(c_i\) in \({\mathcal {C}}\) has a mutation in S, and \(X_i=0\) otherwise.
Similarly, let \(Y_1,\dots , Y_{n_D}\) be independent random variables such that \(Y_i= -1/n_D\) if sample \(d_i\) in \({\mathcal {D}}\) has a mutation in S, and \(Y_i=0\) otherwise. Clearly \(dc_{S}({\mathcal {C}},{\mathcal {D}}) = \sum _{i=1}^{n_C} X_i + \sum _{i=1}^{n_D} Y_i\), and since S is not significant, \(E\left[\sum _{i=1}^{n_C} X_i +\sum _{i=1}^{n_D} Y_i\right]=0\). To apply the Hoeffding bound (Theorem 4.14 in [29]), we note that the sum \(\sum _{i=1}^{n_C} X_i + \sum _{i=1}^{n_D} Y_i\) has \(n_C\) variables in the range \([0,1/n_C]\) and \(n_D\) variables in the range \([-1/n_D, 0]\). Thus, $$\begin{aligned} Prob( dc_{S}({\mathcal {C}},{\mathcal {D}}) > \epsilon )\le 2e^{-2 \epsilon ^2 /(n_C/n_C^2 + n_D/n_D^2)} = 2e^{-2 \epsilon ^2 n_{{\mathcal {C}}}n_{{\mathcal {D}}}/(n_{{\mathcal {C}}}+n_{{\mathcal {D}}})}. \end{aligned}$$ Let \(N_{k}\) be the number of subnetworks under consideration, i.e., the number of connected subgraphs of G of size \(\le k\). We use Theorem 2 to obtain guarantees on the statistical significance of the results of DAMOKLE in terms of the Family-Wise Error Rate (FWER) or of the False Discovery Rate (FDR) as follows: FWER: if we want to find just the subnetwork with significant maximum differential coverage, to bound the FWER of our method by \(\alpha\) we use the maximum \(\epsilon\) such that \(N_{k} 2e^{-2 \epsilon ^2 n_{{\mathcal {C}}}n_{{\mathcal {D}}}/(n_{{\mathcal {C}}}+n_{{\mathcal {D}}})}\le \alpha .\) FDR: if we want to find several significant subnetworks with high differential coverage, to bound the FDR by \(\alpha\) we use the maximum \(\epsilon\) such that \({N_{k} 2e^{-2 \epsilon ^2 n_{{\mathcal {C}}}n_{{\mathcal {D}}}/(n_{{\mathcal {C}}}+n_{{\mathcal {D}}})}}/n(\alpha ) \le \alpha\), where \(n(\alpha )\) is the number of sets with differential coverage \(\ge \epsilon\).
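The FWER condition above can be inverted in closed form to obtain the corresponding coverage threshold \(\epsilon\) directly; a small helper (the sample sizes and the value of \(N_k\) in the example are made-up):

```python
import math

def fwer_threshold(n_C, n_D, N_k, alpha=0.05):
    """Smallest epsilon satisfying the Theorem 2 union bound:
    N_k * 2 * exp(-2 eps^2 n_C n_D / (n_C + n_D)) <= alpha."""
    return math.sqrt(math.log(2 * N_k / alpha) * (n_C + n_D) / (2 * n_C * n_D))

# e.g. 200 samples per class and 10^6 candidate subnetworks
eps = fwer_threshold(200, 200, 10**6)
print(round(eps, 3))
```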
Permutation testing While Theorem 2 shows how to obtain guarantees on the statistical significance of the results of DAMOKLE by appropriately setting \(\theta\), in practice, due to relatively small sample sizes and to inevitable looseness in the theoretical guarantees, a permutation testing approach may be more effective in estimating the statistical significance of the results of DAMOKLE and may provide more power for the identification of differentially mutated subnetworks. We consider two permutation tests to assess the association of mutations in the subnetwork with the highest differential coverage found by DAMOKLE. The first test assesses whether the observed differential coverage can be obtained under the independence of mutations in genes, by considering the null distribution in which each gene is mutated in a random subset (of the same cardinality as observed in the data) of all samples, independently of all other events. The second test assesses whether, under the observed marginal distributions for mutations in sets of genes, the observed differential coverage of a subnetwork can be obtained under the independence between mutations and sample memberships (i.e., being a sample of \({\mathcal {C}}\) or a sample of \({\mathcal {D}}\)), by randomly permuting the sample memberships. Let \(dc_{S}({\mathcal {C}},{\mathcal {D}})\) be the differential coverage observed on real data for the solution S with highest differential coverage found by DAMOKLE (for some input parameters). For both tests we estimate the p-value as follows: generate N (permuted) datasets from the null distribution; run DAMOKLE (with the same input parameters used on real data) on each of the N permuted datasets; let x be the number of permuted datasets in which DAMOKLE reports a solution with differential coverage \(\ge dc_{S}({\mathcal {C}},{\mathcal {D}})\): then the p-value of S is \((x+1)/(N+1)\). We implemented DAMOKLE in Python (Footnote 1) and tested it on simulated and on cancer data.
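The p-value estimate just described can be sketched as follows; `run_damokle` and `make_null_dataset` are hypothetical callables introduced here, standing in for a DAMOKLE run (returning the best differential coverage found) and for one of the two null-model generators described above:

```python
def permutation_pvalue(observed_dc, run_damokle, make_null_dataset, N=100):
    """Estimate the p-value of the best subnetwork as (x+1)/(N+1).

    x counts permuted datasets whose best differential coverage matches
    or exceeds the one observed on the real data.
    """
    x = sum(run_damokle(make_null_dataset()) >= observed_dc for _ in range(N))
    return (x + 1) / (N + 1)
```

The `+1` terms give the standard conservative permutation estimate, so the reported p-value can never be exactly zero.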
Our experiments were conducted on a Linux machine with 16 cores and 256 GB of RAM. For all our experiments we used as interaction graph G the HINT+HI2012 network (Footnote 2), a combination of the HINT network [30] and the HI-2012 [31] set of protein–protein interactions, previously used in [5]. In all cases we considered only the subnetwork with the highest differential coverage among the ones returned by DAMOKLE. We first present the results on simulated data ("Simulated data" section) and then present the results on cancer data ("Cancer data" section). Simulated data We tested DAMOKLE on simulated data generated as follows. We assume there is a subnetwork S of k genes with differential coverage \(dc_{S}({\mathcal {C}},{\mathcal {D}})= c\). In our simulations we set \(|{\mathcal {C}}|=|{\mathcal {D}}|=n\). For each sample in \({\mathcal {D}}\), each gene g in G (including genes in S) is mutated with probability \(p_g\), independently of all other events. For samples in \({\mathcal {C}}\), we first mutated each gene g with probability \(p_g\) independently of all other events. We then considered the samples of \({\mathcal {C}}\) without mutations in S, and for each such sample we mutated, with probability c, one gene of S, chosen uniformly at random. In this way c is the expectation of the differential coverage \(dc_{S}({\mathcal {C}},{\mathcal {D}})\). For genes in \(G \setminus S\) we used mutation probabilities \(p_g\) estimated from oesophageal cancer data [32]. We considered only values of \(n \ge 100\), consistent with sample sizes in most recent cancer sequencing studies. (The latest ICGC data release (Footnote 3) from April 30th, 2018 has data for \(\ge 500\) samples for \(81\%\) of the primary sites.) The goal of our investigation using simulated data is to evaluate the impact of various parameters on the ability of DAMOKLE to recover S or part of it.
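The generation procedure just described can be sketched as follows; `simulate` and its arguments are illustrative names introduced here, and each sample is represented as a Python set of mutated gene ids:

```python
import random

def simulate(genes, S, p, n, c):
    """Generate n samples for class D and n for class C as described above.

    genes: list of gene ids; S: planted subnetwork (list of gene ids);
    p: dict of per-gene background mutation probabilities;
    c: target differential coverage.
    """
    def background():
        return {g for g in genes if random.random() < p[g]}

    D = [background() for _ in range(n)]
    C = [background() for _ in range(n)]
    for sample in C:
        # Samples of C with no mutation in S get, with probability c,
        # one extra mutation in a uniformly chosen gene of S.
        if not (sample & set(S)) and random.random() < c:
            sample.add(random.choice(S))
    return C, D
```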
In particular, we studied the impact of three parameters: the differential coverage \(dc_{S}({\mathcal {C}},{\mathcal {D}})\) of the planted subnetwork S; the number k of genes in S; and the number n of samples in each class. To evaluate the impact of such parameters, for each combination of parameters in our experiments we generated 10 simulated datasets and ran DAMOKLE on each dataset with \(\theta = 0.01\), recording the fraction of times that DAMOKLE reported S as the solution with the highest differential coverage, and the fraction of genes of S that are in the solution with the highest differential coverage found by DAMOKLE. We first investigated the impact of the differential coverage \(c = dc_{S}({\mathcal {C}},{\mathcal {D}})\). We analyzed simulated datasets with \(n=100\) samples in each class, where \(k=5\) genes are part of the subnetwork S, for values of \(c = 0.1, 0.22, 0.33, 0.46, 0.6, 0.8\). We ran DAMOKLE on each dataset with \(k=5\). The results are shown in Fig. 2a. For low values of the differential coverage c, with \(n=100\) samples DAMOKLE never reports S as the best solution found, and only a small fraction of the genes in S are part of the solution reported by DAMOKLE. However, as soon as the differential coverage is \(\ge 0.45\), even with \(n=100\) samples in each class DAMOKLE identifies the entire planted solution S most of the times, and even when the best solution does not entirely correspond to S, more than \(80\%\) of the genes of S are reported in the best solution. For values of \(c \ge 0.6\), DAMOKLE always reports the whole subnetwork S as the best solution. Given that many recent large cancer sequencing studies consider at least 200 samples, DAMOKLE will be useful to identify differentially mutated subnetworks in such studies. Fig. 2: a Performance of DAMOKLE as a function of the differential coverage \(dc_{S}({\mathcal {C}},{\mathcal {D}})\) of subnetwork S.
The figure shows (red) the fraction of times, out of 10 experiments, that the best solution corresponds to S and (blue) the fraction of genes in S that are reported in the best solution by DAMOKLE. For the latter, error bars show the standard deviation over the 10 experiments. \(n=100\) and \(k=5\) for all experiments. b Performance of DAMOKLE as a function of the number k of genes in subnetwork S. \(n=100\) and \(dc_{S}({\mathcal {C}},{\mathcal {D}})=0.46\) for all experiments. c Performance of DAMOKLE as a function of the number n of samples in \({\mathcal {C}},{\mathcal {D}}\). \(k=10\) and \(dc_{S}({\mathcal {C}},{\mathcal {D}})=0.46\) for all experiments. We then tested the performance of DAMOKLE as a function of the number of genes k in S. We tested the ability of DAMOKLE to identify a subnetwork S with differential coverage \(dc_{S}({\mathcal {C}},{\mathcal {D}})=0.46\) in a dataset with \(n=100\) samples in both \({\mathcal {C}}\) and \({\mathcal {D}}\), when the number k of genes in S varies as \(k=5,7,9\). The results are shown in Fig. 2b. As expected, when the number of genes in S increases, the fraction of times S is the best solution as well as the fraction of genes reported in the best solution by DAMOKLE decreases, and for \(k=9\) the best solution found by DAMOKLE corresponds to S only \(10\%\) of the times. However, even for \(k=9\), on average most of the genes of S are reported in the best solution by DAMOKLE. Therefore DAMOKLE can be used to identify relatively large subnetworks mutated in a significantly different number of samples even when the number of samples is relatively low. Finally, we tested the performance of DAMOKLE as the number of samples n in each set \({\mathcal {C}},{\mathcal {D}}\) increases. In particular, we tested the ability of DAMOKLE to identify a relatively large subnetwork S of \(k=10\) genes with differential coverage \(dc_S({\mathcal {C}},{\mathcal {D}}) = 0.46\) as the number of samples n increases.
We analyzed simulated datasets for \(n=100, 250, 500\). The results are shown in Fig. 2c. For \(n=100\), when \(k=10\), DAMOKLE never reports S as the best solution and only a small fraction of all genes in S are reported in the solution. However, for \(n=250\), while DAMOKLE still reports S as the best solution only \(10\%\) of the times, on average \(70\%\) of the genes of S are reported in the best solution. More interestingly, already for \(n=500\), DAMOKLE always reports S as the best solution. These results show that DAMOKLE can reliably identify relatively large differentially mutated subnetworks from currently available datasets of large cancer sequencing studies. Cancer data We used DAMOKLE to analyze somatic mutations from The Cancer Genome Atlas. We first compared two similar cancer types and two very different cancer types to test whether DAMOKLE behaves as expected on these types. We then analyzed two pairs of cancer types where differences in alterations are unclear. In all cases we ran DAMOKLE with \(\theta =0.1\) and obtained p-values with the permutation tests described in the "Permutation testing" section. We used DAMOKLE to analyze 188 samples of lung squamous cell carcinoma (LUSC) and 183 samples of lung adenocarcinoma (LUAD). We only considered single nucleotide variants (SNVs) (Footnote 4) and used \(k=5\). DAMOKLE did not report any significant subnetwork, in agreement with previous work showing that these two cancer types have known differences in gene expression [33] but are much more similar with respect to SNVs [34]. Colorectal vs ovarian cancer We used DAMOKLE to analyze 456 samples of colorectal adenocarcinoma (COADREAD) and 496 samples of ovarian serous cystadenocarcinoma (OV), using only SNVs (Footnote 5). For \(k=5\), DAMOKLE identifies the significant (\(p<0.01\) according to both tests in the "Permutation testing" section) subnetwork APC, CTNNB1, FBXO30, SMAD4, SYNE1 with differential coverage 0.81 in COADREAD w.r.t. OV.
APC, CTNNB1, and SMAD4 are members of the WNT signaling and TGF-\(\beta\) signaling pathways. The WNT signaling pathway is one of the cascades that regulate stemness and development, with a role in carcinogenesis that has been described mostly for colorectal cancer [35], but altered Wnt signaling is observed in many other cancer types [36]. The TGF-\(\beta\) signaling pathway is involved in several processes including cell growth and apoptosis, and is deregulated in many diseases, including COADREAD [35]. The high differential coverage of the subnetwork is in accordance with COADREAD being altered mostly by SNVs and OV being altered mostly by copy number aberrations (CNAs) [37]. Esophagus-stomach cancer We analyzed SNVs and CNAs in 171 samples of esophagus cancer and in 347 samples of stomach cancer [32] (Footnote 6). The number of mutations in the two sets is not significantly different (t-test p = 0.16). We first considered single genes, identifying TP53 with high (\(>0.5\)) differential coverage between the two cancer types. Alterations in TP53 were then removed for the subsequent DAMOKLE analysis. We ran DAMOKLE with \(k=4\), with \({\mathcal {C}}\) being the set of stomach tumours and \({\mathcal {D}}\) being the set of esophagus tumours. DAMOKLE identifies the significant (\(p<0.01\) for both tests in the "Permutation testing" section) subnetwork \(S=\) {ACTL6A, ARID1A, BRD8, SMARCB1} with differential coverage 0.26 (Fig. 3a, b). Interestingly, all four genes in the subnetwork identified by DAMOKLE are members of the chromatin organization machinery recently associated with cancer [38, 39]. Such subnetwork is not reported as differentially mutated in the TCGA publication comparing the two cancer types [32]. BRD8 is only the 16th-ranked gene by differential coverage, while ACTL6A and SMARCB1 are not among the top 2000 genes by differential coverage.
We compared the results obtained by DAMOKLE with the results obtained by HotNet2 [5], a method to identify significantly mutated subnetworks, using the same mutation data and the same interaction network as input: none of the genes in S appeared in significant subnetworks reported by HotNet2.

Fig. 3: Results of the DAMOKLE analysis of esophagus tumours vs stomach tumours and of diffuse gliomas. a Subnetwork S with significant (\(p<0.01\)) differential coverage in esophagus tumours vs stomach tumours (interactions from the HINT+HI2012 network). b Fractions of samples with mutations in genes of S in esophagus tumours and in stomach tumours. c Subnetwork S with significant (\(p<0.01\)) differential coverage in LGG samples vs GBM samples (interactions from the HINT+HI2012 network). d Fractions of samples with mutations in genes of S in LGG samples and in GBM samples.

Diffuse gliomas

We analyzed single nucleotide variants (SNVs) and copy number aberrations (CNAs) in 509 samples of lower grade glioma (LGG) and in 303 samples of glioblastoma multiforme (GBM). We considered nonsilent SNVs, short indels, and CNAs, and removed from the analysis genes with \(<6\) mutations in both classes. By single gene analysis we identified IDH1 with high (\(>0.5\)) differential coverage, and removed alterations in this gene for the DAMOKLE analysis. We ran DAMOKLE with \(k=5\), with \({\mathcal {C}}\) being the set of GBM samples and \({\mathcal {D}}\) being the set of LGG samples. The number of mutations in \({\mathcal {C}}\) and in \({\mathcal {D}}\) is not significantly different (t-test p = 0.1). DAMOKLE identifies the significant (\(p<0.01\) for both tests in the "Permutation testing" section) subnetwork \(S=\) {CDKN2A, CDK4, MDM2, MDM4, RB1} (Fig. 3c, d). All genes in S are members of the p53 pathway or of the RB pathway. The p53 pathway has a key role in cell death as well as in cell division, and the RB pathway plays a crucial role in cell cycle control. Both pathways are well known glioma cancer pathways [40].
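The empirical p-values quoted above come from permutation tests. A generic label-shuffling scheme for a fixed subnetwork can be sketched as follows; this is illustrative only — the statistic, trial count, and the paper's two specific tests are placeholders, not the exact procedure of the "Permutation testing" section:

```python
import random

def coverage(samples, subnetwork):
    # fraction of samples with a mutation in at least one gene of the subnetwork
    return sum(1 for genes in samples if genes & subnetwork) / len(samples)

def permutation_pvalue(c_samples, d_samples, subnetwork, trials=1000, seed=0):
    """Empirical p-value for the differential coverage of a fixed subnetwork,
    obtained by randomly reassigning samples to the two cohorts."""
    rng = random.Random(seed)
    observed = coverage(c_samples, subnetwork) - coverage(d_samples, subnetwork)
    pooled = list(c_samples) + list(d_samples)
    extreme = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        c_perm = pooled[:len(c_samples)]
        d_perm = pooled[len(c_samples):]
        if coverage(c_perm, subnetwork) - coverage(d_perm, subnetwork) >= observed:
            extreme += 1
    # add-one correction keeps the estimate strictly positive
    return (extreme + 1) / (trials + 1)
```

A subnetwork mutated in every sample of one cohort and in none of the other yields a very small p-value, while identical cohorts yield a large one.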
Interestingly, [41] did not report any subnetwork with a significant difference in mutations between LGG and GBM samples; CDK4, MDM2, MDM4, and RB1 do not appear among the top 45 genes by differential coverage. We compared the results obtained by DAMOKLE with the results obtained by HotNet2. Of the genes in our subnetwork, only CDK4 and CDKN2A are reported in a significantly mutated subnetwork (\(p <0.05\)) obtained by HotNet2 when analyzing \({\mathcal {D}}\) but not when analyzing \({\mathcal {C}}\), while MDM2, MDM4, and RB1 are not reported in any significant subnetwork obtained by HotNet2.

Conclusion

In this work we study the problem of finding subnetworks of a large interaction network with a significant difference in mutation frequency between two sets of cancer samples. This problem is extremely important for identifying mutated mechanisms that are specific to a cancer (sub)type as well as for the identification of mechanisms related to clinical features (e.g., response to therapy). We provide a formal definition of the problem and show that the associated computational problem is NP-hard. We design, analyze, implement, and test a simple and efficient algorithm, DAMOKLE, which we prove identifies significant subnetworks when enough data from a reasonable generative model for cancer mutations is provided. We tested DAMOKLE on simulated and real data. The results on simulated data show that DAMOKLE identifies significant subnetworks with currently available sample sizes. The results on two large cancer datasets, each comprising genome-wide measurements of DNA mutations in two cancer subtypes, show that DAMOKLE identifies subnetworks that are not found by methods not designed for the comparative analysis of mutations in two sets of samples.
While we provide a first method for the differential analysis of cohorts of cancer samples, several research directions remain. First, differences in the frequency of mutation of a subnetwork in two sets of cancer cohorts may be due to external (or hidden) variables, such as the mutation rate of each cohort. While at the moment we ensure before running the analysis that no significant difference in mutation rate is present between the two sets, performing the analysis while correcting for possible differences in such confounding variables would greatly expand the applicability of our method. Second, for some interaction networks (e.g., functional ones) that are denser than the protein–protein interaction network we consider, requiring a minimum connectivity (e.g., in the form of a fraction of all possible edges) in the subnetwork may be beneficial, and the design of efficient algorithms accounting for such a requirement is an interesting direction of research. Third, different types of mutation patterns (e.g., mutual exclusivity) between two sets of samples could be explored (e.g., extending the method proposed in [42]). Fourth, the inclusion of additional types of measurements, such as gene expression, may improve the power of our method. Fifth, the inclusion of noncoding variants in the analysis may provide additional information to be leveraged to assess the significance of subnetworks.

The implementation is available at https://github.com/VandinLab/DAMOKLE.

Data sources:
http://compbio-research.cs.brown.edu/pancancer/hotnet2/
https://dcc.icgc.org/
http://cbio.mskcc.org/cancergenomics/pancan_tcga/
http://www.cbioportal.org/study?id=stes_tcga_pub#summary
https://media.githubusercontent.com/media/cBioPortal/datahub/master/public/lgggbm_tcga_pub.tar.gz

References

1. Garraway LA, Lander ES. Lessons from the cancer genome. Cell. 2013;153(1):17–37. https://doi.org/10.1016/j.cell.2013.03.002.
2. Cancer Genome Atlas Research Network. Integrated genomic characterization of pancreatic ductal adenocarcinoma. Cancer Cell. 2017;32(2):185.
3. Vogelstein B, Papadopoulos N, Velculescu VE, Zhou S, Diaz LA Jr, Kinzler KW. Cancer genome landscapes. Science. 2013;339(6127):1546–58. https://doi.org/10.1126/science.1235122.
4. Vandin F. Computational methods for characterizing cancer mutational heterogeneity. Front Genet. 2017;8:83.
5. Leiserson MDM, Vandin F, Wu H-T, Dobson JR, Eldridge JV, Thomas JL, Papoutsaki A, Kim Y, Niu B, McLellan M, Lawrence MS, Gonzalez-Perez A, Tamborero D, Cheng Y, Ryslik GA, Lopez-Bigas N, Getz G, Ding L, Raphael BJ. Pan-cancer network analysis identifies combinations of rare somatic mutations across pathways and protein complexes. Nat Genet. 2015;47(2):106–14. https://doi.org/10.1038/ng.3168.
6. Hofree M, Shen JP, Carter H, Gross A, Ideker T. Network-based stratification of tumor mutations. Nat Methods. 2013;10(11):1108–15. https://doi.org/10.1038/nmeth.2651.
7. Shrestha R, Hodzic E, Sauerwald T, Dao P, Wang K, Yeung J, Anderson S, Vandin F, Haffari G, Collins CC, et al. HIT'nDRIVE: patient-specific multidriver gene prioritization for precision oncology. Genome Res. 2017;27(9):1573–88.
8. Hristov BH, Singh M. Network-based coverage of mutational profiles reveals cancer genes. arXiv preprint arXiv:1704.08544. 2017.
9. Cowen L, Ideker T, Raphael BJ, Sharan R. Network propagation: a universal amplifier of genetic associations. Nat Rev Genet. 2017;18(9):551.
10. Hoadley KA, Yau C, Wolf DM, Cherniack AD, Tamborero D, Ng S, Leiserson MD, Niu B, McLellan MD, Uzunangelov V, et al. Multiplatform analysis of 12 cancer types reveals molecular classification within and across tissues of origin. Cell. 2014;158(4):929–44.
11. Kandoth C, McLellan MD, Vandin F, Ye K, Niu B, Lu C, Xie M, Zhang Q, McMichael JF, Wyczalkowski MA, Leiserson MDM, Miller CA, Welch JS, Walter MJ, Wendl MC, Ley TJ, Wilson RK, Raphael BJ, Ding L. Mutational landscape and significance across 12 major cancer types. Nature. 2013;502(7471):333–9. https://doi.org/10.1038/nature12634.
12. Zehir A, Benayed R, Shah RH, Syed A, Middha S, Kim HR, Srinivasan P, Gao J, Chakravarty D, Devlin SM, et al. Mutational landscape of metastatic cancer revealed from prospective clinical sequencing of 10,000 patients. Nat Med. 2017;23(6):703.
13. Vaske CJ, Benz SC, Sanborn JZ, Earl D, Szeto C, Zhu J, Haussler D, Stuart JM. Inference of patient-specific pathway activities from multi-dimensional cancer genomics data using paradigm. Bioinformatics. 2010;26(12):237–45.
14. Vandin F, Upfal E, Raphael BJ. Algorithms for detecting significantly mutated pathways in cancer. J Comput Biol. 2011;18(3):507–22.
15. Ciriello G, Cerami E, Sander C, Schultz N. Mutual exclusivity analysis identifies oncogenic network modules. Genome Res. 2012;22(2):398–406.
16. Kim Y-A, Cho D-Y, Dao P, Przytycka TM. Memcover: integrated analysis of mutual exclusivity and functional network reveals dysregulated pathways across multiple cancer types. Bioinformatics. 2015;31(12):284–92.
17. Pulido-Tamayo S, Weytjens B, De Maeyer D, Marchal K. SSA-ME detection of cancer driver genes using mutual exclusivity by small subnetwork analysis. Sci Rep. 2016;6:36257.
18. Cho A, Shim JE, Kim E, Supek F, Lehner B, Lee I. MUFFINN: cancer gene discovery via network analysis of somatic mutation data. Genome Biol. 2016;17(1):129.
19. Le Morvan M, Zinovyev A, Vert J-P. Netnorm: capturing cancer-relevant information in somatic exome mutation data with gene networks for cancer stratification and prognosis. PLoS Comput Biol. 2017;13(6):1005573.
20. Dao P, Wang K, Collins C, Ester M, Lapuk A, Sahinalp SC. Optimally discriminative subnetwork markers predict response to chemotherapy. Bioinformatics. 2011;27(13):205–13.
21. Mall R, Cerulo L, Bensmail H, Iavarone A, Ceccarelli M. Detection of statistically significant network changes in complex biological networks. BMC Syst Biol. 2017;11(1):32.
22. Young MR, Craft DL. Pathway-informed classification system (PICS) for cancer analysis using gene expression data. Cancer Inform. 2016;15:40088.
23. Kim S, Kon M, DeLisi C. Pathway-based classification of cancer subtypes. Biol Direct. 2012;7(1):21.
24. Ideker T, Ozier O, Schwikowski B, Siegel AF. Discovering regulatory and signalling circuits in molecular interaction networks. Bioinformatics. 2002;18(suppl 1):233–40.
25. Dittrich MT, Klau GW, Rosenwald A, Dandekar T, Müller T. Identifying functional modules in protein–protein interaction networks: an integrated exact approach. Bioinformatics. 2008;24(13):223–31.
26. Gu J, Chen Y, Li S, Li Y. Identification of responsive gene modules by network-based gene clustering and extending: application to inflammation and angiogenesis. BMC Syst Biol. 2010;4(1):47.
27. Jiao Y, Widschwendter M, Teschendorff AE. A systems-level integrative framework for genome-wide DNA methylation and gene expression data identifies differential gene expression modules under epigenetic control. Bioinformatics. 2014;30(16):2360–6.
28. He H, Lin D, Zhang J, Wang Y-p, Deng H-w. Comparison of statistical methods for subnetwork detection in the integration of gene expression and protein interaction network. BMC Bioinform. 2017;18(1):149.
29. Mitzenmacher M, Upfal E. Probability and computing: randomization and probabilistic techniques in algorithms and data analysis. Cambridge: Cambridge University Press; 2017.
30. Das J, Yu H. HINT: high-quality protein interactomes and their applications in understanding human disease. BMC Syst Biol. 2012;6:92. https://doi.org/10.1186/1752-0509-6-92.
31. Yu H, Tardivo L, Tam S, Weiner E, Gebreab F, Fan C, Svrzikapa N, Hirozane-Kishikawa T, Rietman E, Yang X, Sahalie J, Salehi-Ashtiani K, Hao T, Cusick ME, Hill DE, Roth FP, Braun P, Vidal M. Next-generation sequencing to generate interactome datasets. Nat Methods. 2011;8(6):478–80. https://doi.org/10.1038/nmeth.1597.
32. Cancer Genome Atlas Research Network. Integrated genomic characterization of oesophageal carcinoma. Nature. 2017;541(7636):169–75.
33. Sun F, Yang X, Jin Y, Chen L, Wang L, Shi M, Zhan C, Shi Y, Wang Q. Bioinformatics analyses of the differences between lung adenocarcinoma and squamous cell carcinoma using the cancer genome atlas expression data. Mol Med Rep. 2017;16(1):609–16.
34. Chen F, Zhang Y, Parra E, Rodriguez J, Behrens C, Akbani R, Lu Y, Kurie J, Gibbons DL, Mills GB, et al. Multiplatform-based molecular subtypes of non-small-cell lung cancer. Oncogene. 2017;36(10):1384.
35. Cancer Genome Atlas Network. Comprehensive molecular characterization of human colon and rectal cancer. Nature. 2012;487(7407):330.
36. Zhan T, Rindtorff N, Boutros M. Wnt signaling in cancer. Oncogene. 2017;36(11):1461.
37. Ciriello G, Miller ML, Aksoy BA, Senbabaoglu Y, Schultz N, Sander C. Emerging landscape of oncogenic signatures across human cancers. Nat Genet. 2013;45(10):1127.
38. Saladi SV, Ross K, Karaayvaz M, Tata PR, Mou H, Rajagopal J, Ramaswamy S, Ellisen LW. ACTL6A is co-amplified with p63 in squamous cell carcinoma to drive YAP activation, regenerative proliferation, and poor prognosis. Cancer Cell. 2017;31(1):35–49.
39. Lu C, Allis CD. SWI/SNF complex in cancer. Nat Genet. 2017;49(2):178–9.
40. Vogelstein B, Kinzler KW. Cancer genes and the pathways they control. Nat Med. 2004;10(8):789–99.
41. Ceccarelli M, Barthel FP, Malta TM, Sabedot TS, Salama SR, Murray BA, Morozova O, Newton Y, Radenbaugh A, Pagnotta SM, et al. Molecular profiling reveals biologically discrete subsets and pathways of progression in diffuse glioma. Cell. 2016;164(3):550–63.
42. Basso RS, Hochbaum DS, Vandin F. Efficient algorithms to discover alterations with complementary functional association in cancer. arXiv preprint arXiv:1803.09721. 2018.

FV designed the study. FV and EU designed the algorithms. FV and MCH implemented the software and performed the computational analysis. All authors interpreted the results. All authors wrote the manuscript. All authors read and approved the final manuscript.
This work is supported, in part, by University of Padova projects SID2017 and STARS: Algorithms for Inferential Data Mining, and by NSF grant IIS-1247581. The results presented in this manuscript are in whole or part based upon data generated by the TCGA Research Network: http://cancergenome.nih.gov/.

Author affiliations:
Morteza Chalabi Hajkarim — Biotech Research and Innovation Centre, University of Copenhagen, Copenhagen, Denmark
Eli Upfal — Department of Computer Science, Brown University, Providence, RI, USA
Fabio Vandin — Department of Information Engineering, University of Padova, Padova, Italy

Correspondence to Fabio Vandin.

Hajkarim, M.C., Upfal, E. & Vandin, F. Differentially mutated subnetworks discovery. Algorithms Mol Biol 14, 10 (2019). https://doi.org/10.1186/s13015-019-0146-7

Keywords: Somatic mutations; Differential analysis
\begin{document} \title[Cohomological tensor functors]{Cohomological tensor functors on representations of the General Linear Supergroup} \author{{\rm Th. Heidersdorf, R. Weissauer}} \date{} \begin{abstract} We define and study cohomological tensor functors from the category $T_n$ of finite-dimensional representations of the supergroup $Gl(n|n)$ into $T_{n-r}$ for $0 <r \leq n$. In the case $DS: T_n \to T_{n-1}$ we prove a formula $DS(L) = \bigoplus \Pi^{n_i} L_i$ for the image of an arbitrary irreducible representation. In particular $DS(L)$ is semisimple and multiplicity free. We derive a few applications of this theorem such as the degeneration of certain spectral sequences and a formula for the modified superdimension of an irreducible representation. \end{abstract} \thanks{2010 {\it Mathematics Subject Classification}: 17B10, 17B20, 17B55, 18D10, 20G05.} \maketitle \setcounter{tocdepth}{1} \tableofcontents \SkipTocEntry\section*{Introduction} Little is known about the decomposition of tensor products between finite-dimensional representations of the general linear supergroup $Gl(m|n)$ over an algebraically closed field of characteristic 0. In this article we define and study \textit{cohomological tensor functors} from the category $T_n= Rep(Gl(n|n))$ of finite-dimensional representations of $Gl(n|n)$ to $T_{n-r}$ for $0 < r \leq n$. One of our aims is to reduce questions about tensor products between irreducible representations by means of these functors to lower rank cases so that these can hopefully be inductively understood. This is indeed the case for small $n$ as the $Gl(1|1)$-case has been completely worked out in \cite{Goetz-Quella-Schomerus} and the $Gl(2|2)$-case is partially controlled by the theory of mixed tensors \cite{Heidersdorf-mixed-tensors} \cite{Heidersdorf-Weissauer-gl-2-2}. Along the way we obtain formulas for the (modified) superdimensions of irreducible representations. 
The tensor functors that we study are variants and generalizations of a construction due to Duflo-Serganova \cite{Duflo-Serganova} and Serganova \cite{Serganova-kw}. For any $x \in X = \{ x \in \mathfrak{g}_1 \ | \ [x,x] = 0\},$ where $\mathfrak{g}_1$ denotes the odd part of the underlying Lie superalgebra ${\mathfrak{gl}}(m|n)$, the cohomology of the complex associated to $(V,\rho) \in T_n$ \[ \xymatrix{ \ldots \ar[r]^-{\rho(x)} & V \ar[r]^-{\rho(x) } & V \ar[r]^-{\rho(x) } & V \ar[r]^-{\rho(x) } & \ldots } \] defines a functor $V \mapsto V_x: T_n \to T_{n-r}$ (where $r$ is the so-called rank of $x$) which preserves tensor products. The category $T_n$ splits into two abelian subcategories $T_n = {\mathcal R}_n \oplus \Pi {\mathcal R}_n$, where $\Pi$ denotes the parity shift (lemma \ref{thm:decomposition}). We therefore focus on the ${\mathcal R}_n$-case, fix a special $x$ of rank 1 in section \ref{DF} and denote the corresponding tensor functor $DS: {\mathcal R}_n \to T_{n-1}$. Later, in section \ref{sec:cohomology-functors}, we refine this construction to define for any $V \in {\mathcal R}_n$ a complex \[ \xymatrix{ \ldots \ar[r]^-{\partial} & \Pi (V_{2\ell-1}) \ar[r]^-{\partial} & V_{2\ell} \ar[r]^-{\partial} & \Pi (V_{2\ell+1}) \ar[r]^-{\partial} & \ldots }\ \] whose cohomology in degree $\ell$ is denoted by $H^\ell(V)$. The representation $DS(V)$ is naturally ${\mathbf{Z}}$-graded and we have a direct sum decomposition \[DS(V) = \bigoplus_{\ell \in {\mathbf{Z}}} \ \Pi^\ell (H^\ell(V))\] for $Gl(n\! -\! 1 | n\! -\! 1)$-modules $H^\ell(V)$ in ${\mathcal R}_{n-1}$. The definition of $DS$ can be easily generalized to the case $x \in X$ of higher rank $r>1$, and we denote the corresponding tensor functors by $DS_{n,n-r}: T_n \to T_{n-r}$. Like the BGG category, the category $T_n$ has two different duality functors, the ordinary dual $()^{\vee}$ and the contragredient dual $()^*$.
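Since $\rho(x)$ squares to zero, the cohomology $V_x = \ker \rho(x) / \mathrm{im}\, \rho(x)$ is well defined, and by rank-nullity its dimension is $\dim V - 2\,\mathrm{rank}\,\rho(x)$. A purely illustrative numerical sketch (not part of the paper, just to fix ideas):

```python
import numpy as np

def cohomology_dim(rho_x):
    """dim ker(rho_x)/im(rho_x) for a square-zero operator on V."""
    assert np.allclose(rho_x @ rho_x, 0), "operator must square to zero"
    r = np.linalg.matrix_rank(rho_x)
    dim_ker = rho_x.shape[0] - r   # rank-nullity theorem
    return dim_ker - r             # dim ker minus dim im
```

For a single nilpotent Jordan block of size 2 the cohomology vanishes, while adding a one-dimensional kernel summand contributes 1 to the cohomology.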
The tensor functors $DS$ and $DS_{n,n-r}$ are not $*$-invariant, in the sense that in general $DS(V^*) \not\simeq DS(V)^*$. We therefore define an analog $D$ of the Dirac operator and denote the corresponding Dirac cohomology groups by \[ H_D (V) = ker(D:M \to M)/ im(D: M \to M)\] for a certain module $M\in T_{n-1}$ attached to $V$ in section \ref{DDirac}. This defines a $*$-invariant tensor functor. It agrees with $DS$ on irreducible modules, but in general gives rise to an analog of the Hodge decomposition (proposition \ref{H1}). The definition of Dirac cohomology generalizes easily to define ${\mathbf{Z}}$-graded tensor functors $\omega_{n,n-r}= \bigoplus_{\ell\in\mathbb Z} \omega_{n,n-r}^\ell$ whose graded pieces are functors $\omega_{n,n-r}^\ell: T_n \to T_{n-r}$; these are described in section \ref{m}. The second part is devoted to the main theorem \ref{mainthm}. In the main theorem we give an explicit formula for the image of an irreducible representation $L = L(\lambda)$ in ${\mathcal R}_n$ of atypicality $j$ (for $0 < j \leq n$) under the functor $DS$. Surprisingly, with its natural $\mathbb Z$-gradation, the representation \[ DS(L) = \bigoplus_i L_i [-\delta_i] \] decomposes completely into a finite direct sum of irreducible representations. Here, for certain integers $\delta_i\in \mathbb Z$, the summands are attached to irreducible $Gl(n\! -\! 1 | n\! -\! 1)$-modules $L_i\in {{\mathcal R}}_{n-1}$, where $L_i [-\delta_i]$ denotes the module $\Pi^{\delta_i}(L_i) \in T_{n-1}$ concentrated in degree $\delta_i$ with respect to the $\mathbb Z$-graduation of $DS(L)$. If we ignore the $\mathbb Z$-graduation, the module $DS(L) \in T_{n-1}$ is always semisimple and multiplicity free for irreducible $L$. This makes the main theorem an effective tool to reduce questions about tensor products or superdimensions to lower rank cases in the absence of any known branching laws.
To analyse the ${\mathbf{Z}}$-graded object $DS(L)$ in more detail, we can assume that $L$ is a representation in the principal block containing the trivial representation (the \textit{maximally atypical} case). In fact, one can inductively reduce the general case to this special case. The irreducible maximal atypical representations $L\in {{\mathcal R}_n}$ can be described in different ways. For the moment it may be sufficient that up to isomorphism they uniquely correspond to spaced forests of rank $n$ in a natural way. By definition, such spaced forests ${{\mathcal F}}$ are defined by data $$(d_0,{{\mathcal T}}_1,d_1,{{\mathcal T}}_2, \cdots, d_{k-1},{{\mathcal T}}_k)$$ where the ${\mathcal T}_i$ for $i=1,...,k$ are rooted planar trees positioned on points of the numberline from left to right. The integer $d_0$ specifies the absolute position of the leftmost tree ${{\mathcal T}}_1$ and the natural numbers $d_{i}$ for $i=1,...,k-1$ indicate the distances between the positions of the trees ${{\mathcal T}}_{i}$ and ${{\mathcal T}}_{i+1}$. Here we allow $d_i=0$, i.e. some trees may be positioned at the same point of the numberline. The absolute positions $\delta_i = \sum_{j<i} d_j \in\mathbb Z$ of the planar trees ${\mathcal T}_i$ therefore satisfy $$\delta_1 \leq \delta_2 \leq \cdots \leq \delta_k\ .$$ In particular, $\delta_1$ describes the absolute position of the leftmost tree ${{\mathcal T}}_1$ of the forest and $\delta_k$ describes the absolute position of the rightmost tree ${\mathcal T}_k$ of this forest. Each tree ${{\mathcal T}}_i$ is a planar tree with say $r_i$ nodes, among which is the distinguished node defined by the root of the tree. By definition, the rank of the forest ${{\mathcal F}}$ is the sum $\sum_{i=1}^k r_i$ of the nodes of all trees. Since in the equivalence above the rank $n$ is fixed, only forests with at most $k\leq n$ trees occur. This being said, we are now able to describe the summands of the decomposition of $DS(L)$ mentioned above.
For simplicity, we still assume $L$ to be maximal atypical. If $L$ corresponds to the spaced forest with trees ${{\mathcal T}_1}, {{\mathcal T}_2},...,{{\mathcal T}}_k$ in the sense above with the positions at $\delta_1,...,\delta_k$, then $DS(L)$ has precisely $k$ irreducible constituents $L_i[-\delta_i]$ for $i=1,...,k$, so that $L_i$ corresponds to the spaced forest ${\mathcal T}_1, ... ,{\mathcal T}_{i-1}, \partial{{\mathcal T}}_i, ... ,{{\mathcal T}}_k$ of rank $n-1$, where $\partial{{\mathcal T}}_i$ denotes the forest of planar trees obtained from ${\mathcal T}_i$ by removing its root. The trees are now at the new positions $\delta_1-1,...,\delta_{i-1}-1,\delta_i,...,\delta_i,\delta_{i+1}+1,...,\delta_k +1$, where we use the convention that $\delta_i$ denotes the common position of all the trees in $\partial {\mathcal T}_i$. In the special case where ${{\mathcal T}}_i$ has only one node, $\partial {{\mathcal T}}_i$ is not defined and will be discarded (together with $\delta_i$). In other words, in this case the new spaced forest has only $k-1$ trees. This description of the ${\mathbf{Z}}$-graded object $DS(L)$ follows from the results in sections \ref{duals} - \ref{koh3}. We introduce spaced forests in section \ref{duals}, where we describe the dual of an irreducible representation. The ${\mathbf{Z}}$-grading of $DS(L)$ for maximally atypical $L$ is then obtained in proposition \ref{hproof} and in the general case in proposition \ref{hproof-2}. These results follow from the main theorem and its proof by careful bookkeeping, but they are considerably stronger and in particular incorporate theorem \ref{mainthm} as a special case. We show $DS_{n,0}(L) \cong \bigoplus_\ell \omega_{n,0}^\ell(L)[-\ell]$ for irreducible maximal atypical representations $L$.
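The combinatorial rule just described is completely explicit and can be carried out mechanically. In the sketch below (our encoding, not the paper's), a spaced forest is a list of pairs $(\delta_i, {\mathcal T}_i)$ sorted by position, with each planar tree encoded as the list of the subtrees of its root; the $i$-th summand is obtained by replacing ${\mathcal T}_i$ by $\partial{\mathcal T}_i$ at position $\delta_i$ and shifting the trees to its left by $-1$ and those to its right by $+1$:

```python
def ds_constituents(forest):
    """forest: list of (position, tree) pairs, sorted by position;
    tree: list of child subtrees (a leaf is the empty list).
    Returns one spaced forest per irreducible summand L_i[-delta_i]."""
    summands = []
    for i, (pos, tree) in enumerate(forest):
        left = [(p - 1, t) for p, t in forest[:i]]
        # removing the root of T_i leaves the forest of its subtrees,
        # all placed at the common position pos; for a one-node tree
        # this is empty, so the new forest has k-1 trees
        middle = [(pos, child) for child in tree]
        right = [(p + 1, t) for p, t in forest[i + 1:]]
        summands.append((pos, left + middle + right))
    return summands
```

For a forest with a one-node tree at position 0 and a two-node chain at position 2, one obtains two constituents, each a spaced forest of rank one less.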
From this, as an application of the main theorem, we obtain in theorem \ref{thm:forest-formula} a nice explicit formula for the Laurent polynomial (Hilbert polynomial) \[ \sum_{\ell \in {\mathbf{Z}}} sdim(\omega_{n,0}^{\ell}(L)) \cdot t^{\ell} \] attached to the Dirac cohomology tensor functors $$\omega_{n,0}^\ell: T_n \to T_0 \ $$ in the case of an irreducible maximal atypical representation $L$. As already mentioned, the main theorem does not require $L$ to be in the principal block. Applying $DS$ repeatedly to an irreducible representation $L = L(\lambda)$ of atypicality $i$ we obtain, after $i$ steps, an isotypical typical representation $m(\lambda) L^{core}$ in $T_{n-i}$, and $L^{core}$ only depends on the block of $L$ (section \ref{sec:main}). We derive a closed formula for the multiplicity $m(\lambda)$ in section \ref{sec:main}. The multiplicity $m(\lambda)$ can be expressed as \[ m(\lambda) = \frac{|\mathcal{F}(\lambda)|!}{\mathcal{F}(\lambda)!} \] where $\mathcal{F}(\lambda)$ is the spaced forest associated to $L(\lambda)$, $|\mathcal{F}(\lambda)|$ is the number of its nodes and $\mathcal{F}(\lambda)!$ is the forest factorial (section \ref{sec:main}). This not only implies that the so-called modified superdimension of $L$ does not vanish (i.e. the \textit{generalized Kac-Wakimoto conjecture}), but moreover gives a closed formula for it. The main theorem has a number of other useful applications, and we refer the reader to the list given after theorem \ref{mainthm}. The proof of the main theorem occupies the entire second part. It builds on an involved induction using translation functors, carried out in sections \ref{sec:loewy-length} - \ref{sec:moves}, and is reduced to the case of ground states; these are rather specific irreducible modules in a block. For instance, ground states of the principal block are powers of the Berezin determinant.
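The multiplicity formula above can be evaluated mechanically, assuming (our reading; the precise definition is in section \ref{sec:main}) that the forest factorial $\mathcal{F}(\lambda)!$ is, as usual for trees, the product over all nodes of the number of nodes of the subtree rooted at that node:

```python
from math import factorial

def size(tree):
    # a tree is the list of the subtrees of its root; a leaf is []
    return 1 + sum(size(child) for child in tree)

def forest_factorial(forest):
    # product over all nodes of the size of the subtree rooted there;
    # the children of each root again form a forest
    prod = 1
    for tree in forest:
        prod *= size(tree) * forest_factorial(tree)
    return prod

def multiplicity(forest):
    n = sum(size(tree) for tree in forest)
    return factorial(n) // forest_factorial(forest)
```

With this convention a single chain gives multiplicity 1, while two one-node trees give multiplicity 2.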
Then, for a block of atypicality $k < n$, we prove in section \ref{stable0} that every ground state is a Berezin twist of a mixed tensor, an irreducible direct summand in an iterated tensor $X^{\otimes r} \otimes (X^{\vee})^{\otimes s}$ where $X$ denotes the standard representation of $Gl(n|n)$. It is easy to verify directly that the main theorem holds for Berezin powers and mixed tensors. In section \ref{sec:loewy-length} we study the Loewy structure of translation functors applied to irreducible representations and their behavior under $DS$. We also explain why we can restrict to the maximally atypical case for the proof of the main theorem. In section \ref{sec:inductive} we prove both parts of the main theorem (semisimplicity and determination of the constituents) under certain assumptions on translation functors, which are verified in section \ref{sec:moves}. In section \ref{sec:chevalley-eilenberg} we discuss the cohomology ring $H_{DS_n}^{\bullet}(V({\mathbf{1}}))$ for the tensor functor $DS_{n,0}$. Although the description of the composition factors of an arbitrary Kac module $V(\lambda)$ is much more complicated than that of $V({\mathbf{1}})$, we show in lemma \ref{lem:kac=kac} that there is an isomorphism \[ H^{\bullet}_{DS_n}(V({\mathbf{1}})) \cong H_{DS_n}^{\bullet + deg(\rho)} (V(\lambda)) \] of $I$-modules. In fact the cohomology ring of $V({\mathbf{1}})$ can be identified with the Lie algebra homology ring $H_{\bullet}(\mathfrak{gl}(n))$ and defines an exterior algebra $I$ on primitive elements $f_1,f_3, \ldots, f_{2n-1}$, so that $I$ acts on the graded cohomology $H^\bullet_{DS_n}$ of finite dimensional ${\mathfrak g}$-modules. We also discuss the relationship between the cohomology of a Kac module and its irreducible quotient. We show in theorem \ref{cohom-proj} that the induced homomorphism \[ H^{\nu}_{DS_n}(pr): H_{DS_n}^{\nu}(V(\lambda)) \to H_{DS_n}^{\nu}(L(\lambda)) \] is an isomorphism in the top degree and trivial in all lower degrees.
In section \ref{sec:primitive} we describe the elements of $I \cong H^{\bullet}_{DS_n}(V({\mathbf{1}}))$ in terms of the representation theory of the superlinear group $Gl(n\vert n)$. Since the image of an irreducible representation under $DS$ is therefore understood, it is natural to look at the image $DS(I)$ of an indecomposable representation $I$. The kernel of $DS$ is the tensor ideal of representations with a filtration by anti-Kac modules, by results in section \ref{sec:support}. If $R(\lambda)$ is a mixed tensor we can easily compute $DS(R(\lambda))$. In other cases it is rather complicated to determine $DS$. As an example for the importance of this problem, consider the decomposition of the tensor product of two irreducible representations, $L_1 \otimes L_2 = \bigoplus_i I_i$, into indecomposable summands in ${\mathcal R}_n$. The decomposition of $DS(L_1)$ and $DS(L_2)$ gives estimates on the number of possible direct summands, but these are rather weak unless something is known about $DS(I_i)$. For an easy example of the use of the cohomological tensor functors in this setting see \cite{Heidersdorf-Weissauer-gl-2-2}. In the last sections \ref{kac-module-of-one} - \ref{hooks} we give a cohomological criterion \ref{splitting1} for an indecomposable representation to be equal to the trivial representation. We call an epimorphism $q: V \to W$ strict if the induced morphism $\omega(q): \omega(V) \to \omega(W)$ for the tensor functor $\omega = \omega_{n,0}:T_n \to svec_k$ is surjective. We prove in corollary \ref{splitting1} that if $Z$ is an indecomposable module with cosocle ${\mathbf{1}}$ such that the quotient map $q: Z \to {\mathbf{1}}$ is strict, then $Z \simeq {\mathbf{1}}$. Any such representation $Z$ contains extensions of the trivial representation ${\mathbf{1}}$ with the other irreducible constituents in the second upper Loewy layer.
This leads us to study the cohomology $H^i$ of extensions of the trivial representation \[ \xymatrix{ 0 \ar[r] & S_{\nu} \ar[r] & V \ar[r]^{q_V} & {\mathbf{1}} \ar[r] & 0} \] for irreducible representations $S_{\nu}$. We show in the key lemma \ref{trivialextension} that in this case the map $\omega^0(q_V)$ vanishes. This is a contradiction to our analysis in section \ref{strictmorphisms} if we suppose that $Z$ is not irreducible. Most of the results in this article can be rephrased for representations of the supergroup $Gl(m|n)$ where $m \neq n$. This will be discussed elsewhere. \section{ The superlinear groups}\label{2} Let $k$ be an algebraically closed field of characteristic zero. A super vectorspace $V$ over $k$ is a $\mathbb Z/2\mathbb Z$-graded $k$-vectorspace $V=V_{\overline 0} \oplus V_{\overline 1}$. Its superdimension is $sdim(V)= \dim(V_{\overline 0})\! -\! \dim(V_{\overline 1})$. The parity shift functor $\Pi$ on the category of super vectorspaces over $k$ is defined by $\Pi(V)_{\overline 0} = V_{\overline 1}$ and $\Pi(V)_{\overline 1} =V_{\overline 0}$ and the parity endomorphism of $V$ is $p_V=id_{V_{\overline 0}}\oplus -id_{V_{\overline 1}}$ in $End_k(V)$. {\it Conventions on gradings}. For $\mathbb Z$-graded object $M=\bigoplus_i M_i$ with objects $M_i$ in an additive category ${\mathcal C}$ one has the shifted $\mathbb Z$-graded objects $M\langle j\rangle$ defined by $(M\langle j\rangle)_i = M_{i+j}$. If ${\mathcal C}$ carries a super structure defined by a functor $\Pi: {\mathcal C} \to {\mathcal C}$ such that $\Pi \circ \Pi$ is the identity functor, we mainly use the $\mathbb Z$-graded objects $M[j]$ defined by $(M[j])_i := \Pi^j(M_{i+j})$. Considering objects $L$ in ${\mathcal C}$ as graded objects concentrated in one degree, we often consider the $\mathbb Z$-graded objects $L[-\ell]$ concentrated in degree $\ell$. 
In this context, forgetting the $\mathbb Z$-grading of $L[\ell]$ for $L\in {\mathcal C}$ and $\ell\in \mathbb Z$ gives the object $\Pi^\ell(L)$ in ${\mathcal C}$. {\it The categories $F$ and $T$}. Let ${\mathfrak g}={\mathfrak{gl}}(m\vert n) = {\mathfrak g}_{\overline{0}} \oplus {\mathfrak g}_{\overline{1}}$ be the general Lie superalgebra. The even part ${\mathfrak g}_{\overline{0}} = {\mathfrak{gl}}(m) \oplus {\mathfrak{gl}}(n)$ of ${\mathfrak{gl}}(m\vert n)$ can be considered as the Lie algebra of the classical subgroup $G_{\overline 0}=Gl(m)\times Gl(n)$ in $G=Gl(m\vert n)$. By definition, a finite-dimensional representation $\rho$ of $\mathfrak{gl}(m\vert n)$ defines a representation $\rho$ of $Gl(m \vert n)$ if its restriction to ${\mathfrak g}_{\overline{0}}$ comes from an algebraic representation of $G_{\overline 0}$, also denoted $\rho$. For the linear supergroup $G=Gl(m\vert n)$ over $k$ let $F$ be the category of the super representations $\rho$ of $Gl(m\vert n)$ on finite dimensional super vectorspaces over $k$. If $(V,\rho)$ is in $F$, so is $\Pi(V,\rho)$. The morphisms in the category $F$ are the $G$-linear maps $f:V \to W$ between super representations, where we allow even and odd morphisms with respect to the gradings on $V$ and $W$, i.e. morphisms with $f \circ p_V =\pm p_W \circ f$. For $M,N \in F$ we have $Hom_F(M,N) = Hom_F(M,N)_{\overline 0} \oplus Hom_F(M,N)_{\overline 1}$, where $Hom_F(M,N)_{\overline 0}$ are the even morphisms. Let $T=sRep_\Lambda(G)$ be the subcategory of $F$ with the same objects as $F$ and $Hom_T(M,N)=Hom_F(M,N)_{\overline 0}$. Then $T$ is an abelian category, whereas $F$ is not. {\it The category ${{\mathcal R}}$}. Fix the morphism $\varepsilon: \mathbb Z/2\mathbb Z \to G_{\overline 0}=Gl(m)\times Gl(n)$ which maps $-1$ to the element $diag(E_m,-E_n)\in Gl(m)\times Gl(n)$, denoted $\epsilon_{mn}$. We write $\epsilon_n = \epsilon_{nn}$.
Notice that $Ad(\epsilon_{mn})$ induces the parity morphism on the Lie superalgebra ${\mathfrak{gl}}(m|n)$ of $G$. We define the abelian subcategory ${\mathcal R} = sRep(G,\varepsilon)$ of $T$ as the full subcategory of all objects $(V,\rho)$ in $T$ with the property $ p_V = \rho(\epsilon_{mn})$; here $\rho$ denotes the underlying homomorphism $\rho: Gl(m)\times Gl(n) \to Gl(V)$ of algebraic groups over $k$. The subcategory ${{\mathcal R}}$ is stable under the dualities ${}^\vee$ and $^*$. For $G=Gl(n\vert n)$ we usually write $T_n$ instead of $T$, and ${{\mathcal R}}_n$ instead of ${\mathcal R}$, to indicate the dependency on $n$. {\it The duality $*$}. The Lie superalgebra ${\mathfrak g}=\mathfrak{gl}(m\vert n)$ has a consistent \cite{Kac-Rep} $\mathbb Z$-grading ${\mathfrak g} = {\mathfrak g}_{(-1)} \oplus {\mathfrak g}_{(0)} \oplus {\mathfrak g}_{(1)}$, where ${\mathfrak g}_{\overline{0}} = {\mathfrak g}_{(0)}$ and where ${\mathfrak g}_{\overline{1}} = {\mathfrak g}_{(-1)} \oplus {\mathfrak g}_{(1)}$, with ${\mathfrak g}_{(1)}$ given by the upper triangular block matrices and ${\mathfrak g}_{(-1)}$ by the lower triangular block matrices. The supertranspose $x^T$ (see \cite{Scheunert}, (3.35) and (4.14)) of a graded endomorphism $x\in {\mathit{End}}(k^{m\vert n})$ is defined by \[ x=\begin{pmatrix} m_1 & m_2 \\ m_3 & m_4 \end{pmatrix} \ \mapsto \ x^T = \begin{pmatrix} m_1^t & -m_3^t \\ m_2^t & m_4^t \end{pmatrix}, \ \ x \in {\mathfrak g} \ \] where $m_i^t$ denotes the ordinary transpose of the matrices $m_i$. If we identify ${\mathfrak g}$ and $End(k^{m\vert n})$, then $\tau(x)= - x^T$ defines an automorphism of the Lie superalgebra ${\mathfrak g}$ such that $\tau({\mathfrak g}_{(i)}) = {\mathfrak g}_{(-i)}$ holds for $i=-1,0,1$.
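That $\tau$ is indeed an automorphism can be verified in one line; we include this elementary check for the reader's convenience. Writing $|x|$ for the parity of a homogeneous element $x$, the supertranspose satisfies the sign rule $(xy)^T = (-1)^{|x||y|}\, y^T x^T$, and hence $$ \tau([x,y]) \ = \ -\bigl( xy - (-1)^{|x||y|}\, yx \bigr)^T \ = \ x^T y^T - (-1)^{|x||y|}\, y^T x^T \ = \ [\tau(x),\tau(y)] $$ for homogeneous $x,y \in {\mathfrak g}$.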
For a representation $M=(V,\rho)$ in $T_n$ and homogeneous $x$ in ${\mathfrak g}$ the Tannaka dual representation $M^\vee=(V^\vee,\rho^\vee)$ is the representation $x\mapsto - \rho(x)^T$ on $V$, using the supertranspose $\rho(x)^T$ of $\rho(x)$ in $End(V)$. Finally we define the representation $M^* =(V^{\vee},\rho^\vee \circ \tau) $, where $\tau(x)=-x^T$ is the automorphism of ${\mathfrak g}$ defined by the supertranspose on ${\mathfrak g}$. See also \cite{BKN-complexity}, 3.4, which uses a different convention. $V \in {{\mathcal R}}_n$ (see below) implies $V^*\in {{\mathcal R}}_n$ by \cite{Brundan-Kazhdan}, lemma 4.43. Furthermore $V^*\cong V$ holds for simple and for projective objects $V$ of $T_n$. Also $V^*\vert_{G_{\overline 0}} \cong V\vert_{G_{\overline 0}}$ for all $V$ in $T_n$. Notice that both $\vee$ and $*$ define contravariant functors on $T_n$. {\it Weights}. Consider the standard Borel subalgebra $\mathfrak{b}$ of upper triangular matrices in ${\mathfrak g}$ and its unipotent radical $\mathfrak{u}$. The basis $\Delta$ of positive roots associated to $\mathfrak{b}$ is given by the basis of the positive roots associated to $\mathfrak{b} \cap {\mathfrak g}_{\overline 0}$ for the Lie algebra ${\mathfrak g}_{\overline 0}$ and a single simple odd root with root vector $x$, whose weight will be called $\mu$. If we denote by $e_{i,i}$, $i=1,\ldots,2n$, the linear form which sends a diagonal element $(t_1,\ldots,t_{2n})$ to $t_i$, then the simple roots in this basis are given by the set $\{e_{1,1} - e_{2,2}, \ldots, e_{2n-1,2n-1} - e_{2n,2n} \}$ with $\mu = e_{n,n} - e_{n+1,n+1}$. The diagonal elements $t=diag(t_1,...,t_n,t_{n+1},...,t_{2n})$ in $G_{\overline 0}$ act by semisimple matrices on $V$ for any representation $(V,\rho)$ in $T_n$. Hence $V$ decomposes into a direct sum of eigenspaces $V = \bigoplus_\lambda V_\lambda$ for certain characters $t^\lambda= t_1^{\lambda_1} \cdots t_n^{\lambda_n} (t_{n+1})^{\lambda_{n+1}} \cdots (t_{2n})^{\lambda_{2n}}$.
Then write $\lambda=(\lambda_1,...,\lambda_n ; \lambda_{n+1}, \cdots, \lambda_{2n})$. A {\it primitive} weight vector $v$ (of weight $\lambda$) in a representation $(V,\rho)$ of ${\mathfrak g}$ is a nonzero vector in $V$ with the property $\rho(X)v=0$ for $X\in \mathfrak{u}$ and $\rho(t)v = t^\lambda v$. An irreducible representation $L$ has a unique primitive weight vector (up to a scalar), the highest weight vector. Its weight $\lambda$ uniquely determines the irreducible module $L$ up to isomorphism in ${\mathcal R}_n$. Therefore we write $L=L(\lambda)$. {\it Kac modules}. We put $\mathfrak{p}_{\pm} = {\mathfrak g}_{(0)} \oplus {\mathfrak g}_{(\pm1)}$. We consider a simple ${\mathfrak g}_{(0)}$-module as a $\mathfrak{p}_{\pm}$-module in which ${\mathfrak g}_{(1)}$ respectively ${\mathfrak g}_{(-1)}$ acts trivially. We then define the Kac module $V(\lambda)$ and the AntiKac module $V'(\lambda)$ via \[ V(\lambda) = Ind_{\mathfrak{p}_+}^{{\mathfrak g}} L_0(\lambda) \ , \ V'(\lambda) = Ind_{\mathfrak{p}_-}^{{\mathfrak g}} L_0(\lambda) \] where $L_0(\lambda)$ is the simple ${\mathfrak g}_{(0)}$-module with highest weight $\lambda$. The Kac modules are universal highest weight modules. $V(\lambda)$ has a unique maximal submodule $I(\lambda)$ and $L(\lambda) = V(\lambda)/I(\lambda)$ \cite{Kac-Rep}, prop. 2.4. {\it The Berezin}. The Berezin determinant of the supergroup $G=G_n$ defines a one dimensional representation $Ber=Ber_n$. Its weight is given by $\lambda_i=1$ and $\lambda_{n+i} =-1$ for $i=1,\ldots,n$. The representation space of $Ber_n$ has the superparity $(-1)^n$. We denote the trivial representation $Ber^0$ by ${\mathbf{1}}$. {\it Ground states}. Each $i$-atypical block of ${\mathcal R}_n$ contains irreducible representations $L(\lambda)$ of the form $$ \lambda = (\lambda_1,...,\lambda_{n-i},\lambda_n,...,\lambda_n\ ;\ -\lambda_n,...,-\lambda_n,\lambda_{n+1+i}, ..., \lambda_{2n}) $$ with $\lambda_n \leq \min(\lambda_{n-i}, - \lambda_{n+1+i})$.
We call these the ground states of the block. They will play a major role in our computation of $DS(L)$ in theorem \ref{mainthm}. {\it Equivalence}. Two irreducible representations $M,N$ in $T$ are said to be equivalent $M \sim N$, if either $M \cong Ber^r \otimes N$ or $M^\vee \cong Ber^r \otimes N$ holds for some $r\in \mathbb Z$. This obviously defines an equivalence relation on the set of isomorphism classes of irreducible representations of $T$. A self-equivalence of $M$ is given by an isomorphism $f: M \cong Ber^r \otimes M$ (which implies $r=0$ and that $f$ is a scalar multiple of the identity) respectively an isomorphism $f: M^\vee \cong Ber^r \otimes M$. If it exists, such an isomorphism uniquely determines $r$ and is unique up to a scalar and we say $M$ is of type (SD). Otherwise we say $M$ is of type (NSD). {\it Negligible objects}. An object $M\in T_n$ is called negligible if it is the direct sum of indecomposable objects $M_i$ in $T_n$ with superdimensions $sdim(M_i)=0$. The tensor ideal of negligible objects is denoted ${\mathcal N}$ or ${\mathcal N}_n$. \section{ The Duflo-Serganova functor $DS$}\label{DF} {\it An embedding}. Fix some $1\leq m\leq n$. We view $G_{n-m}= Gl(n-m\vert n-m)$ as an \lq{outer block matrix}\rq\ in $G_n=Gl(n\vert n)$ and $G_1$ as the \lq{inner block matrix}\rq\ as below. Here $G_0$ is the trivial group. We fix some invertible $m\times m$-matrix $J$ with the property $J=J^t = J^{-1}$. For example take $J$ to be the identity matrix $E$, or the matrix with nonzero entries equal to 1 only on the antidiagonal. We furthermore fix the embedding $$ \varphi_{n,m}: G_{n-m} \times G_1 \hookrightarrow G_n \ $$ defined by $$\begin{pmatrix} A & B \\ C & D \end{pmatrix} \times \begin{pmatrix} a & b \\ c & d\end{pmatrix} \mapsto \begin{pmatrix} A & 0 & 0 & B \\ 0 & a E & b J & 0\\ 0 & c J & d E & 0\\ C & 0 & 0 & D \end{pmatrix}$$ We use this embedding to identify elements in $G_{n-m}$ and $G_1$ with elements in $G_n$.
In this sense $\epsilon_n = \epsilon_{n-m} \epsilon_1$ holds in $G_n$, for the corresponding elements $\epsilon_{n-m}$ and $\epsilon_1$ in $G_{n-m}$ resp. $G_1$, defined in section \ref{2}. {\it Two functors}. One has a functor $(V,\rho) \mapsto V^+ =\{ v \in V\ \vert \ \rho(\epsilon_1)(v)=v \}$ $$ {}^+: {{\mathcal R}}_n \to {{\mathcal R}}_{n-m}$$ where $V^+$ is considered as a $G_{n-m}$-module using $\rho(\epsilon_1) \rho(g) = \rho(g) \rho(\epsilon_1)$ for $g\in G_{n-m}$. Indeed $Ad(\epsilon_1)(g)=g$ holds for all $g\in G_{n-m}$. The grading on $V$ induces a grading on $V^+$ by $(V^+)_{\overline 0}= V_{\overline 0} \cap V^+$ and $(V^+)_{\overline 1}= V_{\overline 1} \cap V^+$. For this grading the decomposition $V^+ = (V^+)_{\overline 0} \oplus (V^+)_{\overline 1}$ is induced by the parity morphism $\rho(\epsilon_n)$ or equivalently $\rho(\epsilon_{n-m})$. With this grading on $V^+$ the restriction of $\rho$ to $G_{n-m}$ preserves $V^+$ and defines a representation $(V^+,\rho)$ of $G_{n-m}$ in ${{\mathcal R}}_{n-m}$. Similarly define $V^- =\{ v \in V\ \vert \ \rho(\epsilon_1)(v)=-v \}$. With the grading induced from $V=V_{\overline 0}\oplus V_{\overline 1}$ this defines a representation $V^-$ of $G_{n-m}$ in $\Pi {{\mathcal R}}_{n-m}$. Obviously $$ (V,\rho)\vert_{G_{n-m}} \ =\ V^+ \ \oplus \ V^- \ .$$ {\it The exact hexagon}. Fix the following element $x\in {\mathfrak g}_n$ $$x = \begin{pmatrix} 0 & y \\ 0 & 0 \end{pmatrix} \in {\mathfrak g}_{n} \ \text{ for } \ y = \begin{pmatrix} 0 & 0 & \ldots & 0 \\ 0 & 0 & \ldots & 0 \\ \ldots & & \ldots & \\ J & 0 & 0 & 0 \\ \end{pmatrix} $$ for the fixed invertible $m\times m$-matrix $J$. Since $x$ is an odd element with $[x,x]=0$, we get $$2 \cdot \rho(x)^2 =[\rho(x),\rho(x)] =\rho([x,x]) =0 $$ for any representation $(V,\rho)$ of $G_n$ in ${{\mathcal R}}_n$. Notice $d= \rho(x)$ supercommutes with $\rho(G_{n-m})$.
Furthermore $\rho(x): V^{\pm} \to V^{\mp}$ holds as a $k$-linear map, an immediate consequence of $d\rho(\varepsilon_{1}) = - \rho(\varepsilon_{1})d$, i.e. of $Ad(\varepsilon_1)(x)=-x$. Since $\rho(x) \in Hom_F(V,V)_{\overline 1}$ is an {\it odd} morphism, $\rho(x)$ induces the following {\it even} morphisms (morphisms in ${{\mathcal R}}_{n-m}$) $$ \rho(x): V^+ \to \Pi(V^-) \quad \text{ and } \quad \rho(x): \Pi(V^-) \to V^+ \ .$$ The $k$-linear map $\partial=\rho(x): V\to V$ is a differential and commutes with the action of $G_{n-m}$ on $(V,\rho)$. Therefore $\partial$ defines a complex in ${{\mathcal R}}_{n-m}$ $$ \xymatrix{ \ar[r]^-{\partial} & V^+ \ar[r]^-{\partial} & \Pi(V^-) \ar[r]^-{\partial} & V^+ \ar[r]^-{\partial} & \cdots } $$ Since this complex is periodic, it has essentially only two cohomology groups denoted $H^+(V,\rho)$ and $H^-(V,\rho)$ in the following. This defines two functors $(V,\rho) \mapsto D_{n,n-m}^\pm(V,\rho)=H^{\pm}(V,\rho)$ $$ \fbox{$ D_{n,n-m}^\pm: {{\mathcal R}}_n \to {{\mathcal R}}_{n-m} $} \ .$$ It is obvious that an exact sequence \[ \xymatrix{ 0 \ar[r] & A \ar[r]^\alpha & B \ar[r]^\beta & C \ar[r] & 0}\] in ${{\mathcal R}}_n$ gives rise to an exact sequence of complexes in ${{\mathcal R}}_{n-m}$. Hence \begin{lem} \label{hex} The long exact cohomology sequence defines an exact hexagon in ${{\mathcal R}}_{n-m}$ \[ \xymatrix{ & H^+ (A) \ar[r]^{H^+(\alpha)} & H^+(B) \ar[dr]^{H^+(\beta)} & \\ H^-(C) \ar[ur]^\delta & & & H^+(C) \ar[dl]^{\delta} \\ & H^-(B) \ar[ul]^{H^-(\beta)} & H^-(A) \ar[l]^{H^-(\alpha)} & }\] \end{lem} \noindent {\it Alternative point of view}. For the categories $T=T_n$ resp. $T_{n-m}$ (for the groups $G_n$ resp. $G_{n-m}$) consider the tensor functor of Duflo and Serganova in \cite{Duflo-Serganova} $$ DS_{n,n-m}: T_n \to T_{n-m} $$ defined by $DS_{n,n-m}(V,\rho)= V_x:=Kern(\rho(x))/Im(\rho(x))$.
For $(V,\rho)\in {\mathcal R}_n$ we obtain $$ H^+(V,\rho) \oplus \Pi (H^-(V,\rho)) = DS_{n,n-m}(V) \ .$$ Indeed, the left side is $DS_{n,n-m}(V)=V_x$ for the $k$-linear map $\partial=\rho(x)$ on $V=V^+ \oplus V^-$. Hence $H^+$ is the functor obtained by composing the tensor functor $$ DS_{n,n-m}: {{\mathcal R}}_n \to T_{n-m} $$ with the functor $$ T_{n-m} \to {{\mathcal R}}_{n-m} $$ that projects the abelian category $T_{n-m}$ onto ${{\mathcal R}}_{n-m}$ using \begin{lem} \label{thm:decomposition} Every object $M \in T_n$ decomposes uniquely as $M = M_0 \oplus M_1$ with $M_0 \in {{\mathcal R}}_n$ and $M_1 \in \Pi({{\mathcal R}}_n)$. This defines a block decomposition of the abelian category $$\fbox{$ T = {{\mathcal R}}_n \oplus \Pi ({{\mathcal R}}_n) $} \ .$$ \end{lem} {\it Proof}. For any $M, N \in {{\mathcal R}}_n$ the $\mathbb Z_2$-graded space $Ext_T^i (M,N)$ is concentrated in degree zero \cite{Brundan-Kazhdan}, Cor. 4.44. \qed {\it Tensor property}. As a graded module over $R=k[x]/x^2$ any representation $V$ decomposes into a direct sum of a trivial representation $T$ and copies of $R$ (ignoring shifts by $\Pi$), so that $DS_{n,n-m}(V)= T_x \oplus R_x = T$. To show that $DS_{n,n-m}$ is a tensor functor, it therefore suffices to show that $(R\otimes R)_x=0$, see also \cite{Serganova-kw}. For this we use that the underlying tensor product is the supertensor product. Indeed for $R=V_{\overline 0}\oplus V_{\overline 1}$ with $V_{\overline 0}=k\cdot 1$ and $V_{\overline 1}=k\cdot x$ we have $x\cdot 1=x$ and $x\cdot x=0$. The induced {\it superderivation} $d$ on $R\otimes R$ satisfies $d(1\otimes 1)=x\otimes 1+ 1\otimes x$, $d(x\otimes 1)= -x\otimes x$, $d(1\otimes x)=x\otimes x$ and $d(x\otimes x)=0$. Hence $Im(d)= Ker(d)= k\cdot (1\otimes x + x \otimes 1) \oplus k \cdot x \otimes x$ and therefore $(R\otimes R)_x=0$. \section{ Cohomology Functors}\label{sec:cohomology-functors} In this section we assume $V \in T_n$ and $m=1$. In the following let $DS$ be the functor $DS_{n,n-1}$ (for $J=1$).
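{\bf Example}. As an elementary consistency check (not needed in the sequel), take $n=1$, so that $DS = DS_{1,0}: T_1 \to T_0 = svec_k$, and let $V=k^{1\vert 1}$ be the standard representation of $G_1=Gl(1\vert 1)$ with $V_{\overline 0}=k\cdot e_1$ and $V_{\overline 1}=k\cdot e_2$. Then $\rho(x)(e_2)=e_1$ and $\rho(x)(e_1)=0$, hence $$ Kern(\rho(x)) \ = \ Im(\rho(x)) \ = \ k\cdot e_1 $$ and therefore $DS(V)=V_x=0$, or equivalently $H^+(V)=H^-(V)=0$. This is consistent with $sdim(V)=0$; in fact $V$ is a typical Kac module and hence projective.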
{\it Enriched weight structure}. The maximal torus of diagonal matrices in $G_n$ naturally acts on $DS(V)$ so that $DS(V)$ decomposes into weight spaces $DS(V) = \bigoplus_\lambda DS(V)_\lambda$ for $\lambda$ in the weight lattice $X(n)$ of ${\mathfrak g}_n$. Indeed for the weight decomposition $V= \bigoplus_\lambda V_\lambda$ every $v\in V$ has the form $v = \sum_\lambda v_\lambda$ for $v_\lambda\in V_\lambda$. Now $\partial v=0$ if and only if $\partial v_\lambda=0$ holds for all $\lambda$, since $\partial(V_\lambda) \subseteq V_{\lambda + \mu}$ for the odd simple weight $\mu$ (ignoring parities on $V$). Similarly $v = \partial w$ if and only if $v_\lambda = \partial w_\lambda$ for all $\lambda$, since we can always project on the weight eigenspaces. This trivial remark shows that $DS(V)$ naturally carries a weight decomposition with respect to the weight lattice $X(n)$ of ${\mathfrak g}_n$. The weight structure for ${\mathfrak g}_{n-1}$ is obtained by restriction. The kernel of the restriction $X(n) \to X(n-1)$ of weights, denoted by $$\lambda \mapsto \overline\lambda\ ,$$ consists of the multiples $\mathbb Z \cdot \mu$ of the odd simple root $\mu=e_{n,n} - e_{n+1,n+1}$. We may therefore view $DS(V)$ as endowed with the richer weight structure coming from the $G_n$-module $V$. This decomposition induces a natural decomposition of $DS(V)$ into eigenspaces $DS(V) = \bigoplus_\ell \ DS(V)_\ell$. To make this explicit, consider the torus of elements $diag(1,...,1,1;t^{-1},1,...,1)$ for $t\in k^*$, called the {\it small torus}. These elements commute with $G_{n-1}$ and their eigenvalue decomposition gives a decomposition $$ V = \bigoplus_{\ell\in\mathbb Z} \ V_{\ell} $$ into $G_{n-1}$-modules $V_{\ell}$. Here $V_\ell \subseteq V$ denotes the subspace defined by all vectors in $V$ on which the above elements of the small torus act by multiplication with $t^\ell$. Obviously $V_\ell=0$ for $\ell\notin [\ell_0, \ell_1]$ and suitable $\ell_0,\ell_1$.
For the odd morphism $\partial = \rho(x)$ the properties $\mu(diag(1,...,1,1;t^{-1},1,...,1))= t$ and $\partial(V_\lambda) \subseteq V_{\lambda+\mu}$ show that $$ \xymatrix{ \ar[r]^-\partial & \Pi(V_{2\ell - 1}) \ar[r]^-\partial & V_{2\ell} \ar[r]^-\partial & \Pi(V_{2\ell + 1}) \ar[r]^-\partial & V_{2\ell + 2} \ar[r]^-\partial & }$$ defines a complex. Its cohomology is denoted $ H^\ell(V)$. Obviously \[ \Pi^\ell(H^\ell(V)) = DS(V)_\ell \] and hence we obtain a decomposition of $DS(V,\rho)$ into a direct sum of $G_{n-1}$-modules $$ DS(V,\rho) \ = \ \bigoplus_{\ell \in\mathbb Z} \ \Pi^\ell(H^\ell(V)) \ .$$ If we want to emphasize the $\mathbb Z$-grading, we also write this in the form $$ \fbox{$ DS(V,\rho) \ = \ \bigoplus_{\ell \in\mathbb Z} \ H^\ell(V)[-\ell] $} \ .$$ We will calculate $DS(L) \in T_{n-1}$ for irreducible $L$ in theorem \ref{mainthm} and we will compute its $\mathbb Z$-grading in proposition \ref{hproof} and proposition \ref{hproof-2}. An exact sequence \[ \xymatrix{ 0 \ar[r] & A \ar[r]^\alpha & B \ar[r]^\beta & C \ar[r] & 0}\] in ${{\mathcal R}}_n$ then gives rise to a long exact sequence in ${{\mathcal R}}_{n-1}$ $$ \xymatrix@C=1.4em{\ar[r] & H^{\ell-1}(C) \ar[r] & H^{\ell}(A) \ar[r] & H^{\ell}(B) \ar[r] & H^{\ell}(C) \ar[r] & H^{\ell+1}(A) \ar[r] & . }$$ \begin{lem}\label{-ell} For $V$ in $T_n$ we have $H^\ell(Ber_n \otimes V) = Ber_{n-1} \otimes H^{\ell -1}(V)$. For the Tannaka dual $V^\vee$ of $V$, the isomorphism $H^\ell(V)^\vee \cong H^{-\ell}(V^\vee)$ holds for all $\ell\in\mathbb Z$ (isomorphisms of $G_{n-1}$-modules). \end{lem} {\bf Proof}. The first property follows from $DS(Ber_n)=Ber_{n-1}[-1]$ and the fact that $DS$ is a tensor functor. Furthermore $DS(V)^\vee \cong DS(V^\vee)$, since $DS$ is a tensor functor. Hence the second claim follows from $(V^\vee)_{-\ell} = (V_{\ell})^\vee$, since $\Pi^2$ is the identity and duality \lq{commutes}\rq\ with the parity shift $\Pi$.
\qed Note that for $V_\ell \in T_{n-1}$ the module $(V_\ell)^* \in T_{n-1}$ is isomorphic to $(V^*)_\ell$. Finally, for $(V,\rho)\in {\mathcal R}_n$ we get $ V^+ = \bigoplus_{\ell \in 2\mathbb Z} V_\ell$ and $\Pi(V^-) = \bigoplus_{\ell \in1+2\mathbb Z} V_\ell $. Hence we obtain the next lemma. \begin{lem} \label{+ell} For $V$ in ${\mathcal R}_n$ the following holds $$ \fbox{$ H^{+}(V) \ = \ \bigoplus_{\ell \in 2\mathbb Z} H^\ell(V) \quad , \quad H^-(V) = \bigoplus_{\ell \in1+2\mathbb Z} H^\ell(V). $} $$ \end{lem} \section{ Support varieties and the kernel of $DS$}\label{sec:support} We show that the kernel of $DS$ consists of the modules which have a filtration by AntiKac modules. {\it Support varieties}. We review results from \cite{BKN-1}, \cite{BKN-2} and \cite{BKN-complexity} on support varieties. Recall the decomposition ${\mathfrak g} = {\mathfrak g}_{(-1)}\oplus {\mathfrak g}_{(0)} \oplus {\mathfrak g}_{(1)}$. The support varieties are defined by \[ V_{{\mathfrak g}_{(\pm 1)}}(M) = \{ \xi \in {\mathfrak g}_{(\pm 1)} \ | \ M \text{ not projective as a } U(\langle \xi \rangle)- \text{module} \} \cup \{0\} \ . \] Notice that $\xi \in {\mathfrak g}_{(\pm 1)}$ generates an odd abelian Lie superalgebra $\langle\xi \rangle$ with $[\xi,\xi] = 0$, which up to isomorphism has only two indecomposable modules: the trivial module and its projective cover $U(\langle\xi \rangle)$. By \cite{BKN-1}, prop 6.3.1 \[ V_{{\mathfrak g}_{(\pm 1)}}(M \otimes N) = V_{{\mathfrak g}_{(\pm 1)}}(M) \cap V_{{\mathfrak g}_{(\pm 1)}}(N). \] The associated variety of Duflo and Serganova is defined as \[ X_M = \{ \xi \in X \ | \ M_\xi \neq 0 \} \] where $X$ is the cone $X = \{ \xi \in {\mathfrak g}_{\bar{1}} \ | [\xi,\xi] = 0 \}$. For $\xi \in X$ the condition $M_\xi \neq 0$ is equivalent by \cite{BKN-complexity}, 3.6.1, to the condition that $M$ is not projective as a $U(\langle \xi \rangle)$-module.
Hence $ X_M$ is the set of all $\xi \in X$ such that $M$ is not projective as a $U(\langle \xi \rangle)$-module together with $\xi=0$. Thus $$ V_{{\mathfrak g}_{(-1)}}(M) \cup V_{{\mathfrak g}_{(1)}}(M) \subseteq X_M \quad , \quad V_{{\mathfrak g}_{(\pm 1)}}(M) = X_M \cap {\mathfrak g}_{(\pm 1)} \ . $$ {\it Kac and anti-Kac objects}. We denote by ${\mathcal C}^+$ the tensor ideal of modules with a filtration by Kac modules in ${\mathcal R}_n$ and by ${\mathcal C}^-$ the tensor ideal of modules with a filtration by anti-Kac modules in ${\mathcal R}_n$ and quote from \cite{BKN-complexity}, thm 3.3.1, thm 3.3.2 $$ M \in {\mathcal C}^+ \Leftrightarrow V_{{\mathfrak g}_{(-1)}}(M) = 0 \quad , \quad M \in {\mathcal C}^- \Leftrightarrow V_{{\mathfrak g}_{(1)}}(M) = 0 \ .$$ Hence $M$ is projective if and only if $V_{{\mathfrak g}_{(1)}}(M)\! =\! V_{{\mathfrak g}_{(-1)}}(M)\! =\! 0$ holds. {\it Vanishing criterion}. For any $\xi \in X$ there exists $g \in Gl(n) \times Gl(n)$ and isotropic mutually orthogonal linearly independent roots $\alpha_1, \ldots, \alpha_m$ such that $Ad_g(\xi) = \xi_1 + \ldots + \xi_m$ with $\xi_i \in {\mathfrak g}_{\alpha_i}$. The number $m=r(\xi)$ is called the rank of $\xi$ \cite{Serganova-kw}. The orbits for the action of $Gl(n)\times Gl(n)$ on ${\mathfrak g}_{(1)}$ are \cite{BKN-complexity}, 3.8.1 \[ ({\mathfrak g}_{(1)})_m = \{ \xi \in {\mathfrak g}_{(1)} \ | \ r(\xi) = m\} \ \ \text{ for } \ \ 0 \leq m \leq n \ .\] By a minimal orbit for the adjoint action of $Gl(n) \times Gl(n)$ on ${\mathfrak g}_{(\pm 1)}$ we mean a minimal non-zero orbit with respect to the partial order given by containment in closures. The unique minimal orbit $({\mathfrak g}_{(1)})_1$ is the orbit of the element $x$ defined earlier. The situation is analogous for ${\mathfrak g}_{(-1)}$, where $\overline x=\tau(x)$ generates the corresponding minimal orbit.
A slight modification of \cite{BKN-complexity}, thm 3.7.1 and its proof gives \begin{thm} \label{kernel} For $\xi \in {\mathfrak g}_{(1)}$ and $M\in {\mathcal C}^-$ we have $M_\xi=0$. For $\xi \in {\mathfrak g}_{(-1)}$ and $M\in {\mathcal C}^+$ we have $M_\xi=0$. For $\xi=x$ we have $DS(M) = M_x = 0$ if and only if $M\in {\mathcal C}^-$ and $M_{\overline x}=0$ if and only if $M\in {\mathcal C}^+$. \end{thm} {\it Proof}. Let $M \in {\mathcal C}^-$. Then the characterization of anti-Kac objects above implies $V_{{\mathfrak g}_{(1)}}(M) = 0$. Hence $ \{\xi \in {\mathfrak g}_{(1)} \ | \ M_\xi \neq 0\} = 0$. Conversely assume $M_x = 0$. The support $V_{{\mathfrak g}_{(1)}}(M)$ is a closed $Gl(n) \times Gl(n)$-stable subvariety of ${\mathfrak g}_{(1)}$. If it is non-zero, it contains the closure of a non-zero orbit and hence contains the minimal non-zero orbit $({\mathfrak g}_{(1)})_1$. But this would imply $M_x \neq 0$, a contradiction. Hence $V_{{\mathfrak g}_{(1)}}(M) = 0$ and $M \in {\mathcal C}^-$. The case $\xi \in {\mathfrak g}_{(-1)}$ and $M \in {\mathcal C}^+$ is analogous, using $\overline x$ instead of $x$. \qed \begin{cor} \label{van} For our fixed $x \in ({\mathfrak g}_{(1)})_1$ \begin{enumerate} \item $M$ is projective if and only if $M_x = 0$ and $M_{\tau x} = 0$. \item $M$ is projective if and only if $M_x = 0$ and $M^*_x = 0$. \item If $M = M^*$, then $M$ is projective if and only if $M_x = 0$. \end{enumerate} \end{cor} {\it Proof}. $M_x = 0$ implies $V_{{\mathfrak g}_{(1)}}(M) = 0$ and $M_{\tau(x)} = 0$ implies $V_{{\mathfrak g}_{(-1)}}(M) = 0$, hence (1). Now (2) and (3) follow from \cite{BKN-complexity}, 3.4.1 using \[ V_{{\mathfrak g}_{(\pm 1)}}(M^*) = \tau( V_{{\mathfrak g}_{(\mp 1)}} (M)).\] \qed \section{ The tensor functor $D$} \label{DDirac} In this section we construct another tensor functor $H_D:T_n \to T_{n-1}$ which is defined as the cohomology of a complex given by a Dirac operator $D$. This tensor functor has the advantage that it is compatible with the twisted duality $^*$. In this section we assume $V \in T_n$.
For $t\in k^*$ the diagonal matrices $$diag(E_{n-1},t,t,E_{n-1}) \in G_{\overline 0}$$ define a one dimensional torus, the center of $G_1$; for this recall the embeddings $G_1=id \times G_1 \hookrightarrow G_{n-1}\times G_1 \hookrightarrow G_n$. The center of $G_1$ commutes with $G_{n-1} \times id \subset G_n$. Hence the center of $G_1$ naturally acts on $DS(V)$ in a semisimple way for any representation $(V,\rho) \in T$. Hence the underlying vectorspace $V$ decomposes into $H$-eigenspaces for $H=diag(0_{n-1},1,1,0_{n-1})$ in ${\mathfrak g}_n=Lie(G_n)$ which generates the Lie algebra of the torus. Let $x\in {\mathfrak g}_n$ be the fixed nilpotent element specified in section \ref{DF}. Let $\overline x = x^T$ denote the supertranspose of $x$. Now $Ad(\epsilon_1)(H)=H$ and $[H,x]=[H,\overline x]=0$ imply that the operators $\partial=\rho(x)$ and $\overline\partial= c \cdot \rho(\overline x)$ (for any $c \in k^*$) commute with $H$. Furthermore $[x,\overline x] = H$ for the odd elements $x$ and $\overline x$ implies $$ \partial \overline\partial + \overline\partial \partial = c \cdot \rho(H) \ .$$ Since $H$ commutes with $x$, the operator $\rho(H)$ acts on $V_x$. Since $H$ commutes with $\varepsilon_1$, the grading $V^\pm$ is compatible with taking invariants \[ V^H = \{ v \in V \ | \ \rho(H)v = 0 \}.\] Similarly we denote the space of coinvariants by $V_H$. On $V$ the odd operator $\overline\partial$ defines a homotopy of the complex $$ \xymatrix{ \ar[r]^-\partial & \Pi(V_{2\ell - 1}) \ar[r]^-\partial & V_{2\ell} \ar[dl]_-{\overline\partial}\ar[r]^-\partial & \Pi(V_{2\ell + 1}) \ar[dl]_-{\overline\partial}\ar[r]^-\partial & V_{2\ell + 2} \ar[dl]_-{\overline\partial}\ar[r]^-\partial & \cr \ar[r]^-\partial & \Pi(V_{2\ell - 1}) \ar[r]^-\partial & V_{2\ell} \ar[r]^-\partial & \Pi(V_{2\ell + 1}) \ar[r]^-\partial & V_{2\ell + 2} \ar[r]^-\partial & }$$ Hence $c \cdot \rho(H)$ is homotopic to zero.
In particular, the natural action of $\rho(H)$ on the cohomology modules $H^\ell(V)$ is trivial. Therefore \begin{lem}\label{homotopy} $\rho(H)$ acts trivially on the cohomology $DS(V)=V_x$. \end{lem} Since $H$ acts in a semisimple way, taking $H$-invariants $V \mapsto V^H$ is an exact functor and commutes with the cohomology functor $V\mapsto V_x$. Thus $$ DS(V) = M_x \quad \text{ for } \quad M = V^H \ $$ and similarly $H^\pm(V) = H^\pm(V^H)$ etc. Notice $(V^H)^\pm = (V^\pm)^H$. Since the operators $\partial$ and $\overline\partial$ commute with $H$, they preserve $M=V^H$ and anti-commute on $M$. In this way we obtain a double complex for $M=V^H$ defined by $$ \xymatrix{ \ar[r]^-{\overline\partial} & M^+ \ar[r]^-{\overline\partial} & \Pi(M^-) \ar[r]^-{\overline\partial} & \cr \ar[r]^-{\overline\partial} & \Pi(M^-) \ar[r]^-{\overline\partial} \ar[u]^-{\partial} & M^+ \ar[r]^-{\overline\partial} \ar[u]^-{\partial} & \cr \ar[r]^-{\overline\partial} & M^+ \ar[r]^-{\overline\partial} \ar[u]^-{\partial}& \Pi(M^-) \ar[r]^-{\overline\partial} \ar[u]^{\partial}&. } $$ {\it The Dirac operator}. This double complex is related to the complex $$ \xymatrix{ \cdots \ar[r]^-D & M^+ \ar[r]^-D & \Pi(M^-) \ar[r]^-D & M^+ \ar[r]^-D & \Pi(M^-) \ar[r]^-D & \cdots} $$ for $M = V^H$ attached to the Dirac operator $$ D = \partial + \overline\partial \ .$$ Since $M = M^+ \oplus \Pi(M^-)$, the two cohomology modules $H_D^+(V)$ and $H_D^-(V)$ of this periodic complex compute $$ H_D(V) = Kern(D: M \to M)/Im(D: M \to M) \ $$ in the sense that $$H_D(V)= H_D^+(V) \oplus \Pi(H_D^-(V))\ $$ gives the decomposition of $H_D(V)$ into its ${\mathcal R}_n$ and $\Pi({\mathcal R}_n)$-part. {\bf Remark}. Note that $D$ commutes with $\rho(H)$. Hence the operator $D$ respects the eigenspaces of $H$ on $V$. 
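{\bf Remark}. For $n=1$ (and $J=1$) the relations used above reduce to an explicit matrix computation, which we include for convenience. In this case $$ x = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \ , \quad \overline x = x^T = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \ , \quad H = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \ , $$ and since $x$ and $\overline x$ are odd $$ [x,\overline x] \ = \ x\overline x + \overline x x \ = \ \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \ = \ H \ ,$$ while $[H,x]=[H,\overline x]=0$ holds since $H$ is central in ${\mathfrak{gl}}(1\vert 1)$.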
Since $D^2 = \partial^2 + (\partial\overline \partial + \overline\partial \partial) + \overline\partial^2 = \partial\overline \partial + \overline\partial \partial = c \cdot \rho(H)$, we have $Kern(D:V\to V)=Kern(D:V^H \to V^H)$. However $D(V)$ is in general different from $D(V^H)$, although both spaces have the same intersection with $V^H$. \begin{lem} \label{*1} For $c = i$ there exist natural isomorphisms $H_D(V^*,\rho^*) \cong H_D(V,\rho)^*$, $H_D(V)^{\vee} \cong H_D(V^{\vee})$, $H_D^{\pm}(V^*) \cong H_D^{\pm}(V)^*$ and $H_D^{\pm}(V^{\vee}) \cong H_D^{\pm}(V)^{\vee}$ of $G_{n-1}$-modules. For short exact sequences in ${\mathcal R}_n$ one obtains an exact hexagon in ${\mathcal R}_{n-1}$ for the functors $H^\pm_D$. \end{lem} {\it Proof}. The assertion $H_D(V)^{\vee} = H_D(V^{\vee})$ follows since $H_D$ is a tensor functor by lemma \ref{H_D-tensor}. We calculate $\tau(x + i\overline x)= -(\overline x - i x)= i (x+ i\overline x)$, since $\tau^2(x)=-x$. Now recall that $\rho^*(D)= \rho^\vee(\tau(D))= i\rho^\vee(D)$ is defined as endomorphism on $V^*=V^\vee$. Hence $H_D(V^*,\rho^*)$, by definition the cohomology of $\rho^*(D)$ on $(V^*)^H$, can be identified with the space $$ Kern(i\rho^\vee(D): (V^\vee)^H \to (V^\vee)^H)/Im(i\rho^\vee(D):(V^\vee)^H \to (V^\vee)^H)\ .$$ Of course we can ignore the factor $i$, and identify this representation with the representation on $$\bigl(Kern(\rho(D): V_H \to V_H)/Im(\rho(D): V_H \to V_H)\bigr)^\vee$$ or hence with $$H_D(V,\rho)^\vee= \bigl(Kern(\rho(D): V^H \to V^H)/Im(\rho(D): V^H \to V^H)\bigr)^\vee \ ,$$ using the dual $(V_H)^\vee \to (V^H)^\vee$ of the natural morphism $V^H \to V_H$, which is an isomorphism by the semisimplicity of $H$. Finally recall $H_D(V,\rho)^\vee = H_D(V,\rho)^*$ for the underlying representation spaces. This is an isomorphism of $G_{n-1}$-modules since $\tau$ restricts to the corresponding $\tau$ on $G_{n-1}$. \qed So from now on assume $c=i$.
Then, in contrast to lemma \ref{-ell}, we obtain \begin{lem} \label{*} There exist natural isomorphisms of functors ${\mathcal R}_n \to {\mathcal R}_{n-1}$ $$ \fbox{$ \mu_V: H_D^\pm(V^*) \cong H_D^\pm(V)^* $} \ .$$ \end{lem} {\it Proof}. It remains to show that the isomorphism $ \mu_V: H_D^\pm(V^*) \cong H_D^\pm(V)^*$ given above defines a natural transformation. For a $G_n$-linear map $f: V \to W$ the induced map $f^*: W^* \to V^*$ is nothing but the morphism $f^\vee: W^\vee \to V^\vee$, using $V^*=V^\vee$ and $W^* = W^\vee$. This now easily shows that the above identifications $\mu_V, \mu_W$ induce a commutative diagram $$ \xymatrix{ H_D(W)^* \ar[r]^{H_D(f)^*} & H_D(V)^* \cr H_D(W^*) \ar[u]_{\mu_W} \ar[r]^{H_D(f^*)} & H_D(V^*) \ar[u]_{\mu_V} \cr} $$ \qed {\bf Example}. Let $V$ be the Kac module $V({\bf 1})$ in ${\mathcal R}_1$. Then $DS(V^*)=0$ and $DS(V) = {\bf 1} \oplus \Pi({\bf 1})$. On the other hand $H_D(V)=0$ and $H_D(V^*)=0$. {\bf Remark}. It is not a priori clear how to define a Dirac analog of the modules $H^\ell(V)$. Indeed $\overline\partial$ and $\partial$ (in the sense of odd morphisms) satisfy $\overline\partial : V_\lambda \to V_{\lambda - \mu}$ and $\partial : V_\lambda \to V_{\lambda + \mu}$ for the odd simple weight $\mu$. Hence $\overline\partial : V_\ell \to V_{\ell-1}$ and $\partial : V_\ell \to V_{\ell+1}$ and therefore $D= \partial + \overline\partial$ does not simply shift the grading. We address this question in section \ref{Hodge}. {\it $H_D$ as a tensor functor}. Although taking $H$-invariants $V\mapsto M=V^H$ is not a tensor functor, $H_D$ is nevertheless a tensor functor. To show this it is enough to restrict the representations $(V,\rho)$ to $G_1 \hookrightarrow G_n$. Hence it suffices to show that the functor $$ H_D: T_1 \to T_0 = svec_k \ $$ is a tensor functor. $H$ generates the center of $\mathfrak{gl}(1|1)$ and $D^2= \rho(H)$. Hence $Kern(D) \subset V^H$.
Since $H$ is semisimple, the Jordan blocks of $D$ on $V$ (ignoring the grading!) are Jordan blocks $B_\lambda$ of length 1 except for the eigenvalue $\lambda=0$, where they are either Jordan blocks $B_0$ of length 1 or Jordan blocks $R$ of length 2. Indeed the square of an indecomposable Jordan block of length $a$ and eigenvalue $\lambda$ is again an indecomposable Jordan block of length $a$ for $\lambda\neq 0$. Since $D^2=\rho(H)$ is semisimple, this implies that blocks of length $a>1$ have eigenvalue $\lambda=0$ and length $a\leq 2$. By definition, for $V= \bigoplus_\lambda k_\lambda(V) \cdot B_\lambda \oplus k(V)\cdot R $ we have $H_D(V) = k_0(V) \cdot B_0$, if we ignore the grading. Now $B_\lambda \otimes B_{\lambda'} = B_{\lambda \pm\lambda'}$, where the sign depends on the parity of $B_\lambda$. Furthermore the characteristic polynomial of $D$ on $R\otimes B_\lambda$ is $X^2 - \lambda^2$, hence $D$ has eigenvalue $0$ on $R\otimes B_\lambda$ only for $\lambda=0$, in which case $R\otimes B_\lambda$ is isomorphic to $R$. Finally $R\otimes R \cong R^2$. Hence the only possible deviation from the tensor functor property for $H_D$ might come from tensor products $B_\lambda \otimes B_{\lambda'}$ where $\lambda\pm \lambda'=0$. In this case $H=\lambda^2 \cdot id$ on $B_\lambda$ and $B_{\lambda'}$, hence $H= 2 \lambda^2 \cdot id$ on $B_\lambda \otimes B_{\lambda'}$. But the even operator $D^2$ then acts by $ 2 \lambda^2 \cdot id$ on $B_\lambda \otimes B_{\lambda'}$. Hence $D$ does not have the eigenvalue zero on $B_\lambda \otimes B_{\lambda'}$ unless $\lambda=\lambda'=0$. Therefore $B_0\otimes B_0 \cong B_0$ is the only relevant case. Hence $H_D(V \otimes W) = k_0(V)k_0(W)\cdot B_0 = k_0(V)\cdot B_0 \otimes k_0(W)\cdot B_0 = H_D(V) \otimes H_D(W)$. This remains true if we also take into account gradings. \begin{lem}\label{H_D-tensor} $H_D: T_n \to T_{n-1}$ is a tensor functor.
\end{lem} \section{ The relation between $DS(V)$ and $D(V)$}\label{DS-vs-D} \noindent For $(V,\rho)\in T_n$ the eigenvalue decomposition with respect to the small torus gives a decomposition $$ V = \bigoplus_{\ell\in\mathbb Z} V_{\ell} $$ into $G_{n-1}$-modules $V_{\ell}$. Furthermore $\overline\partial$ and $\partial$ (in the sense of odd morphisms) satisfy $\overline\partial : V_\ell \to V_{\ell-1}$ and $\partial : V_\ell \to V_{\ell+1}$. In other words, they give rise to morphisms $\overline\partial : \Pi^\ell(V_\ell) \to \Pi^{\ell -1}(V_{\ell-1})$ and $\partial : \Pi^{\ell }(V_\ell) \to \Pi^{\ell +1}(V_{\ell+1})$, hence induce morphisms on $\bigoplus_{\ell\in\mathbb Z} H^\ell(V)$ which shift the grading by $-1$ resp. $+1$. Since the generator $H$ of the center of $Lie(G_1)$ commutes with the small torus, we obtain an induced decomposition for the invariant subspace $M= V^H \subseteq V$ $$ M \ = \ \bigoplus_\ell \ \Pi^{\ell}(M_\ell) $$ for $\Pi^{\ell}(M_\ell) = M \cap V_\ell = (V_\ell)^H$. Notice $M = M^+ \oplus \Pi(M^-)$ for $(V,\rho)\in {\mathcal R}_n$, with $M^+$ and $M^-$ defined in ${\mathcal R}_n$ by $$ M^+ = \bigoplus_{\ell \in 2\mathbb Z} M_\ell \quad , \quad M^- = \bigoplus_{\ell \in1+2\mathbb Z} M_\ell \ .$$ The spaces $M_\ell$ are $G_{n-1}$-modules. On $M$ the operators $\partial$ and $\overline\partial$ define even morphisms and they anticommute in the diagram below. 
Hence we get a double complex $K=K^{\bullet,\bullet}$ in $T_{n-1}$ attached to $(V,\rho)$ $$ \xymatrix{ M_{\ell+2} \ar[r]^{\overline\partial} & M_{\ell+1} \ar[r]^{\overline\partial} & M_\ell \ar[r]^{\overline\partial} & M_{\ell-1} \cr M_{\ell+1} \ar[u]^{\partial}\ar[r]^{\overline\partial} & M_\ell \ar[u]^{\partial}\ar[r]^{\overline\partial} & M_{\ell-1} \ar[u]^{\partial}\ar[r]^{\overline\partial} & M_{\ell-2} \ar[u]^{\partial} \cr M_{\ell}\ar[u]^{\partial}\ar[r]^{\overline\partial} & M_{\ell-1}\ar[u]^{\partial}\ar[r]^{\overline\partial} & M_{\ell-2} \ar[u]^{\partial}\ar[r]^{\overline\partial} & M_{\ell-3} \ar[u]^{\partial} } $$ with $K^{i,j} = M_{j-i}$. This double complex is periodic with respect to $(i,j) \mapsto (i+1,j+1)$. The modules $K^{i,j}$ vanish for $j-i \notin [\ell_0,\ell_1]$ and certain $\ell_0,\ell_1 \in \mathbb Z$. The associated single complex $(Tot(K),D)$ has the objects $Tot(K)^n = \bigoplus_{i\in \mathbb Z} M_{n + 2i}$ and the differential $D=\partial + \overline\partial$. The total complex therefore is periodic with $Tot^0(K)= M^+$ and $Tot^1(K)=\Pi(M^-)$ and computes the cohomology $H^n(Tot(K),D)= H_D^+(V)$ for $n\in 2\mathbb Z$ and $H^n(Tot(K),D)= H_D^-(V)$ for $n\in 1+2\mathbb Z$. On the total complex $(Tot(K),D)$ we have a decreasing filtration defined by $F^p Tot^n(K) = \bigoplus_{r+s=n, r\geq p} K^{r,s}$. This filtration induces decreasing filtrations on the cohomology of the total complex $$ ... \supseteq F^p(H_D^\pm(V)) \supseteq F^{p+1}(H_D^\pm(V)) \supseteq ... $$ and a spectral sequence $(E_r^{p,q},d_r)$ converging to $$ E_\infty^{p,q} \ = \ gr^p H^{p+q}(Tot(K),D) \ .$$ Indeed the convergence of the sequence follows from the fact that the higher differentials $d_r: E_r^{pq} \to E_r^{p+r,q-r+1}$ vanish for $2r - (q-p+1) > \ell_1 - \ell_0$. The $E_1$-complex of the spectral sequence is the direct sum over all $q$ of the horizontal complexes $E_1^{p,q} = (H^q_\partial(K^{p,\bullet}),\overline\partial)$. 
For the various $q$ these complexes are the same up to a shift of the complex. So, if we ignore this shift, these complexes are given by the natural action of $\overline\partial$ on $\bigoplus_{\ell\in\mathbb Z} H^\ell(V)$ defining the complex $$ \xymatrix{ ... \ar[r]^-{\overline\partial} & H^{q+1}(V) \ar[r]^-{\overline\partial} & H^q(V) \ar[r]^-{\overline\partial} & H^{q-1}(V) \ar[r]^-{\overline\partial} & ... } \ .$$ The decreasing filtration $F^p$ induced on $$E_1(K)^n \ =\ \bigoplus_{i\in \mathbb Z} H^{n+2i}(V) $$ has graded terms $gr^p(E_1(K)^n) = H_\partial(K^{p,n-p})= H_\partial(M_{n-2p}) = H^{n-2p}(V)$. We now define the subquotient $ H_D^{n-2p}(V) \ := \ gr^p(E_\infty(K)^n) $ of $H^{n-2p}(V)$, hence $$ H_D^{\ell}(V) \ := \ gr^p(E_\infty(K)^{\ell + 2p}). $$ Note that this definition does not depend on the choice of $p$. We thus obtain \begin{lem} For $V\in T_n$ the cohomology modules $H^\pm_D(V)$ admit canonical decreasing filtrations $F^p$ whose graded pieces are the $G_{n-1}$-modules $H_D^{-2p}(V)$ for $H^+_D(V)$ and $H_D^{-2p-1}(V)$ for $H^-_D(V)$. \end{lem} {\bf Condition {\tt T}}. {\it We say that condition {\tt T} holds for $(V,\rho)$ in $T_n$ if the natural operation of the operator $\overline\partial=\rho(\tau(x))$ on $DS(V,\rho)$ is trivial}. {\bf Example}. The standard representation $X=X_{st}$ of $G_n$ on $k^{n\vert n}$ satisfies condition {\tt T}. {\bf Remark}. If $\tau(x)$ acts trivially both on $DS(V)$ and $DS(W)$ for some $V,W \in T_n$, then $\tau(x)$ acts trivially on $DS(V\otimes W)= DS(V) \otimes DS(W)$. If $\tau(x)$ acts trivially on $DS(U)$ for $U \in T_n$, then $\tau(x)$ acts trivially on every retract of $DS(U)$. Hence condition {\tt T} for $V$ and $W$ implies condition {\tt T} for every retract $U$ of $V \otimes W$. Thus the subcategory of objects in ${\mathcal R}_n$ satisfying condition {\tt T} is closed under tensor products and retracts.
Now consider the following conditions for $(V,\rho)$: \begin{enumerate} \item $(V,\rho)$ is irreducible. \item $H^+(V) \oplus H^-(V)$ is multiplicity free. \item $H^+(V)$ and $H^-(V)$ do not have common constituents. \item Condition $\tt T$ holds. \item $\overline\partial$ acts trivially on $DS(V)$. \item The $E_1^{p,q}$ and the $E_2^{p,q}$ terms of the spectral sequence coincide $$H^{}_{\overline\partial}\bigl(H^{\ell}(V)\bigr) = H^{\ell}(V)\ $$ where $\ell:=n-2p=q-p$. \end{enumerate} Later in theorem \ref{mainthm} we prove that (1) implies (2). Furthermore it is trivial that $(2) \Longrightarrow (3) \Longrightarrow (4) \Longrightarrow (5) \Longrightarrow (6)$. \begin{prop} If condition (3) holds, then the spectral sequence degenerates at the $E_1$-level and $H_D^{\pm}(V)$ is naturally isomorphic to $ H^\pm(V)$. \end{prop} {\it Proof}. The differentials of the spectral sequence $d_r: E_r^{pq} \to E_r^{p+r,q-r+1}$ define maps from the subquotient $E_r^{pq}$ of $H^{n-2p}(V)$ (for $n=p+q$) to the subquotient $E_r^{p+r,q-r+1}$ of $H^{n-2p- 2r+1}(V)$. If $H^{n-2p}(V)$ contributes to $H^\pm(V)$, then $H^{n-2p- 2r+1}(V)$ contributes to $H^\mp(V)$. Since all the higher differentials are $G_{n-1}$-linear, condition (3) forces all differentials $d_r$ to be zero for $r\geq 1$. Hence the spectral sequence degenerates at the $E_1$-level. \qed \begin{prop}\label{abutment} The spectral sequence always degenerates at the $E_2$-level, i.e. for all objects $(V,\rho)$ in $T_n$ we have $$ \fbox{$ H_{\overline\partial}(H^\ell(V)) \ \cong \ H^{\ell}_{D}(V) $} \ . $$ \end{prop} \begin{cor} The kernel of $H_D:T_n \to T_{n-1}$ contains $\mathcal{C}^+ \cup \mathcal{C}^-$. \end{cor} {\bf Remark}. It seems plausible that the kernel equals $\mathcal{C}^+ \cup \mathcal{C}^-$. {\it Proof}. 
This is a general assertion on spectral sequences arising from a double complex $K$ such that $K^{i,j}=M_{j-i}$ for maps $\partial: M_\ell \to M_{\ell +1}$ and $\overline\partial: M_{\ell} \to M_{\ell -1}$ between finite dimensional $k$-vector spaces $M_\ell, \ell\in\mathbb Z$ so that $M_\ell=0$ for almost all $\ell$. Indeed, any such double complex $K$ can be viewed as an object in the category $T_1$ via the embedding $\varphi_{n,m}$ of section \ref{DF}. Using $T_1= {\mathcal R}_1 \oplus \Pi({\mathcal R}_1)$ we can decompose and assume without loss of generality that it is an object in ${\mathcal R}_1$. It then defines a maximal atypical object in the category ${\mathcal R}_1^1 \subset {\mathcal R}_1$. For this notice that ${\mathcal R}_1^1$ can be identified with the category of objects in ${\mathcal R}_1$ with trivial central character. Note that this condition on the central character for a representation $(V,\rho)$ of $G_1$ simply means $V = V^H=M$, since $H$ generates the center of $Lie(G_1)$. This reduces our claim to the special case $n=1$ for $(V,\rho)$ in ${\mathcal R}_1^1$. Obviously we can assume that $(V,\rho)$ is indecomposable. The indecomposable objects $V$ in ${\mathcal R}_1^1$ were classified by Germoni \cite{Germoni-sl}. Either $V\in {\mathcal C}^{+}$ (Kac object), or $V\in {\mathcal C}^-$ or there exists an object $U \subset V, U\in{\mathcal C}^-$ with irreducible quotient $L$ or there exists a quotient $Q$ of $V$ in ${\mathcal C}^-$ with irreducible kernel $L'$. Since $DS(N)=0$ for all objects $N$ in ${\mathcal C}^-$ (theorem \ref{kernel}), we conclude from the long exact sequence of $H^\ell$-cohomology that we can either assume $V\in {\mathcal C}^+$ or that $V$ is irreducible, since in the remaining cases $DS(V)=0$ or $DS(V)\cong DS(L)$ or $DS(L') \cong DS(V)$. As already mentioned, by the later theorem \ref{mainthm} for irreducible $V$, the spectral sequence already abuts.
For $r=1$ however this is obvious anyway, since any atypical irreducible $L$ is isomorphic to a Berezin power $L \cong Ber^m$. Hence $H_D^\nu(L) = H^\nu(L)= k$ for $\nu=m$ and $H_D^\nu(L) = H^\nu(L)= 0$ otherwise. So it remains to consider the case of indecomposable Kac objects $V\in{\mathcal C}^+ $ in ${\mathcal R}_1^1$. Unless $V\in {\mathcal C}^+ \cap {\mathcal C}^-$, by Germoni's results $V \cong V(i;m)$ for some $i\in \mathbb Z$ and $m\in\mathbb N$, a successive extension $$ 0 \to V(i-2;m-1) \to V(i;m) \to V(Ber^i) \to 0 $$ of the Kac objects with $V(i;1)=V(Ber^i)$. Furthermore the Kac module $V(Ber^i)$ is an extension of Berezin modules $$ 0\to Ber^{i-1} \to V(Ber^i) \to Ber^i \to 0 \ ,$$ hence $H^\ell(V(Ber^i))\cong k$ for $\ell= i,i-1$ and is zero otherwise. From the long exact cohomology sequence and induction we obtain $\dim H^\nu(V(i;m))=1$ for $\nu\in\{i,i-1,...,i-2m+1\}$, and $H^\nu(V(i;m))=0$ otherwise. So (for fixed $q$) the complexes in the $E_1$-term of the spectral sequence for $V=V(i;m)$ have the form $$ 0 \to H^i(V) \to H^{i-1}(V) \to \cdots \to H^{i-2m+2}(V) \to H^{i-2m+1}(V) \to 0 \ $$ with differentials $\overline\partial$ and $H^\nu(V)$ of dimension one for $\nu=i,i-1,...,i-2m+1$. We have to show that these complexes are acyclic for all $V=V(i;m)$. For this it suffices that the first differential $\overline\partial: H^i(V)\to H^{i-1}(V)$, the third differential $\overline\partial: H^{i-2}(V)\to H^{i-3}(V)$ and so on, are injective. For dimension reasons the differentials $\bar{\partial}: H^{i}(V) \to H^{i-1}(V)$, $\bar{\partial}: H^{i-2}(V) \to H^{i-3}(V)$ etc. are then isomorphisms and the differentials $\bar{\partial}: H^{i-1}(V) \to H^{i-2}(V)$, $\bar{\partial}: H^{i-3}(V) \to H^{i-4}(V)$ etc. are zero by $\overline\partial^2=0$. Hence the cohomology of this complex vanishes and the $E_2$-term of the spectral sequence is zero. Hence the spectral sequence abuts at $r=2$, which proves our claim.
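To illustrate the degeneration pattern just described (a sketch for the first case beyond the Kac module, using only the dimension count above together with the injectivity statements established in the induction below), take $V=V(i;2)$. The $E_1$-complex then reads $$ 0 \to H^{i}(V) \to H^{i-1}(V) \to H^{i-2}(V) \to H^{i-3}(V) \to 0 $$ with all four terms of dimension one. Once the first and the third differential $\overline\partial$ are injective, they are isomorphisms between one-dimensional spaces, and the middle differential must vanish because $\overline\partial^2=0$; comparing kernels and images at each spot then shows that the complex is acyclic, so its $E_2$-term vanishes.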
To prove the injectivity for the first, third and so on differential $\overline\partial$ we use induction on $m$. For $m=1$ and $V=V(Ber^i)\in{\mathcal C}^+$ we know $H_D(V)=0$ by theorem \ref{kernel} and lemma \ref{*}. Since $H^\nu(V)=0$ for $\nu\neq i, i-1$, all higher differentials $d_r$ for $r\geq 2$ are zero by degree reasons. Hence $\overline\partial: H^i(V) \to H^{i-1}(V)$ must be an isomorphism. For the induction step put $V_i:=V(i;1)$ and $N=V(i-2;m-1)$; then $V/N \cong V_i$. Hence we get a commutative diagram with horizontal exact sequences $$ \xymatrix{ ... \ar[r] & H^{\nu -1}(N) \ar[r]\ar[d]_{\overline\partial} & H^{\nu -1}(V) \ar[d]_{\overline\partial}\ar[r] & H^{\nu -1}(V_i)\ar[d]_{\overline\partial} \ar[r] & H^\nu(N) \ar[d]_{\overline\partial} \ar[r] & ... \cr ... \ar[r] & H^{\nu-2}(N) \ar[r] & H^{\nu-2}(V) \ar[r] & H^{\nu-2}(V_i) \ar[r] & H^{\nu-1}(N) \ar[r] & ... } $$ Since $H^\nu(N)=0$ for $\nu >i-2$ and $H^\nu(V_i)=0$ for $\nu\neq i,i-1$, we obtain $$ \xymatrix{ 0\ar[r] & H^{i-1}(N) \ar[r]\ar[d]_{\overline\partial} & H^{i-1}(V) \ar[d]_{\overline\partial}\ar[r] & H^{i-1}(V_i)\ar[d]_{\overline\partial} \ar[r] & 0 \ar[d]_{\overline\partial}\ar[r] & H^i(V) \ar@{^{(}->}[d]_{\overline\partial}\ar[r]^\sim & H^i(V_i)\ar@{^{(}->}[d]_{\overline\partial} \ar[r] & 0 \cr 0\ar[r] & H^{i-2}(N) \ar[r] & H^{i-2}(V) \ar[r] & H^{i-2}(V_i) \ar[r] & 0 \ar[r] & H^{i-1}(V) \ar[r]^\sim & H^{i-1}(V_i) \ar[r] & 0 } $$ Thus $\overline\partial: H^i(V) \to H^{i-1}(V)$ is injective by a comparison with $V_i$. The assertion for the third, fifth and so on differential $\overline\partial$ follows from the induction assumption on $N$, since $H^\nu(V) \cong H^\nu(N)$ for $\nu\leq i-2$. \qed \noindent \section{ Hodge decomposition}\label{Hodge} We show in proposition \ref{H1} that the groups $H_D^{\pm}(V)$ satisfy a Hodge decomposition. Put $F_p = F^{-p}$.
This defines a decreasing filtration of $G_{n-1}$-modules $F_p(H_D^\pm(V))$ on $H_D^\pm(V)$ as in the last section for $V\in T_n$. Here $$ F_p(H_D^\pm(V)) \ = \ Im\bigl( (\bigoplus_{\ell \leq 2p} M^\pm_\ell ) \cap Ker(D) \to H_D^\pm(V) \bigr) \ .$$ One also has a decreasing filtration of $G_{n-1}$-modules $\overline F_q(H_D^\pm(V))$ on $H_D^\pm(V)$ defined by the second filtration of the cohomology of $(Tot(K),D)$ for the double complex $K^{\bullet,\bullet}$ defined in the last section. It is defined by the subcomplexes $\overline F^q(Tot(K)^n) = \bigoplus_{r+s=n, s\geq q} K^{r,s}$ of $(Tot(K),D)$. Notice that $$ \overline F^q(H_D^\pm(V)) $$ is the image of the $D$-cohomology of this subcomplex in $H_D^\pm(V)$. This filtration has analogous properties. In particular $$ \overline H^{n-2q}_D(V) := \overline F^q(H^n(Tot(K),D))/\overline F^{q+1}(H^n(Tot(K),D)) $$ by an analog of proposition \ref{abutment} is isomorphic to $$ \overline H^\ell_D(V) \cong H_\partial(\overline H^\ell(V)) $$ where $\overline H^\ell(V)$ is defined as $H^\ell(V)$, only by using $\overline\partial$ instead of $\partial$. We remark that both filtrations are functorial with respect to morphisms $f: V \to W$ in $T_n$. Hence also $V \mapsto \overline F^q(H^n_D(V)) \cap F_p(H^n_D(V))$ defines a functor from $T_n$ to $T_{n-1}$. \begin{prop}\label{H1} For all objects $V$ in $T_n$ we have a canonical decomposition of $H_D^\pm(V)$ into $G_{n-1}$-modules $$ H_D^\pm(V) \ = \ \bigoplus_{\nu\in \mathbb Z} H_D^\nu(V) $$ where for $\varepsilon = (-1)^\nu$ $$ H_D^\nu(V) := F_\nu(H_D^\varepsilon(V)) \cap \overline F_\nu(H_D^\varepsilon(V)) \ .$$ Furthermore for $\mu > \nu$ we have $$ F_\nu(H_D^\pm(V)) \cap \overline F_\mu(H_D^\pm(V)) = 0\ .$$ \end{prop} \begin{cor}\label{H2} For a short exact sequence $0\to A \to B\to C\to 0$ in $T_n$ the sequences $$ H_D^\nu(A) \to H_D^\nu(B) \to H_D^\nu(C) $$ are exact for all $\nu$. \end{cor} {\bf Remark}.
As shown after lemma \ref{Ia} these half-exact sequences cannot be extended to long exact sequences! {\it Proof}. If $x \in H_D^\nu(B)$ maps to zero in $H_D^\nu(C) \subset H_D(C)$ there exists $y\in H_D(A)$ such that $x$ is the image of $y$ by the exact hexagon for $H_D^\pm$. But then, for the decomposition $y = \sum_\nu y_\nu$ with $y_\nu \in H_D^\nu(A)$ given in proposition \ref{H1}, the component $y_\nu$ also maps to $x$ by the functoriality of $H_D^\nu(.)$. \qed {\it Proof of proposition \ref{H1}}. As in the proof of proposition \ref{abutment} we can reduce to the case of an indecomposable object $V$ in ${\mathcal R}_1^1$. For such $V$ either $H_D(V)=0$, in which case the assertion is trivial, or $V$ is of the form $$ 0 \to L \to V \to Q \to 0 $$ with irreducible $L$ and $Q\in {\mathcal C}^-$ or of the form $$ 0 \to U \to V \to L \to 0 $$ with irreducible $L$ and $U\in {\mathcal C}^-$. These two situations are dual to each other. So we restrict ourselves to the first case. The irreducible module $L$ is isomorphic to $Ber^m$ for some $m\in \mathbb Z$. Then according to \cite{Germoni-sl} the quotient module $Q$ has socle and cosocle $$ socle(Q) = \bigoplus_{i=1}^s Ber^{m+2i} $$ $$ cosocle(Q) = \bigoplus_{i=1}^s Ber^{m+2i-1} \ .$$ Recall that $H^\bullet(V)$ is concentrated in degree $m$, i.e. $H^\bullet(V) = H^m(V)$, and hence $H_D(V)=H^m(V)$. Hence by the abutment of the spectral sequences $$ H_{\overline\partial}(H^\nu(V)) = H^\nu(V) \cong k $$ for $\nu = m$ and is zero otherwise. Hence $H^m(V) \cong H_D(V)$, since the filtration $F^p$ only jumps at $p=m$. Similarly $$ H_\partial(\overline H^\nu(V)) = \overline H^\nu(V) \cong k $$ for $\nu = m$ and is zero otherwise. Hence $\overline H^m(V) \cong H_D(V) $, since the filtration $\overline F^q$ only jumps at $q=m$. This simultaneous jump shows $$ H_D^m(V) = \overline F^m(H_D(V)) \cap F_m(H_D(V)) $$ and also, for $q > p$,
$$ \overline F^q(H_D(V)) \cap F_p(H_D(V)) \ = \ 0 \ .$$ \qed \section{ The case $m>1$}\label{m} As the diligent reader may have observed, the results obtained in the last sections on the functor $DS$ carry over to the case of the more general functors $DS_{n,n-m}$. For this fix $m \geq 1$. The enriched weight structure of $DS_{n,n-m}$ (which depends on $m$) is obtained from the decomposition of $(V,\rho) \in T_n$ into eigenspaces with respect to the eigenvalues $t^\ell$ under the elements $\varphi_{n,m}(E \times diag(1,t^{-1}))$ of the small torus. This allows us to give a decomposition $$ DS_{n,n-m}(V) = \bigoplus_\ell \ DS_{n,n-m}^\ell(V)[-\ell] \ $$ into eigenspaces $\Pi^\ell(DS_{n,n-m}^\ell(V))$ and gives long exact sequences in $T_{n-m}$ attached to short exact sequences in $T_n$ as in section \ref{DF}. Furthermore lemma \ref{-ell} and lemma \ref{+ell} carry over verbatim. Notice $$DS_{n,n-m}^\ell(Ber_n)= Ber_{n-m} \quad , \quad \mbox{ for } \ell=m $$ and it is zero for $\ell\neq m$. Indeed, $\varphi_{n,m}(E \times diag(1,t^{-1}))$ acts on $Ber_n$ by $t^m$. Since $\epsilon_n = \varphi_{n,m}(E \times diag(1,-1)) \epsilon_{n-m}$, the restriction of $Ber_n$ to $G_{n-m}$ via $\varphi_{n,m}$ defines the module $\Pi^m(Ber_{n-m})$. {\bf Remark}. Note that $DS_{n,n-m}^\ell(V)[-\ell] = (DS_{n,n-m}(V))_\ell$. Here upper indices denote graduations without twist, lower indices graduations with twist. This is consistent with $DS_{n,n-1}^\ell(V)=H^\ell(V)$ in section \ref{sec:cohomology-functors}. Note that it is essential to have non-twisted objects such as $H^\bullet(V)$ or more generally $DS^\bullet(V)$ due to the comparison with $H_D$ and $\omega_{n,n-m}$ (see below), which do not involve any twists.
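As a quick consistency check of the formula for $DS_{n,n-m}^\ell(Ber_n)$ (a sketch, using only this formula in the case $m=1$ together with $DS_{n,n-1}^\ell(V)=H^\ell(V)$): iterating $DS$ on Berezin powers gives $$ DS_{n,n-1}^{1}(Ber_n)= Ber_{n-1}, \quad DS_{n-1,n-2}^{1}(Ber_{n-1})= Ber_{n-2}, \quad \ldots $$ so after $m$ steps one arrives at $Ber_{n-m}$ in total degree $1+\cdots+1=m$, in accordance with $DS_{n,n-m}^{\ell}(Ber_n)=Ber_{n-m}$ for $\ell=m$ and zero otherwise.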
For $n-m_1=n_1$ and $n_1-m_2=n_2$ the functors $DS_{n,n_1}: T_n \to T_{n_1}$ and $DS_{n_1,n_2}: T_{n_1} \to T_{n_2}$ are related to the functor $DS_{n,n_2}: T_n \to T_{n_2}$ by a Leray type spectral sequence with the $E_2$-terms $$ \fbox{$ \bigoplus_{p+q=k} DS^p_{n_1,n_2}(DS_{n,n_1}^q(V)) \ \Longrightarrow \ DS^k_{n,n_2}(V) $} \ .$$ To be more precise, choose the matrix $$ J \ = \ \begin{pmatrix} 0 & J_2 \cr J_1 & 0 \end{pmatrix} $$ for $m_i\times m_i$-matrices $J_i, i=1,2$, with zero entries except for the entries 1 in the antidiagonal. Then $J$ and $J_1$ define functors $DS_{n,n_2}(V,\rho)=(V,\rho)_x$ resp. $DS_{n,n_1}(V,\rho)=(V,\rho)_{x_1}$ and $J_2$ defines a functor $DS_{n_1,n_2}(W,\psi)=(W,\psi)_{x_2}$. Obviously we have $x=x_1 + x_2 \in {\mathfrak g}_n$ and $x_2\in {\mathfrak g}_{n_1} \subset {\mathfrak g}_n$ such that $[x_1,x_2]=0$. Then indeed $\partial = \partial_1 + \partial_2$ and $\partial_1 \partial_2 + \partial_2 \partial_1 =0$ for $\partial = \rho(x), \partial_1=\rho(x_1)$ and $\partial_2=\psi(x_2)$. Consider the weight (eigenvalue) decomposition $$ V \ = \ \bigoplus_{p,q\in\mathbb Z} V^{p,q} $$ of $(V,\rho)$ with respect to the matrices $$g(t_1,t_2)=diag(1,..,1;t_1^{-1},...,t_1^{-1},t_2^{-1},..,t_2^{-1},1,..,1)$$ in $G_n$ ($m_1$ entries $t_1^{-1}$ and $m_2$ entries $t_2^{-1}$) so that $v\in V^{p,q}$ if and only if $ g(t_1,t_2)v = t_1^q t_2^p \cdot v$. (We now write indices on top to avoid confusion with the lower indices $n$ and $n-m$). Then $\partial_2: V^{p,q} \to V^{p+1,q}$ and $\partial_1: V^{p,q} \to V^{p,q+1}$. Hence the Leray type spectral sequence is obtained from the spectral sequence of this double complex. For this note that the functors $DS_{n,n_2}^k$ are defined by the eigenvalues $t^k$ of the elements $g(t,t)$. \begin{prop} \label{Leray} For irreducible maximal atypical objects $L$ in $T_n$ the Leray type spectral sequence degenerates: $$ \fbox{$ DS_{n,n_2}(L) \ \cong \ DS_{n_1,n_2}(DS_{n,n_1}(L)) $} \ .$$ \end{prop} {\it Proof}.
Up to a parity shift, we can replace $L=L(\lambda)$ by $X_\lambda$ in $T_n$, so that $sdim(X_\lambda) >0$ using that $sdim(X_\lambda) \neq 0$ \cite{Serganova-kw}, \cite{Weissauer-gl}. Then it suffices to prove inductively (for $DS$ applied $m$ times) $$ (DS\circ DS \circ \cdots \circ DS)(X_\lambda) \ \cong \ DS_{n,n-m}(X_\lambda) \ .$$ The case $m=1$ is obvious by definition, since $DS_{n,n-1}=DS$. Suppose this assertion holds for $m$. Let us show that it then also holds for $m$ replaced by $m+1$. Indeed, the $E_2$-term of the spectral sequence $$ DS\circ (DS\circ DS \circ \cdots \circ DS)(X_{\lambda}) \Longrightarrow DS_{n,n-m-1}(X_\lambda) \ $$ is of the form $$ DS\circ (DS\circ DS \circ \cdots \circ DS)(X_{\lambda}) \ \cong \bigoplus_\mu X_\mu $$ for irreducible representations $X_\mu$ in $T_{n-m-1}$ of superdimension $sdim(X_\mu)>0$. Indeed this follows by repeatedly applying the later theorem \ref{mainthm}, which implies $DS(X_\lambda) \cong \bigoplus_{i=1}^k X_{\lambda_i}$ for irreducible maximal atypical objects $X_{\lambda_i}$ in $T_{n-1}$ with $sdim(X_{\lambda_i}) >0$. Now $DS$ is a tensor functor, and hence preserves superdimensions. Hence $sdim(X_\lambda) = \sum_\mu sdim(X_\mu)$. If the spectral sequence did not degenerate at the $E_2$-level, the $E_\infty$-term would be a proper subquotient of the semisimple $E_2$-term. Hence $$ sdim(DS_{n,n-m-1}(X_\lambda)) < \sum_\mu sdim(X_\mu)\ ,$$ since $sdim(X_\mu) >0$. This would imply $$sdim(DS_{n,n-m-1}(X_\lambda)) < sdim(X_\lambda) \ .$$ However this is a contradiction, since $DS_{n,n-m-1}$ is a tensor functor and hence $sdim(DS_{n,n-m-1}(X_\lambda)) = sdim(X_\lambda)$. Hence the spectral sequence degenerates and $DS_{n,n_2}(X_{\lambda})$ has a filtration with graded pieces that are computed by appropriate $DS_{n_1,n_2}^p DS_{n,n_1}^q (X_{\lambda})$. In order to prove that $DS_{n,n_2}(X_{\lambda}) = DS_{n_1,n_2}(DS_{n,n_1}(X_{\lambda}))$ we show that this filtration splits and $DS_{n,n_2}(X_{\lambda})$ is semisimple.
This follows from the sign rules of the main theorem \ref{mainthm}. Indeed for $X_{\lambda}$ with $\varepsilon(\lambda) = 1$, the constituents of $DS(X_{\lambda})$ in ${\mathcal R}_{n-1}$ have the same sign $\varepsilon = 1$ and the constituents of $DS(X_{\lambda})$ in $\Pi {\mathcal R}_{n-1}$ have sign $\varepsilon = -1$. Now use that $Ext^1({\mathcal R}_{n-1},\Pi {\mathcal R}_{n-1}) = 0$ by lemma \ref{thm:decomposition} and $Ext^1(L(\lambda),L(\mu)) = 0$ if $\varepsilon(\lambda) = \varepsilon(\mu)$ by corollary \ref{semisimple-sign}. Hence there are no extensions between the constituents of $DS(X_{\lambda})$. Repeated application of $DS$ again gives constituents which are either in ${\mathcal R}_{n'}$ with sign $\varepsilon = 1$ or in $\Pi {\mathcal R}_{n'}$ with sign $\varepsilon = -1$ for the appropriate $n'$. Since the constituents of $DS_{n,n_2}(L)$ are given by the constituents of the graded pieces, the semisimplicity follows. \qed We have seen in the last proposition that the Leray type spectral sequence degenerates at the $E_2$-level for irreducible maximal atypical objects. Let $F^p$ be the descending first (or second) filtration of the total complex. Due to the degeneration we can make use of the following lemma. \begin{lem} Suppose given a finite double complex $(K^{\bullet,\bullet}, d_{hor},d_{vert})$ with associated total complex $K^\bullet = Tot(K^{\bullet,\bullet})$ and total differential $d$. Suppose the associated spectral sequence for the first (second) filtration degenerates at the $E_2$-level and suppose $x\in F^p(K^\bullet)$ is a boundary in $K^\bullet$. Then there exists $y\in F^{p-1}(K^\bullet)$ such that $x=dy$. \end{lem} {\it Proof}. We can assume that $x=\sum_{i\geq p} x_{i,n-i}$ has fixed total degree $n$. The spectral sequence degenerates at $E_2$ and $[x]=0$ in $F^pH^n(K^\bullet)$. Hence the class of $x$ in $Gr^p(H^n(K^\bullet)) = H_{hor}^p(H^{n-p}_{vert}(K^{\bullet,\bullet}))$ vanishes.
In other words there exist $v\in K^{p,n-p-1}$ and $u\in K^{p-1,n-p}$ such that $d_{vert}(u)=0$ and such that $d_{hor}(u) + d_{vert}(v) = x_{p,n-p}$. Hence $x- d(u+v) \in F^{p+1}(K^\bullet)$, where $u+v \in F^{p-1}(K^\bullet)$, and this element is again closed. Iterating this argument we conclude that for any $r$ large enough we find $y\in F^{p-1}(K^\bullet)$ such that $x - dy \in F^r(K^\bullet)$. If $r$ is large enough, then $F^r(K^\bullet)=0$ and hence the claim follows. \qed {\it Dirac cohomology}. Similarly the results of section \ref{DDirac} hold verbatim for $\partial = \rho(x)$ and $\overline\partial = c \cdot \rho(\overline x)$ and $D=\partial + \overline\partial$. In particular, for a generator $z$ of the Lie algebra of the center of $G_1$, let $H$ denote its image $\varphi_{n,m}(z)\in {\mathfrak g}_n$. The $D$-cohomology of the fixed space $V^H$ then gives objects $\omega_{n,n-m}(V,\rho)$ so that $$ \omega_{n,n-m}: T_n \to T_{n-m} $$ defines a tensor functor generalizing $H_D=\omega_{n,n-1}$. Note that $\omega_{n,n-m}$ restricts to a tensor functor $\omega_{n,n-m}:\mathcal{R}_n \to \mathcal{R}_{n-m}$ unlike $DS_{n,n-m}$. As in section \ref{DS-vs-D} there is a spectral sequence that allows one to define a filtration on $\omega_{n,n-m}(V)$ whose graded pieces are $$ \fbox{$ \omega_{n,n-m}^\ell(V) \ \cong \ H_{\overline\partial}(DS_{n,n-m}^\ell(V)) $} \ .$$ This generalizes proposition \ref{abutment}. Furthermore the results of proposition \ref{H1} and corollary \ref{H2} of section \ref{Hodge} carry over and define a Hodge decomposition for $\omega_{n,n-m}$ in terms of the functors $\omega_{n,n-m}^\ell$. Finally the same argument used in the proof of proposition \ref{Leray} also shows \begin{prop} \label{Leray2} For irreducible maximal atypical objects $L$ in $T_n$ the spectral sequence above degenerates, i.e.
for all $\ell$ $$ \fbox{$ \omega_{n,n-m}^\ell(L)\ \cong \ DS_{n,n-m}^\ell(L) $} \ .$$ \end{prop} Now consider the $\mathbb Z$-graded object $DS_{n,n-m}^\bullet(L) =\bigoplus_{\ell\in \mathbb Z}\ DS_{n,n-m}^\ell(L)$ (which is different from $DS_{n,n-m}(L)$ if we forget the graduation) to compare with $\bigoplus_{\ell\in \mathbb Z}\ \omega_{n,n-m}^\ell(L)$ (that is $\omega_{n,n-m}(L)$ after forgetting the graduation). \begin{lem}\label{insteps} Suppose for irreducible $V\in T_n$ that $DS_{n,n_1}^\bullet(V) \cong \omega_{n,n_1}^\bullet(V)$. Then $$ \omega_{n,n_2}^\bullet(V) \cong \omega_{n_1,n_2}^\bullet(DS_{n,n_1}^\bullet(V)) $$ holds. \end{lem} {\it Proof}. Use that $\overline \partial = \overline\partial_1 + \overline\partial_2$. By the assumption $DS_{n,n_1}^\bullet(V) \cong \omega_{n,n_1}^\bullet(V)$ the differential $\overline\partial_1$ is trivial on $DS_{n,n_1}^\bullet(V)$, hence trivial on $DS^\bullet_{n_1,n_2}(DS^\bullet_{n,n_1}(V)) \cong DS^\bullet_{n,n_2}(V)$. Therefore the $\overline\partial$-homology of $DS_{n,n_2}^\bullet(V)$ is the same as the $\overline\partial_2$-homology attached to $DS_{n_1,n_2}^\bullet(DS_{n,n_1}^\bullet(V))$. \qed This implies $\omega_{n,n-m}^\bullet(L)\ \cong \ DS_{n,n-m}^\bullet(L)$ for any irreducible $L$ in $T_n$. We prove this by induction on $m$. For $m=1$ this follows from the fact that irreducible representations satisfy property $\tt T$. Now we use $ \omega_{n,n-m-1}^\bullet(L) \cong \omega_{n-m,n-m-1}^\bullet(DS_{n,n-m}^\bullet(L))$ from lemma \ref{insteps}.
Since $DS_{n,n-m}^\bullet(L)$ is semisimple by proposition \ref{Leray} (as an $m$-fold iteration of $DS^\bullet$), we have $$ \omega_{n-m,n-m-1}^\bullet(DS_{n,n-m}^\bullet(L)) = DS^\bullet(DS_{n,n-m}^\bullet(L)) = DS_{n,n-m-1}^\bullet(L)\ .$$ This implies \begin{prop} \label{Leray3} For all irreducible objects $L$ in $T_n$ and all $\ell$ we have $$ \fbox{$ \omega_{n,n-m}^\ell(L)\ \cong \ DS_{n,n-m}^\ell(L) $} \ .$$ \end{prop} \noindent The case $m=n$ is of particular interest. Notice that $T_0$ is the category $svec_k$ of finite dimensional super $k$-vector spaces. Hence $$ \omega=\omega_{n,0}: T_n \ \longrightarrow \ svec_k \ .$$ {\it The tori $A_i$}. Let $A_i\subseteq G_n$ denote the diagonal torus of all elements of the form $diag(1,\ldots,1,t_{n-i+1},\ldots,t_n \ | \ t_n,\ldots, t_{n-i+1},1,\ldots,1)$. In particular $A_1$ is the torus of section 5 and $H = H_{n,n-1}$. It commutes with all operators $\partial_{n,n-i}, \overline\partial_{n,n-i}$ and $D_{n,n-i}$ and hence acts on $DS_{n,n-i}(V)$ respectively $\omega_{n,n-i}(V)$. We claim \begin{lem} The action of $A_i$ on $DS_{n,n-i}(V)$ and $\omega_{n,n-i}(V)$ is trivial. \end{lem} {\it Proof}. For this we can assume without loss of generality that $i=n$. The $H_{n,n-i}$ for $i=1,...,n$ generate the Lie algebra of the torus $A$. Hence it suffices to show that all $H_{n,n-i}$ act trivially. This follows from the Leray type spectral sequence $DS_{n-i,0}\circ DS_{n,n-i} \Longrightarrow DS_{n,0}$. As in the proof of lemma \ref{homotopy} one shows that $H_{n,n-i}$ acts trivially on $DS_{n,n-i}(V)$. Hence by the spectral sequence $H_{n,n-i}$ acts by a nilpotent matrix on $DS_{n,0}(V)$. On the other hand $A$, and hence $H_{n,n-i} \in Lie(A)$, acts in a semisimple way. This proves the claim. \qed \noindent \section{ Boundary maps} Suppose given a module $S$ in ${\mathcal R}_n$. Consider $D_{tot} = D+D'$ for $D=D_{n,n-i}$ and $D'=D_{n-i,0}$.
Notice that $DD' = -D'D$ and $D^2= c \rho(H)$, $(D')^2 = c \rho(H')$ and $D_{tot}^2 = c \rho(H_{tot})$. For fixed $i$ we write $A = A_i$. We have $H_D(S)= Kern(D:S\to S)/(S^H \cap Im(D:S\to S))$. We have also shown that this is equal to $$ H_D(S)= Kern(D:S^A\to S^A)/Im(D:S^A\to S^A)$$ for the torus $A$ whose Lie algebra is generated by all $H_{n,n-j}$ for $j=1,...,n$. In a similar way $$ H_{D_{tot}}(S)= Kern(D_{tot}:S^A\to S^A)/Im(D_{tot}:S^A\to S^A)\ .$$ Recall that $A$ commutes with $D_{tot},D,D'$ and acts in a semisimple way. Let $U\subseteq S$ denote the image of $D':S^A\to S^A$. Then $U$ and $\overline S^A= S^A/U$ are stable under $D$ and $D'$. If $s\in S^A$ is in $Kern(D_{tot})$, then $Ds = - D's \in U$. Hence $s\mapsto s + U$ defines a map from $Kern(D_{tot}:S^A\to S^A)/Im(D_{tot}:S^A\to S^A)$ to $Kern(D:\overline S^A\to \overline S^A)/Im(D_{tot}:\overline S^A\to \overline S^A)$, hence a map $$ \sigma_S: H_{D_{tot}}(S) \longrightarrow H_D(\overline S) \ .$$ \noindent Suppose given modules $S, V, L$ in ${\mathcal R}_n$ defining an extension $$ 0 \to S \to V \to L \to 0 \ .$$ We get a boundary map $$ \delta_{tot}: H^\pm_{D_{tot}}(L) \longrightarrow H^\mp_{D_{tot}}(S) $$ defined as usual by $ Kern(D_{tot}:L\to L) \ni \overline v \ \mapsto \ [s] \ , \ s = D_{tot} v \in S $. Here $v\in V^A$ is any lift of $\overline v \in L^A$ (it exists by the semisimple action of $A$ on $V$). Obviously $D_{tot}(s)=0$, since $D_{tot}^2=0$ on the space of $A$-invariant vectors. Therefore the class $[s]$ of $s$ in $H^\mp_{D_{tot}}(S)$ is well defined.
In a completely similar way one defines the boundary map $$ \delta: H^\pm_{D}(L) \longrightarrow H^\mp_{D}(S) \ .$$ We claim that there exists a commutative diagram $$ \xymatrix{ H_D^\pm(L) \ar[r]^\delta & H_D^\mp(S) \cr H^\pm_{D_{tot}}(L)\ar[u]_{\sigma_L} \ar[r]^{\delta_{tot}} & H_{D_{tot}}^\mp(S) \ar[u]_{\sigma_S} } $$ In fact on the level of representatives $v\in V^A$ it amounts to the assertion $$ \xymatrix{ \overline v = v \ mod \ U \ar[r] & \overline s = D\overline v \cr v \ar[u]_{\sigma_L} \ar[r] & s= D_{tot}v \ar[u]_{\sigma_S} } $$ using $D_{tot}v \equiv Dv\ mod \ U_L$. We now consider two extensions $(S,V,L)$ and $(S,\tilde V, \tilde L)$. Then the commutative diagram $$ \xymatrix{ H^+_D(\tilde L) \ar[r]^{\tilde \delta} & H^-_D(S) & H^+_D(L) \ar[l]_{\delta} \cr H^+_{D_{tot}}(\tilde L) \ar[u]_{\sigma_{\tilde L}}\ar[r]^{\tilde \delta_{tot}} & H^-_{D_{tot}}(S) \ar[u]_{\sigma_{S}} & H^+_{D_{tot}}(L) \ar[u]_{\sigma_{L}}\ar[l]_{\delta_{tot}} } $$ implies \begin{lem} Suppose $L\cong{\bf 1}$. Then $Im(\delta_{tot})$ is not contained in $Im(\tilde\delta_{tot})$, if there exists an integer $i$ for $1\leq i\leq n$ such that \begin{itemize} \item $H^+_D(\tilde L)$ does not contain $\bf 1$ as a $G_{n-i}$-module. \item $\delta({\bf 1}) \neq 0 $ in $H_D^-(S)$. \end{itemize} for $D=D_{n,n-i}$. \end{lem} {\it Remark}. If $S=L_n(j)$ for some $1\leq j\leq n$ is one of the hook representations discussed in section \ref{hooks}, then the smallest integer $i$ for which $\delta({\bf 1}) \neq 0 $ in $H_D^-(S)$ holds is given by $n-i=j-1$. Indeed we later show that although $0 \to DS_{n,j}^0(S) \to DS_{n,j}^0(V) \to {\bf 1}\to 0$ is exact, the map $DS_{n,j-1}^0(V) \to {\bf 1}$ is no longer surjective. So in the later applications, to apply the last lemma, we have to check whether or not $H^+_D(\tilde L)$ contains the trivial $G_{j-1}$-module in this case. \section{Highest Weight Modules}\label{HighW} {\it \! Irreducible representations}.
The irreducible $Gl(n\vert n)$-modules $L$ in ${{\mathcal R}}_n$ are uniquely determined up to isomorphism by their highest weights $\lambda$. These highest weights $\lambda$ are in the set $X^+(n)$ of dominant weights, where $\lambda$ is in $X^+(n)$ if and only if $\lambda$ is of the form $$\lambda = (\lambda_1,\lambda_2,\ldots,\lambda_n \ ; \ \lambda_{n+1},\ldots, \lambda_{2n})$$ with integers $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n$ and $\lambda_{n+1} \geq \lambda_{n+2} \geq \ldots \geq \lambda_{2n}$. We remark that the condition $$ \lambda_n = -\lambda_{n+1} \ $$ for $\lambda$ is equivalent to the condition $\lambda(H)=0$. In the language of Brundan and Stroppel in section \ref{BS} the condition $\lambda(H)=0$ is tantamount to the condition that the irreducible representation $L(\lambda)$ is not projective and the smallest $\vee$-hook is to the left of all $\times$'s and $\circ$'s. Every block of atypicality at least $1$ contains such an $L(\lambda)$. If these equivalent conditions hold we write $$\overline\lambda =(\lambda_1,...,\lambda_{n-1} \ ; \ \lambda_{n+2},\ldots, \lambda_{2n})$$ defining an irreducible representation $L(\overline\lambda)$ in ${\mathcal R}_{n-1}$. Using the notation of \cite{Drouot} the irreducible {\it maximally atypical} $Gl(n\vert n)$-modules $L$ in ${{\mathcal R}}_n$ are given by highest weights $\lambda$ of the form $$\lambda = (\lambda_1,\lambda_2,\ldots,\lambda_n \ ; \ -\lambda_n,\ldots, -\lambda_1) $$ with integers $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n$. We abbreviate this by writing $[\lambda_1,\ldots,\lambda_n]$ for the corresponding irreducible representation. The full subcategory of ${\mathcal R}_n$ generated by these will be denoted ${\mathcal R}_n^n$. {\it Highest weight modules}. Recall that a vector $v\neq 0$ in a module $(V,\rho)$ in ${\mathcal R}_n$ is called {\it primitive}, if $\rho(X)v=0$ holds for all $X$ in the standard Borel subalgebra $\mathfrak{b}$ of ${\mathfrak g}={\mathfrak g}_n$.
A highest weight vector of a module $V$ (of weight $\lambda$) in ${\mathcal R}_n$ is a vector $v\in V$ that is a primitive eigenvector of $\mathfrak{b}$ (of the weight $\lambda$) generating the module $V$. In this case $V$ is called a highest weight module (of weight $\lambda$). Every irreducible representation $L(\lambda)$ in ${\mathcal R}_n$ is a highest weight module of weight $\lambda$. Every highest weight module $V$ of weight $\lambda$ has cosocle isomorphic to $L(\lambda)$. \begin{lem}\label{GEN} For $(V,\rho)=L(\lambda)$, or more generally a cyclic representation generated by a highest weight vector of weight $\lambda$, the weight space in $V$ of weight $\lambda-\mu$ is generated by $\rho(\overline x)v$, where $v$ is a highest weight vector of $(V,\rho)$. \end{lem} \noindent {\it Proof}. For the simple positive roots $\Delta=\{\alpha_1,...,\alpha_r\}$, i.e. the union of the odd simple root $\{\mu\}$ and the even simple roots in ${\mathfrak g}_{\overline 0}$ with respect to the standard Borel subalgebra of upper triangular matrices, choose generators $X_\alpha\in \mathfrak{u}$. Put $\tau(X_\alpha)= Y_{-\alpha}$ and $V_0=F\cdot v$. Recursively define $V_i = V_{i-1}+\sum_{\alpha\in \Delta} \rho(Y_{-\alpha})(V_{i-1}).$ We claim that $V_\infty=\bigcup_{i=0}^\infty V_i$ is a ${\mathfrak g}$-submodule of $V$, hence equal to $V$. This claim also implies that the weight space $V_{\lambda-\mu}$ is generated by $\rho(\tau(x))v$. $V_\infty$ is invariant under all $\rho(Y_{-\alpha}), \alpha\in \Delta=\{\alpha_1,...,\alpha_r\}$. Each $V_i$ obviously is invariant under $\rho(X)$ for diagonal $X\in \mathfrak{g}$. 
Indeed each $V_i/V_{i-1}$ decomposes into weight spaces for weights $$ \lambda - \sum_{j=1}^r n_j \alpha_j \quad , \quad \sum_{j=1}^r n_j = i \quad \quad (n_j \in \mathbb N_{\geq 0}) \ .$$ Note $\rho(X_\alpha)\rho(Y_{-\beta}) \pm \rho(Y_{-\beta})\rho(X_\alpha) = \rho(H_\alpha)$ for $\alpha=\beta\in \Delta$ and $\rho(X_\alpha)\rho(Y_{-\beta}) \pm \rho(Y_{-\beta})\rho(X_\alpha) = \rho([X_\alpha, Y_{-\beta}]) =0$ for $\alpha,\beta\in \Delta$ and $\alpha\neq \beta$ [since $\alpha - \beta \notin \Phi^+ \cup \Phi^-$ for $\alpha,\beta\in\Delta$]. Hence $V_\infty$ is invariant under $\mathfrak{g}$, since $Y_{-\beta}, \beta\in\Delta$ and diagonal $ X$ and $X_\alpha, \alpha\in \Delta$, generate $\mathfrak{g}$ as a Lie superalgebra. \qed \begin{lem} \label{stable}\label{0} Suppose $\lambda=(\lambda_1,...,\lambda_{n-1},\lambda_n\ ; \ \lambda_{n+1}, \lambda_{n+2},\ldots, \lambda_{2n})$ satisfies $\lambda_n = -\lambda_{n+1} $. If $V$ is a highest weight representation generated by a highest weight vector $v$ of weight $\lambda$, the module $H^{\lambda_n}(V)$ contains a highest weight submodule of weight $\overline\lambda$ generated by the image of $v$ with parity $(-1)^{\lambda_n}$. In particular the representation $\Pi^{\lambda_n} L(\overline\lambda)$ in ${\mathcal R}_{n-1}$ is a Jordan-H\"older constituent of $H^{\lambda_n}(V)$. \end{lem} {\it Proof of the lemma}. The highest weight vector $v$ of $V$ is a highest weight vector of the restriction of $V$ to the subgroup $G_{n-1}$ of $G_n$ and is annihilated by $\rho(x)$. By our assumption on the weight $\lambda$ furthermore $ v \in V^H $. To prove our claim it suffices to show that $v$ is not contained in $Im(\rho(x))$. Suppose $v = \rho(x)(w)$. Since the weight of $x$ is $\mu$, we can assume that the weight of $w$ is $\lambda - \mu$. Since $V$ is a highest weight representation, by lemma \ref{GEN} then $w$ is proportional to $\rho(\overline x)v$.
So to show $\rho(x)w=0$, and thereby finish the proof, it suffices to note that by $ [x, \overline x] = H$ $$ \rho(x) \rho(\overline x) v = - \rho(\overline x) \rho(x) v + \rho(H) v = 0 \ ,$$ since $\rho(x)v=0$ and $v\in V^H$. \qed Note that (for the notation used see section \ref{sec:Casimir}) $$ z_i= [x_i,\overline x] = x_i \overline x + \overline x x_i \quad , \quad z'_i= [x'_i,\overline x] = x_i' \overline x + \overline x x_i' $$ are in the unipotent Lie algebra $\mathfrak{u}_{\overline 0} \subset \mathfrak{u}$ of the standard Borel $\mathfrak{b}_{\overline 0}$ of ${\mathfrak g}_{\overline 0}$ for all $i=1,..,n-1$. Suppose $(V,\rho)$ is a representation of $G_n$. If $(V,\rho)$ has a highest weight vector $v$, then $\rho(X)v=0$ holds for all $X$ in the unipotent radical $\mathfrak{u}$ of the standard Borel of ${\mathfrak g}$. In particular $$\rho(x)v=0, \rho(x_i)v=0, \rho(x'_i)v=0, \rho(z_i)v=0, \rho(z'_i)v=0 $$ and hence by the commutation relations above this implies for $i=1,...,n-1$ also $$\rho(x_i)\rho(\overline x)v =0 \quad , \quad \rho(x'_i)\rho(\overline x)v =0 \ .$$ Now also suppose $v\in V^H$ and put $w=\rho(\overline x)v$. Then $\rho(x)w=0$, as shown in the proof of lemma \ref{stable}. Similarly one can show $\rho(x_i)w=0$ (since $\rho(x_i)v=\rho(z_i)v=0$) and $\rho(x'_i)w=0$. All elements of $\mathfrak{u} \cap {\mathfrak g}_{n-1}$ commute with $\rho(\overline x)$ and annihilate $v$, hence annihilate $w$. Finally, since $\rho(\overline x)$ and $\rho(x_i),\rho(x'_i)$ annihilate $w$, also $\rho(z_i)$ and $\rho(z'_i)$ annihilate $w$. It follows that $\rho(X)w=0$ for all $X\in \mathfrak{u}$, since $\mathfrak{u}$ is spanned by $\mathfrak{u}\cap {\mathfrak g}_{n-1}$ and the $x,x_i,x'_i,z_i,z'_i$. This implies that $w$ is a highest weight vector in $(V,\rho)$ of weight $\lambda -\mu$, if $w\neq 0$.
Hence \begin{cor}\label{companion} If $(V,\rho)$ is a highest weight representation with highest weight vector $v$ and highest weight $\lambda$ so that $\lambda(H)=0$, then $w=\rho(\overline x)v$ defines a highest weight vector of weight $\lambda-\mu$ in $V$ if $w\neq 0$. \end{cor} In the situation of the last corollary, the following conditions are equivalent \begin{enumerate} \item $w=0$ \item $D(v)=0$ \item $D(v)=0$ and $v$ defines a nonvanishing cohomology class in $H_D(V)$. \end{enumerate} Indeed $D(v)= i\rho(\overline x) v + \rho(x)v= i w$. Furthermore, if $v=D(\tilde w)$, then $v = i\rho(\overline x) \tilde w_1 + \rho(x) \tilde w_2$ for $\tilde w_1\in V_{\lambda+\mu}$ and $\tilde w_2\in V_{\lambda-\mu}$. Since $\lambda$ is the highest weight, $V_{\lambda+\mu}=0$. Furthermore $V_{\lambda-\mu}$ is generated by $w$, and $\rho(x)w=0$. Hence $v \notin D(V)$. A highest weight representation $V$ of weight $\lambda$ canonically admits the irreducible representation $L=L(\lambda)$ as a quotient. Let $q : V \to L$ denote the quotient map. \begin{cor}\label{companion2} In the highest weight situation of corollary \ref{companion} the following holds for the representation $V$: \begin{enumerate} \item If $V$ contains a highest weight subrepresentation $W\neq 0$ of weight $\lambda-\mu$, then $H_D^{\lambda_n}(V)$ has trivial weight space $H_D^{\lambda_n}(V)_{\overline \lambda} \subseteq H_D^{\lambda_n}(V)$. \item If the natural map $H_D(q): H_D(V) \to H_D(L)$ is surjective, then $V$ does not contain a highest weight subrepresentation $W\neq 0$ of weight $\lambda-\mu$. \end{enumerate} \end{cor} {\it Proof}. For the first assertion, notice that $D(v)=0$ implies $w=0$ and $w$ generates $V_{\lambda-\mu}$. For the second assertion notice that the highest weight vector $v\in V$ maps to the highest weight vector $q(v)$ of $L$.
By the first assertion and lemma \ref{stable}, applied for $L$, the vector $q(v)$ is $D$-closed and defines a nonzero class in $H_D^{\lambda_n}(L)_{\overline\lambda}$. Since now $H_D(q)$ is surjective by assumption, corollary \ref{H2} implies that this class is the image of a nonzero cohomology class $\eta$ in $H_D^{\lambda_n}(V)$. This class is represented by a nonzero $\overline\partial$-closed class in $H^{\lambda_n}(V)=DS_{\lambda_n}$ in the weight space $\overline\lambda$. Hence this class has a $D$-closed representative $v'$ in $V_{\lambda}$, since the enriched weight structure on $DS(V)$ allows one to recover the weight structure of $V$. Since $V$ is a highest weight representation, the space $V_\lambda$ has dimension one and therefore $v'$ is proportional to $v$. Thus $D(v)=0$. But, as explained above, this implies $w=0$ and hence $V_{\lambda-\mu}=0$. \qed Since Kac modules $V(\lambda)$ are highest weight modules of weight $\lambda$ with $H_D(V(\lambda))=0$, lemma \ref{stable} and its corollaries above imply \begin{lem}\label{KAC} For $\lambda$ in $X^+$ with $\lambda_n = \lambda_{n+1}=0$ the cohomology $H^{0}(V(\lambda))$ of the Kac module $V(\lambda)$ contains a highest weight module of weight $\overline\lambda$. Furthermore $V(\lambda)$ contains a nontrivial highest weight representation of weight $\lambda -\mu$. \end{lem} {\bf Example}. Let $(V,\rho)=V({\bf 1})$ in ${\mathcal R}_2$ be the Kac module of the trivial representation. Then $DS(V^*)=0$ and $DS(V)\neq 0$, since $V$ is not projective. The module $V$ is a cyclic module generated by its highest weight vector of weight $\lambda=0$ (this is not true for the anti-Kac module $V^*$). Furthermore $V$ has Loewy length 3 with Loewy series $(Ber_2^{-2},Ber_2^{-1}S^1,{\bf 1})$ where $S^1 = [1,0]$. We claim $$DS(V) = (Ber_1^{-2}\oplus {\bf 1}) \otimes \bigl({\bf 1} \oplus {\Pi(\bf 1)}\bigr)\ .$$ This follows from the later results, e.g.
lemma \ref{hex} and theorem \ref{mainthm}: $d(Ber_2) = - Ber_1$ and $d(S^1)= Ber_1^{-1} + Ber_1$ imply $d(V)=0$, hence $DS(V)$ has at most 4 Jordan-H\"older constituents $Ber_1^{-2}, \Pi(Ber_1^{-2}), {\bf 1}, \Pi({\bf 1})$. By lemma \ref{stable} the constituent ${\bf 1}$ occurs. By duality then also the constituent $Ber_1^{-2}$ must occur. Since $d(V)=0$ the constituents $\Pi(Ber_1^{-2})$ and $\Pi({\bf 1})$ must occur. Finally apply proposition \ref{ext-0}. This example shows that $DS$ in general does not preserve negligible objects. {\it Highest weights}. Suppose $(V,\rho)$ is a highest weight module of weight $\lambda$ such that $\lambda(H)=0$. Let $\nu$ be a weight of $V$. Then $$ \nu \in \lambda - \sum_{\alpha\in\Delta_n} \mathbb{N}_{\geq 0} \cdot \alpha $$ for the set $\Delta_n$ of simple positive roots $\alpha$ of $G_n$. Now suppose $\nu$ contributes to $DS(V)$. Then $\nu$ is a weight of $V^H$ and hence $\nu(H)=0$. Notice $\Delta_n$ is the union of $\Delta^+_n=\{e_1-e_2,...,e_{n-1}-e_n, e_{n+1}-e_{n+2},..., e_{2n-1}-e_{2n}\}$ and $\Delta^-_n =\{ e_n - e_{n+1} \}$. The restrictions of the simple roots $\alpha \in \Delta_n$ lie in $\Delta_{n-1}$ (i.e. are simple roots of $G_{n-1}$) except for the even simple roots $\alpha = e_{n-1}-e_n$, $\alpha= e_{n+1}-e_{n+2}$ and the odd simple root $\alpha = e_n-e_{n+1}$. A linear combination $\sum_{\alpha\in \Delta_n} n_{\alpha}\alpha $ annihilates $H$ if and only if the coefficients of $e_{n-1}-e_n$ and $e_{n+1}-e_{n+2}$ coincide, say both equal $m$; hence this holds iff $\lambda - \nu$ is of the form $\sum_{\alpha\in \Delta_{n-1}^+} n_\alpha \alpha + (n_\mu - m) \cdot (e_n - e_{n+1}) + m\cdot (e_{n-1} - e_{n+2})$. Notice that $\mu=(e_n - e_{n+1})$ is trivial on the maximal torus of $G_{n-1}$ and that $(e_{n-1} - e_{n+2})$ defines the new odd simple root in $\Delta_{n-1}^-$.
Hence the restriction of $\nu \in V^H$ is of the form $$ \nu\vert_{\mathfrak{b}\cap {\mathfrak g}_{n-1}} \ \in \ \lambda\vert_{\mathfrak{b}\cap {\mathfrak g}_{n-1}} - \sum_{\alpha\in\Delta_{n-1}} \mathbb{N}_{\geq 0} \cdot \alpha \ $$ under our assumptions above. Notice for $V_\lambda \subset V^H \subset V$ we have $$ \ell = \lambda(diag(1,..,1;t^{-1},1,..,1)) = n_\mu - m = \lambda'_n \ .$$ The discussion above implies \begin{lem} \label{Hi} For a highest weight module $(V,\rho)$ in $T_n$ of weight $\lambda$ with $\lambda(H)=0$ the module $DS(V,\rho)$ has its weights $\nu$ in $\lambda - \sum_{\alpha\in\Delta_{n-1}} \mathbb{N}_{\geq 0} \cdot \alpha$. \end{lem} \begin{cor} Given $(V,\rho)\in T_n$, suppose $L(\lambda)$ is a Jordan-H\"older constituent of $(V,\rho)$ such that for all Jordan-H\"older constituents $L(\nu)$ of $(V,\rho)$ we have $\nu \in \lambda - \sum_{\alpha\in\Delta_n} \mathbb{N}_{\geq 0} \cdot \alpha$ and $\nu(H)=0$. Then $L(\overline\lambda)$ appears in $DS(V,\rho)$ and all other irreducible constituents $L(\nu')$ or $\Pi L(\nu')$ of $DS(V,\rho)$ satisfy $\nu' \in \lambda - \sum_{\alpha\in \Delta_{n-1}} \mathbb{N}_{\geq 0} \cdot \alpha$. \end{cor} {\it Proof}. This follows from the last lemma and the weak exactness of the functor $DS$. \qed \section{The Casimir}\label{sec:Casimir} We study the operation of the Casimir $C_n$ on $DS(V)$. This will be used in section \ref{sec:loewy-length} when we study the effect of $DS$ on translation functors $F_i(L_{\times \circ})$. Consider the fixed element $x\in {\mathfrak g}_n$ $$x = \begin{pmatrix} 0 & y \\ 0 & 0 \end{pmatrix} \in {\mathfrak g}_{n} \ \text{ for } \ y = \begin{pmatrix} 0 & 0 & \ldots & 0 \\ 0 & 0 & \ldots & 0 \\ \ldots & & \ldots & \\ 1 & 0 & 0 & 0 \\ \end{pmatrix} $$ Similarly we define $$x_i\ ,\ x_i' \ \text{ for } i=1,..,n-1$$ for matrices $y=y_i$ resp. $y_i'$ with a unique entry 1 in the first column resp. 
last row at positions different from the entry 1 in the above $y$ $$ \begin{pmatrix} * & 0 & \ldots & 0 \\ * & 0 & \ldots & 0 \\ \ldots & & \ldots & \\ 0 & * & * & * \\ \end{pmatrix} $$ Then $x, x_i, x'_i$ are in $\mathfrak{u}$ for $i=1,...,n-1$. The elements $x_i,x'_i$ satisfy $[x_i,x]=0=[x'_i,x]$. Using Brundan-Stroppel's notations \cite{Brundan-Stroppel-4}, (2.14), let $e_{r,s}\in {\mathfrak g}_n$ be the $rs$-matrix unit. Then the Casimir operator $C_n= \sum_{r,s=1}^{2n} (-1)^{\overline s} e_{r,s} e_{s,r}$ of the super Lie algebra ${\mathfrak g}_n=Lie(G_n)$ is recursively given by $$ C_n = C_{n-1} + C_1 + 2(\overline z_1 z_1 + \cdots + \overline z_{n-1} z_{n-1}) + (e_{1,1} + \cdots + e_{n-1,n-1} - (n-1)e_{n,n}) $$ $$ -2(\overline z'_1 z'_1 + \cdots + \overline z'_{n-1} z'_{n-1}) - ( - e_{n+2,n+2} - \cdots - e_{2n,2n} + (n-1) e_{n+1,n+1}) $$ $$ + 2(\overline x_1 x_1 + \cdots + \overline x_{n-1} x_{n-1}) - (e_{1,1} + \cdots + e_{n-1,n-1} + (n-1)e_{n+1,n+1}) $$ $$ + 2(\overline x'_1 x'_1 + \cdots + \overline x'_{n-1} x'_{n-1}) - (e_{n+2,n+2} + \cdots + e_{2n,2n} + (n-1)e_{n,n}) $$ with the notations $x_i = e_{i,n+1}$, $x'_i = e_{n,2n+1-i}$, $z_i=e_{i,n}$ and $z'_i= e_{n+1,2n+1-i}$. Furthermore $\overline x_i, \overline x'_i, \overline z_i$ and $\overline z'_i$ denote the supertransposes of $x_i, x'_i, z_i$ and $z'_i$. Hence $$ C_n = C_{n-1} + C_1 + 2(\overline z_1 z_1 + \cdots + \overline z_{n-1} z_{n-1} -\overline z'_1 z'_1 - \cdots - \overline z'_{n-1} z'_{n-1}) $$ $$ + 2(\overline x_1 x_1 + \cdots + \overline x_{n-1} x_{n-1} + \overline x'_1 x'_1 + \cdots + \overline x'_{n-1} x'_{n-1}) - 2(n-1)H \ $$ using $[\tau(x),x_i] = z_i$ and $[\tau(x),x'_i]= z'_i$ and $$[z_i,\overline z_i] = e_{i,i} - e_{n,n}\ \quad , \quad [z'_i,\overline z'_i] = e_{n+1,n+1} - e_{2n+1-i,2n+1-i}$$ and $[\overline x_i,x_i] = e_{i,i} + e_{n+1,n+1}$ and $[\overline x'_i,x'_i] = e_{2n+1-i,2n+1-i} + e_{n,n}$.
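The bracket identities above are elementary matrix-unit computations; the following sketch (helper names ours, not from the text) verifies $[z_i,\overline z_i] = e_{i,i} - e_{n,n}$ and $[\overline x_i,x_i] = e_{i,i} + e_{n+1,n+1}$ for $n=2$, $i=1$, using the ordinary commutator for the even pair and the anticommutator for the odd pair.

```python
def E(r, s, N=4):
    """Matrix unit e_{r,s} (1-based indices) in gl(2|2), as a nested list."""
    return [[1 if (i, j) == (r - 1, s - 1) else 0 for j in range(N)] for i in range(N)]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b))) for j in range(len(b[0]))] for i in range(len(a))]

def add(a, b, sign=1):
    return [[a[i][j] + sign * b[i][j] for j in range(len(a[0]))] for i in range(len(a))]

n, i = 2, 1                                    # check in gl(2|2), i.e. N = 2n = 4
z, zbar = E(i, n), E(n, i)                     # z_i = e_{i,n} and its supertranspose (even)
x, xbar = E(i, n + 1), E(n + 1, i)             # x_i = e_{i,n+1} and its supertranspose (odd)
comm = add(mul(z, zbar), mul(zbar, z), -1)     # even-even bracket: ordinary commutator
acomm = add(mul(xbar, x), mul(x, xbar), +1)    # odd-odd bracket: anticommutator
# comm equals e_{1,1} - e_{2,2} and acomm equals e_{1,1} + e_{3,3}, as claimed
```

The same check works for any $1\leq i\leq n-1$ after adjusting $N=2n$.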
Notice $\overline x_i x_i - x_i \overline x_i = 2 \overline x_i x_i - e_{i,i} - e_{n+1,n+1}$ and $\overline x'_i x'_i - x'_i \overline x'_i = 2 \overline x'_i x'_i - e_{2n+1-i,2n+1-i} - e_{n,n}$. Finally $C_1 = e_{n,n}^2 - e_{n+1,n+1}^2 - x \overline x + \overline x x = e_{n,n}^2 - e_{n+1,n+1}^2 + 2 \overline x x - H$. {\it Representations}. Suppose $(V,\rho)$ is a representation of ${\mathfrak g}_n$. On $DS(V,\rho)$ we have $\rho(H)=0$ and $\rho(x)=0$. Since $$[x,x_i] = [x,x'_i] = [x,z_i] = [x,z'_i]= 0 \ ,$$ the elements $x_i,x'_i,z_i,z'_i$ naturally act on the cohomology $DS(V)=V_x$. Since $x$ commutes with $H$, the space $Kern(x)$ and its subspace $Im(x)$ decompose into $H$-eigenspaces $Kern(x)(j)$ and $Im(x)(j)$ for $j\in \mathbb Z$. By lemma \ref{homotopy} however $Kern(x)(j)= Im(x)(j)$, except for the zero-eigenspace of $H$. Now, although $x,\overline x$ commute with $H$, the operators $y\in\{x_i,x'_i,z_i,z'_i\}$ satisfy $[H,y]=\pm y$ and hence map the zero eigenspace $M=V^H$ into the $\pm 1$-eigenspace of $H$ on $V$. Since the $j=\pm 1$-eigenspaces do not give a nonzero contribution to the cohomology $DS(V) = V_x$, this implies \begin{lem}\label{T} The natural action of $\rho(x_i), \rho(x'_i), \rho(z_i), \rho(z'_i)$ and $\rho(x), \rho(H)$ on $DS(V,\rho)$ is trivial. \end{lem} Notice that $C_n$ commutes with all elements in ${\mathfrak g}_n$, hence induces a linear map on $DS(V,\rho)$ that commutes with the action of $G_{n-1}$ on $DS(V,\rho)$. \begin{lem} \label{Cas} The restriction of the Casimir $C_n$ acts on $DS(V,\rho)$ like the Casimir $C_{n-1}$ of $T_{n-1}$ acts on $DS(V,\rho)\in T_{n-1}$. \end{lem} {\it Proof}. By lemma \ref{T} the restriction of $C_n$ to $DS(V,\rho)$ is the sum of $C_{n-1}$ and the operator $C_1 = e_{n,n}^2 - e_{n+1,n+1}^2 $. Now consider a weight space of $DS(V,\rho)$ with eigenvalue $\overline\lambda$. Then $\overline\lambda$ is the restriction of a weight $\lambda$ of the weight decomposition of $(V,\rho)$.
Since $DS(V,\rho)$ is represented by elements in $M=V^H$, the condition $\lambda(H)=0$ implies $\lambda_n = - \lambda_{n+1}$ and hence $\lambda_n^2 - \lambda_{n+1}^2=0$. Therefore $C_1$ acts trivially on $DS(V,\rho)$. \qed \noindent {\it Remark}. As the referee pointed out, there is a more conceptual proof of lemma \ref{Cas}. For a module $M \in T_n$, the Casimir map is the composition \[ \xymatrix{ C_M: M \ar[r] & \mathfrak{gl}(n|n) \otimes \mathfrak{gl}(n|n)^* \otimes M \ar[r] & \mathfrak{gl}(n|n) \otimes M \ar[r] & M } \] where the first map is the coevaluation map for the adjoint representation of ${\mathfrak g}$ and the last two are the action maps $\mathfrak{gl}(n|n) \otimes M \to M$. Since $DS$ maps the standard representation to the standard representation, it preserves the adjoint representation as well, and hence preserves the Casimir map in the sense that $DS(C_M)$ is the Casimir on $DS(M)$. \part{The Main Theorem and its proof} In this part we prove the main theorem, stating that $DS(L) = \bigoplus_i \Pi^{n_i} L(\lambda_i)$ in $T_{n-1}$ for any irreducible representation $L$. We have seen that $DS(L)$ is actually a ${\mathbf{Z}}$-graded object in $T_{n-1}$, and we calculate the ${\mathbf{Z}}$-grading for any $L$ in the propositions \ref{hproof} and \ref{hproof-2}. These statements contain the main theorem as a special case. Their proofs however depend on the main theorem and its proof. We will prove the main theorem first for special irreducible $L$, called ground states, and reduce the general question to these by means of translation functors. \section{The language of Brundan and Stroppel}\label{BS} By the work of Brundan-Stroppel \cite{Brundan-Stroppel-4} the block combinatorics and notably the $Ext^1$ between irreducible representations can be described in terms of weight and cup diagrams associated to any irreducible $L(\lambda)$. {\it Weight diagrams}. Consider a weight $\lambda=(\lambda_1,...,\lambda_n ; \lambda_{n+1}, \cdots, \lambda_{2n})$.
Then $\lambda_1 \geq ... \geq \lambda_n$ and $\lambda_{n+1} \geq ... \geq \lambda_{2n}$ are integers, and every $\lambda\in {\mathbb Z}^{2n}$ satisfying these inequalities occurs as the highest weight of an irreducible representation $L(\lambda)$. The set of highest weights will be denoted by $X^+=X^+(n)$. Following \cite{Brundan-Stroppel-4}, to each highest weight $\lambda\in X^+(n)$ we associate two subsets of cardinality $n$ of the numberline $\mathbb Z$ \begin{align*} I_\times(\lambda)\ & =\ \{ \lambda_1 , \lambda_2 - 1, ... , \lambda_n - n +1 \} \\ I_\circ(\lambda)\ & = \ \{ 1 - n - \lambda_{n+1} , 2 - n - \lambda_{n+2} , ... , - \lambda_{2n} \}. \end{align*} We now define a labeling of the numberline $\mathbb Z$. The integers in $ I_\times(\lambda) \cap I_\circ(\lambda) $ are labeled by $\vee$, the remaining ones in $I_\times(\lambda)$ resp. $I_\circ(\lambda)$ are labeled by $\times$ respectively $\circ$. All other integers are labeled by $\wedge$. This labeling of the numberline uniquely characterizes the weight vector $\lambda$. If the label $\vee$ occurs $r$ times in the labeling, then $r=atyp(\lambda)$ is called the {\it degree of atypicality} of $\lambda$. Notice $0 \leq r \leq n$, and for $r=n$ the weight $\lambda$ is called {\it maximal atypical}. {\it Blocks}. A block $\Gamma$ of $X^+(n)$ is a connected component of the Ext-quiver of ${{\mathcal R}}_n$. Let ${{\mathcal R}}_{\Gamma}$ (or by abuse of notation $\Gamma$) be the full subcategory of objects of ${{\mathcal R}}_n$ such that all composition factors are in $\Gamma$. This gives a decomposition ${{\mathcal R}}_n = \bigoplus_{\Gamma} {{\mathcal R}}_{\Gamma}$ of the abelian category. Two irreducible representations $L(\lambda)$ and $L(\mu)$ are in the same block if and only if the weights $\lambda$ and $\mu$ define labelings with the same position of the labels $\times$ and $\circ$.
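Since the labeling rule is purely combinatorial, it can be sketched in a few lines of code (an illustration of ours; the function name and data layout are not from the text):

```python
def weight_diagram(lam, n):
    """Labels for the dominant weight lam = (lam_1,...,lam_n; lam_{n+1},...,lam_{2n}),
    given as a list of 2n integers.  Returns {position: label} for the finitely
    many positions not labeled by a wedge."""
    I_cross = {lam[i] - i for i in range(n)}                # lam_1, lam_2 - 1, ..., lam_n - n + 1
    I_circ = {(i + 1) - n - lam[n + i] for i in range(n)}   # 1 - n - lam_{n+1}, ..., -lam_{2n}
    labels = {}
    for p in I_cross | I_circ:
        if p in I_cross and p in I_circ:
            labels[p] = 'v'        # the label "vee"; every unlisted integer carries a wedge
        elif p in I_cross:
            labels[p] = 'x'
        else:
            labels[p] = 'o'
    return labels

# the trivial representation for n = 4: four vees at 0, -1, -2, -3, so atyp = 4
triv = weight_diagram([0] * 8, 4)
```

The degree of atypicality is then simply the number of `'v'` labels.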
The degree of atypicality is a block invariant, and the blocks $\Lambda$ of atypicality $r$ are in 1-1 correspondence with pairs of disjoint subsets of ${\mathbb Z}$ of cardinality $n-r$ resp. $n-r$. Let ${\mathcal R}_n^i$ be the full subcategory of ${\mathcal R}_n$ defined by the blocks of atypicality $i$. In particular ${\mathcal R}_n$ has a unique maximally atypical block, and any block of atypicality $i$ in ${\mathcal R}_n$ is equivalent to the maximally atypical block in ${\mathcal R}_i$. {\it Cups}. To each weight diagram we associate a cup diagram as in \cite{Brundan-Stroppel-1}. Here a cup is a lower semi-circle joining two points in $\mathbb Z$. To construct the cup diagram go from left to right through the weight diagram until one finds a pair of vertices $\vee \ \ \ \wedge$ such that there are only $\times$'s, $\circ$'s or vertices which are already joined by cups between them. Then join $\vee \ \ \wedge$ by a cup. This procedure will result in a cup diagram with $r$ cups. For example \begin{center} \begin{tikzpicture} \draw (-5,0) -- (6,0); \foreach \x in {0,-1,-2,-3} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {-5,-4,1,2,3,4,5} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \draw [-,black,out=270, in=270](0,0) to (1,0); \draw [-,black,out=270, in=270](-1,0) to (2,0); \draw [-,black,out=270, in=270](-2,0) to (3,0); \draw [-,black,out=270, in=270](-3,0) to (4,0); \end{tikzpicture} \end{center} is the labelled cup diagram ($n=4$) of the trivial representation attached to the weight $\lambda=(0,\ldots,0 \vert 0, \ldots,0)$. {\it Sectors and segments}. For the purpose of this paragraph we assume $\lambda\in X^+$ to be in a maximal atypical block, so that the weight diagram does not have labels $\times$ or $\circ$. Some of the $r$ cups of a cup diagram may be nested.
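The left-to-right construction of the cups is the familiar bracket matching of $\vee$'s (openers) with $\wedge$'s (closers); a minimal sketch (our own helper, consuming a labeling of the weight diagram as above):

```python
def cup_diagram(labels, window):
    """Join each vee to the next free wedge on its right, skipping x's and o's.
    labels: {position: 'v'/'x'/'o'}; positions of window not listed are wedges.
    window must be increasing and cover all vees plus enough wedges to their right."""
    stack, cups = [], []
    for p in window:
        label = labels.get(p, 'w')          # 'w' stands for a wedge
        if label == 'v':
            stack.append(p)
        elif label == 'w' and stack:
            cups.append((stack.pop(), p))   # the innermost open vee closes first
    return sorted(cups)

# the trivial representation for n = 4 (vees at -3,...,0) gives the four nested
# cups (-3,4), (-2,3), (-1,2), (0,1) of the picture above
cups = cup_diagram({0: 'v', -1: 'v', -2: 'v', -3: 'v'}, range(-3, 5))
```

The stack discipline is equivalent to the rule in the text: a $\vee\,\wedge$ pair is joined exactly when everything between them is a $\times$, a $\circ$, or already joined.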
If we remove all inner parts of the nested cups we obtain a cup diagram defined by the (remaining) outer cups. We enumerate these cups from left to right. The starting point of the $j$-th lower cup is denoted $a_j$ and its endpoint is denoted $b_j$. Then there is a label $\vee$ at the position $a_j$ and a label $\wedge$ at position $b_j$. The interval $[a_j,b_j]$ of ${\mathbb Z}$ will be called the $j$-th sector of the cup diagram. Adjacent sectors, i.e. with $b_j=a_{j+1} -1$, will be grouped together into segments. The segments again define intervals in the numberline. Let $s_j$ be the starting point of the $j$-th segment and $t_j$ the endpoint of the $j$-th segment. Between any two segments there is a distance $\geq 1$. In the following case the weight diagram has 2 segments and 3 sectors \begin{center} \begin{tikzpicture} \draw (-5,0) -- (6,0); \foreach \x in {} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \draw [-,black,out=270, in=270](-4,0) to (-1,0); \draw [-,black,out=270, in=270](-3,0) to (-2,0); \draw [-,black,out=270, in=270](1,0) to (2,0); \draw [-,black,out=270, in=270](3,0) to (4,0); \end{tikzpicture} \end{center} whereas the following weight diagram has 1 segment and 1 sector. \begin{center} \begin{tikzpicture} \draw (-5,0) -- (6,0); \foreach \x in {} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \draw [-,black,out=270, in=270](-3,0) to (4,0); \draw [-,black,out=270, in=270](-2,0) to (-1,0); \draw [-,black,out=270, in=270](0,0) to (3,0); \draw [-,black,out=270, in=270](1,0) to (2,0); \end{tikzpicture} \end{center} Removing the outer cup would result in a cup diagram with two sectors and one segment.
We can also define the notion of a sector or segment for blocks which are not maximally atypical. In this case we say that two sectors are adjacent (and belong to the same segment) if they are only separated by $\times$ or $\circ$'s. For our purpose the $\times$ and $\circ$'s will not play a role and we will often implicitly assume that we are in the maximally atypical block. {\it Important invariants}. Note that the segment and sector structure of a weight diagram is completely encoded by the positions of the $\vee$'s. Hence any finite subset of ${\mathbb Z}$ defines a unique weight diagram in a given block. This will lead to the notion of a {\it plot} in the next section where we associate to a maximal atypical highest weight the following invariants: \begin{itemize} \item the type (SD) resp. (NSD), \item the number $k=k(\lambda)$ of sectors of $\lambda$, \item the sectors $S_\nu=(I_\nu,K_\nu)$ from left to right (for $\nu=1,...,k$), \item the ranks $r_\nu = r(S_\nu)$, so that $\# I_\nu = 2r_\nu$, \item the distances $d_\nu$ between the sectors (for $\nu=1,...,k-1$), \item the total shift factor $d_0=\lambda_n$ \item and the added distances $\delta_i = \sum_{\nu=0}^{i-1} d_{\nu}$. \end{itemize} If convenient, $k$ sometimes may also denote the number of segments, but hopefully no confusion will arise from this. A maximally atypical weight is called basic if $\lambda_\nu = -\lambda_{n+\nu}$ holds for $\nu=1,...,n$ such that $[\lambda]:=(\lambda_1,...,\lambda_n)$ defines a decreasing sequence $\lambda_1 \geq \cdots \geq \lambda_{n-1} \geq \lambda_n=0$ with the property $n-\nu \geq \lambda_\nu$ for all $\nu=1,...,n$. The total number of such {\it basic weights} in $X^+(n)$ is the Catalan number $C_n$. Reflecting the graph of such a sequence $[\lambda]$ at the diagonal, one obtains another basic weight $[\lambda]^*$. We will show that a basic weight $\lambda$ is of type (SD) if and only if $[\lambda]^* = [\lambda]$ holds. 
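The Catalan count of the basic weights is easy to confirm for small $n$ by brute force (our own sketch, not part of the text):

```python
from itertools import product

def basic_weights(n):
    """All decreasing sequences lam_1 >= ... >= lam_n = 0 with lam_nu <= n - nu."""
    return [lam for lam in product(*[range(n - nu + 1) for nu in range(1, n + 1)])
            if all(lam[i] >= lam[i + 1] for i in range(n - 1))]

# the counts for n = 1, 2, 3, 4 are the Catalan numbers 1, 2, 5, 14
counts = [len(basic_weights(n)) for n in range(1, 5)]
```

For instance `basic_weights(3)` consists of the five weights $[0,0,0]$, $[1,0,0]$, $[1,1,0]$, $[2,0,0]$ and $[2,1,0]$.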
To every maximal atypical highest weight $\lambda$ is attached a unique maximal atypical highest weight $\lambda_{basic}$ $$ \lambda \mapsto \lambda_{basic} \ $$ having the same invariants as $\lambda$, except that $d_0 = d_1=\cdots = d_{k-1}=0$ holds for $\lambda_{basic}$. For example, the basic weight attached to the irreducible representation $[5,4,-1]$ in ${\mathcal R}_3$ with cup diagram \begin{center} \begin{tikzpicture} \draw (-5,0) -- (6,0); \foreach \x in {} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \draw [-,black,out=270, in=270](-4,0) to (-3,0); \draw [-,black,out=270, in=270](2,0) to (3,0); \draw [-,black,out=270, in=270](4,0) to (5,0); \end{tikzpicture} \end{center} is the basic representation $[2,1,0]$ with weight diagram \begin{center} \begin{tikzpicture} \draw (-5,0) -- (6,0); \foreach \x in {} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \draw [-,black,out=270, in=270](-2,0) to (-1,0); \draw [-,black,out=270, in=270](0,0) to (1,0); \draw [-,black,out=270, in=270](2,0) to (3,0); \end{tikzpicture} \end{center} \section{On segments, sectors and plots}\label{derivat} If $\lambda$ is a maximally atypical weight in ${\mathcal R}_n$, it is completely encoded by the $n$ $\vee$'s in its weight diagram. We change the point of view and regard it as a map (a plot) $\lambda: {\mathbf{Z}} \to \{ \boxplus, \boxminus\}$ where the $\boxplus$ correspond to the $\vee$'s. If $\lambda$ is not maximal atypical, its weight diagram has crosses and circles. These do not play any role in the combinatorial arguments, and we can still describe $\lambda$ by a plot if we just ignore and remove the crosses and circles from the weight diagram.
A plot $\lambda$ is a map $$ \lambda: \mathbb Z \to \{\boxplus,\boxminus\}\ $$ such that the cardinality $r$ of the fiber $\lambda^{-1}(\boxplus)$ is finite. Then by definition $r=r(\lambda)$ is the degree and $\lambda^{-1}(\boxplus)$ is the support of $\lambda$. As usual an interval $[a,b] \subset \mathbb Z$ is the set $\{x\in\mathbb Z\ \vert \ a \leq x \leq b\}$. Replacing $\boxplus$ by $1$ and $\boxminus$ by $-1$ we may view $\lambda(x)$ as a real valued function extended by $\lambda(x):= \lambda([x])$ to a function on $\mathbb R$ for $[x] = \max \{ n\in \mathbb Z \ \vert \ n\leq x\}$. {\bf Segments and sectors}. An interval $I=[a,b]$ of even cardinality $2r$ and a subset $K\subseteq I$ of cardinality $r$ defines a plot $\lambda$ of rank $r$ with support $K$. We call $(I,K)$ a {\it segment}, if $f(x) = \int_a^x \lambda(t) dt$ is nonnegative on $I$. Notice that then $a\in K$ but $b\notin K$. {\it Factorization}. For a given plot $\lambda$ put $a=\min(supp(\lambda))$ and for the first zero $x_0>a$ of the function $f(x)=\int_{a}^x \lambda(t)dt$ put $b=x_0-1$. This defines an interval $I=[a,b]$ of even length, such that $\lambda\vert_I$ (now again viewed as a function on $I\cap \mathbb Z$) admits the values $1$ and $-1$ equally often. If $supp(\lambda)\subset I$, then $\lambda$ is called a {\it prime} plot. If $\lambda$ is not a prime plot, the plot $\lambda_1$ with support $I\cap supp(\lambda)$ defines a prime plot. It is called the first sector of the plot $\lambda$. Now replace the plot $\lambda$ by the plot, where the support $K_1$ of the first sector $I=I_1$ is removed from the support $K$ of $\lambda$. Repeating the process above, we obtain a prime plot $\lambda_2$ with support $K_2$ defining a segment $(I_2,K_2)$. This segment is called the second sector of $\lambda$. Obviously $I_2$ is an interval in $\mathbb Z$ on the right of $I_1$, hence in particular they are disjoint.
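The integral criterion for a segment amounts to a nonnegative running sum of $\pm 1$ steps ending at zero; a small sketch (helper name ours):

```python
def is_segment(I, K):
    """Integral criterion: walking through I = [a, b], step +1 on the support K
    and -1 off it; (I, K) is a segment iff the running sum stays nonnegative
    and returns to 0 at the right endpoint."""
    a, b = I
    s = 0
    for x in range(a, b + 1):
        s += 1 if x in K else -1
        if s < 0:
            return False
    return s == 0

# ([0,3], {0,1}) is a segment; ([0,3], {1,2}) is not, since it starts with a -1 step
ok, bad = is_segment((0, 3), {0, 1}), is_segment((0, 3), {1, 2})
```

In particular a segment necessarily has $a$ in the support and $b$ outside it, as noted above.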
Continuing with this process, one defines finitely many prime plots $\lambda_1,...,\lambda_k$ attached to a given plot, defining disjoint segments $S_1=(I_1,K_1)$,..., $S_k=(I_k,K_k)$. These segments $S_\nu$ are called the sectors of the plot $\lambda$. Let $$ d_\nu = dist(I_\nu, I_{\nu+1}) \quad , \quad \nu=1,..., k-1$$ denote the distances between these sectors $S_\nu$, i.e. $d_\nu = \min(I_{\nu+1}) - \max(I_\nu) - 1$. For disjoint segments $(I_1,K_1)$ and $(I_2,K_2)$ the union $(I,K)=(I_1\cup I_2,K_1\cup K_2)$ again is a segment, provided $I=I_1\cup I_2$ is an interval in $\mathbb Z$. Grouping together adjacent sectors of $\lambda$ with distances $d_\nu=0$ defines the segments of $\lambda$. In other words, the union of the intervals $I_\nu$ of the sectors $S_\nu$ of the $\lambda_\nu$ can be written as a disjoint union of intervals $I$ of maximal length. These intervals $I$ define the {\it segments of $\lambda$} as $(I, I \cap supp(\lambda))$. We consider formal finite linear combinations $\sum_i n_i\cdot \lambda_i$ of plots with integer coefficients. This defines an abelian group $R = \bigoplus_{r=0}^\infty R_r$ (graded by the rank $r$). We define a commutative ring structure on $R$ so that the product of two plots $\lambda_1$ and $\lambda_2$ is zero unless the segments of $\lambda_1$ and $\lambda_2$ are disjoint, in which case the product is the plot whose support is the union of the supports. A plot $\lambda$ that cannot be written in the form $\lambda_1 \cdot \lambda_2$ for plots $\lambda_i$ of rank $r_i>0$ is called a prime plot. \begin{lem} \label{primef} Every plot can be written as a product of prime plots, uniquely up to permutation of the factors. \end{lem} Of course this prime factorization of a given plot $\lambda$ is given by the prime factors $\lambda_\nu$ attached to the sectors $S_\nu$, $\nu=1,...,k$ of $\lambda$.
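To make the factorization concrete, here is a small worked instance (the plot chosen is hypothetical, not attached to any particular representation):

```latex
{\it Example.} Consider the plot $\lambda$ of rank $3$ with support $\{0,1,4\}$.
Then $a=0$ and
$$ f(1)=1 \ , \quad f(2)=2 \ , \quad f(3)=1 \ , \quad f(4)=0 \ ,$$
so the first zero is $x_0=4$ and $b=3$. The first sector is
$S_1=([0,3],\{0,1\})$. Removing its support leaves the support $\{4\}$, which
gives the second sector $S_2=([4,5],\{4\})$. Hence $\lambda=\lambda_1\cdot\lambda_2$
is the prime factorization of $\lambda$, and since the intervals $[0,3]$ and
$[4,5]$ are adjacent, the two sectors combine into the single segment
$([0,5],\{0,1,4\})$ of $\lambda$.
```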
Hence for $\lambda= \prod_i \lambda_i$ with prime plots $\lambda_i$, the {\it sectors of $\lambda$} are the segments attached to the prime factors $\lambda_i$. The interval $I=[a,...,b]$ attached to a prime plot $\lambda$ is the unique sector or the unique segment of the prime plot $\lambda$. It has cardinality $2r(\lambda)$, and the support $K$ of $\lambda$ defines a subset of the sector $I$ of cardinality $r$. Recall $a\in K$ but $b\notin K$. {\bf Differentiation}. We define a derivation on $R$ called the derivative. Indeed the derivative induces an additive map $$ \partial: R_{n} \to R_{n-1} \ .$$ To {\it differentiate} a plot of rank $n>0$, or a segment, we use the formula $$ \partial(\prod_i \lambda_i) = \sum_i \partial \lambda_i \cdot \prod_{j\neq i} \lambda_j $$ in the ring $R$ to reduce the definition to the case of a prime plot $\lambda$. For prime $\lambda$ let $(I,K)$ be its associated sector, say $I=[a,b]$. Using $a\in K$, $b\notin K$, for a sector $(I,K)$ of a prime plot $\lambda$ it is easy to verify by the integral criterion that $$\partial(I,K) = (I,K)' = (I',K')$$ for $I'=[a+1,b-1]$ and $K'=I'\cap K$ again defines the sector of a prime plot $\partial \lambda$ of rank $r(\partial\lambda)=r(\lambda)-1$. Then for prime plots $\lambda$ of rank $n$ with sector $(I,K)$ we define $\partial \lambda$ in $R$ by $$ \fbox{$ \partial \lambda = \partial(I,K) $} \quad , \quad I=[a,b] \ .$$ {\bf Integration}. For a segment $(I,K)$ with $I=[a,b]$ put $$ \int(I,K) = ([a-1,b+1],K\cup\{a-1\}) \ $$ increasing the rank by 1. Observe that the integral criterion implies that $([a-1,b+1],K\cup\{a-1\})$ always defines a prime segment. Obviously $$ \partial \int (I,K) = (I,K) \ . $$ Similarly $\int \partial (I,K) =(I,K)$ for a prime segment $(I,K)$ of rank $>0$. {\bf Lowering sectors}. For a sector $S=(I,K)$ with $I=[a,b]$ define $$ S^{low} \ =\ ([a-1,a],\{a-1\}) \cup \partial(S) \ .$$ Notice that $S^{low}$ is a segment with interval $[a-1,b-1]$. {\bf Melting sectors}.
Suppose $\lambda_1$ and $\lambda_2$ are prime plots. Let $(I_1,K_1)$ and $(I_2,K_2)$ be their defining sectors. Assume that $(I,K)=(I_1\cup I_2,K_1\cup K_2)$ defines a segment with plot $\lambda$. Hence $I_1=[a,i]$ and $I_2=[i+1,b]$ for some $i\in \mathbb Z$, with $i\notin K_1$ and $i+1\in K_2$. Then by the integral criterion $$ (I,K)^{melt} \ = \ (I_1\cup I_2, K_1 \cup \{i \} \cup (K_2 - \{i+1\} )) $$ defines a prime plot with $I=[a,b]$. {\it Example.} We can represent plots with labelled cup diagrams. A plot of rank $r$ has $r$ cups. For instance the irreducible representation $[3,3,1,1] \in {\mathcal R}_4$ has the cup diagram \begin{center} \begin{tikzpicture} \draw (-5,0) -- (6,0); \foreach \x in {} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \draw [-,black,out=270, in=270](-2,0) to (1,0); \draw [-,black,out=270, in=270](-1,0) to (-0,0); \draw [-,black,out=270, in=270](2,0) to (5,0); \draw [-,black,out=270, in=270](3,0) to (4,0); \end{tikzpicture} \end{center} The corresponding plot is defined by its support $\{-2,-1,2,3\}$.
Its derivative is the sum of two plots of rank 3 corresponding to the two cup diagrams \begin{center} \begin{tikzpicture} \draw (-5,0) -- (6,0); \foreach \x in {} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \draw [-,black,out=270, in=270](-1,0) to (0,0); \draw [-,black,out=270, in=270](2,0) to (5,0); \draw [-,black,out=270, in=270](3,0) to (4,0); \end{tikzpicture} \end{center} \begin{center} \begin{tikzpicture} \draw (-5,0) -- (6,0); \foreach \x in {} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \draw [-,black,out=270, in=270](-2,0) to (1,0); \draw [-,black,out=270, in=270](-1,0) to (0,0); \draw [-,black,out=270, in=270](3,0) to (4,0); \end{tikzpicture} \end{center} If we integrate the first segment of the plot we get the plot of rank 5 with support $\{-3,-2,-1,2,3\}$ with corresponding cup diagram \begin{center} \begin{tikzpicture} \draw (-5,0) -- (7,0); \foreach \x in {} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \draw [-,black,out=270, in=270](-2,0) to (1,0); \draw [-,black,out=270, in=270](-1,0) to (-0,0); \draw [-,black,out=270, in=270](2,0) to (5,0); \draw [-,black,out=270, in=270](3,0) to (4,0); \draw [-,black,out=270, in=270](-3,0) to (6,0); \end{tikzpicture} \end{center} The plot of $[3,3,1,1]$ has two adjacent sectors. 
Melting these two gives the plot with support $\{-2,-1,1,3\}$ with cup diagram \begin{center} \begin{tikzpicture} \draw (-5,0) -- (6,0); \foreach \x in {} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \draw [-,black,out=270, in=270](-2,0) to (5,0); \draw [-,black,out=270, in=270](-1,0) to (0,0); \draw [-,black,out=270, in=270](1,0) to (2,0); \draw [-,black,out=270, in=270](3,0) to (4,0); \end{tikzpicture} \end{center} {\it The not maximally atypical case}. As with sectors and segments we can define the notion of a plot for representations which are not maximally atypical. We fix the block of the irreducible representations, i.e. the positions of the $\times$'s (say at the vertices $x_1, \ldots,x_r$) and the positions of the $\circ$'s (say at the vertices $\circ_1,\ldots, \circ_r$). Once these are fixed we define ${\mathbf{Z}}_{\times \circ} := {\mathbf{Z}} \setminus ( \{x_1,\ldots,x_r\} \cup \{\circ_1, \ldots, \circ_r\})$. Then a plot is a map $\lambda: {\mathbf{Z}}_{\times \circ} \to \{\boxplus,\boxminus\}$. The reader can convince himself that all the previous definitions and operations on plots (factorization, derivatives etc.) can be adapted easily to this more general setting. However in practice this amounts only to fixing the positions of the $\times$'s and $\circ$'s and then ignoring them. We will associate in section \ref{sec:loewy-length} to every weight $\lambda$ of atypicality $i$ a plot $\phi(\lambda)$ of rank $i$ (without $\times$'s and $\circ$'s) and work with these instead. \section{Mixed tensors and ground states} \label{stable0} \noindent We compute $DS(L) \in T_{n-1}$ for special irreducible representations $L$ in a block $\Gamma$, the so-called ground states. The general case of arbitrary $L$ will then be reduced to this case in later sections.
Let $MT$ denote the full subcategory of mixed tensors in ${{\mathcal R}_n}$, whose objects are the direct sums of the indecomposable objects in ${{\mathcal R}}_n$ that appear in a decomposition of $X_{st}^{\otimes r} \otimes (X_{st}^{\vee})^{\otimes s}$ for some natural numbers $r,s \geq 0$, where $X_{st} \in {{\mathcal R}_n}$ denotes the standard representation. By \cite{Brundan-Stroppel-5} and \cite{Comes-Wilson} the indecomposable objects in $MT$ are parametrized by $(n|n)$-cross bipartitions. Let $R_n(\lambda^L,\lambda^R)$ denote the indecomposable representation in ${{\mathcal R}}_n$ corresponding to the bipartition $\lambda = (\lambda^L ,\lambda^R)$ under this parametrization. To any bipartition $\lambda$ we attach a weight diagram in the sense of \cite{Brundan-Stroppel-1}, i.e. a labelling of the number line $\mathbb Z$, according to the following dictionary. Put \[ I_{\wedge}(\lambda) := \{ \lambda_1^L, \lambda_2^L - 1, \lambda_3^L - 2, \ldots \} \quad \text{and}\quad I_{\vee}(\lambda) := \{1 -\lambda_1^R, 2 - \lambda_2^R, \ldots \}\ . \] Now label the integer vertices $i$ on the number line by the symbols $\wedge, \vee, \circ, \times$ according to the rule \[ \begin{cases} \circ \quad \text{ if } \ i \ \notin I_{\wedge} \cup I_{\vee}, \\ \wedge \quad \text{ if } \ i \in I_{\wedge}, \ i \notin I_{\vee}, \\ \vee \quad \text{ if } \ i \in I_{\vee}, \ i \notin I_{\wedge}, \\ \times \quad \text{ if } \ i \in I_{\wedge} \cap I_{\vee}. \end{cases} \] To any such data one attaches a cup diagram as in section \ref{BS} or \cite{Brundan-Stroppel-1}, and we define the following three invariants \begin{align*} a(\lambda) & = \text{ number of crosses } \\ d(\lambda) & = \text{ number of cups } \\ k(\lambda) & = a(\lambda) + d(\lambda) \end{align*} A bipartition is {\it $(n|n)$-cross} if and only if $k(\lambda) \leq n$.
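To illustrate the dictionary, one can work out a small case by hand (the bipartition $((1),(1))$ is chosen here purely as an illustration):

```latex
{\it Example.} For the bipartition $\lambda=(\lambda^L,\lambda^R)=((1),(1))$ we get
$$ I_{\wedge}(\lambda)=\{1,-1,-2,-3,\ldots\} \quad \text{and} \quad
   I_{\vee}(\lambda)=\{0,2,3,4,\ldots\} \ .$$
The weight diagram therefore carries a $\wedge$ at every vertex $i\leq -1$, a
$\vee$ at $0$, a $\wedge$ at $1$ and a $\vee$ at every vertex $i\geq 2$; there
are no crosses and no circles. Hence $a(\lambda)=0$, $d(\lambda)=1$ (a single
cup joining the vertices $0$ and $1$) and $k(\lambda)=1$, so $\lambda$ is
$(n|n)$-cross for every $n\geq 1$.
```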
By \cite{Brundan-Stroppel-5} the modules $R( \lambda^L, \lambda^R)$ have irreducible socle and cosocle equal to $L(\lambda^{\dagger})$, where the highest weight $\lambda^{\dagger}$ can be obtained by a combinatorial algorithm from $\lambda$. Let $$\theta: \Lambda \to X^+(n) \ $$ denote the resulting map $\lambda \mapsto \lambda^\dagger$ between the set of $(n|n)$-cross bipartitions $\Lambda$ and the set $X^+(n)$ of highest weights of ${\mathcal R}_n$. \begin{thm} \cite{Heidersdorf-mixed-tensors} $R = R(\lambda)$ is an indecomposable module of Loewy length $2 d(\lambda) + 1$. It is projective if and only if $k(\lambda) = n$, in which case we have $R = P(\lambda^{\dagger})$. \end{thm} Hence $R$ is irreducible if and only if $d(\lambda) = 0$, and then $R = L(\lambda^{\dagger})$. {\it Deligne's interpolating category}. For every $t \in k$ there exists the category $Rep(Gl_t)$ defined in \cite{Deligne-interpolation}. This is a $k$-linear pseudoabelian rigid tensor category. By construction it contains an object $st$ of dimension $t$, called the standard representation. Given any $k$-linear pseudoabelian tensor category $C$ with unit object, the assignment $F \mapsto F(st)$ defines an equivalence between the category of tensor functors $$F: Rep(Gl_t) \to C$$ and the category of $t$-dimensional dualizable objects $X \in C$ with their isomorphisms. In particular, given a dualizable object $X$ of dimension $t$ in a $k$-linear pseudoabelian tensor category $C$, there exists a unique tensor functor $F_X: Rep(Gl_t) \to C$ mapping $st$ to $X$. Hence, for our categories ${{\mathcal R}}_n$ and $t=0$, we get a tensor functor $F_n: Rep(Gl_0) \to {{\mathcal R}}_n$ by mapping the standard representation of $Rep(Gl_0)$ to the standard representation of $Gl(n\vert n)$ in ${{\mathcal R}}_n$. Every mixed tensor is in the image of this tensor functor (\cite{Comes-Wilson}).
The indecomposable elements in Deligne's category are parametrized by the set of all bipartitions. The kernel of $F_n$ contains those indecomposables labelled by bipartitions that are not $(n|n)$-cross. Any $(n|n)$-cross bipartition $\lambda$ defines an indecomposable object in the image of $F_n$. We write $R_n(\lambda)$ for $F_n(R(\lambda))$. By the universal property of Deligne's category any tensor functor from $Rep(Gl_0)$ to a tensor category $C$ is fixed up to isomorphism by the choice of an image of the standard representation of $Rep(Gl_0)$. \begin{lem}\label{stable2}\cite{Heidersdorf-mixed-tensors} $DS(R_n(\lambda)) = R_{n-1}(\lambda) $ holds unless $R_n(\lambda)$ is projective, in which case $DS(R_n(\lambda)) =0$. \end{lem} Note that the vanishing of $R_n(\lambda)_x$ in the projective case is just a special case of lemma \ref{van} (i) and (ii). {\it Proof}. An easy computation shows that under the Duflo-Serganova functor the standard representation of ${\mathfrak g}_n$ is mapped to the standard representation of ${\mathfrak g}_{n-1}$. Since any indecomposable mixed tensor module is in the image of a tensor functor from Deligne's category $Rep(Gl_0)$ \cite{Comes-Wilson}, the result follows from the commutative diagram \[ \xymatrix{ Rep (Gl_{0}) \ar[d]_{F_{n}} \ar[dr]^{F_{n-1}} & \\ {{\mathcal R}}_{n} \ar[r]_-{DS} & {{\mathcal R}}_{n-1} }. \] The kernel of $F_n$ consists of the $R(\lambda)$ with $k(\lambda) > n$, while the kernel of $F_{n-1}$ consists of the $R(\lambda)$ with $k(\lambda) \geq n$. Hence $R(\lambda) \in ker(DS)$ if and only if $k(\lambda) = n$, which is equivalent to $R(\lambda)$ being projective. \qed {\bf Example}. As in section \ref{sec:strategy} put ${\mathbb A}_{S^i} :=R((i),(1^i)) \in {{\mathcal R}}_n$. By lemma \ref{stable2} we have $({\mathbb A}_{S^i})_x = {\mathbb A}_{S^i}$ for all $i\geq 1$. \begin{cor}\label{projective-image} Every indecomposable projective module of ${{\mathcal R}}_{n-1}$ is in the image of $DS$. \end{cor} {\it Proof}.
The indecomposable projective modules are precisely the modules $DS(R(\lambda^L,\lambda^R))$ with $k(\lambda) = n-1$. Note that every indecomposable projective module is a mixed tensor \cite{Heidersdorf-mixed-tensors}.\qed {\it Irreducible mixed tensors}. By the results above the map $\theta: \Lambda \to X^+(n)$ is injective when restricted to bipartitions with $d(\lambda) = 0$. We denote by $\theta^{-1}$ its partial inverse. A closer inspection \cite{Heidersdorf-mixed-tensors} of the assignment $\theta: \lambda \mapsto \lambda^{\dagger}$ shows that $\theta$ and $\theta^{-1}$ are given by the following simplified rule: Define \begin{align*} m &= \text{ maximal coordinate of a } \times \text{ or } \circ \\ t & = \max( k(\lambda) + 1, m + 1) \\ s & = \begin{cases} 0 & \text{if } m + 1 \leq k(\lambda) + 1 \\ m - k(\lambda) & \text{if } m + 1 > k(\lambda) + 1 \end{cases} \end{align*} The numbers labelled by a $\wedge$ or $\vee$ are called free positions. The weight diagram of $\lambda^{\dagger}$ is obtained from the weight diagram of $\lambda$ by switching all $\vee$'s to $\wedge$'s and vice versa at positions $\geq t$, and by switching the first $s + n - k(\lambda)$ free labels at positions $< t$ ($\vee$'s to $\wedge$'s and vice versa). Conversely if $L(\lambda^{\dagger})$ is some irreducible representation in $MT$, the corresponding bipartition $\lambda = \theta^{-1}(\lambda^{\dagger})$ is obtained in the same way: Define $t, m, s$ as above and apply the same switching rules to the weight diagram of $\lambda^{\dagger}$. \begin{prop}\label{mixed-tensor-derivative} Let $$L = L(\lambda^{\dagger}) = L(\lambda_1, \ldots, \lambda_{n-i}, 0,\ldots, 0\ ;\ 0, \ldots,0, \lambda_{n+i+1},\ldots, \lambda_{2n})$$ be an irreducible $i$-fold atypical representation. Then $L$ is a mixed tensor $L = R(\lambda)$ for a unique bipartition $\lambda$ of defect $0$ and rank $n-i$.
Then \[ DS(L) = R_{n-1}(\lambda) = L(\bar{\lambda}^{\dagger})\ ,\] where $\bar{\lambda}^{\dagger}$ is obtained from $\lambda^{\dagger}$ by removing the two innermost zeros corresponding to $\lambda^{\dagger}_n$ and $\lambda^{\dagger}_{n+1}$. \end{prop} \noindent {\it Proof}. We apply $\theta^{-1}$ to $\lambda^{\dagger}$. It transforms the weight diagram of $\lambda^{\dagger}$ into some other weight diagram which might not be the weight diagram of a bipartition. However if the resulting weight diagram is the weight diagram of an $(n|n)$-cross bipartition of defect 0, then $\theta(\lambda) = \lambda^{\dagger}$ and $R(\lambda) = L(\lambda^{\dagger})$. For $\lambda^{\dagger}$ \begin{align*} I_{\times} & = \{ \lambda_1,\lambda_2 -1,\ldots,\lambda_{n-i} - (n-i) + 1, - n + i, \ldots, -n + 1 \} \\ I_{\circ} & = \{ 1-n,2-n,\ldots,i-n,i+1-n - \lambda_{n+1+i},\ldots, - \lambda_{2n} \}.\end{align*} Then $I_{\times} \cap I_{\circ} = \{-n+1, \ldots,-n+i\}$ (since the atypicality is $i$) and the $n-i$ crosses are at the positions $\lambda_1,\lambda_2 -1,\ldots,\lambda_{n-i} - (n-i) + 1$ and the $n-i$ circles at the positions $i+1-n - \lambda_{n+1+i},\ldots, - \lambda_{2n}$. Define $m,t,s$ as above. Note that $k(\lambda) = n-i$. We distinguish two cases, either $t = n-i+1$ or $t=m+1$. Assume first $m + 1 \leq n-i + 1$. Switch all free labels at positions $ \geq t$ and the first $n-(n-i) = i$ free labels at positions $<t$. By assumption the $2n- 2i$ crosses and circles lie at positions $> i- n$ and $< n-i+1$. However there are exactly $2n-2i$ such positions. Hence the switches at positions $<t$ turn exactly the $i$ $\vee$'s at positions $i-n,\ldots,1-n$ into $\wedge$'s. In the second case $t = m + 1 > n-i+1$ switch the first $m + n - 2 (n-i)$ free labels at positions $<t$. There are exactly $m + n -i$ positions between $m$ and $i-n$, $m - n +2i$ switches and $2n - 2i$ crosses and circles between $i-n$ and $t$. This results in $m - n + i$ free positions between $i-n$ and $t$. 
The remaining $i$ switches transform the $i$ $\vee$'s to $\wedge$'s. Hence in both cases $\theta^{-1}$ transforms the weight diagram of $\lambda^{\dagger}$ into a weight diagram where the rightmost $\wedge$ is at position $i-n$, the leftmost $\vee$ is at the first free position $> i-n$ and all labels at positions $\geq t$ are given by $\vee$'s. This is the weight diagram of a bipartition of defect $0$ and rank $n-i$. Indeed the labelling defines the two sets $I_{\wedge}$ and $I_{\vee}$ and this defines two tuples $\lambda^L = (\lambda^L_1,\lambda^L_2,\ldots)$ and $\lambda^R = (\lambda^R_1,\lambda^R_2,\ldots)$. The positioning of the $\wedge$'s implies that $\lambda^L_{n-i+1} = 0$ and the positioning of the $\vee$'s implies $\lambda^R_t = 0$. Clearly $\lambda_1^L = \lambda_1 > 0$ and $\lambda^R_1 \geq 0$. Hence the pair $\lambda:= (\lambda^L,\lambda^R)$ is a bipartition (of defect $0$ and rank $n-i$) and $\theta(\lambda) = \lambda^{\dagger}$. It remains to compute the highest weight of $R_{n-1}(\lambda)$. The two sets $I_{\vee}$ and $I_{\wedge}$, and accordingly the weight diagram of $\lambda$, do not depend on $n$. Neither do $t, m, s$ and the switches at positions $\geq t$. To get $\lambda^{\dagger}$ in ${{\mathcal R}}_n$ from $\lambda$ we switch the first $s + n - (n-i)$ free labels $<t$. To get $\lambda^{\dagger}$ in ${{\mathcal R}}_{n-1}$ from $\lambda$ we switch the first $s + (n-1) - (n-i)$ free labels $<t$. This results in removing the leftmost $\vee$ at position $1-n$. \qed {\it Ground states}. Let ${{\mathcal R}}_n^{i} \subset {{\mathcal R}}_n$ denote the full subcategory of $i$-atypical objects. Every block in ${\mathcal R}_n^i$ contains irreducible objects with the property that all $i$ labels $\vee$ are adjacent and to the left of all $n-i$ labels $\times$ and all $n-i$ labels $\circ$. We call such an irreducible object a groundstate of the corresponding block in ${\mathcal R}_n^i$.
Each block in ${\mathcal R}_n^i$ determines its groundstate uniquely up to a simultaneous shift of the $i$ adjacent labels $\vee$. The weight $\lambda$ of such a groundstate $L(\lambda)$ is of the form $$ \lambda = (\lambda_1,...,\lambda_{n-i},\lambda_n,...,\lambda_n\ ;\ -\lambda_n,...,-\lambda_n,\lambda_{n+1+i}, ..., \lambda_{2n}) \ $$ with $\lambda_n \leq \min(\lambda_{n-i}, - \lambda_{n+1+i})$ (here $\lambda_n \mapsto \lambda_n - 1$ corresponds to the shift of the $i$ adjacent labels $\vee$). The coefficients $\lambda_1,...,\lambda_{n-i}, \lambda_{n+1+i}, ..., \lambda_{2n}$ determine and are determined by the positions of the labels $\times$ and $\circ$ defining the given block in ${\mathcal R}_n^i$. We define $$\overline \lambda = (\lambda_1,...,\lambda_{n-i},\lambda_n,...\ ;\ ...,-\lambda_n,\lambda_{n+1+i}, ..., \lambda_{2n}) \ $$ by omitting the innermost pair $\lambda_n ; -\lambda_n$. Then $ L(\overline\lambda) \in {\mathcal R}_{n-1}^{i-1} \subset T_{n-1} $. {\it Berezin twists}. Twisting with $Ber=Ber_n$ induces an endofunctor of ${\mathcal R}_n^i$ and permutes blocks. By a suitable twist one can replace a given block in ${\mathcal R}_n^i$ by one containing the groundstate $$ \lambda' = (\lambda_1-\lambda_n,...,\lambda_{n-i}-\lambda_n,0,...,0\ ;\ 0,...,0,\lambda_{n+1+i}+\lambda_n, ..., \lambda_{2n}+\lambda_n) \ .$$ \begin{prop}\label{13} For a groundstate $L=L(\lambda)$ of a block in ${\mathcal R}_n^i \subset {\mathcal R}_n$ the image $DS(L)$ in $T_{n-1}$ of $L$ under the Duflo-Serganova functor is $$ DS(L(\lambda)) = \Pi^{-\lambda_n}L(\overline\lambda) \ $$ for $i>0$, and $DS(L)=0$ for $i=0$. \end{prop} \noindent In particular theorem \ref{mainthm} therefore holds for the groundstates $L=L(\lambda)$ of blocks in ${\mathcal R}_n^i \subset {\mathcal R}_n$. \noindent {\it Proof}. We can assume $i>0$. Then we can assume $\lambda_{n}=\lambda_{n+1}=0$ by a suitable Berezin twist.
Hence $$ L = R_n(\lambda^L,\lambda^R) $$ for an $(n|n)$-cross bipartition $(\lambda^L,\lambda^R)$, and therefore $$ DS(L) = R_{n-1}(\lambda^L,\lambda^R) $$ is irreducible of weight $\overline\lambda$, i.e. $DS(L(\lambda))=L(\overline\lambda)$. This proves the claim, since now $\lambda_n=0$. \qed \noindent \section{Sign normalizations}\label{signs} \noindent The main theorem \ref{mainthm} asserts in particular that $DS(L)$ is semisimple. In order to show this we define a sign $\varepsilon(L) \in \{ \pm 1 \}$ for every irreducible representation $L$ with the property that $Ext^1_{{\mathcal R}_n}(L(\lambda),L(\mu)) = 0$ whenever $\varepsilon(L(\lambda)) = \varepsilon(L(\mu))$. More precisely, this sign should satisfy the following conditions: \begin{enumerate} \item Let ${\mathcal R}_n(\varepsilon)$ denote the full subcategory of all objects whose irreducible constituents $X$ have sign $\varepsilon(X)=\varepsilon$. Similarly define the full subcategories $\Gamma(\varepsilon)$ for a block $\Gamma$. Then we require that the categories ${\mathcal R}_n(+)$ and ${\mathcal R}_n(-)$ are semisimple categories. \end{enumerate} Clearly it is enough to require that the categories $\Gamma(\varepsilon)$ are semisimple. Note that any such sign function is unique on a block $\Gamma$ up to a global sign $\pm 1$. Hence if we normalize the sign by $\varepsilon(L) = +1$ for some fixed irreducible representation $L$ in a block $\Gamma$, the sign is uniquely determined on $\Gamma$. Our second condition is: \begin{enumerate} \setcounter{enumi}{1} \item $\varepsilon(L(\lambda)) = 1$ if $L(\lambda)$ is an irreducible mixed tensor (see section \ref{stable0}) and $\varepsilon({\mathbf{1}}) = 1$. \end{enumerate} If $L(\lambda)$ is maximal atypical, we put \[ \varepsilon(L(\lambda)) = (-1)^{p(\lambda)}\] for the parity $p(\lambda) = \sum_{i=1}^n \lambda_{n+i}$. In the maximal atypical case we have $Ext^1_{{\mathcal R}_n}(L(\lambda), L(\mu)) = 0$ if $p(\lambda) \equiv p(\mu) \ mod \ 2$ by \cite{Weissauer-gl}.
Hence the categories $\Gamma_n(\pm)$ are semisimple. This determines the sign $\varepsilon$ up to a global $\pm 1$ on each block $\Gamma$ of atypicality $i$. Indeed by \cite{Serganova-blocks} any block of atypicality $i$ is equivalent to the maximal atypical block $\Gamma_i$ of ${\mathcal R}_i$. We fix once and for all a particular equivalence and denote it by $\tilde{\phi}_n^i$. We describe the effect of $\tilde{\phi}_n^i$ on an irreducible module $L(\lambda)$ of atypicality $i$ \cite{Serganova-blocks} \cite{Gruson-Serganova}. For an $i$-fold atypical weight in $ X^+(n)$ its weight diagram has $n-i$ vertices labelled with $\times$ and $n-i$ vertices labelled with $\circ$. Let $j$ be the leftmost vertex labelled either by $\times$ or $\circ$. By removing this vertex and shifting all vertices at the positions $> j$ one position to the left, we recursively remove all vertices labelled by $\times$ or $\circ$ from the given weight diagram. The remaining finite subset $K$ of labels $\vee$ has cardinality $i$, and the weight diagram so obtained defines a unique irreducible maximally atypical module in ${\mathcal R}_{i}$. Under $\tilde{\phi}_n^i$ the irreducible representation $L(\lambda)$ maps to the irreducible $Gl(i|i)$-representation described by the removal of crosses and circles above. We denote the weight of this irreducible representation by $\tilde{\phi}_n^i(\lambda)$. We make the preliminary definition $\varepsilon(L(\lambda)) = (-1)^{p(\tilde{\phi}_n^i(\lambda))}$ for a weight $\lambda$ of atypicality $i$. We claim that this sign satisfies condition 1. Indeed in the maximal atypical case we have $Ext^1_{{\mathcal R}_i}(L(\lambda), L(\mu)) = 0$ if $p(\lambda) \equiv p(\mu) \ \text{ mod } \ 2$ by \cite{Weissauer-gl}. Hence the categories $\Gamma_i(\pm)$ are semisimple.
The equivalence $(\tilde{\phi}_n^i)^{-1}$ between the two abelian categories $\Gamma_i$ and $\Gamma$ is exact and sends the semisimple category $\Gamma_i(\pm) \subset {\mathcal R}_i$ to the category $\Gamma(\pm)$ by the definition of the sign $\varepsilon$; hence $\Gamma(\pm)$ is semisimple. This preliminary definition however does not satisfy condition 2. If $L = L(\lambda_1, \ldots, \lambda_{n-i}, 0,\ldots, 0\ ;\ 0, \ldots,0, \lambda_{n+i+1},\ldots, \lambda_{2n})$ is the $i$-atypical mixed tensor of section \ref{stable0}, its weight diagram has $n-i$ crosses at the vertices $\lambda_1,\lambda_2 -1,\ldots,\lambda_{n-i} - (n-i) + 1$ and $n-i$ circles at the vertices $i+1-n - \lambda_{n+1+i},\ldots, - \lambda_{2n}$. The $i$ $\vee$'s are to the left of the crosses and circles. Applying $\tilde{\phi}_n^i$ removes the crosses and circles but leaves the $\vee$'s unchanged at the vertices $-n + 1, \ldots, -n+i$. The irreducible representation of $Gl(i|i)$ so obtained is $Ber^{-n+i}$ with $p([-n+i,\ldots,-n+i]) = i(-n+i)$. Hence the preliminary sign $\varepsilon(L(\lambda))$ of a mixed tensor of atypicality $i$ is $\varepsilon(L(\lambda)) = (-1)^{i(-n+i)}$. In order to satisfy condition 2) we have to normalize the sign by the additional factor $(-1)^{i(-n+i)}$ for an $i$-atypical weight, and we define \[ \varepsilon(\lambda) = (-1)^{i(-n+i)} (-1)^{p(\tilde{\phi}_n^i(L(\lambda)))}\] where $p$ is the parity in the maximal atypical block of $Gl(i|i)$. This sign satisfies conditions 1) and 2) by construction, and it is the unique sign with these properties. Note that our definition implies that the sign of a typical weight in ${\mathcal R}_n$ is always positive. The additional sign factor can be understood as follows. The unique irreducible mixed tensor in a given block should play a role analogous to that of the trivial representation ${\mathbf{1}}$ in ${\mathcal R}_i$.
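As a quick consistency check of this normalization, one can evaluate the formula in the two extreme cases:

```latex
For a typical weight ($i=0$) both factors are trivial, so $\varepsilon(\lambda)=1$,
while for a maximally atypical weight ($i=n$) the normalization factor
$$ (-1)^{i(-n+i)} = (-1)^{n\cdot 0} = 1 $$
disappears, so the definition recovers $\varepsilon(L(\lambda))=(-1)^{p(\lambda)}$
from above.
```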
We can modify the block equivalences $\tilde{\phi}_n^i$ as follows: Since the mixed tensor $L(\lambda)$ maps to $Ber^{-n+i}$ we twist with the inverse and define the normalized equivalence \[ \phi_n^i(L) = Ber^{n-i} \otimes \tilde{\phi}_n^i(L).\] Then we obtain $(-1)^{p(\phi_n^i(L(\lambda)))} = (-1)^{-i(-n+i)} (-1)^{p(\tilde{\phi}_n^i(L(\lambda)))}$. Hence \[ \varepsilon(\lambda) = (-1)^{p(\phi_n^i(L(\lambda)))}.\] \begin{cor} \label{semisimple-sign} The categories ${\mathcal R}_n(\varepsilon)$ are semisimple categories. \end{cor} The sign $\varepsilon$ automatically has the following important property: The translation functors of section \ref{sec:loewy-length} ${\mathbb A} = F_i L(\lambda_{\times \circ})$ have Loewy structure $(L,A,L)$ with $L \in {\mathcal R}_n(\pm \varepsilon)$ and $A \in {\mathcal R}_n(\mp \varepsilon)$. This is required in our axioms in section \ref{sec:inductive}. This property follows immediately from the maximal atypical case \cite{Weissauer-gl} due to the description of the composition factors of $F_i L(\lambda_{\times \circ})$ given in section \ref{sec:loewy-length}. The significance of the sign function is the following: $DS(L)$ is in general a representation in $T_{n-1}$, not in ${\mathcal R}_{n-1}$. The sign factor regulates whether an irreducible summand of $DS(L)$ lies in ${\mathcal R}_{n-1}$ or in $\Pi {\mathcal R}_{n-1}$, see theorem \ref{mainthm}. In the proof of the main theorem we use the language of plots of section \ref{derivat} for uniform bookkeeping. Each maximally atypical irreducible representation $L$ defines a plot: The segments of the cup diagram of $L$ define the segments of the plot, and the $\vee$'s in the weight diagram give the support of the plot. If $L$ is not maximally atypical, we associate to it a plot via the map $\phi$ of section \ref{sec:loewy-length}. For each plot we defined its derivative $\partial \lambda$ in section \ref{derivat}.
If $\lambda$ is a prime plot given by the sector $(I,K)$, $I = [a,b]$, then $\partial(I,K) = (I,K)' = (I',K')$ for $I'=[a+1,b-1]$ and $K'=I'\cap K$ defines the derivative. We normalize the derivative of section \ref{derivat} and put \[ \lambda' = (-1)^{a+n-1} \partial (I,K) \] where $I = [a,b]$ (i.e. the leftmost $\vee$ is at $a$). The reason for this is as follows. The sign has to be normalized in such a way that for objects $X=L(\lambda)$ in the stable range of the given block we get $d(X) = X'$ for the map $d$ of section \ref{sec:inductive}. Assume first that we are in the maximally atypical ${\mathcal R}_n$-case and consider a weight with associated prime plot $\lambda$. The parity of the weight $\lambda$ is $p(\lambda) = \sum_{i=1}^n \lambda_{n+i}$. Applying $DS$ removes the $\vee$ in the outer cup. The parity of the resulting weight in $T_{n-1}$ is given by $p(\lambda') = \sum_{i=1}^{n-1} \lambda_{n+i}$, hence $p(\lambda) - p(\lambda') = \lambda_n$ and we get a shift $\Pi^{n_i}$ with $n_i \equiv \lambda_n$ modulo $2$ according to theorem \ref{mainthm}. The leftmost $\vee$ is at the vertex $a = \lambda_n - n +1$, hence $(-1)^{a+n-1} = (-1)^{\lambda_n}$ and the two shifts agree. Let us now assume $at(L(\lambda)) = k <n$ and that the weight defines a prime plot of rank $k$. Here we have to use the normalized plot associated to the weight $\lambda$ by the map $\phi$ from section \ref{sec:loewy-length}, in which case the two shifts agree again. We may pass to the maximally atypical case due to the lemmas \ref{shift-1}, \ref{shift-2}, \ref{shift-3}, which allow us to shift all the circles and crosses sufficiently far to the right. \section{The Main theorem}\label{sec:main} In the main theorem we calculate $DS(L) \in T_{n-1}$ for any irreducible $L$. We refine this in sections \ref{kohl-2}, \ref{koh3} and compute the ${\mathbf{Z}}$-grading of $DS(L)$.
\begin{thm} \label{mainthm} Suppose $L(\lambda)\in {\mathcal R}_n$ is an irreducible atypical representation, so that $\lambda$ corresponds to a cup diagram $$ \bigcup_{j=1}^r \ \ [a_j,b_j] $$ with $r$ sectors $[a_j,b_j]$ for $j=1,...,r$. Then $$DS(L(\lambda)) \ \cong\ \bigoplus_{i=1}^r \ \Pi^{n_i} L(\lambda_i)$$ is the direct sum of irreducible atypical representations $L(\lambda_i)$ in ${\mathcal R}_{n-1}$ with shift $n_i \equiv \varepsilon(\lambda) - \varepsilon(\lambda_i)$ modulo 2. The representation $L(\lambda_i)$ is uniquely defined by the property that its cup diagram is $$ [a_i +1, b_i-1] \ \ \ \cup \ \ \bigcup_{j=1, j\neq i}^r \ \ [a_j,b_j] \ ,$$ the union of the sectors $[a_j,b_j]$ for $1\leq j\neq i \leq r$ and (the sectors occurring in) the segment $[a_i+1,b_i-1]$. \end{thm} {\it Consequence}. {\it In particular this implies that for an irreducible representation $(V,\rho)$ the $G_{n-1}$-module $H^+(V)\oplus H^-(V)$ is semisimple in ${\mathcal R}_{n-1}$ and multiplicity free. Furthermore the sign of the constituents in $H^\pm(V)$ is $\pm sign(V)$.} In the language of plots, the main theorem says that the irreducible summands of $DS(L)$ are given by the derivatives of the sectors of the plot associated to $\lambda$.
{\bf Example.} The maximally atypical weight $[3,0,0]$ has cup diagram \begin{center} \begin{tikzpicture} \draw (-6,0) -- (6,0); \foreach \x in {3,-1,-2} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {-5,-4,-3,1,2,4,5} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \foreach \x in {} \draw node at (1,0) [fill=white,draw,circle,inner sep=0pt,minimum size=6pt]{}; \draw [-,black,out=270,in=270](-2,0) to (1,0); \draw [-,black,out=270,in=270](-1,0) to (0,0); \draw [-,black,out=270,in=270](3,0) to (4,0); \end{tikzpicture} \end{center} It splits into the two irreducible representations $[3,0]$ and $\Pi[-1,-1]$ in ${\mathcal R}_2 \oplus \Pi{\mathcal R}_2$. We will later compute its cohomology in proposition \ref{hproof} and obtain $H^{\bullet} ([3,0,0]) = S^2\langle0\rangle \oplus Ber^{-1}\langle-1\rangle$. {\bf Example}. Denote by $S^i$ the irreducible representation $[i,0,\ldots,0]$. Consider a nontrivial extension $0 \to S^2 \to E \to Ber(S^2)^\vee \to 0$ in ${\mathcal R}_3$ (such extensions exist). Then $sdim(E)= 0$ and $E$ is indecomposable, hence negligible. The derivative of $S^2=[2,0,0]$ (in the sense of plots) is $ (S^2)' = [2,0] + Ber^{-1}$ and the derivative of $Ber(S^2)^\vee = [2,2,1]$ is $[2,2,1]' = (Ber[1,1,0])' = -Ber ([1,1,0]') = - [2,2] - [2,0]$. Since $[2,2]=Ber^2$, we get $H^+(E) = Ber^{-1} \oplus ?$ and $H^-(E) = Ber^2 \oplus ?$, where $?$ is either $[2,0]$ or zero. Hence $E$ is negligible in ${\mathcal R}_3$, but $D^2(E)= D(D(E)) \neq 0$. In particular, $D({\mathcal N}_3)$ {\it is not contained} in ${\mathcal N}_2$. {\it Block equivalences}. Applying $DS$ is compatible with taking the block equivalences $\phi_n^i$ and $\tilde{\phi}_n^i$ in the sense $DS(\phi_n^i(L)) = \phi_n^i(DS(L))$ for irreducible $L$ by the main theorem. This can be extended to arbitrary modules $M$ in a block.
By \cite{Serganova-blocks} and \cite{Kujawa-generalized-kac-wakimoto} the block equivalence $\tilde{\phi}_n^i$ between an $i$-atypical block $\Gamma$ and the unique maximal atypical block of $Gl(i|i)$ is obtained as a series of translation functors, a restriction and the projection onto a weight space. Call a weight $\lambda$ in $\Gamma$ stable if all the $\vee$'s are to the left of all crosses and circles. By \cite{Serganova-blocks} we can apply a suitable sequence of translation functors to any indecomposable module in $\Gamma$ until all its composition factors are stable. We now recall the definition of $\tilde{\phi}_n^i$ as in \cite{Kujawa-generalized-kac-wakimoto} on an indecomposable module $M$. Embed $\mathfrak{gl}(k|k)$ as an inner block matrix in $\mathfrak{gl}(n|n)$. Let $\mathfrak{l} = \mathfrak{gl}(k|k) + \mathfrak{h}$ where $\mathfrak{h}$ are the diagonal matrices. Then choose $\mathfrak{h'} \subset \mathfrak{h}$ such that $\mathfrak{h'}$ is a central subalgebra of $\mathfrak{l}$ and $\mathfrak{l} = \mathfrak{gl}(k|k) \oplus \mathfrak{h'}$. We denote the restriction of a weight $\lambda$ to $\mathfrak{h'}$ by $\lambda'$. Now move $M$ by a suitable sequence of translation functors until its composition factors are stable. The block $\Gamma$ is the full subcategory of modules admitting some central character $\chi_{\mu}$. Now define $Res_{\mu'}(M) = \{ m \in M \ | \ h'm = \mu'(h') m \text{ for all } h' \in \mathfrak{h}' \}$. Then, on a module $M$ with stable composition factors, the functor $\tilde{\phi}_n^i(M)$ is given by $Res_{\mu'}(M)$. Alternatively we could first restrict to $\mathfrak{l}$ and then project on the $\mu'$-eigenspace. By \cite{Serganova-kw}, cor. 4.4, and the main theorem, $DS$ induces a bijection between the blocks in $T_n$ and $T_{n-1}$, and $DS(F_i(M)) = F_i(DS(M))$ holds for any $M$ in $T_n$.
Since our fixed $x$, which we choose in the definition of $DS$, is contained in the embedded $\mathfrak{gl}(k|k)$, the operation of $\rho(x)$ on $Res(M)$ or on its $\lambda'$-eigenspace is the same as that of $\rho(x)$ on $M$. Hence $DS$ is clearly compatible with restriction, but it also doesn't matter whether we first apply $DS$ and project onto the $\lambda'$-eigenspace or first project to the $\lambda'$-eigenspace and then apply $DS$, since $\rho(x)$ commutes with $\mathfrak{h}'$. Hence $DS(\tilde{\phi}_n^i(M)) = \tilde{\phi}_n^i(DS(M))$ holds for any $M$, and the analogous statement for $\phi_n^i$ follows immediately. To summarize: if $\Gamma'$ denotes the unique block obtained from the $i$-atypical $\Gamma$ via $DS$, we obtain a commutative diagram \[ \xymatrix@+1.5cm{ \Gamma \ar^{\phi_n^i}[r] \ar_{DS}[d] & \Gamma_i \ar^{DS}[d] \\ \Gamma' \ar^{\phi_{n-1}^{i-1}}[r]& \Gamma_{i-1}}.\] \\ The main theorem has a number of useful consequences: \textit{Cohomology.} The main theorem permits us to compute the cohomology $H^i(L)$ of irreducible modules $L$ in sections \ref{kohl-2} and \ref{koh3}. Although the calculation of the ${\mathbf{Z}}$-grading of $DS(L)$ is much stronger than the ${\mathbf{Z}}_2$ version of theorem \ref{mainthm}, it should be noted that the proof is based on the main theorem and a careful bookkeeping of the moves in section \ref{sec:moves}. \textit{Spectral sequences.} The main theorem also shows the degeneration of the spectral sequences from section \ref{m} and shows \[ DS_{n,n_2}(L) \simeq DS_{n_1,n_2}(DS_{n,n_1}(L)).\] The degeneration can be extended in a similar way to the not maximally atypical case, see below. \textit{Tensor products.} The main theorem allows us to reduce some questions about tensor products of irreducible representations to lower rank. Since $DS$ is a tensor functor we have $DS(L(\lambda) \otimes L(\mu)) = DS(L(\lambda)) \otimes DS(L(\mu)) = \bigoplus_{i,j} (\Pi^{n_i} L(\lambda_i)) \otimes (\Pi^{n_j} L(\mu_j))$.
If we inductively understood the tensor product in $T_{n-1}$, we would obtain estimates about the number of indecomposable summands and composition factors in this way. We use this method to calculate the tensor product of two maximal atypical representations of $Gl(2|2)$ in \cite{Heidersdorf-Weissauer-gl-2-2}, see also \cite{Heidersdorf-semisimple-quotient}. \textit{Negligible modules and branching laws.} The functor $DS$ does not preserve negligible modules as the example above shows. However when we restrict $DS$ to the full subcategory $\mathcal{RI}_n$ of modules which arise in iterated tensor products of irreducible representations, $DS$ induces a functor $DS: \mathcal{RI}_n/\mathcal{N} \to \mathcal{RI}_{n-1}/\mathcal{N}$. We show in \cite{Heidersdorf-Weissauer-tannaka} \cite{Heidersdorf-semisimple-quotient} that $\mathcal{RI}_n/\mathcal{N}$ is equivalent as a tensor category to the representation category of a proreductive group $H_n$. We also show that there is an embedding $H_{n-1} \to H_n$, and $DS$ can be identified with the restriction functor with respect to this embedding. In other words $DS$ gives us the branching laws for the restriction of the image of $L(\lambda)$ in $Rep(H_n)$ to the subgroup $H_{n-1}$. \textit{Superdimensions and modified superdimensions.} The main theorem can be used to reprove parts of the generalized Kac-Wakimoto conjecture on modified superdimensions \cite{Serganova-kw}. In fact we derive a closed formula for the modified superdimension. We sketch this and prove the analog of proposition \ref{Leray2}. \textit{A superdimension formula}. Assume $L$ maximally atypical. If $sdim(L) > 0$, $$DS(L(\lambda)) \ \cong\ \bigoplus_{i=1}^r \ \Pi^{n_i}( L(\lambda_i) )$$ splits into a direct sum of irreducible modules of positive superdimension. Indeed the parity shift $\Pi^{n_i}$ occurs if and only if $p(\lambda) \not\equiv p(\lambda_i) \ mod \ 2$. 
Hence $DS^{n-1}(L)$ splits into a direct sum of irreducible representations of superdimension 1. Applying $DS$ $n$-times gives a functor $DS^n: {\mathcal R}_n \to svec$, hence $DS^n(L) \simeq m \ k \oplus m' \Pi k$ for nonnegative integers $m,m'$, hence $m = 0$ if and only if $sdim(L) < 0$ and $m' = 0$ if and only if $sdim(L) > 0$. By \cite{Weissauer-gl} the superdimension of a maximally atypical irreducible representation in ${\mathcal R}_n$ is given by \[ sdim(L(\lambda)) = (-1)^{p(\lambda)} m(\lambda)\] for a positive integer $m(\lambda)$ (see below for the definition). In particular \[ m(\lambda) = \begin{cases} m & p(\lambda) \equiv 0 \text{ mod } 2 \\ m' & p(\lambda) \equiv 1 \text{ mod } 2. \end{cases} \] By proposition \ref{Leray2} this also holds for $DS_{n,0}:{\mathcal R}_n \to svec$: If $DS_{n,0} (L) \simeq m\ k \oplus m' \Pi k$, we get that either $m$ or $m'$ is zero. The positive integer $m(\lambda)$ for a maximally atypical weight can be computed as follows. We refer to \cite{Weissauer-gl}, but it would be an easy exercise to deduce this from the main theorem. We let $\underline{\lambda}$ be the oriented cup diagram associated to the weight $\lambda$ as defined in section \ref{BS}. To each such cup diagram we can associate a forest $\mathcal{F}(\lambda)$ with $n$ nodes, i.e. a disjoint union of rooted trees as in \cite{Weissauer-gl}. Each sector of the cup diagram corresponds to one rooted planar tree. We read the nesting structure of the sector from the bottom to the top such that the outer cup corresponds to the root of the tree.
If the following is a sector of a cup diagram \begin{center} \begin{tikzpicture} \draw (-6.5,0) -- (6,0); \foreach \x in {} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \draw [-,black,out=270,in=270](-5,0) to (4,0); \draw [-,black,out=270,in=270](-4,0) to (1,0); \draw [-,black,out=270,in=270](-3,0) to (-2,0); \draw [-,black,out=270,in=270](-1,0) to (0,0); \draw [-,black,out=270,in=270](2,0) to (3,0); \draw [-,black,out=270,in=270](-6,0) to (5,0); \end{tikzpicture} \end{center} then the associated planar rooted tree is \[ \xymatrix{ & & \bullet \ar@{-}[d] & \\ & & \bullet \ar@{-}[dr] \ar@{-}[dl] & \\ & \bullet \ar@{-}[dr] \ar@{-}[dl] & & \bullet \\ \bullet & & \bullet & } \] If $\mathcal{F}$ is a forest, let $|\mathcal{F}|$ be the number of its nodes. We define the forest factorial $\mathcal{F}!$ as the product $\prod_{x \in \mathcal{F}} |\mathcal{F}_x|$, where $\mathcal{F}_x$ for a node $x \in \mathcal{F}$ denotes the subtree of $\mathcal{F}$ rooted at the node $x$.
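For illustration, the forest factorial and the ratio $|\mathcal{F}|!/\mathcal{F}!$ (which appears below as the multiplicity $m(\lambda)$, an instance of the hook length formula for forests) can be computed by a short recursion. This Python sketch assumes a forest is encoded as a nested list of children:

```python
from math import factorial

# A rooted tree is the list of its children's subtrees; a forest is a list of trees.
def size(tree):
    return 1 + sum(size(child) for child in tree)

def forest_factorial(forest):
    # Product of |F_x| over all nodes x, where F_x is the subtree rooted at x.
    prod = 1
    for tree in forest:
        prod *= size(tree) * forest_factorial(tree)
    return prod

def multiplicity(forest):
    # |F|! / F!  -- always an integer.
    n = sum(size(tree) for tree in forest)
    return factorial(n) // forest_factorial(forest)

# Forest of the R_4 example below: a root with two leaf children, plus an isolated node.
print(multiplicity([[[], []], []]))  # 24 / (3*1*1*1) = 8
```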
Then the multiplicity is given by \[ m(\lambda) = \frac{ |\mathcal{F}(\lambda)|!}{\mathcal{F}(\lambda)!}.\] For example, $m(\lambda)$ for the irreducible module in ${\mathcal R}_4$ with cup diagram \begin{center} \begin{tikzpicture} \draw (-6,0) -- (6,0); \foreach \x in {} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \draw [-,black,out=270,in=270](-4,0) to (1,0); \draw [-,black,out=270,in=270](-3,0) to (-2,0); \draw [-,black,out=270,in=270](-1,0) to (0,0); \draw [-,black,out=270,in=270](3,0) to (4,0); \end{tikzpicture} \end{center} is computed as follows: the associated planar forest is \[ \xymatrix{ & \bullet \ar@{-}[dr] \ar@{-}[dl] & & \bullet \\ \bullet & & \bullet & & } \] Hence \[ m(\lambda) = \frac{24}{3 \cdot 1 \cdot 1 \cdot 1} = 8.\] \textit{Modified superdimensions}. If $at(L(\lambda)) < n$, then $sdim(L) = 0$. However one can define a modified superdimension for $L$ as follows. We recall some definitions and results from \cite{Kujawa-generalized-kac-wakimoto}, \cite{Geer-Kujawa-Patureau-Mirand} and \cite{Serganova-kw}. Denote by $c_{V,W}: V \otimes W \to W \otimes V$ the usual flip $v \otimes w \mapsto (-1)^{p(v)p(w)} w \otimes v$. Put $ev'_V = ev_V \circ c_{V,V^{\vee}}$ and $coev'_V = c_{V,V^{\vee}} \circ coev_{V}$ for the usual evaluation and coevaluation maps in the tensor categories ${\mathcal R}_n$ and $T_n$. For any pair of objects $V,W$ and an endomorphism $f:V\otimes W \to V \otimes W$ we define \begin{align*} tr_L(f) & = (ev_V \otimes id_W) \circ (id_{V^{\vee}} \otimes f) \circ (coev'_{V} \otimes id_W) \in End_T(W) \\ tr_R(f) & = (id_V \otimes ev'_W ) \circ (f \otimes id_{W^{\vee}} ) \circ (id_V \otimes coev_{W}) \in End_T(V) \end{align*} For an object $J \in {{\mathcal R}}_n$ let $I_J$ be the tensor ideal generated by $J$.
A trace on $I_J$ is by definition a family of linear functions \[t = \{t_V:{\mathit{End}}_{{{\mathcal R}}_n}(V)\rightarrow k \}\] where $V$ runs over all objects of $I_J$ such that the following two conditions hold. \begin{enumerate} \item If $U\in I_{J}$ and $W$ is an object of ${{\mathcal R}}_n$, then for any $f\in {\mathit{End}}_{{{\mathcal R}}_n}(U\otimes W)$ we have \[t_{U\otimes W}\left(f \right)=t_U \left( t_R(f)\right). \] \item If $U,V\in I_{J}$, then for any morphisms $f:V\rightarrow U $ and $g:U\rightarrow V$ in ${{\mathcal R}}_n$ we have \[t_V(g\circ f)=t_U(f \circ g).\] \end{enumerate} By Kujawa \cite{Kujawa-generalized-kac-wakimoto}, thm 2.3.1, the trace on the ideal $I_{L}$, $L$ irreducible, is unique up to multiplication by an element of $k$. Given a trace $\{t_{V} \}_{V \in I_{J}}$ on $I_{J}$, $J \in {\mathcal R}_n$, define the modified dimension function on objects of $I_{J}$ as the modified trace of the identity morphism: \begin{equation*} d_{J}\left(V \right) =t_{V}(id_{V}). \end{equation*} We reprove the essential part of the generalized Kac-Wakimoto conjecture: we prove that there exists a nontrivial trace on the ideal of any $i$-atypical irreducible $L$, and we deduce a formula for the resulting modified superdimension. {\it Tensor ideals}. By \cite{Serganova-kw} any two irreducible objects of atypicality $k$ generate the same tensor ideal. Therefore write $I_i$ for the tensor ideal generated by any irreducible object of atypicality $i$. Clearly $I_0 = Proj$ and $I_n = T_n$ since it contains the identity. This gives the filtration \[Proj = I_0 \subsetneq I_1 \subsetneq \ldots \subsetneq I_{n-1} \subsetneq I_n = T_n\] with strict inclusions by \cite{Serganova-kw} and \cite{Kujawa-generalized-kac-wakimoto}. We use this in the following; however, it is not necessary for the results about the modified superdimension. We could simply consider the ideal $\langle L \rangle$ generated by an $i$-atypical irreducible representation instead of the ideal $I_i$.
{\it The projective case}. Denote by $\Delta_0^+$ the positive even roots and by $\Delta_1^+$ the positive odd roots for our choice of Borel algebra. The half-sum of the positive even roots is denoted $\rho_0$, the half-sum of the positive odd roots by $\rho_1$, and we put $\rho = \rho_0 - \rho_1$. We define a bilinear form $(,)$ on $\mathfrak{h}^*$ as follows: We put $(\epsilon_i, \epsilon_j) = \delta_{ij}$ for $i,j \leq m$, $(\epsilon_i,\epsilon_j) = - \delta_{ij}$ for $i,j \geq m+1$ and $(\epsilon_i,\epsilon_j) = 0$ for $i \leq m$ and $j >m$. Define for any typical module the following function \[ d(L(\lambda)) = \prod_{\alpha \in \Delta_0^+} \frac{(\lambda + \rho, \alpha)}{(\rho,\alpha)} / \prod_{\alpha \in \Delta_1^+} (\lambda + \rho,\alpha).\] Then $d(L(\lambda)) \neq 0$ for every typical $L(\lambda)$. By \cite{Geer-Kujawa-Patureau-Mirand}, 6.2.2, for typical $L$ \[ d_J(L) = \frac{d(L)}{d(J)}.\] Since the ideal $I_0$ is independent of the choice of a particular $J$ and any ambidextrous trace is unique up to a scalar, we normalize and define the modified normalized superdimension on $I_0$ to be \[ sdim_0 (L(\lambda)) := d(L(\lambda)).\] {\it A formula for the modified superdimension}. Applying $DS$ iteratively $i$-times to a module of atypicality $i$ we obtain the functor \[ DS^i := DS \circ \ldots \circ DS: {{\mathcal R}}_n \to T_{n-i}\] which sends $M$ with $atyp(M) = i$ to a direct sum of typical modules. We show that there exists a nontrivial trace on $I_i$ similar to \cite{Kujawa-generalized-kac-wakimoto}, but without invoking Serganova's results. Denote by $t^P$ the normalized (such that we get $sdim_0$ from above) trace on $I_0 = Proj$. Now we define for $M \in I_i$ \[ t_M (f) : = t^P_{DS^i(M)} (f_{DS^i(M)}): End_{{{\mathcal R}}_n} (M) \to k \] where $f_{DS^i(M)}$ is the image of $f$ under the functor $DS^i$. We claim that this defines a nontrivial trace on $I_i$: Let $M = L$ be irreducible and put \[ t_L (id_L) := t_{DS^i(L)}^P (id_{DS^i(L)}).
\] Now we compute $DS^i(L)$. By the main theorem the irreducible summands in $DS(L)$ are obtained by removing one of the outer cups of each sector. Applying $DS$ $i$-times then gives the typical module in $T_{n-i}$ given by the cup diagram of $L$ with all $\vee$'s removed. Applying $DS^i$ to any other irreducible module in the same block will result in the same typical weight. Following Serganova \cite{Serganova-kw} we call this unique irreducible module the core of the block, denoted $L^{core}$. Hence $DS^i(L) = m (L)\cdot L^{core} \oplus m'(L)\cdot \Pi L^{core}$. Since the nonnegative integers $m$ and $m'$ only depend on the nesting structure of the cup diagram $\underline{\lambda}$, we may compute them in the maximally atypical case. By comparison with the maximally atypical ${\mathcal R}_i$-case, either $m$ or $m'$ is zero. As in the maximally atypical case a parity shift happens in $DS(L(\lambda))$ if and only if $\varepsilon(\lambda) \not\equiv \varepsilon(\lambda_i) \ mod \ 2$. Hence \begin{align*} m(\lambda) = \begin{cases} m \ \ & \varepsilon(\lambda) \equiv 0 \ mod \ 2 \\ m' & \varepsilon(\lambda) \equiv 1 \ mod \ 2. \end{cases} \end{align*} This shows that the trace $t_L$ does not vanish: Indeed \[t_L (id_L) := t_{DS^i(L)}^P (id_{DS^i(L)}) = m(\lambda) t^P_{L^{core}}(id_{L^{core}}) \neq 0\] since $t^P$ is nontrivial. Using our particular choice for $sdim_0$ on $I_0 = Proj$, we define the normalized modified superdimension as \begin{align*} sdim_i (L(\lambda)) & = sdim_0 (DS^i(L)) = sdim_0 (m L^{core} \oplus m' \Pi L^{core}) \\ & = (-1)^{\varepsilon(\lambda)} m(\lambda) sdim_0(L^{core}) \end{align*} In particular the modified superdimension does not vanish.
Consider for example the irreducible 4-fold atypical representation in ${\mathcal R}_6$ with cup diagram \begin{center} \begin{tikzpicture} \draw (-6.5,0) -- (5.5,0); \foreach \x in {} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {2,-5} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \foreach \x in {5} \draw node at (5,0) [fill=white,draw,circle,inner sep=0pt,minimum size=6pt]{}; \foreach \x in {-6} \draw node at (-6,0) [fill=white,draw,circle,inner sep=0pt,minimum size=6pt]{}; \draw [-,black,out=270,in=270](-4,0) to (1,0); \draw [-,black,out=270,in=270](-3,0) to (-2,0); \draw [-,black,out=270,in=270](-1,0) to (0,0); \draw [-,black,out=270,in=270](3,0) to (4,0); \end{tikzpicture} \end{center} We have already seen above that $m(\lambda) = 8$ in this case. The core is given by the typical representation $L(3,-4 | 5,-5)$. As a consequence of our construction and the sign rule of the main theorem we get \begin{cor} If $L$ is irreducible of atypicality $k$, then $sdim_k(L) = sdim_{k-1}(DS(L))$. If $sdim_k(L) > 0$, then all summands $L'$ of $DS(L)$ satisfy $sdim_{k-1}(L') > 0$. \end{cor} We can now copy the proof of proposition \ref{Leray} to get \begin{cor} \label{Leray-2} For irreducible atypical objects $L$ in $T_n$ the Leray type spectral sequence degenerates: $$ \fbox{$ DS_{n,n_2}(L) \ \cong \ DS_{n_1,n_2}(DS_{n,n_1}(L)) $} \ .$$ \end{cor} \noindent \section{Strategy of the proof}\label{sec:strategy} We have already proved the Main Theorem for the groundstates of each block. Recall that a groundstate is a weight with completely nested cup diagram such that all the vertices labelled $\times$ or $\circ$ are to the right of the cups. In the maximally atypical case the groundstates are just the Berezin-powers. In the lower atypical cases every groundstate is a Berezin-twist of a mixed tensor and we have already seen that these satisfy the main theorem in section \ref{stable0}.
The proof of the general case will be a reduction to the case of groundstates. In the singly atypical case we just have to move the unique label $\vee$ to the left of all of the crosses and circles. We will see in section \ref{sec:loewy-length} that we can always move $\vee$'s to the left of $\circ$'s or $\times$'s. The proof of the general case will induct on the degree of atypicality, hence we will always assume that the theorem is proven for irreducible modules of lower atypicality. Hence for the purpose of explaining the strategy of the proof we will focus on the maximally atypical case. {\it The modules $S^i$}. Let us consider the following special maximally atypical case. Let $Ber\simeq [1,\ldots,1]\in {\mathcal R}_n$ be the Berezin representation. Let $S^i$ denote the irreducible representation $[i,0,\ldots,0]$. Every $S^{i-1}$ occurs as the socle and cosocle of a mixed tensor denoted ${\mathbb A}_{S^{i}}$ \cite{Heidersdorf-mixed-tensors}. The Loewy structure of the modules ${\mathbb A}_{S^i} :=R((i),(1^i)) \in {{\mathcal R}}_n$ is the following: $$ {\mathbb A}_{S^i} \ = \ (S^{i-1}, S^i \oplus S^{i-2}, S^{i-1}) \ $$ for $i \neq n$ and $i\geq 1$ and $n\geq 2$ where we use $S^{-1} = 0$. Furthermore $$ {\mathbb A}_{S^n} \ = \ (S^{n-1}, S^n \oplus Ber^{-1} \oplus S^{n-2}, S^{n-1}) \ .$$ We saw in section \ref{stable0} that for all mixed tensors $DS(R(\lambda^L,\lambda^R)) = R(\lambda^L,\lambda^R)$ holds, so we have $DS({\mathbb A}_{S^i}) = {\mathbb A}_{S^i}$ for all $i\geq 1$. Notice that by abuse of notation we view $S^i$ and also ${\mathbb A}_{S^i}$ as objects of ${\mathcal R}_n$ for all $n$. The image $S^i \mapsto DS(S^i)$ can be computed recursively from the two exact sequences in ${\mathcal R}_n$ \[ \xymatrix{ 0 \ar[r] & K_n^i \ar[r] & {\mathbb A}_{S^i} \ar[r]^p & S^{i-1} \ar[r] & 0 \\ 0 \ar[r] & S^{i-1} \ar[r]^j & K_n^i \ar[r] & S^i \oplus ? \oplus S^{i-2} \ar[r] & 0 } \] induced by the projection $p$ onto the cosocle and the inclusion $j$ of the socle.
According to the main theorem we should get for $n \geq 2$ $(Ber_{n})_x = \Pi Ber_{n-1}$ and \begin{enumerate} \item $DS(S^i) = S^i\ $ for $i< n-1$, \item $DS(S^{i}) = S^{i} \oplus \Pi^{n-1-i} Ber^{-1}\ $ for $i\geq n-1$. \end{enumerate} We prove this for $i \leq n-1$. First notice $H^-({\mathbb A}_{S^i})=0$ and $H^+({\mathbb A}_{S^i})={\mathbb A}_{S^i}$. Suppose $i\leq n-1$ and that $H^-(S^j)=0$, $H^+(S^j)=S^j$ already holds for $j<i$ by induction. This is justified since $S^0 = k$ equals the trivial module. Then the exact hexagons give \[ \xymatrix@-4mm{ H^+(K_n^i) \ar[r] & {\mathbb A}_{S^i} \ar[r] & S^{i-1} \ar[d] \\ 0 \ar[u] & 0 \ar[l] & H^-(K_n^i) \ar[l] }\] and \[ \xymatrix@-4mm{ S^{i-1} \ar[r] & H^+(K_n^i) \ar[r] & H^+(S^i\oplus ?)\oplus S^{i-2} \ar[d] \\ H^-(S^i\oplus ?) \ar[u] & H^-(K_n^i) \ar[l] & 0 \ar[l] }\] If $H^+(p)=0$, then $H^+(K_n^i)\cong {\mathbb A}_{S^i}$. Hence $H^+(K_n^i) \twoheadrightarrow H^+(S^i\oplus ?)\oplus S^{i-2}$ composed with the projection to $S^{i-2}$ is zero, since the cosocle of $H^+(K_n^i)\cong {\mathbb A}_{S^i}$ is $S^{i-1}$. This implies $S^{i-1}=0$, which is absurd. Hence $H^+(p)$ is surjective. Therefore $H^-(K_n^i)=0$ and $H^+(K_n^i)=K_{n-1}^i$, and in particular then $$H^+(K_n^i)=K_{n-1}^i$$ is indecomposable. Hence $K_{n-1}^i \to H^+(S^i \oplus ?)\oplus S^{i-2}$ is surjective, and $H^-(S^i)=0$. Furthermore $$H^+(S^i)=S^i \quad , \quad i < n-1 $$ and $$ H^+(S^i) = S^i \oplus Ber^{-1} \quad , \quad i=n-1 \ .$$ The proof for the cases $i \geq n$ is similar. The method described in the $S^i$-case doesn't work in general. In the general case we do not have exact analogs of the ${\mathbb A}_{S^i}$ - mixed tensors with the property $DS({\mathbb A}) = {\mathbb A}$.
In section \ref{sec:loewy-length} we associate to every irreducible module three representations, the weight $L$, the \textit{auxiliary} representation $L^{aux}$ and the representation $L^{\times \circ}$, and an indecomposable rigid module $F_i (L^{\times \circ})$ of Loewy length $3$ with Loewy structure $(L, A, L)$, such that the irreducible module we started with, which we denote $L^{up}$ for reasons to be explained later, is one of the composition factors of $A$. If we apply this construction to irreducible modules of the form $S^i = [i,0,\ldots,0]$ we recover the modules ${\mathbb A}_{S^i}$. Our aim is to use these indecomposable modules as a replacement for the modules ${\mathbb A}_{S^i}$. In the $S^i$-case we reduced the computation of $DS(S^i)$ by means of the indecomposable modules ${\mathbb A}_{S^i}$ to the trivial case $DS({\mathbf{1}}) = {\mathbf{1}}$. In the general case we will reduce the computation of $DS(L)$ by means of the indecomposable modules $F_i (L^{\times \circ})$ to the case of groundstates. For that we define an order on the set of cup diagrams for a fixed block such that the completely nested cup diagrams (for which the Main Theorem holds) are the minimal elements. We prove the general case by induction on this order and will accordingly assume that the main theorem holds for all irreducible modules of lower order than a given module $L$. The key point is that for a given module $L^{up}$ we can always choose our weights $L^{aux}$ and $L^{\times \circ} = F_i(L(\lambda_{\times \circ}))$ such that all other composition factors of $F_i (L^{\times \circ})$ are of lower order than $L^{up}$. Hence the Main Theorem holds for all composition factors of $F_i (L^{\times \circ})$ except possibly $L^{up}$. This setup is similar to the ${\mathbb A}_{ S^i}$-case where we assumed by induction on $i$ that the Main Theorem held for all composition factors of ${\mathbb A}_{S^i} = (S^{i-1}, S^{i-2} + S^i, S^{i-1})$ except possibly $S^i$.
Unlike the ${\mathbb A}_{S^i}$ the indecomposable modules $F_i ( L^{\times \circ})$ are not mixed tensors and hence we do not know a priori their behaviour under $DS$. However, assuming that the Main Theorem holds for all composition factors except possibly $L^{up}$, we prove in section \ref{sec:loewy-length} a formula for $DS(F_i (L^{\times \circ}))$. In section \ref{sec:inductive} we show that under certain axioms on the modules $F_i (L^{\times \circ})$ and their image under $DS$ the module $DS(L^{up})$ is semisimple. These axioms are verified in section \ref{sec:moves}. Here it is very important that we can control the composition factors of the $F_i (L^{\times \circ})$. The composition factors in the middle Loewy layer will be called $\textit{moves}$ since they can be obtained from the labelled cup diagram of $L$ by moving certain $\vee$'s in a natural way. The moves are described in detail in section \ref{sec:loewy-length}. We still have to explain how the induction process works, i.e. how we relate a given irreducible module to irreducible modules with a smaller number of segments respectively sectors. This is done by the so-called Algorithms I and II described first in \cite{Weissauer-gl}. As above, for a given module $L^{up}$ all other composition factors of $F_i (L^{\times \circ})$ are of lower order than $L^{up}$. For $L^{up}$ with more than one segment we can choose $i$ and the representations $L^{aux}$ and $L^{\times \circ}$ in such a way that all composition factors have one segment fewer than $L^{up}$. We can now apply the same procedure to all the composition factors of $F_i(L^{\times \circ})$ with more than one segment - i.e. we choose for each of these (new) weights $L^{aux}$ and $L^{\times \circ}$ such that the composition factors of the (new) associated indecomposable modules have fewer segments than they do.
Iterating this we finally end up with a finite number of indecomposable modules where all composition factors have weight diagrams with only one segment. This procedure is called Algorithm I. In Algorithm II we decrease the number of sectors in the same way: If we have a weight with only one segment but more than one sector we can choose $i$ and the weights $L^{aux}$ and $L^{\times \circ}$ such that the composition factors of $F_i (L^{\times \circ})$ have fewer sectors than $L^{up}$. Applying this procedure to the composition factors of $F_i(L^{\times \circ})$ and iterating we finally relate the cup diagram of $L^{up}$ to a finite number of cup diagrams with only one sector. Hence after finitely many iterations we have reduced everything to irreducible modules with one segment and one sector. This sector might not be completely nested, e.g. we might end up with weights with labelled cup diagrams of the type \begin{center} \begin{tikzpicture} \draw (-6,0) -- (6,0); \foreach \x in {} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \draw [-,black,out=270,in=270](-3,0) to (2,0); \draw [-,black,out=270,in=270](-2,0) to (-1,0); \draw [-,black,out=270,in=270](0,0) to (1,0); \end{tikzpicture} \end{center} In this case we can apply Algorithm II to the internal cup diagram having one segment enclosed by the outer cup. If we iterate this procedure we will finally end up in a collection of Kostant weights (i.e. weights with completely nested cup diagrams) of this block. We still have to find the decomposition of the semisimple module $DS(L^{up})$ into its simple summands. Since we know the semisimplicity, we can compute $DS(L^{up})$ on the level of Grothendieck groups.
Essentially we compute this in the following way: using the notation ${\mathbb A} = F_i(L(\lambda_{\times \circ}))$, we compute \[ d({\mathbb A}) = H^+({\mathbb A}) - H^-({\mathbb A}) = 2d(L) + d(A) = 2d(L) + d(L^{up}) + d(A - L^{up}) \] in $K_0({\mathcal R}_{n-1})$, where we do not know $d(L^{up})$, and compare this to the known composition factors of $\tilde{{\mathbb A}} = DS({\mathbb A})$. For this we need the so-called commutation rules for Algorithm I and Algorithm II. Using that the main theorem holds for all composition factors of ${\mathbb A}$ except possibly $L^{up}$ we can cancel most composition factors. The remaining factors have to be the simple factors of $DS(L^{up})$ and these factors are exactly those given by the derivative of $L^{up}$ (seen as a plot), finally proving the theorem. This is done in section \ref{sec:inductive}. \noindent {\it The case $[2,2,0]$}. We illustrate the above strategy with an example. In this part we systematically ignore all signs and parity shifts. The module $[2,2,0]$ has the labelled cup diagram \begin{center} \begin{tikzpicture} \draw (-6,0) -- (6,0); \foreach \x in {-2,1,2} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {-5,-4,-3,-1,0,3,4,5} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \draw [-,black,out=270,in=270](-2,0) to (-1,0); \draw [-,black,out=270,in=270](1,0) to (4,0); \draw [-,black,out=270,in=270](2,0) to (3,0); \end{tikzpicture} \end{center} hence it has two segments and two sectors. We will associate to $[2,2,0]$ an auxiliary weight $L$ and a twofold atypical weight $L^{\times \circ}$ in $T_3$ such that $[2,2,0]$ is of the form $L^{up}$ in the indecomposable module $F_i(L^{\times \circ})$.
The auxiliary weight is in this case $[2,1,0]$ with labelled cup diagram \begin{center} \begin{tikzpicture} \draw (-6,0) -- (6,0); \foreach \x in {-2,0,2} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {-5,-4,-3,-1,1,3,4,5} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \draw [-,black,out=270,in=270](-2,0) to (-1,0); \draw [-,black,out=270,in=270](0,0) to (1,0); \draw [-,black,out=270,in=270](2,0) to (3,0); \end{tikzpicture} \end{center} with one segment and three sectors. The weight $\lambda_{\times \circ}$ is obtained from $[2,1,0]$ by replacing the $\vee \wedge$ at the vertices 0 and 1 by $\times \circ$ \begin{center} \begin{tikzpicture} \draw (-6,0) -- (6,0); \foreach \x in {-2,2} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {-5,-4,-3,-1,3,4,5} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {0} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \foreach \x in {1} \draw node at (1,0) [fill=white,draw,circle,inner sep=0pt,minimum size=6pt]{}; \draw [-,black,out=270,in=270](-2,0) to (-1,0); \draw [-,black,out=270,in=270](2,0) to (3,0); \end{tikzpicture} \end{center} The module $F_0 (L^{\times \circ})$ is $*$-selfdual of Loewy length $3$ with socle and cosocle $[2,1,0]$. It contains the module $[2,2,0]$ with multiplicity 1 in the middle Loewy layer. The rules of section \ref{sec:loewy-length} give the following composition factors (\textit{moves}) in the middle Loewy layer. In the labelled cup diagram of $[2,1,0]$ there is one internal upper sector $[2,3]$.
The internal upper sector move gives the labelled cup diagram \begin{center} \begin{tikzpicture} \draw (-6,0) -- (6,0); \foreach \x in {-2,0,1} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {-5,-4,-3,-1,2,3,4,5} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \draw [-,black,out=270,in=270](-2,0) to (-1,0); \draw [-,black,out=270,in=270](0,0) to (3,0); \draw [-,black,out=270,in=270](1,0) to (2,0); \end{tikzpicture} \end{center} hence the composition factor $[1,1,0]$. The labelled cup diagram of $[2,1,0]$ has one internal lower sector, namely the interval $[-2,-1]$. The associated internal lower sector move gives the labelled cup diagram \begin{center} \begin{tikzpicture} \draw (-6,0) -- (6,0); \foreach \x in {-2,-1,2} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {-5,-4,-3,0,1,3,4,5} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \draw [-,black,out=270,in=270](-2,0) to (1,0); \draw [-,black,out=270,in=270](-1,0) to (0,0); \draw [-,black,out=270,in=270](2,0) to (3,0); \end{tikzpicture} \end{center} hence the composition factor $[2,0,0]$. The sector $[0,1]$ is unencapsulated, since it lies in the middle of the segment $[-2,3]$. Hence we also have the unencapsulated boundary move, i.e. we move the $\vee$ at the vertex $0$ to the vertex $-3$, resulting in the labelled cup diagram \begin{center} \begin{tikzpicture} \draw (-6,0) -- (6,0); \foreach \x in {-2,-3,2} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {-5,-4,-1,0,1,3,4,5} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \draw [-,black,out=270,in=270](-3,0) to (0,0); \draw [-,black,out=270,in=270](-2,0) to (-1,0); \draw [-,black,out=270,in=270](2,0) to (3,0); \end{tikzpicture} \end{center} giving the composition factor $[2,-1,-1]$.
The upward move of $[2,1,0]$ gives the composition factor $L^{up} = [2,2,0]$. Hence the Loewy structure of the indecomposable module $F_0(L^{\times \circ})$ is \[ \begin{pmatrix} [2,1,0] \\ [2,-1,-1] + [1,1,0] + [2,0,0] + [2,2,0] \\ [2,1,0] \end{pmatrix}. \] We remark that all the composition factors have only one segment, hence we will not have to apply Algorithm I any more. Since the proof inducts on the degree of atypicality we know $DS(L^{\times \circ})$ and we can apply lemma \ref{tildeA} to conclude $DS(F_i (L^{\times \circ})) = F_i (DS(L^{\times \circ})) = F_i (L_1 \oplus L_2)$ for two irreducible modules obtained by applying $DS$ to $L^{\times \circ}$. By the main theorem $DS(L^{\times \circ})$ gives the modules \begin{center} \begin{tikzpicture} \draw (-6,0) -- (6,0); \foreach \x in {2} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {-5,-4,-3,-2,-1,3,4,5} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {0} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \foreach \x in {1} \draw node at (1,0) [fill=white,draw,circle,inner sep=0pt,minimum size=6pt]{}; \draw [-,black,out=270,in=270](2,0) to (3,0); \end{tikzpicture} \end{center} and \begin{center} \begin{tikzpicture} \draw (-6,0) -- (6,0); \foreach \x in {-2} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {-5,-4,-3,2,-1,3,4,5} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {0} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \foreach \x in {1} \draw node at (1,0) [fill=white,draw,circle,inner sep=0pt,minimum size=6pt]{}; \draw [-,black,out=270,in=270](-2,0) to (-1,0); \end{tikzpicture} \end{center} Applying $F_0$ to the first summand gives the module ${\mathbb A}_1$ with socle and cosocle $[2,1]$. The upward move gives the composition factor $[2,2]$. The unique internal upper sector move gives the composition factor $[1,1]$. We do not have any lower sector moves. The non-encapsulated boundary move gives the composition factor $[2,0]$.
This results in the Loewy structures of ${\mathbb A}_1 = F_0(L_1)$ and ${\mathbb A}_2 = F_0 (L_2)$ \[ {\mathbb A}_1 = \begin{pmatrix} [2,1] \\ [1,1] + [2,0] + [2,2] \\ [2,1] \end{pmatrix}, \ \ {\mathbb A}_2 = \begin{pmatrix} [0,-1] \\ [1,-1] + [-1,-1] + [-2,-2] \\ [0,-1] \end{pmatrix}. \] The irreducible modules in the middle Loewy layers give the module $\tilde{A}$. We compare $\tilde{A}$ and $A'$ in $K_0$: Taking the derivative of $A = [2,-1,-1] + [1,1,0] + [2,0,0] + [2,2,0]$ gives \begin{align*} A' & = [2,-1] + [-2,-2] + [1,-1] + [1,1] + [2,0] \\ & + [-1,-1] + [2,-1] + [2,2] \end{align*} with the module $[2,-1] = L^{aux}$ appearing twice. The computation above of ${\mathbb A}_1$ and ${\mathbb A}_2$ gives \[ \tilde{A} = [-2,-2] + [1,-1] + [1,1] + [2,0] + [-1,-1] + [2,2].\] This shows the following commutation rule in this example \[ A' = \tilde{A} + 2 (-1)^{i+n} L^{aux} \ \ \ \text{in} \ \ \ K_0({\mathcal R}_{n-1}).\] We remark that the composition factors $[2,0]$ in ${\mathbb A}_1$ and $[-1,-1]$ in ${\mathbb A}_2$ are detecting objects in the sense of section \ref{sec:inductive}. We will prove in section \ref{sec:inductive} that the properties of the modules ${\mathbb A}, {\mathbb A}_1$ and ${\mathbb A}_2$ imply that $DS(L^{up})$ is semisimple. Hence we can compute $DS(L^{up})$ by looking at $K_0$. In Algorithm II we reduce everything to a single sector. Take one of the composition factors of $F_0 (L^{\times \circ})$ with more than one sector, e.g. $[2,1,0]$ with one segment and three sectors.
The associated auxiliary weight is in this case the weight $[2,0,0]$ with the twofold atypical weight $L^{\times \circ}$ given by the labelled cup diagram \begin{center} \begin{tikzpicture} \draw (-6,0) -- (6,0); \foreach \x in {2,-2} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {-5,-4,-3,1,3,4,5} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {-1} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \foreach \x in {0} \draw node at (0,0) [fill=white,draw,circle,inner sep=0pt,minimum size=6pt]{}; \draw [-,black,out=270,in=270](-2,0) to (1,0); \draw [-,black,out=270,in=270](2,0) to (3,0); \end{tikzpicture} \end{center} The module $F_{-1}(L^{\times \circ})$ has socle and cosocle $[2,0,0]$ and the following modules in the middle Loewy layer: The upward move gives $[2,1,0]$ and the upper sector move of the upper sector $[2,3]$ gives the weight $[0,0,0]$. There are no non-encapsulated boundary moves and no internal lower sector moves, hence we get the Loewy structure \[ F_{-1}(L^{\times \circ}) = \begin{pmatrix} [2,0,0] \\ [0,0,0] + [2,1,0] \\ [2,0,0] \end{pmatrix}.\] We compute $DS(F_{-1}(L^{\times \circ}))$ using $DS(F_i (L^{\times \circ})) = F_i (DS(L^{\times \circ}))$ (lemma \ref{tildeA}). By the main theorem $DS(L^{\times \circ})$ splits into two direct summands. Applying $F_{-1}$ to the first and second summand gives the indecomposable modules \[ {\mathbb A}_1 = \begin{pmatrix} [2,0] \\ [2,1] + [2,-1] + [0,0] \\ [2,0] \end{pmatrix}, \ \ {\mathbb A}_2 = \begin{pmatrix} [-1,-1] \\ [0,-1] \\ [-1,-1] \end{pmatrix}. \] We remark that all the factors in the middle Loewy layers are detecting objects in the sense of section \ref{sec:inductive}. As shown in section \ref{sec:inductive} these properties already imply that $DS([2,1,0])$ is semisimple. To compute it we need the commutation rules for Algorithm II, i.e.
we compare the derivative $A'$ of the middle Loewy layer of $F_{-1}(L^{\times \circ})$ with the sum $\tilde{A} = A_1 + A_2$ of the middle Loewy layers of ${\mathbb A}_1$ and ${\mathbb A}_2$. In both cases we get $[2,1] + [2,-1] + [0,0] + [0,-1]$, hence the commutation rule \[ \tilde{A} = A'.\] The general case is proven in lemma \ref{comII}. \noindent \section{Modules of Loewy length 3}\label{sec:loewy-length} \noindent As described in section \ref{sec:strategy} we reduce the main theorem to the case of ground states by means of translation functors $F_i(\ldots)$. In this section we describe the Loewy layers and composition factors of the objects $F_i(L_{\times \circ})$ and study their behaviour under $DS$. {\it Khovanov algebras}. We review some facts from the articles by Brundan and Stroppel \cite{Brundan-Stroppel-1}, \cite{Brundan-Stroppel-2}, \cite{Brundan-Stroppel-4}, \cite{Brundan-Stroppel-5}. We denote the Khovanov-algebra of \cite{Brundan-Stroppel-4} associated to $Gl(m|n)$ by $K(m,n)$. These algebras are naturally graded. For $K(m,n)$ we have a set of weights or weight diagrams which parametrise the irreducible modules (up to a grading shift). This set of weights is again denoted $X^+$. For each weight $\lambda \in X^+$ we have the irreducible module $L(\lambda)$, the indecomposable projective module $P(\lambda)$ with top $L(\lambda)$ and the standard or cell module $V(\lambda)$. If we forget the grading structure on the $K(m,n)$-modules, the main result of \cite{Brundan-Stroppel-4} is: \begin{thm} There is an equivalence of categories $E$ from ${\mathcal R}_{m|n}$ to the category of finite-dimensional left-$K(m,n)$-modules such that $EL(\lambda) = L(\lambda)$, $EP(\lambda) = P(\lambda)$ and $EK(\lambda) = V(\lambda)$ for $\lambda \in X^+$. \end{thm} $E$ is a Morita equivalence, hence $E$ preserves the Loewy structure of indecomposable modules.
This will enable us to study questions regarding extensions or Loewy structures in the category of Khovanov modules. We will use freely the terminology of \cite{Brundan-Stroppel-1}, \cite{Brundan-Stroppel-2}, \cite{Brundan-Stroppel-4}, \cite{Brundan-Stroppel-5}. The notion of cups, caps, cup and cap diagrams are introduced in \cite{Brundan-Stroppel-1}. For the notion of \textit{matching} between a cup and a cap diagram see \cite{Brundan-Stroppel-2}, section 2. For the notion of $\Gamma$-\textit{admissible} see \cite{Brundan-Stroppel-4}, section 2. Let $\lambda$ in ${{\mathcal R}}_n$ be any atypical weight with a $\vee\wedge$-pair in its weight diagram, i.e. such that there exists an index $i$ labelled by $\vee$ and the index $i+1$ is labelled by $\wedge$. Fix such an index $i$ and replace $(\vee \wedge)$ by the labelling $(\times,\circ)$. This defines a new weight $\lambda_{\times \circ}$ of atypicality $atyp(\lambda)-1$. We denote by $F_i$, $i \in {\mathbf{Z}}$, the endofunctor from \cite{Brundan-Stroppel-4}, (2.13). The functor $F_i$ has an avatar $F_i$ on the side of Khovanov-modules. This projective functor $F_i$ is defined by $F_i := \bigoplus K_{(\Gamma - \alpha_i)\Gamma}^{t_i(\Gamma)} \otimes_K -$, see \cite{Brundan-Stroppel-4}, (2.3), for summation rules and also \cite{Brundan-Stroppel-2}, (4.1). Since by loc. cit. 
lemma 2.4, $F_i L(\lambda_{\times \circ})$ is indecomposable, we have $F_i L(\lambda_{\times \circ}) = K_{(\Gamma - \alpha_i)\Gamma}^{t_i(\Gamma)} \otimes_K L(\lambda_{\times \circ})$ for one specific $i$-admissible $\Gamma$ \begin{tikzpicture} \draw (-6,0) -- (6,0); \foreach \x in {} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {0} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \draw node at (1,0) [fill=white,draw,circle,inner sep=0pt,minimum size=6pt]{}; \begin{scope} [yshift = -3 cm] \draw (-6,0) -- (6,0); \foreach \x in {} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \end{scope} \draw [-,black,out=270,in=90](-5,0) to (-5,-3); \draw [-,black,out=270,in=90](-4,0) to (-4,-3); \draw [-,black,out=270,in=90](-3,0) to (-3,-3); \draw [-,black,out=270,in=90](-2,0) to (-2,-3); \draw [-,black,out=270,in=90](-1,0) to (-1,-3); \draw [-,black,out=270,in=90](3,0) to (3,-3); \draw [-,black,out=270,in=90](4,0) to (4,-3); \draw [-,black,out=270,in=90](2,0) to (2,-3); \draw [-,black,out=270,in=90](5,0) to (5,-3); \draw [-,black,out=90, in=90](0,-3) to (1,-3); \end{tikzpicture} Here the matching between $(\Gamma - \alpha_i)$ and $\Gamma$ is given by the diagram above and the rule that all other vertices, except those labelled by $\times$ or $\circ$, are connected by a vertical identity line segment. We want to determine its composition factors and Loewy layers. For that one considers the modules $F_i L(\lambda_{\times \circ})$ as modules in the \textit{graded} category of $K=K(n,n)$-modules where $K(n,n)$ is the Khovanov algebra from \cite{Brundan-Stroppel-4}. We recall some facts from \cite{Brundan-Stroppel-1} and \cite{Brundan-Stroppel-4}, see also \cite{Heidersdorf-mixed-tensors}. Let $\Lambda$ be any block in the category of graded $K$-modules.
For a graded $K$-module $M = \bigoplus_{j \in {\mathbf{Z}}} M_j$, we write $M\langle j \rangle$ for the same module with the new grading $M\langle j \rangle_i := M_{i-j}$. Then the modules $\{ L(\lambda)\langle j \rangle \ | \ \lambda \in \Lambda, \ j \in {\mathbf{Z}} \}$ give a complete set of isomorphism classes of irreducible graded $K_{\Lambda}$-modules. For the full subcategory $Rep(K_{\Lambda})$ of $Mod_{lf}(K_{\Lambda})$ consisting of finite-dimensional modules, the Grothendieck group is the free ${\mathbf{Z}}$-module with basis given by the $L(\lambda) \langle j \rangle$. Viewing it as a ${\mathbf{Z}}[q,q^{-1}]$-module, so that by definition $q^j [M] : = [M\langle j \rangle]$ holds, $K_0(Rep(K_{\Lambda}))$ is the free ${\mathbf{Z}}[q,q^{-1}]$-module with basis $\{ L(\lambda) \ | \ \lambda \in \Lambda \}$. We refer to \cite{Brundan-Stroppel-2}, section 2, for the definition of the functors $G_{\Lambda \Gamma}^t$. For terminology used in the statement of the next theorem see loc.cit or section \ref{sec:n^n}. We quote from \cite{Brundan-Stroppel-2}, thm 4.11 \begin{thm} \label{compos} Let $t$ be a proper $\Lambda \Gamma$-matching and $\gamma \in \Gamma$. Then in the graded Grothendieck group \[ [ G_{\Lambda \Gamma}^t L(\gamma) ] = \sum_{\mu} (q + q^{-1})^{n_{\mu}} [L(\mu)] \] where $n_{\mu}$ denotes the number of lower circles in $\underline{\mu}t$ and the sum is over all $\mu \in \Lambda$ such that a) $\underline{\gamma}$ is the lower reduction of $\underline{\mu}t$ and b) the rays of each lower line in $\underline{\mu}\mu t$ are oriented so that exactly one is $\vee$ and one is $\wedge$. \end{thm} Up to a grading shift by $-caps(t)$ we have $F_i L(\lambda_{\times \circ}) = G_{(\Gamma-\alpha_i) \Gamma}^t L(\gamma)$ for some $\gamma$ and we may apply the theorem above to compute their Loewy structure. By \cite{Brundan-Stroppel-4}, lemma 2.4.v, $F_i L(\lambda_{\times \circ})$ is indecomposable with irreducible socle and head isomorphic to $L(\lambda)$. 
\begin{prop} $F_i L(\lambda_{\times \circ})$ has a three step Loewy filtration \[ F_i L(\lambda_{\times \circ}) = \begin{pmatrix} L(\lambda) \\ F \\ L(\lambda) \end{pmatrix} \] where all irreducible constituents in (the semisimple) module $F$ occur with multiplicity 1. \end{prop} {\it Proof}. Let $F(j)$ be the submodule of $F_i L(\lambda_{\times \circ})$ spanned by all graded pieces of degree $\geq j$. Let $k$ be large enough so that all constituents of $F_i L(\lambda_{\times \circ})$ have degree $\geq -k$ and $\leq k$. Then \[ F_i L(\lambda_{\times \circ}) = F(- k) \supset F(- k +1) \supset \ldots \supset F(k)\] with successive semisimple quotients $F(j)/F(j+1)$ in degree $j$. In our case we take $k=1$, since the irreducible socle and top $L(\lambda) = L(\lambda_{\vee\wedge})$ satisfies $n_{\lambda} = 1$. Then all other composition factors $L(\mu)$ necessarily satisfy $n_{\mu} = 0$ (we ignore the shift by $\langle -caps(t)\rangle $ here). The grading filtration thus gives our three step Loewy filtration. The statement about the multiplicity follows since the multiplicity of $L(\mu)$ in $F$ is given by $2^{n_{\mu}}$. The Loewy filtration of $F_i L(\lambda_{ \times \circ})$ is preserved by the Morita equivalence $E^{-1}$ of $K(n,n)\text{\it -mod}$ with ${{\mathcal R}}_n$. \qed \begin{lem} $F_i L(\lambda_{\times \circ})$ is $*$-invariant. \end{lem} {\it Proof}. Since $X_{st} \otimes L(\lambda_{\times \circ})$ is $^*$-invariant, $^*$ permutes its indecomposable summands. The indecomposable summands are either irreducible or are of the form $F_j L(\lambda_{\times \circ})$ for some $j$ with labeling $(\times,\circ)$ at position $(j,j+1)$. Since $^*$ preserves irreducible modules, the indecomposable summands corresponding to the $(\times,\circ)$-pairs in $\lambda_{\times\circ}$ are permuted amongst themselves. Since $^*$ preserves irreducible modules we have $[M^*] = [M]$ in $K_0$.
However the non-irreducible modules $F_j (L(\lambda_{\times \circ}))$ and $F_{j'} (L(\lambda_{\times \circ}))$ lie in different blocks for $j \neq j'$ by the rules of \cite{Brundan-Stroppel-4}, lemma 2.4.\qed {\it Composition factors}. We describe the composition factors of $F_i (L^{\times \circ})$. We can restrict ourselves to the maximally atypical block (i.e. we can ignore $\times$'s and $\circ$'s). Let $\lambda$ be $i$-fold atypical. Since $F_i(L(\lambda_{\times \circ}))$ is indecomposable, any highest weight of a composition factor $\mu$ has the same positioning of the $n-i$ crosses and $n-i$ circles as $\lambda$. In particular it has the same positioning of the circles and crosses as $\lambda_{\times \circ}$ except at the position $(i,i+1)$. Let $F_i (L(\lambda_{\times \circ}))$ be given by a matching $t$ as follows \begin{center} \begin{tikzpicture} \draw (-6,0) -- (6,0); \foreach \x in {} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {0} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \draw node at (1,0) [fill=white,draw,circle,inner sep=0pt,minimum size=6pt]{}; \begin{scope} [yshift = -3 cm] \draw (-6,0) -- (6,0); \foreach \x in {} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \end{scope} \draw [-,black,out=270,in=90](-5,0) to (-5,-3); \draw [-,black,out=270,in=90](-4,0) to (-4,-3); \draw [-,black,out=270,in=90](-3,0) to (-3,-3); \draw [-,black,out=270,in=90](-2,0) to (-2,-3); \draw [-,black,out=270,in=90](-1,0) to (-1,-3); \draw [-,black,out=270,in=90](2,0) to (2,-3); \draw [-,black,out=270,in=90](3,0) to (3,-3); \draw [-,black,out=270,in=90](4,0) to (4,-3); \draw [-,black,out=270,in=90](5,0) to (5,-3); \draw [-,black,out=90, in=90](0,-3) to (1,-3); \end{tikzpicture} \end{center} The crosses and the circles are now fixed.
Since the composition factors depend only on the nesting structure and the matching $t$ as in theorem \ref{compos} we can fix them and assume that we are in the maximally atypical block of $Gl(i|i)$. In this case the composition factors can be determined from the segment and sector structure of $\lambda$ as in \cite{Weissauer-gl}. For symbols $x,y \in \{\circ,\wedge,\vee,\times\}$ we write $\lambda_{xy}$ for the diagram obtained from $\lambda$ with the $i$th and $(i+1)$th vertices relabeled by $x$ and $y$, respectively. \begin{itemize} \item {\it Socle and cosocle}. They are defined by $L(\mu)$ for $\mu=\lambda_{\vee\wedge}$. \item {\it The upward move}. It corresponds to the weight $\mu = \lambda_{\wedge\vee}$ which is obtained from $\lambda_{\vee\wedge}$ by switching $\vee$ and $\wedge$ at the places $i$ and $i+1$. It is of type $\lambda_{\wedge\vee}$. \item {\it The nonencapsulated boundary move}. It only occurs in the nonencapsulated case. It moves the $\vee$ in $\lambda_{\vee\wedge}$ from position $i$ to the left boundary position $a$. The resulting weight $\mu$ is of type $\lambda_{\wedge\wedge}$. \item {\it The internal upper sector moves}. For every internal upper sector $[a_j,b_j]$ (i.e. to the right of $[i,i+1]$) there is a summand whose weight is obtained from $\lambda_{\vee\wedge}$ by moving the label $\vee$ at $a_j$ to the position $i+1$. These moves define new weights $\mu$ of type $\lambda_{\vee\vee}$. \item {\it The internal lower sector moves}. For every internal lower sector $[a_j,b_j]$ (i.e. to the left of $[i,i+1]$) there is a summand whose weight is obtained from $\lambda_{\vee\wedge}$ by moving the label $\vee$ from the position $i$ to the position $b_j$. These moves define new weights $\mu$ of type $\lambda_{\wedge\wedge}$. \end{itemize} For examples see \cite{Weissauer-gl} or section \ref{sec:strategy}. 
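These moves can be checked against the example from section \ref{sec:strategy}. For the auxiliary weight $[2,1,0]$ with the $\vee\wedge$-pair at the vertices $(0,1)$, each factor of the middle Loewy layer arises from exactly one move; the following summary merely recapitulates that computation and introduces nothing new.

```latex
% Bookkeeping for the moves applied to [2,1,0] (pair at the vertices (0,1)):
%   socle and cosocle:               [2,1,0]
%   upward move:                     [2,2,0]
%   internal upper sector [2,3]:     [1,1,0]
%   internal lower sector [-2,-1]:   [2,0,0]
%   unencapsulated boundary move:    [2,-1,-1]
\[
F_0\bigl(L(\lambda_{\times\circ})\bigr)
  \;=\;
  \begin{pmatrix}
    [2,1,0] \\
    [2,-1,-1] + [1,1,0] + [2,0,0] + [2,2,0] \\
    [2,1,0]
  \end{pmatrix}.
\]
```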
It follows from the maximal atypical case and the definition of our sign $\varepsilon(L)$ that we have $F_i L(\lambda_{\times \circ}) = (L,F,L)$ with $L \in {\mathcal R}_n(\pm \varepsilon)$ and $F \in {\mathcal R}_n(\mp \varepsilon)$. For the following lemma see also \cite{Serganova-kw}, thm. 2.1 and cor. 4.4. \begin{lem} \label{tildeA} Suppose theorem \ref{mainthm} holds for the irreducible representation $L^{\times \circ}=L(\lambda_{\times \circ})$ in the block $\Gamma$ of ${\mathcal R}_n$. Suppose $i\in \mathbb Z$ is $\Gamma$-admissible in the sense of \cite{Brundan-Stroppel-4}, p.6. Then for the special projective functor $F_i$ the following holds: $$ \fbox{$ DS( F_i L_{\times \circ}) = F_i DS(L_{\times \circ}) $} \ .$$ \end{lem} {\it Proof}. Given $(V,\rho)$ in ${\mathcal R}_n$ the Casimir $C_n$ of ${\mathcal R}_n$ restricts on $DS(V,\rho)$ to the Casimir $C_{n-1}$ of ${\mathcal R}_{n-1}$ by lemma \ref{Cas}. On irreducible representations $V$ the Casimir acts by a scalar $c(V)$. Given representations $V_1,V_2$ in ${\mathcal R}_n$ such that $C_n$ acts by $c(V_i) \cdot id_{V_i}$ on $V_i$, then for $v \in V_1\otimes V_2$ we have $C_n(v) = (c(V_1)+ c(V_2))\cdot v + 2\Omega_n(v)$ for $\Omega_n= \sum_{r,s=1}^n (-1)^{\overline s} e_{r,s} \otimes e_{s,r} \!\in\! {\mathfrak g}_n \otimes {\mathfrak g}_n$. Note $F_i = pr_{\Gamma - \alpha_i} \circ ( - \otimes X_{st}) \circ pr_\Gamma$, so $F_i L(\lambda_{\times \circ}) = pr_{\Gamma - \alpha_i} ( L(\lambda_{\times \circ}) \otimes X_{st})$. By \cite{Brundan-Stroppel-4}, lemma 2.10, this is also the generalized $i$-eigenspace of $\Omega_n$ on $L(\lambda_{\times \circ}) \otimes X_{st}$. Put $c=c(L(\lambda_{\times \circ})) + c(X_{st}) +2i$. Then $F_i L(\lambda_{\times \circ}) $ is the generalized $c$-eigenspace of $C_n$ on $L(\lambda_{\times \circ}) \otimes X_{st}$.
Hence $DS( F_i L(\lambda_{\times \circ}))$ is the generalized $c$-eigenspace of $C_{n-1}$ on $DS( L(\lambda_{\times \circ}) \otimes X_{st}) = DS( L(\lambda_{\times \circ})) \otimes DS(X_{st}) = DS( L(\lambda_{\times \circ})) \otimes X_{st,n-1}$. Observe that $c(DS(V_1))+ c(DS(V_2)) = c(V_1)+ c(V_2)$, since $C_n$ induces $C_{n-1}$ on $DS(V_i)$. By the main theorem \ref{mainthm} (using induction over the degree of atypicality) $DS(L(\lambda_{\times \circ}))$ is in a unique block $\overline\Gamma$. So $F_i DS(L(\lambda_{\times \circ})) = pr_{\overline\Gamma - \alpha_i} \bigl( (pr_{\overline\Gamma} DS(L(\lambda_{\times \circ}))) \otimes X_{st,n-1} \bigr) = pr_{\overline\Gamma - \alpha_i} (DS(L(\lambda_{\times \circ})) \otimes X_{st,n-1})$, and again by \cite{Brundan-Stroppel-4}, lemma 2.10, this is the generalized $c$-eigenspace of the Casimir $C_{n-1}$ on $DS(L(\lambda_{\times \circ})) \otimes X_{st,n-1}$. Thus $DS( F_i L(\lambda_{\times \circ})) \cong F_i DS(L(\lambda_{\times \circ}))$. \qed \noindent {\it Weights, sectors, segments}. Let $L(\lambda)$ be $i$-atypical in a block $\Gamma$. Let $X^+_{\Gamma}$ denote the set of weights in $\Gamma$. Then we define a map \[ \phi = \phi_{\Gamma}: X^+_{\Gamma} \to \{ \text{plots of rank } i \} \] by sending $\lambda$ to the plot of the weight of the irreducible representation $\phi_n^i(L(\lambda))$. Then $\phi_{\Gamma}$ is a bijection. Each plot has defining segments and sectors, and by transfer with $\phi_{\Gamma}$ this defines the segments and sectors of a given weight diagram in $X_{\Gamma}^+$. {\it Shifting $\times$ and $\circ$}. We now quote from \cite{Brundan-Stroppel-4}, lemma 2.4 \begin{lem}\label{translation} Let $\lambda \in X^+(n)$ and $i \in \mathbb Z$. For symbols $x,y \in \{\circ,\wedge,\vee,\times\}$ we write $\lambda_{xy}$ for the diagram obtained from $\lambda$ with the $i$th and $(i+1)$th vertices relabeled by $x$ and $y$, respectively.
\begin{itemize} \item[\rm(i)] If $\lambda = \lambda_{{\vee}\times}$ then $E_i L(\lambda) \cong L(\lambda_{\times {\vee}})$. If $\lambda = \lambda_{\times \vee}$ then $F_i L(\lambda) \cong L(\lambda_{{\vee} \times})$. \item[\rm(ii)] If $\lambda = \lambda_{{\wedge}\times}$ then $E_i L(\lambda) \cong L(\lambda_{\times {\wedge}})$. If $\lambda = \lambda_{\times \wedge}$ then $F_i L(\lambda) \cong L(\lambda_{\wedge \times})$. \item[\rm(iii)] If $\lambda = \lambda_{{\vee}\circ}$ then $F_i L(\lambda) \cong L(\lambda_{\circ {\vee}})$. If $\lambda = \lambda_{\circ \vee}$ then $E_i L(\lambda) \cong L(\lambda_{{\vee} \circ})$. \item[\rm(iv)] If $\lambda = \lambda_{{\wedge}\circ}$ then $F_i L(\lambda) \cong L(\lambda_{\circ {\wedge}})$. If $\lambda = \lambda_{\circ \wedge}$ then $E_i L(\lambda) \cong L(\lambda_{\wedge \circ})$. \item[\rm(v)] If $\lambda = \lambda_{{\times}\circ}$ then: $F_i L(\lambda)$ has irreducible socle and head both isomorphic to $L(\lambda_{\vee\wedge})$, and all other composition factors are of the form $L(\mu)$ for weights $\mu$ such that $\mu = \mu_{\wedge \vee}$, $\mu = \mu_{\vee \vee}$ or $\mu = \mu_{\wedge \wedge}$. Likewise for $\lambda = \lambda_{\circ \times}$ and $E_i L(\lambda)$. \item[\rm(vi)] If $\lambda = \lambda_{{\vee\wedge}}$ then $F_i L(\lambda) \cong L(\lambda_{\circ{\times}})$. \end{itemize} \end{lem} For a pair of neighbouring vertices $(i,i+1)$ in the weight diagram of $\lambda = \lambda_{\vee\times }$, labelled by $( \vee \times)$, we get $$E_i L(\lambda_{\vee\times}) = L(\lambda_{ \times\vee})$$ from lemma \ref{translation}(i). In other words, the functor replaces the irreducible representation of weight $\lambda_{\vee \times}$ by the irreducible representation of weight $\lambda_{\times \vee}$, which has the same weight diagram as $\lambda_{\vee \times}$, except that the positions of $\times$ and $\vee$ are interchanged.
Note that $$ \phi(\lambda_{\vee\times }) \ =\ \phi(\lambda_{\times\vee }) \ ,$$ but $L=L(\lambda_{\vee\times})$ and $L^{up}= L(\lambda_{\times\vee})$ lie in different blocks. \begin{lem} Suppose for the representation $L=L(\lambda_{\vee\times})$ in ${\mathcal R}_n^{i}$ the assertion of theorem \ref{mainthm} holds. Then it also holds for the representation $L^{up}= L(\lambda_{\times\vee})$. \end{lem} {\it Proof}. By assumption we have a commutative diagram $$ \xymatrix{ L \ar[rr]^p\ar[dd] & & \lambda \ar[dd] \cr & & \cr DS(L) \ar[rr]^p & & \lambda' \cr} $$ We have to show that we have the same diagram for $L^{up}$ instead of $L$. Let $S_\nu$ denote the sectors of the plot $\lambda=\phi(\lambda_{\vee\times})$ and let $S_j$ denote the sector containing the integer $p(i)$. Then $DS(L)$ is a direct sum of irreducible representations $L_\nu$: for $\nu\neq j$ the sector structure of $L_\nu$ is obtained by replacing the sector $S_\nu$ by $\partial S_\nu$, and there is a unique irreducible summand $L_j$ whose sector structure is obtained by replacing the sector $S_j$ by $\partial S_j$. We would like to show that $DS(L^{up})$ can be similarly described in terms of the sector structure of $L^{up}$. The sectors of $L^{up}$ literally coincide with the $S_\nu$ for $\nu\neq j$, and for $\nu=j$ the remaining sector of $L^{up}$ is obtained from the sector $S_j$ by transposing the positions at the labels $i,i+1$ (within this sector). Hence to show our claim, it remains to show that $DS(L^{up})$ is isomorphic to a direct sum of irreducible representations $L^{up}_\nu$ such that $L^{up}_\nu$ is obtained from $L_\nu$ by applying the functor $E_i$ (i.e. interchanging the positions of $\vee$ and $\times$ at the labels $i,i+1$). Indeed, the derivative $\partial$ for sectors commutes with the interchange of labels at $i,i+1$ in our situation (the sign rule is obviously preserved).
Hence it remains to show $$E_i(DS(L(\lambda_{\vee\times}))) = DS(E_i (L(\lambda_{\vee\times}))) \ .$$ But this assertion follows by an argument similar to the one used for the proof of lemma \ref{tildeA}. \qed Likewise by lemma \ref{translation} one can show \begin{lem}\label{shift-1} Suppose for the representation $L=L(\lambda_{\vee\circ})$ in ${\mathcal R}_n^{i}$ the assertion of theorem \ref{mainthm} holds. Then it also holds for the representation $L^{up}= L(\lambda_{\circ\vee})$. \end{lem} \begin{lem}\label{shift-2} Suppose the main theorem holds for the representation $L=L(\lambda_{\wedge\times})$ in ${\mathcal R}_n^{i}$. Then it also holds for the representation $L^{up}= L(\lambda_{\times\wedge})$. \end{lem} \begin{lem}\label{shift-3} Suppose the main theorem holds for the representation $L=L(\lambda_{\wedge\circ})$ in ${\mathcal R}_n^{i}$. Then it also holds for the representation $L^{up}= L(\lambda_{\circ\wedge})$. \end{lem} \section{Inductive Control over $DS$}\label{sec:inductive} \noindent We now prove the main theorem under the assumption that there exist objects ${\mathbb A}$ with certain nice properties. Under these assumptions we give an inductive proof of theorem \ref{mainthm} using proposition \ref{3} below. We check in section \ref{sec:moves} that certain objects $F_i(L_{\times \circ})$ satisfy these conditions. First recall that for $\varepsilon \in \{ \pm 1\}$ the full abelian subcategories ${\mathcal R}_n(\varepsilon)$ of ${\mathcal R}_n$ consist of all objects whose irreducible constituents $X$ have sign $\varepsilon(X)=\varepsilon$. We quote from section \ref{sec:main} the following \begin{prop} \label{ext-0} The categories ${\mathcal R}_n(\varepsilon)$ are semisimple abelian categories. \end{prop} {\it Definition}. An object $M$ in ${\mathcal R}_n$ is called {\it semi-pure} (of sign $\varepsilon$), if its socle is in the category ${\mathcal R}_n(\varepsilon)$. Every subobject of a semi-pure object is semi-pure.
For semi-pure objects $M$ the second layer of the lower Loewy series (i.e. the socle of $M/socle(M)$) is in ${\mathcal R}_n(-\varepsilon)$ by the last proposition. Hence by induction, the $i$-th layer of the lower Loewy filtration is in ${\mathcal R}_n((-1)^{i-1}\varepsilon)$. Thus all layers of the lower Loewy filtration are semi-pure. The last layer $top(M)$ of the lower Loewy series is semisimple. Since $cosocle(M) \cong cosocle(M)^* \cong socle(M^*)$ this easily implies \begin{lem} \label{purity} For semi-pure $*$-selfdual indecomposable objects $M$ in ${\mathcal R}_n$ of Loewy length $\leq 3$ the lower and the upper Loewy series coincide. \end{lem} \noindent We now formulate certain {\it axioms} for an object ${\mathbb A}$ of ${\mathcal R}_n$. Along with the results of section \ref{sec:loewy-length} we will see in section \ref{sec:moves} that the translation functors $F_i(L^{\times \circ})$ verify these conditions. \begin{enumerate} \item ${\mathbb A} \in {\mathcal R}_n$ is indecomposable with Loewy structure $(L,A,L)$. \item ${\mathbb A}$ is $*$-selfdual. \item $L\in {\mathcal R}_n(\varepsilon) $ is irreducible and satisfies theorem \ref{mainthm} with $A\in {\mathcal R}_n(-\varepsilon)$. \item $\tilde {\mathbb A}:=DS({\mathbb A}) = \tilde {\mathbb A}^+ \oplus \Pi(\tilde {\mathbb A}^-)$ is the direct sum of $\tilde {\mathbb A}^+ :=H^+({\mathbb A})$ and $\tilde {\mathbb A}^- = H^-({\mathbb A})$ such that $\tilde {\mathbb A}^+= \bigoplus_{\nu} \tilde {\mathbb A}_{\nu}$ and $\tilde {\mathbb A}^-= \bigoplus_{\overline\nu} \tilde {\mathbb A}_{\overline\nu}$ with indecomposable objects $\tilde{\mathbb A}_\nu \in {\mathcal R}_{n-1}$ of Loewy structure $\tilde{\mathbb A}_\nu = (\tilde L_\nu,\tilde A_\nu,\tilde L_\nu)$ resp. $\tilde{\mathbb A}_{\overline\nu} \in {\mathcal R}_{n-1}$ of Loewy structure $\tilde{\mathbb A}_{\overline\nu} = (\tilde L_{\overline\nu},\tilde A_{\overline\nu},\tilde L_{\overline\nu})$.
\item All $\tilde L_\nu$ and $\tilde L_{\overline\nu}$ are irreducible, with $\tilde L_\nu \in {\mathcal R}_{n-1}(\varepsilon)$ and $\tilde L_{\overline\nu} \in {\mathcal R}_{n-1}(-\varepsilon)$; furthermore $\tilde A_\nu \in {\mathcal R}_{n-1}(-\varepsilon)$ and $\tilde A_{\overline\nu} \in {\mathcal R}_{n-1}(\varepsilon)$. \item For each $\mu=\nu$ (resp. $\mu=\overline\nu$) there exist irreducible {\it detecting} objects $$A'_\mu \subseteq \tilde A_\mu \ ,$$ also contained in $H^+(A)$ (resp. in $H^-(A)$), such that $$Hom_{{\mathcal R}_{n-1}}(A'_\mu,H^\pm(L))=0 \ \ \text{ and } \ \ Hom_{{\mathcal R}_{n-1}}(A'_\mu,\bigoplus _{\rho\neq \nu} \tilde A_\rho)=0\ .$$ \end{enumerate} {\bf Remark}. For $*$-selfdual indecomposable objects as above the layers (graded pieces) of the upper and lower Loewy filtrations coincide, since otherwise proposition \ref{ext-0} would give a contradiction. In the situation above we assume that ${\mathbb A}$ is $*$-selfdual of Loewy length 3 with socle $socle({\mathbb A}) \cong L$ and $cosocle({\mathbb A}) \cong socle({\mathbb A})^* \cong L^* \cong L$ and middle layer $A$. {\bf Remark}. For the later applications we notice that we will construct the detecting objects $A'_\mu$ in $H^+(A^{down})$ (resp. $H^-(A^{down})$) where $A^{down}$ will be an accessible summand of $A$. By induction we will later also know that these submodules $A'_\mu$ already satisfy theorem \ref{mainthm}. Hence it suffices to check the properties $A'_\mu \subseteq \tilde A_\mu$ and $A'_\mu \subset H^{\pm}(A)$, since by the main theorem (valid for summands of $A^{down}$) these already imply the stronger assertion made in the axiom telling whether $A'_\mu$ appears in $H^+(A)$ or $H^-(A)$. Notice $A'_\mu \subseteq \tilde A_\mu$ and $\tilde A_\mu \in {\mathcal R}_{n-1}(\mp \varepsilon)$ depending on $\mu=\nu$ resp. $\overline \nu$. On the other hand $A^{down} \subset A\in {\mathcal R}_n(-\varepsilon)$.
Hence, if the main theorem is valid for $A^{down}$, we get $A'_\mu \in H^+(A)$ for $\mu=\nu$ and $A'_\mu \in H^-(A)$ for $\mu=\overline\nu$. \begin{prop} \label{3} Under the assumptions on ${\mathbb A}$ from above the $H^\pm(A)$ are semisimple objects in ${\mathcal R}_{n-1}(\mp \varepsilon)$. \end{prop} We will prove the key proposition \ref{3} below after listing some of its consequences. {\it The ring homomorphism $d$}. As an element of the Grothendieck group $K_0({\mathcal R}_{n-1})$ we define for a module $M \in {\mathcal R}_n$ $$d(M)= H^+(M) - H^-(M)\ .$$ Notice that $d$ is additive by lemma \ref{hex}. Furthermore $$K_0(T_n) = K_0({\mathcal R}_n) \oplus K_0(\Pi {\mathcal R}_n) = K_0({\mathcal R}_n) \otimes (\mathbb Z \oplus \mathbb Z\cdot \Pi) \ .$$ We have a commutative diagram $$ \xymatrix{ K_0(T_n) \ar[d]_{DS} \ar[r] & K_0({\mathcal R}_n) \ar[d]^d \cr K_0(T_{n-1}) \ar[r] & K_0({\mathcal R}_{n-1}) \cr} $$ where the horizontal maps are surjective ring homomorphisms defined by $\Pi \mapsto -1$. Since $DS$ induces a ring homomorphism, it is easy to see that $d$ defines a ring homomorphism. The assertion of the last proposition implies that $H^+(A)$ and $H^-(A)$ have no common constituents in ${\mathcal R}_{n-1}$ and that they are semisimple. Therefore $d(A)= H^+(A) - H^-(A) \in K_0({\mathcal R}_{n-1})$ uniquely determines $H^{\pm}(A)$ up to isomorphism. By the additivity of $d$ and $d({\mathbb A}) = \tilde{\mathbb A}$ we get $2d(L) + d(A)= 2\tilde L + \tilde A$ in $K_0({\mathcal R}_{n-1})$.
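Rearranging, this additivity relation can be recorded explicitly as
$$ d(A) \ =\ d({\mathbb A}) - 2d(L) \ =\ (2\tilde L + \tilde A) - 2d(L) \ =\ \tilde A \ +\ 2(\tilde L - d(L)) \ ,$$
which determines $d(A)$, and hence $H^\pm(A)$, once $d(L)$ and $DS({\mathbb A})$ are known.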
Hence \begin{cor}\label{determine-H+} $H^+(A)\in {\mathcal R}_{n-1}(-\varepsilon)$ and $H^-(A)\in {\mathcal R}_{n-1}(\varepsilon) $ are uniquely determined by the following formula in $K_0({\mathcal R}_{n-1})$ \[ H^+(A) - H^-(A) = d(A)= \tilde A + 2(\tilde L - d(L)) \ .\] \end{cor} We later apply this in situations where $\tilde L - d(L) = (-1)^{i+n}L^{aux}$ holds by lemmas \ref{tildeL} and \ref{tildeLII} and $A' - \tilde A = 2(-1)^{i+n}L^{aux}$ holds by lemmas \ref{commute} and \ref{comII}, for some object $L^{aux}$. Here $A'$ denotes the normalized derivative of $A$, introduced in section \ref{signs}, defining a homomorphism \[ {}': K_0({\mathcal R}_n) \to K_0({\mathcal R}_{n-1}) \ .\] Hence the last corollary implies the following theorem which, applied repeatedly, proves theorem \ref{mainthm} by induction. \begin{thm} \label{derive} Under the axioms on ${\mathbb A}$ from above $d(A) = H^+(A) - H^-(A)$ is the derivative $A'$ of $A$. \end{thm} {\it Proof of proposition \ref{3}}. Step 1). Assumption (4) implies $H^+({\mathbb A})=\tilde {\mathbb A}^+$ and $H^-({\mathbb A})=\tilde {\mathbb A}^-$ in ${\mathcal R}_{n-1}$. Step 2).
Axiom (1) on the Loewy structure of ${\mathbb A}$ therefore gives exact hexagons in ${\mathcal R}_{n-1}$ for $K:= Ker({\mathbb A} \to L)$ using $K/L \cong A$: $$ \xymatrix@+0.3cm{ H^+(K) \ar[r] & \tilde{\mathbb A}^+ \ar[r]^-{H^+(p)} & H^+(L) \ar[d]\cr H^-(L) \ar[u]^\delta & \tilde{\mathbb A}^- \ar[l]^{H^-(p)} & H^-(K) \ar[l] }\ \quad \xymatrix@R+0.3cm{ H^+(K) \ar[r] & H^+(A) \ar[r] & H^-(L) \ar[d]\cr H^+(L) \ar[u]^-{H^+(j)} & H^-(A) \ar[l] & H^-(K) \ar[l] }\ $$ Step 3) Assumptions (3), (4) and (5) on the Loewy structure of the $\tilde{\mathbb A}_\nu$ and $H^{\pm}(L)$ imply the following factorization property for $\tilde{\mathbb A}^+$ (and then similarly also for $\tilde{\mathbb A}^-$) $$ \xymatrix{ \tilde {\mathbb A}^+= \bigoplus_{\nu} \tilde{\mathbb A}_\nu \ar[dr]_{\oplus q_\nu} \ar[rr]^{H^+(p)} & & H^+(L) \cr & \bigoplus_{\nu} \tilde L_\nu \ar@{.>}[ur]^{\exists !}_{\oplus p_\nu} & } \ $$ Step 4) Let $\Sigma$ be the set of all $\nu$ such that $p_\nu=0$. (Similarly let $\overline\Sigma$ be the set of all $\overline\nu$ such that $p_{\overline\nu}=0$). Then we obtain exact sequences $$ \xymatrix@C-0.1cm{ & & H^+(L)\ar[d]^{H^+(j)} & & \cr 0\to \bigoplus_{\overline\nu\notin \overline\Sigma}\tilde L_{\overline\nu} \ar[r] & H^-(L)\ar[r]^\delta & H^+(K) \ar[d] \ar[r] & \bigoplus_{\nu\in\Sigma} \tilde{\mathbb A}_\nu \oplus \bigoplus_{\nu\notin\Sigma} \tilde K_\nu \ar[r] & 0 \cr & A'_\nu \ar@{.>}[ur]\ar[dr]^0 \ar@{^{(}->}[r] & H^+(A) \ar[d] & & \cr & & H^-(L) & & \cr } $$ Step 5) The detecting object $A'_\nu \hookrightarrow H^+(A)$ has trivial image in $H^-(L)$ by axiom (6), hence can be viewed as a quotient object of $H^+(K)$. Again by axiom (6) we can then view $A'_\nu$ as a nontrivial quotient object of $$ H^+(K)/(I + \delta(H^-(L))) $$ where $ I := H^+(j)(H^+(L))$ is the image of $H^+(L)$ in $H^+(K)$.
Step 6) The cosocle of $\bigoplus_{\nu\in\Sigma} \tilde{\mathbb A}_\nu \oplus \bigoplus_{\nu\notin\Sigma} \tilde K_\nu $ is $\bigoplus_{\nu\in\Sigma} \tilde L_\nu \oplus \bigoplus_{\nu\notin\Sigma} \tilde A_\nu $ by assumption (4) on the Loewy structure of $\tilde{\mathbb A}_\nu, \tilde K_\nu$. Step 7) The simple quotient object $A'_\nu$ of $H^+(K)$ can be viewed as a nontrivial quotient object of the cosocle of $H^+(K)$ by step 5). We have an exact sequence $$ H^-(L)/\bigoplus_{\overline\nu\notin \overline\Sigma}\tilde L_{\overline\nu} \to cosocle(H^+(K)) \to cosocle( \bigoplus_{\nu\in\Sigma} \tilde{\mathbb A}_\nu \oplus \bigoplus_{\nu\notin\Sigma} \tilde K_\nu) \to 0$$ and we can view $A'_\nu$ as a nontrivial quotient object of $$ cosocle( \bigoplus_{\nu\in\Sigma} \tilde{\mathbb A}_\nu \oplus \bigoplus_{\nu\notin\Sigma} \tilde K_\nu) \ =\ \bigoplus_{\nu\in\Sigma} \tilde L_\nu \oplus \bigoplus_{\nu\notin\Sigma} \tilde A_\nu $$ by step 5) and 6). Notice that we can consider $A'_\nu$ for arbitrary $\nu$. For $\nu\in \Sigma$ the last assertion contradicts axiom (6): $$ Hom_{{\mathcal R}_{n-1}}(\bigoplus_{\mu\in\Sigma} \tilde L_\mu \oplus \bigoplus_{\mu\notin\Sigma} \tilde A_\mu, A'_\nu) \ = \ 0 \ .$$ This contradiction forces $$ \Sigma = \emptyset \ \ \text{ and similarly } \ \ \overline\Sigma = \emptyset \ ,$$ so we obtain two exact sequences $$ \xymatrix@C+0.5cm{ 0 \ar[r] & \bigoplus_{\overline\nu}\tilde L_{\overline\nu} \ar[r] & H^-(L) \ar[r] & H^+(K) \ar[r] & \bigoplus_\nu \tilde K_\nu \to 0} $$ $$ \xymatrix@C+0.5cm{ 0 \ar[r] & \bigoplus_{\nu}\tilde L_{\nu} \ar[r] & H^+(L) \ar[r] & H^-(K) \ar[r] & \bigoplus_{\overline\nu} \tilde K_{\overline\nu} \to 0} $$ Step 8) The last step 7) proves that $$ \text{\it $H^+(p)$ is injective on the cosocle $\bigoplus_\nu \tilde L_\nu$ of $H^+({\mathbb A})$} \ .$$ Let $i: L\hookrightarrow {\mathbb A}$ be the composition of $j: L\hookrightarrow K$ and the inclusion $K\hookrightarrow {\mathbb A}$. 
Then $i: L \hookrightarrow {\mathbb A}$ is the $*$-dual of the projection $p: {\mathbb A} \twoheadrightarrow L$ by the axiom (2). Hence by $*$-duality we get from the previous assertion on $H^+(p)$ the following assertion $$ \text{ \it $H^+(i)$ surjects onto the socle $\bigoplus_\nu \tilde L_\nu$ of $H^+({\mathbb A})$} \ .$$ Now considering $$ \xymatrix{ L \ar@{^{(}->}[d]_j \ar@{^{(}->}[dr]^i & \cr K \ar@{^{(}->}[r] & {\mathbb A} } \quad \xymatrix{ \cr \text{and} } \quad \xymatrix{ I \ar@{^{(}->}[d] \ar@{->>}[dr] & \cr socle(H^+(K)) \ar[r] & socle(\tilde {\mathbb A}^+) \cong \bigoplus_\nu \tilde L_\nu } $$ we see that $\bigoplus_\nu \tilde L_\nu$ can also be embedded into the semisimple $I$ as a submodule $ \bigoplus_\nu \tilde L_\nu \ \hookrightarrow\ I $. Step 9) Recall the following diagram $$ \xymatrix{ & & & I \ar[d] & & \cr & \bigoplus_{\nu}\tilde L_{\nu} \ar@{^{(}->}[r] & H^-(L) \ar[r]^\delta & H^+(K) \ar[d]\ar[r]^\pi & \bigoplus_\nu \tilde K_\nu \ar[r] & 0 \cr & & & H^+(A) & & }$$ Since $I$ is in ${\mathcal R}_{n-1}(\varepsilon)$ and $H^-(L) \in {\mathcal R}_{n-1}(-\varepsilon)$ by our axioms, we also have $$ \fbox{$ \delta(H^-(L)) \ \cap \ I \ = \ \{ 0\} $} \ .$$ Hence the composite of $\pi$ and the inclusion $I \hookrightarrow H^+(K)$ maps the semisimple module $I$ injectively into the socle of $\bigoplus_\nu \tilde K_\nu$. Since $socle(\bigoplus_\nu \tilde K_\nu) = \bigoplus_\nu \tilde L_\nu$ and since $I$ contains $\bigoplus_\nu \tilde L_\nu $ as a submodule, this implies that $$ \xymatrix{ \pi: I \ \ar[rr]^-\sim & & \ socle( \bigoplus_\nu \tilde K_\nu) \cong \bigoplus_\nu \tilde L_\nu } \ $$ is an isomorphism. Notice $(\bigoplus_\nu \tilde K_\nu)/socle( \bigoplus_\nu \tilde K_\nu) \cong \bigoplus_\nu \tilde A_\nu$.
Step 10) The last isomorphism of step 9) gives the exact sequence $$ 0 \to \bigoplus_{\overline\nu} \tilde L_{\overline\nu} \to H^-(L) \to \Bigl( H^+(K)/I \Bigr) \to \bigoplus_\nu \tilde A_\nu \to 0 \ .$$ By our assumptions $H^-(L)$ is in ${\mathcal R}_{n-1}(-\varepsilon)$, and hence semisimple. Furthermore all $\tilde A_\nu$ are semisimple and contained in ${\mathcal R}_{n-1}(-\varepsilon)$. Hence by proposition \ref{ext-0} $H^+(K)/I$ is semisimple and contained in ${\mathcal R}_{n-1}(-\varepsilon)$. Step 11) By step 10) and the exact hexagon $$ \xymatrix{ & H^+(K) \ar[r] & H^+(A) \ar[r] & H^-(L) \ar[dd]\cr I \ar@{^{(}->}[ur] & & & \cr & H^+(L) \ar@{->>}[ul] \ar[uu]_{H^+(j)} & H^-(A) \ar[l] & H^-(K) \ar[l] }\ .$$ $H^+(A)$ defines an extension of the semisimple module $H^+(K)/I $ by a submodule of $H^-(L)$ $$ 0 \to H^+(K)/I \to H^+(A) \to Ker\Bigl(H^-(L)\to H^-(K)\Bigr) \to 0 \ .$$ Since $H^+(K)/I$ and $Ker(H^-(L)\to H^-(K))$ are both in ${\mathcal R}_{n-1}(-\varepsilon)$, proposition \ref{ext-0} implies that $$ H^+(A) \ \cong \ \Bigl( H^+(K)/I \Bigr) \ \oplus \ Ker\Bigl(H^-(j): H^-(L)\to H^-(K)\Bigr) \ $$ is semisimple and contained in ${\mathcal R}_{n-1}(-\varepsilon)$. The first summand has been computed above. Similarly then $$ H^-(A) \ \cong \ \Bigl( H^-(K)/\overline I \Bigr) \ \oplus \ Ker\Bigl(H^+(j): H^+(L)\to H^+(K)\Bigr) \ $$ is semisimple and contained in ${\mathcal R}_{n-1}(\varepsilon)$. \qed {\bf Example}. Recall the indecomposable $*$-selfdual objects ${\mathbb A}_{S^i}$ in ${\mathcal R}_{n}, n \geq 2$ for $i=1,2,...$ with Loewy structure $(L,A,L)$ where $L=S^{i-1}$ and $$ A = S^i \oplus S^{i-2} \oplus \delta_n^i \cdot Ber_n^{-1} \ .$$ Concerning the notations: $\delta_n^i$ denotes Kronecker's delta and $S^{-1}=0$. The conditions (1)-(5) are satisfied for $\varepsilon =(-1)^{i-1}$ and $A'=S^{i-2}$.
Indeed condition (5) follows, since by induction on $i$ one can already assume that $H^-(L)=H^-(S^{i-1})$ is $Ber^{-1}$ or zero and that $H^+(S^{i-2})$ contains $S^{i-2}$. Then by induction on $i$ the computation of $H^{\pm}(A)$ in terms of $\tilde A, \tilde L, H^\pm(L)$ from above easily gives, as in section \ref{sec:strategy}, the following result \begin{prop} Suppose $n\geq 2$. Then for the functor $DS: {\mathcal R}_n \to T_{n-1}$ of Duflo-Serganova we obtain $DS(Ber_{n}) = \Pi(Ber_{n-1})$ and \begin{enumerate} \item $DS(S^i) = S^i\ $ for $i< n-1$, \item $DS(S^{i}) = S^{i} \oplus \Pi^{n-1-i} Ber^{-1}$ for $i\geq n-1$. \end{enumerate} \end{prop} \noindent \section{Moves}\label{sec:moves} We now verify the conditions on the indecomposable objects ${\mathbb A}$ of section \ref{sec:inductive} for the translation functors $F_i(L(\lambda_{\times \circ}))$. Additionally we verify the commutation rules stated in and after corollary \ref{determine-H+}. Instead of working directly with the irreducible representation $L$ we use the associated plot as in sections \ref{derivat} and \ref{sec:loewy-length}. Recall that a plot $\lambda$ is a map $\lambda: \mathbb Z \to \{\boxplus,\boxminus\}$. We also use the notation $\boxplus_i$ to indicate that $\lambda(i) = \boxplus$, and likewise for $\boxminus$. For an overview of the algorithms I and II used in this section see section \ref{sec:strategy}. Let $L=(I,K)$ for $I=[a,b]$ be a segment with sectors $S_1,\dots,S_k$ from left to right. Suppose $S_j =[i,i+1]$ is a sector of rank 1.
Then the segment may be visualized as $$ L \ = \ (S_1 \cdots S_{j-1} [\boxplus_i,\boxminus_{i+1}] S_{j+1} \cdots S_k) \ .$$ We define the {\it upward move} of the segment $L$ as the plot defined by the {\it two segments} with intervals $[a,i-1]$ and $[i+1,b+1]$ $$ L^{up} \ =\ (S_1\cdots S_{j-1})\ \boxminus_i \ (\int(S_{j+1}\cdots S_k)) \ .$$ Similarly we define the {\it downward move} of the segment $L$ as the plot defined by the {\it two segments} with intervals $[a-1,i]$ and $[i+2,b]$ $$ L^{down} \ = \ (\int(S_1\cdots S_{j-1}))\ \boxminus_{i+1}\ (S_{j+1}\cdots S_k) \ .$$ Furthermore for $r\neq j$ we define additional $r$-th {\it internal} lower resp. upper {\it downward moves} $L^{down}_r$ by the plots associated\footnote{For $r=j-1$ or $r=j+1$ the inner integral over the empty sector is understood to give the sector $([i-1,i],\{i-1\})$ respectively $([i+1,i+2],\{i+1\})$.} to the {\it single segments} $$ (S_1 \cdots S_{r-1} \int\bigl(S'_r \int (S_{r+1} \cdots S_{j-1})\bigr)\ S_{j+1} \cdots S_k) $$ for each $1\leq r \leq j-1$ respectively $$ (S_1 \cdots S_{j-1} \int\bigl(\int(S_{j+1} \cdots S_{r-1}) \ S'_r\bigr)\ S_{r+1} \cdots S_k) $$ for each $j+1\leq r \leq k$. Explaining the notion 'internal', notice that the segments defined by these internal downward moves have the same underlying interval $I=[a,b]$ as the segment $L$ we started from. We remark that the last formulas are reminiscent of integration by parts. Formally, by setting $L^{down}_r := L^{down}$ for $r=j$, we altogether obtain $k$ downward moves and one upward move. All these moves preserve the rank. The plot $L$ has a sector $[i,i+1]$ of rank 1.
The {\it auxiliary plot} $L^{aux}$ attached to $L$ (and $[i,i+1]$) is the plot of rank $r(L)-1$ defined by two segments with intervals $[a,i-1]$ and $[i+2,b]$ $$ L^{aux}\ = \ (S_1\cdots S_{j-1})\ \boxminus_i \boxminus_{i+1} (S_{j+1}\cdots S_k) $$ and we also consider $$ L^{\times\circ} = (S_1\cdots S_{j-1})\ \times_i \circ_{i+1}\ (S_{j+1}\cdots S_k) \ .$$ {\bf Algorithm I} (lowering sectors). For a plot with $k$ sectors $S_\nu$ with ranks $r_\nu=r(S_\nu) \geq 0$ and the distances $d_\nu \geq 0$ for $\nu=1,\dots,k$ (from left to right) we formally define $r_{k+1} = r_{k+2} = \dots =0$ and $d_k=d_{k+1}=\dots= 0$. We can then compare different plots with respect to the lexicographic ordering of the sequences $$ (-r_1,d_1,-r_2,d_2,\dots ) \ .$$ Within the set of plots of fixed rank, say $n$, the minimum with respect to this ordering is attained if $r_1=n$, i.e. if there exists only one sector. {\it Algorithm I will be applied to given plots, say $\lambda_{\wedge\vee}$, with {\it more than one segment}}. The upshot is: In this situation one can always find a lexicographically smaller plot $L$ so that the given plot is of the form $\lambda_{\wedge\vee} = L^{up}$ and such that $L$ and all plots obtained by the moves $L^{down}_r$ of $L$ are strictly smaller than the starting plot $L^{up}$. Algorithm I is used for induction arguments to reduce certain statements (e.g. theorem \ref{mainthm}) to the case of plots with one segment. {\it Definition of $L$}. For a given plot, say $\lambda_{\wedge\vee}$, with more than one segment, $d_\nu>0$ holds for some $\nu$. So choose $j$ so that the distances $dist(S_1,S_2)=\dots=dist(S_{j-2},S_{j-1}) =0$ vanish for the sectors $S_1,\dots,S_{j-1}$ of $\lambda_{\wedge\vee}$, whereas the distance from $S_{j-1}$ to the next sector is positive. We temporarily write $S$ for this next sector of $\lambda_{\wedge\vee}$. Interpret $S=\int(S_{j+1} \cdots S_k)$ for some sectors $S_{j+1},\dots,S_k$. This is possible, but keep in mind that $S_j, ...
, S_k$ are not sectors of $\lambda_{\wedge\vee}$ but will be sectors of $L$, and this explains the notation. Indeed, for $i+1=min(S)$, we define $L$ to be $$ L \ = \ (S_1 \cdots S_{j-1}) ... d_{j-1} .... (S_jS_{j+1}\cdots S_k) ... d_k .... $$ with $S_{j}$ of rank 1 at the position $[i,i+1]$. To simplify notation we do not write further sectors to the right, since the sectors of $\lambda_{\wedge\vee}$ to the right of $S$ will not play an essential role in the following. Indeed, they will appear verbatim in the sector structure of $L$ up to some distance shifts at the following positions $$ dist(S_{j-1},S) = 1+ d_{j-1} \quad , \quad dist(S,\text{next sector}) = d_k -1 \ .$$ Concerning the lexicographic ordering, $$dist(S_{j-1},S_j) =d_{j-1} < dist(S_{j-1},S) = 1+ d_{j-1}$$ shows that $L$ is smaller than $L^{up}= \lambda_{\wedge\vee}$. We leave it to the reader to check that also all $L^{down}_r$ are smaller than $L^{up}= \lambda_{\wedge\vee}$. Notice, here we apply the moves as in the preceding paragraph with the notable exceptions that \begin{enumerate} \item There may be further sectors beyond $S_k$. These are just appended, and do not define new moves. \item If $d_{j-1} \geq 1$ the sector $S_{j-1}$ has distance $>0$ to the sector $S_j$ and therefore does not define downward moves, so that only the downward moves $L^{down}_r$ for $r=j,...,k$ are relevant. \end{enumerate} In the later discussion we always display the {\it more complicated} case where $d_{j-1}=0$ (without further mention). For the case $d_{j-1}>0$ one can simply \lq{omit}\rq\ $S_1,...,S_{j-1}$, by just appending them in the same way as we agreed to \lq{omit}\rq\ sectors to the right of $S_k$. {\bf Construction of detecting objects for algorithm I}. Fix $L= L(\lambda_{\wedge\vee})$ with the sector $[i,i+1]$. Then $L$ is determined by its sectors. For the construction of detecting objects we are only interested in downward moves.
In the following it therefore suffices to keep track only of the sectors below $[i,i+1]$ in the segment containing the sector $[i,i+1]$. Notice that $L$ is a union of the sector $[i,i+1]$ and, say $s$, other sectors $S_\nu$. Let $S_1,\dots,S_{j-1}$ denote the sectors below $[i,i+1]$ in the segment of $[i,i+1]$. Hence $L$ is $$ \boxminus S_1 \cdots S_{j-1} [\boxplus_i \boxminus_{i+1}] $$ and the union of other disjoint sectors $S_\nu$ for $j+1 \leq \nu \leq s$. Then $L^{\times \circ}$ is $$ \boxminus S_1 \cdots S_{j-1} [\times_i \circ_{i+1}] $$ and the union of other disjoint sectors $S_\nu$ for $j+1 \leq \nu \leq s$. We define ${\mathbb A} = F_i L^{\times \circ}$. Then ${\mathbb A}$ is $*$-selfdual of Loewy length 3 with socle and cosocle $L$. The term $A$ in the middle is semisimple and the weights of its irreducible summands are given by $L^{up}$ and the $k$ downward moves of $L$ according to section \ref{sec:loewy-length}. To determine $\tilde {\mathbb A} = DS({\mathbb A})$ we use induction and lemma \ref{tildeA}. This implies that $\tilde {\mathbb A}$ is the direct sum of $\Pi^{m_{\nu}} \tilde {\mathbb A}_\nu$ for indecomposable objects $\tilde {\mathbb A}_\nu$ in ${\mathcal R}_{n-1}$, which uniquely correspond to the irreducible summands of $DS(L^{\times \circ})$. However, these correspond to the irreducible summands $\tilde L_\nu$ of $DS(L^{aux})$. Again by induction (now induction on the degree of atypicity) the summands of $DS(L^{\times\circ})$ respectively $DS(L^{aux})$ are already known to be given by the derivative of $\lambda_{aux}$.
These facts imply the next \begin{lem} \label{tildeL} We have $$ \tilde {\mathbb A} = \bigoplus_{\mu=1}^s \Pi^{m_{\mu}} \tilde {\mathbb A}_\mu$$ where each $\tilde {\mathbb A}_\mu \in{\mathcal R}_{n-1}$ has Loewy length 3 with irreducible socle and cosocle $\tilde L_\mu$ defined by the $s$ plots for $\mu=1,...,s$ $$ [\boxplus_i \boxminus_{i+1}] \cup S'_\mu \cup \bigcup_{\nu\neq \mu} S_\nu \ .$$ In particular, for $m_{aux}$ (which is congruent to $ i+n-1$ modulo 2), we get\footnote{assuming that theorem \ref{mainthm} holds for $L$, say by induction assumption.} $$ \fbox{$ DS(L) \cong \Pi^{m_{aux}} L^{aux} \ \oplus \ \bigoplus_{\mu=1}^s \Pi^{m_{\mu}} \tilde L_\mu $} \ .$$ Hence in $K_0({\mathcal R}_{n-1})$ $$ \fbox{$ d(L)\ =\ L' \ =\ \tilde L \ + \ (-1)^{i+n-1} \cdot L^{aux} $} \ .$$ \end{lem} Now each $\tilde A_\mu$ is determined from $\tilde L_\mu$ by applying certain upward and downward moves to $\tilde L_\mu$. We indicate that the segment of $\tilde L_\mu$ containing $[\boxplus_i \boxminus_{i+1}]$ has fewer than $r$ sectors, if $1 \leq \mu \leq j-1$. Indeed the union of the sectors of $\tilde L_\mu$ in the segment of $[\boxplus_i \boxminus_{i+1}]$ is $$ ... \boxminus S_{\mu+1} \cdots S_{j-1} [\boxplus_i \boxminus_{i+1}] S_{j+1} \cdots S_r \boxminus ... \ $$ for $\mu \leq j-2$ and $[\boxplus_i \boxminus_{i+1}] S_{j+1} \cdots S_r \boxminus ... $ for $\mu= j-1$.
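As a consistency check, combining the boxed formula $d(L) = \tilde L + (-1)^{i+n-1}\cdot L^{aux}$ with corollary \ref{determine-H+} gives in $K_0({\mathcal R}_{n-1})$
$$ d(A) \ =\ \tilde A + 2(\tilde L - d(L)) \ =\ \tilde A \ +\ 2(-1)^{i+n}\cdot L^{aux} \ ,$$
matching the formula $A' = \tilde A + 2(-1)^{i+n}\cdot L^{aux}$ of lemma \ref{commute} below.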
We are now able to define the {\it detecting objects} $A'_\mu \subseteq \tilde A_\mu$ for $\mu=1,...,s$ by $\tilde L_\mu^{down}$, given by induction as follows \begin{enumerate} \item $ (\int(S_1 \cdots S_{j-1})) \boxminus_{i+1} \cup (S'_\mu) \cup \bigcup_{j-1 <\ell\neq \mu} S_\ell $ for $\mu \notin \{1,...,j-1\}$, \item $S_1 \cdots S'_\mu (\int (S_{\mu +1} \cdots S_{j-1})) \boxminus_{i+1} S_{j+1} \cdots S_k \cup \bigcup_{k <\ell} S_\ell $ for $\mu \leq j-2$, \item $S_1 \cdots S_{j-2} \boxminus S'_{j-1} (\boxplus \boxminus_i) \boxminus_{i+1} S_{j+1} \cdots S_k \cup \bigcup_{k <\ell} S_\ell $ for $\mu= j-1$. \end{enumerate} It is therefore clear that the detecting object is different from all objects in $DS(L)$, which by induction are known to be given by the derivative of $L$. Furthermore $A'_\mu \subseteq \tilde A_\mu$. It requires some easy but tedious inspection to see that $A'_\mu$ is not contained in $\tilde A_\nu$ for $\nu\neq \mu$. Hence to see that the $A'_{\mu}$ are detecting objects, it suffices to show the next \begin{lem} The objects $A'_\mu$ are contained in $DS(A)$. If $L^{up}$ is stable, then $L$ is stable and $A'_\mu \subset H^+(A)\oplus H^-(A)$ for all $\mu$. \end{lem} {\it Proof}. Recall $$A \cong A^{up} \oplus A^{down} $$ for $ A^{up} := L^{up}$ and $A^{down} := \bigoplus_{r=1}^k L^{down}_r$. \noindent We do not know how to compute $DS(A^{up})$. However by induction we already know that the derivative computes $DS(A^{down})$. In $A^{down}\subset A$ we have the following objects $A_\mu$ \begin{enumerate} \item $ (\int(S_1 \cdots S_{j-1})) \boxminus_{i+1} \cup \bigcup_{j-1 <\ell} S_\ell $ for $\mu \notin \{1,...,j-1\}$, \item $S_1 \cdots \int( S'_\mu \ \int (S_{\mu +1} \cdots S_{j-1})) S_{j+1} \cdots S_k \cup \bigcup_{k <\ell} S_\ell $ for $\mu \leq j-2$, \item $S_1 \cdots S_{j-2} (\boxplus S'_{j-1} \boxplus \boxminus_i \boxminus_{i+1}) S_{j+1} \cdots S_k \cup \bigcup_{k <\ell} S_\ell $ for $\mu= j-1$.
\end{enumerate} Their derivative $DS(A_\mu)$ contains \begin{enumerate} \item $ (\int(S_1 \cdots S_{j-1}) \boxminus_{i+1}) \cup (S'_\mu) \cup \bigcup_{j-1 <\ell\neq \mu} S_\ell $ for $\mu \notin \{1,...,j-1\}$, \item $S_1 \cdots S'_\mu (\int (S_{\mu +1} \cdots S_{j-1})) \boxminus_{i+1} S_{j+1} \cdots S_k \cup \bigcup_{k <\ell} S_\ell $ for $\mu \leq j-2$, \item $S_1 \cdots S_{j-2} \boxminus S'_{j-1} (\boxplus \boxminus_i) \boxminus_{i+1} S_{j+1} \cdots S_k \cup \bigcup_{k <\ell} S_\ell $ for $\mu= j-1$. \end{enumerate} This proves $A'_{\mu} \subset DS(A_\mu)$ and hence our claim. \qed {\bf Commutation rule for algorithm I}. Now we discuss how moves commute with differentiation for a given $L$ as above. It is rather obvious from the definitions that for this we can restrict ourselves to the situation where $L$ is the single segment $$ L = (S_1 \cdots S_{j-1} \boxplus_i \boxminus_{i+1} S_{j+1} \cdots S_k) \ .$$ So let us assume this for simplicity of exposition. 1) {\it Computation of $\tilde A$}. Taking first the derivative we obtain $L^{aux}$ and $(k-1)$ plots $\tilde L_\mu$ of the form $$ S_1 \cdots S'_\mu (S_{\mu+1} \cdots S_{j-1} \boxplus_i \boxminus_{i+1} S_{j+1} \cdots S_k) \ $$ (lower group where $\mu \leq j-1$) respectively $$ (S_1 \cdots S_{j-1} \boxplus_i \boxminus_{i+1} S_{j+1} \cdots S_{\mu -1}) S'_\mu \cdots S_k \ $$ (upper group where $\mu \geq j+1$). The sign in the Grothendieck group attached to these is $(-1)^{a_\mu + n-1} =(-1)^{i+n-1}$ for $S_\mu = [a_\mu,b_\mu]$. Notice that $L^{aux}$ does not define any moves. The segment containing $\boxplus_i \boxminus_{i+1}$ (indicated by the brackets) defines the possible moves of each of these derived plots $\tilde L_\mu$. These are e.g. 
in the {\it lower group case} the upward move $$ \fbox{$ S_1 \cdots S'_\mu S_{\mu +1} \cdots S_{j-1} \boxminus_i \int (S_{j+1} \cdots S_k) $} \ $$ and the downward move $$ \fbox{$ S_1 \cdots S'_\mu \int(S_{\mu +1} \cdots S_{j-1}) \boxminus_{i+1} S_{j+1} \cdots S_k$} \ $$ and the internal upper/lower downward moves $$ \fbox{$ S_1 \cdots S'_\mu S_{\mu+1} \cdots S_{j-1} \int(\int ( S_{j+1} \cdots S_{r-1}) \ S'_r\ ) S_{r+1} \cdots S_k $} \ $$ $$ \fbox{$ S_1 \cdots S'_\mu S_{\mu+1} \cdots S_{r-1} \int (S'_r \int (S_{r+1} \cdots S_{j-1}))\ S_{j+1} \cdots S_k $} \ $$ 2) {\it Computation of the derivative $A'$}. Now we reverse the situation and first consider the moves of $L$, the upward and downward moves $$ L^{up} \ =\ S_1\cdots S_{j-1}\ \boxminus_i \ \int(S_{j+1}\cdots S_k) \ ,$$ $$ L^{down} \ = \ \int(S_1\cdots S_{j-1})\ \boxminus_{i+1}\ S_{j+1}\cdots S_k \ ,$$ and the internal downward moves (for lower sectors) $$ S_1 \cdots S_{r-1} \int\bigl(S'_r \int (S_{r+1} \cdots S_{j-1})\bigr)\ S_{j+1} \cdots S_k $$ respectively (for upper sectors) $$ S_1 \cdots S_{j-1} \int\bigl(\int(S_{j+1} \cdots S_{r-1}) \ S'_r\bigr)\ S_{r+1} \cdots S_k \ .$$ If we differentiate $L^{up}$, we get the plots of the form $$ \fbox{$ S_1\cdots S'_\mu \cdots S_{j-1}\ \boxminus_i \ \int(S_{j+1}\cdots S_k) $} $$ with sign $(-1)^{s_\mu + n-1} = (-1)^{i +n-1}$ and similarly $$ L^{aux} = S_1\cdots S_{j-1}\ \boxminus_i \ \boxminus_{i+1} S_{j+1}\cdots S_k $$ with sign $(-1)^{i+n}$. If we differentiate $L^{down}$, we get $ \int(S_1\cdots S_{j-1})\ \boxminus_{i+1}\ S_{j+1}\cdots S'_\mu \cdots S_k $ and similarly $$ L^{aux} = S_1\cdots S_{j-1}\ \boxminus_i \ \boxminus_{i+1} S_{j+1}\cdots S_k \ $$ with sign $(-1)^{i+n}$.
If we differentiate the plots defined by the internal moves ({\it lower group}, where we differentiate at $\mu \leq j-1$) we get the plots of the form $$ \fbox{$ S_1 \cdots S_{r-1} S'_r \int (S_{r+1} \cdots S_{j-1})\boxminus_{i+1} \ S_{j+1} \cdots S_k $} $$ with sign $(-1)^{s_r + n-1} = (-1)^{i+n-1}$ together with $$ \fbox{$ S_1 \cdots S'_\mu S_{\mu+1} \cdots S_{j-1} \int(\int ( S_{j+1} \cdots S_{r-1}) \ S'_r\ ) S_{r+1} \cdots S_k $} \ $$ $$ \fbox{$ S_1 \cdots S'_\mu S_{\mu +1} \cdots S_{r-1} \int\bigl(S'_r \int (S_{r+1} \cdots S_{j-1})\bigr)\ S_{j+1} \cdots S_k $} $$ of sign $(-1)^{i+n-1}$ respectively similar terms for the upper group, where we differentiate at $\mu \geq j+1$. Altogether, besides two additional signed plots of the form $L^{aux}$, these give precisely the plots obtained before. This implies \begin{lem} \label{commute} The differential of the moves of $L$ gives the term $2(-1)^{i+n} \cdot L^{aux}$ plus the moves of the differential of $L$, i.e. $$ \fbox{$ {A'} \ =\ \tilde A \ +\ 2(-1)^{i+n} \cdot L^{aux} $} \ $$ holds in $K_0({\mathcal R}_{n-1})$. \end{lem} {\bf Algorithm II} (melting sectors). Suppose $\lambda_{\wedge\vee}$ is a plot with a single segment and at least two sectors, the first two being $[a,i]$ and $[i+1,b]$, with right boundary point $i$ respectively left boundary point $i+1$. In algorithm II we melt the first two adjacent sectors $[a,i]$ and $[i+1,b]$ together into a single sector $S^{melt}$ to obtain a new plot $\lambda_{\vee\wedge}$ so that $$supp(\lambda_{\wedge\vee}) - \{i+1\} = supp(\lambda_{\vee\wedge}) - \{i\} \ .$$ This new plot $\lambda_{\vee\wedge}$ again has a unique segment with the same underlying interval as the plot $\lambda_{\wedge\vee}$. But the sector structure is different, since the number of sectors decreases by one. Notice that, as opposed to algorithm I, the interval $[i,i+1]$ does not define a sector of the original plot $\lambda_{\wedge\vee}$.
However $[i,i+1]$ defines a sector of the 'internal' plot $$L_{int} := \partial(S^{melt}) \ ,$$ with sector structure say $$ L_{int} = S_1 \cdots S_{j-1} [\boxplus_i \boxminus_{i+1}] S_{j+1} \cdots S_k \ ,$$ so that $$ \lambda_{\vee\wedge} = (\int L_{int}) \ \text{ other sectors} \quad , \quad \lambda_{\times\circ} := \ (\int (L_{int})^{\times\circ} ) \ \text{ other sectors} $$ We similarly define for $r=1,...,k$ and $r\neq j$ the plots $$ \lambda^{down}_r \ := \ (\int (L_{int})^{down}_r )\ \text{ other sectors} \ .$$ Finally $\ \lambda_{\wedge\vee} = (\int (L_{int})^{up} )\ \text{ other sectors}$, which is the plot we started from. Since $\int (L_{int})^{up}$ has two sectors, all the plots $\lambda^{down}_r$ for $1 \leq r\neq j \leq k$ have fewer sectors than the plot $\lambda_{\wedge\vee}$. Indeed, the plots $\int (L_{int})^{down}_r$ are irreducible as an easy consequence of the integral criterion. {\bf Construction of detecting objects for algorithm II}. Fixing $\lambda_{\wedge\vee}$ as above, $$ {\mathbb A} = F_i(L(\lambda_{\times\circ})) \ = \ (L,A,L) $$ defines a $*$-selfdual object in ${\mathcal R}_n$ of Loewy length 3 with socle and cosocle $L$, where $$ L= L(\lambda_{\vee\wedge}) $$ and $A=A^{up} \oplus A^{down}$ for $ A^{up} = L(\lambda_{\wedge\vee}) $ and $ A^{down} = \bigoplus_{r=1, r\neq j}^k L(\lambda_r^{down})$. To determine $DS({\mathbb A}) = \bigoplus_\mu \Pi^{m_{\mu}} \tilde{\mathbb A}_\mu$ we use induction and lemma \ref{tildeA}. This implies that $\tilde {\mathbb A}$ is the direct sum of $\Pi^{m_{\mu}} \tilde {\mathbb A}_\mu$ for indecomposable objects $\tilde {\mathbb A}_\mu$ in ${\mathcal R}_{n-1}$, which uniquely correspond to the irreducible summands $\tilde L_\mu$ of $DS(L^{\times\circ})$. But by induction (now induction on the degree of atypicity!) the irreducible summands of $DS(L^{\times\circ})$, which determine the irreducible modules $\tilde L_\mu$, can be computed by the derivative of $\lambda_{aux}$.
Since in the present situation replacing $i,i+1$ by $\times,\circ$ commutes with the derivative, these facts imply the next \begin{lem} \label{tildeLII} If $L$ has $s$ sectors, for the melting algorithm we have $$ \tilde {\mathbb A} = \bigoplus_{\mu=1}^s \Pi^{m_{\mu}} \tilde {\mathbb A}_\mu $$ where each $\tilde {\mathbb A}_\mu \in{\mathcal R}_{n-1}$ has Loewy length 3 of Loewy structure $(\tilde L_\mu, \tilde A_\mu, \tilde L_\mu)$ with irreducible socle and cosocle $\tilde L_\mu$. For the various summands, i.e. for varying $\mu$, the socles $\tilde L_\mu$ are, up to the shift $m_\mu$, defined by the $s-1$ different plots arising from the derivative $$ (\int L_{int})\ \text{(other sectors)}' $$ together with the plot $$ L_{int} \ \text{(other sectors)} \ .$$ In particular\footnote{assuming that theorem \ref{mainthm} holds for $L$ and $L^{\times\circ}$, say by induction assumption.}, if $L$ has $s$ sectors, $ DS(L) \cong \bigoplus_{\mu=1}^s \Pi^{m_{\mu}} \tilde L_\mu $. This gives in $K_0({\mathcal R}_{n-1})$ the formula $$ \fbox{$ d(L)\ =\ L' \ =\ \tilde L $} \ .$$ \end{lem} \begin{cor} \label{cohomII} In the situation of the last lemma the morphisms $$H^i(p): H^i({\mathbb A}) \to H^i(L)$$ are surjective for all $i\in \mathbb Z$. \end{cor} {\it Proof}. We already know that $H^{\pm}(p) : H^{\pm}({\mathbb A}) \to H^{\pm}(L)$ induces injective maps on the cosocle of $H^{\pm}({\mathbb A})$. By lemma \ref{tildeLII} these induced maps are therefore bijections between the cosocle of $H^{\pm}({\mathbb A})$ and $H^{\pm}(L)$. In particular the morphisms $H^{\pm}(p) : H^{\pm}({\mathbb A}) \to H^{\pm}(L)$ are surjective. This implies the assertion. \qed This being said, note that $2d(L)+d(A) = d({\mathbb A}) =d(\tilde {\mathbb A}) = 2\tilde L + \tilde A$ together with the assertion $d(L) = \tilde L$ from lemma \ref{tildeLII} above implies $d(A) = \tilde A$. Any $\tilde L_\mu$ defines a nontrivial term $\tilde A_\mu$.
We claim that any irreducible summand $A'_\mu \subset \tilde A_\mu$ is now a detecting object. Indeed any summand $A'_\mu $ of $\tilde A$ appears in $H^+(A)$ by the formula $d(A) = H^+(A) - H^-(A) = \tilde A$. Checking the possible moves that define the constituents of $\tilde A_\mu$ from $\tilde L_\mu$ it is clear that $A'_\mu$ is not a constituent of any $\tilde A_\nu$ for $\nu\neq \mu$. Hence \begin{lem} Detecting objects $A'_\nu$ exist for algorithm II. \end{lem} {\bf Commutation rule for algorithm II}. Now we discuss how moves commute with differentiation for a given $L$ as above. It is rather obvious from the definitions that we can restrict ourselves for this to the situation where the segment of the plot $\lambda_{\wedge\vee}$ has only two sectors. In other words we claim that we can assume without restriction of generality that the term 'other sectors' does not appear, so that $s=2$ holds in the last lemma \ref{tildeLII}. The reason for this is that moves for $L_{int} \ (\text{other sectors}) $ are the same as for $\int L_{int}\ (\text{other sectors})'$, since by \cite{Brundan-Stroppel-2} the relevant moves are moves 'within' the sector $\int L_{int}$. Hence for the proof of the next lemma we can assume that $L=\int L_{int}$ has a unique sector so that $d(L)=\tilde L=L'= L_{int}$, which has a single segment. \begin{lem} \label{comII} The differential of the moves of $L$ gives the moves of the differential of $L$, i.e. $$ \fbox{$ {A'} \ =\ \tilde A \ $} \ $$ holds in $K_0({\mathcal R}_{n-1})$. \end{lem} {\it Proof}. Without restriction of generality we can assume that the plot $\lambda_{\vee\wedge}$ we are starting with is a segment with only two sectors, so that $L= \int L_{int}$. Let the single segment of $L_{int}$ have the form $$ S_1 \cdots S_{j-1} [\boxplus_i \boxminus_{i+1}] S_{j+1} \cdots S_k \ $$ with $k$ sectors $S_1,...,S_k$ where the underlying interval of $S_j$ is $[i,i+1]$. 1) {\it Computation of $\tilde A$}.
According to \cite{Brundan-Stroppel-2}, \cite{Weissauer-gl} the constituents of $\tilde{\mathbb A} = \bigoplus_{\mu=1}^s \Pi^{m_{\mu}} \tilde{\mathbb A}_\mu$ are obtained from the socle module $\tilde L_\mu$ of $\tilde{\mathbb A}_\mu$ by moves. The last lemma shows that $\tilde L = \bigoplus_\mu \tilde L_\mu$ is the derivative $L' = (\int L_{int})' = L_{int}$ of $L$ up to a shift determined by the sign factor $(-1)^{a + n-1}$. Since $s=1$ by assumption, $\tilde {\mathbb A}= (\tilde L, \tilde A, \tilde L)$ is an indecomposable module with socle $$ \tilde L \ = \ L_{int} \ .$$ Up to a parity shift by $m=a+n-1$, the module $\tilde A$ therefore is the direct sum $$ \tilde A \ =\ (L_{int})^{up} \oplus\ \bigoplus_{r=1}^k \ (L_{int})^{down}_r \ $$ of the irreducible modules obtained from $\tilde L = L_{int}$ by the unique upward and the $k$ downward moves. Notice that $(L_{int})^{down}_j = (L_{int})^{down}$ is the 'nonencapsulated' downward move in the sense of \cite{Weissauer-gl}. Here it occurs, since $[i,i+1]$ is one of the sectors of $\tilde L$. 2) {\it Computation of the derivative $A'$}. Now we reverse the situation and first consider ${\mathbb A} =(L,A,L)$ and the moves of $L=\int L_{int}$ that determine the irreducible summands of $A$. Indeed $$ A = \ \Bigl(\int L_{int}\Bigr)^{up} \oplus \bigoplus_{r\neq j, r=1}^k \Bigl(\int L_{int}\Bigr)^{down}_r \ $$ holds for the irreducible modules obtained from $L = \int L_{int}$ by the upward move $L^{up}$ and the $k-1$ internal downward moves $(\int L_{int})^{down}_r$ for $r\neq j$. Notice that $(L_{int})^{down}_j = (L_{int})^{down}$, in contrast to the situation above, does not appear as a move this time, since we are in the 'encapsulated' case in the sense of \cite{Weissauer-gl} where $[i,i+1]$ is not a sector of $L$ (but only an internal sector of $L$). 
The formulas above imply that $A' = (A^{up})' \oplus (A^{down})'$ is a direct sum of the two irreducible summands $$ (A^{up})' = B_1 \oplus B_2 \ ,$$ coming from $ ((\int L_{int})^{up})' = (L^{up})' = L(\lambda_{\wedge\vee})' $ for $\lambda_{\wedge\vee}=[a,i][i+1,b]$ with derivative $(-1)^{a+n-1}( \partial([a,i]) \cup [i+1,b] ) + (-1)^{i+n-1} ([a,i] \cup \partial([i+1,b]))$, and the $k-1$ irreducible summands $(A^{down}_r)'$ of $(A^{down})'$ given by $$ (A^{down}_r)' \ = \ \Bigl(\Bigl(\int L_{int}\Bigr)^{down}_r\Bigr)' \ .$$ This gives $2+(k-1)=k+1$ irreducible summands, as in $\tilde A$, and all signs coincide by $(-1)^{i+n-1} = (-1)^{a+n-1}$. {\it The comparison}. Since all signs are $(-1)^{a+n-1}$ for both computations, we can ignore the parity shift. Then observe that $ (\int L_{int})^{down}_r = \int ((L_{int})^{down}_r)$ holds for $r\neq j$, hence $(A^{down}_r)' = ((\int L_{int})^{down}_r)' = (L_{int})^{down}_r $ for $r\neq j$. So it remains to compare the two remaining summands $$ B_1 \quad \ \ , \ \ \quad B_2$$ of $A'$ and the two remaining summands $$ (L_{int})^{up} \quad , \quad (L_{int})^{down}_j $$ of $\tilde A$. The latter correspond to the plots $S_1... S_{j-1} \boxminus_i \int (S_{j+1} ... S_k)$, giving the upward move, resp. $\int (S_1 ... S_{j-1}) \boxminus_{i+1} S_{j+1} \cdots S_k$, giving the downward move. Obviously these two define the plots $\partial([a,i]) \cup [i+1,b]$ respectively $[a,i] \cup \partial([i+1,b]) $ defining the two summands $B_1$ and $B_2$. \qed \part{Consequences of the Main Theorem} We describe some applications of the main theorem. The main result is the computation of the ${\mathbf{Z}}$-grading of $DS(L)$ for any irreducible representation $L$ in sections \ref{kohl} - \ref{koh3}. This result is based on the main theorem and its proof. For that we first need a description of the dual of an irreducible representation in section \ref{duals}. 
In the later sections \ref{kac-module-of-one} - \ref{hooks} we obtain various results about the cohomology of maximally atypical indecomposable representations. \section{Tannaka Duals}\label{duals} Let $\lambda$ be an atypical weight, and $L(\lambda)$ the associated irreducible representation. Note that $(Ber^k \otimes L(\lambda))^{\vee} = Ber^{-k} \otimes L(\lambda)^{\vee}$. We use the description of the duals obtained in \cite{Heidersdorf-mixed-tensors}. Note that $L(\lambda)=socle(P(\lambda))= cosocle(P(\lambda))$, since projective modules are $*$-self dual. Hence $L(\lambda)^\vee = socle(P(\lambda)^{\vee})$, so it suffices to compute the socle of $P(\lambda)^\vee$. Now $P(\lambda) = R(\lambda^L,\lambda^R)$ for the bipartition $(\lambda^L,\lambda^R) = \theta^{-1}(\lambda)$ satisfying $k(\lambda^L; \lambda^R) = n$ by \cite{Heidersdorf-mixed-tensors}. The dual of any mixed tensor is $ R(\lambda^L,\lambda^R)^{\vee} = R(\lambda^R,\lambda^L)$, hence we simply have to calculate the socle of $R(\lambda^R,\lambda^L)$. If $k(\lambda^L,\lambda^R) = n$, the description of the map $\theta$ is easy: Calculate the weight diagram of $(\lambda^L, \lambda^R)$ as in section \ref{stable0} and write down its labeled cup diagram. Then turn all $\vee$'s which are not part of a cup into $\wedge$'s and leave all other symbols unchanged. The resulting diagram is the weight diagram of $socle(P(\lambda))$. Hence in order to calculate the dual of $L(\lambda)$ we simply have to understand the effect of changing $(\lambda^L,\lambda^R)$ to $(\lambda^R,\lambda^L)$ on the weight diagram. Recall from section \ref{stable0} that \[ I_{\wedge}(\lambda) := \{ \lambda_1^L, \lambda_2^L - 1, \lambda_3^L - 2, \ldots \} \quad \text{and}\quad I_{\vee}(\lambda) := \{1 -\lambda_1^R, 2 - \lambda_2^R, \ldots \}\ . \] If $\lambda_i^L - (i-1) =s$, then $i - \lambda_i^L = i - s - i + 1 = 1 -s$ and likewise for $\lambda_j^R$. 
Hence interchanging $\lambda^L$ and $\lambda^R$ means reflecting the symbols $s \mapsto 1-s$ and swapping $\vee$'s with $\wedge$'s. If the vertex $s$ is labelled by a $\times$, then there exist $i,j$ such that $\lambda_i^L - (i-1) = j - \lambda^R_j = s$. But then $\lambda_j^R - (j-1) = i - \lambda_i^L = 1-s$ and we obtain a $\times$ at the vertex $1-s$. We argue in the same way for the $\circ$'s. If $(s,s+r)$ is labelled by a $(\vee, \wedge)$-pair such that we have a cup connecting $s$ and $s+r$, we obtain a $(\vee, \wedge)$-pair at $(1-s-r,1-s)$ which is connected by a cup. To obtain the highest weight $\theta(\lambda^R,\lambda^L)$ the $\vee$'s not in cups get flipped to $\wedge$'s. \begin{prop} \label{irreducible-dual} The weight diagram of the dual of an irreducible representation $L$ is obtained from the weight diagram of $L$ as follows: Interchange all $\vee \wedge$-pairs in cups, then apply the reflection $s \mapsto 1-s$ to each symbol. \end{prop} It is easy to see that this description is valid for $m \geq n$ if we use the reflection $s \mapsto 1 - \delta -s$ instead of $s \mapsto 1-s$, where $\delta = m-n$. \textit{The maximal atypical case}. We describe the dual in the language of plots. We assume here that $L(\lambda)$ is maximally atypical, but we can reduce the general case to this one, using the map $\phi$ from section \ref{sec:loewy-length} and lemma \ref{phi-dual}. Let $\lambda$ denote the unique plot corresponding to the weight $\lambda$. Let $\lambda(s) = \prod_i \lambda_i(s)$ be its prime factorization. For each prime factor $\lambda_i(s)=(I,K)$ with segment $I$ and support $K$ we define $ \lambda_i^c(s):= (I,K^c)$, where $K^c = I\!-\!K$ denotes the complement of $K$ in $I$. Then put $$ \lambda^c(s) := \prod_i \lambda_i^c(s) \ .$$ The previous description of the duals implies the next proposition. 
\begin{prop}\label{dual-plot} The Tannaka dual representation $\lambda^{\vee}$ of a maximal atypical representation $\lambda$ is given by the plot \[ \lambda^\vee(s) = \lambda^c(1-s).\] \end{prop} {\bf Example 1}. Suppose $\lambda = [0,\lambda_2,\ldots,\lambda_n]$ holds with $0> \lambda_2$ and $\lambda_i > \lambda_{i+1} $ for $2 \leq i \leq n-1$. Then $\lambda^{\vee} = [n - \lambda_n - 1, n - \lambda_{n-1} - 1, \ldots, n- \lambda_2 - 1,n-1]$. Dualising is compatible with the normalized block equivalence $\phi_n^i$ of section \ref{signs}. \begin{lem} \label{phi-dual} For irreducible $i$-atypical $L$ we have $\phi_n^i (L^{\vee}) = \phi_n^i(L)^{\vee}$. \end{lem} {\it Proof}. If $L$ is $i$-atypical, then $\tilde{\phi}_n^i$ preserves the distances between the sectors, hence $\tilde{\phi}_n^i(L)^{\vee} = Ber^{\ldots} \otimes \tilde{\phi}_n^i(L^{\vee})$. Since we remove $2(n-i)$ symbols from the weight diagram of $L$, we obtain the shift \[ \tilde{\phi}_n^i(L^{\vee}) = Ber^{-2(n-i)} \otimes \tilde{\phi}_n^i(L)^{\vee}.\] Now we calculate for the normalised block equivalence \begin{align*} \phi_n^i(L^{\vee}) & = Ber^{n-i} \tilde{\phi}_n^i(L^{\vee}) = Ber^{n-i} Ber^{-2(n-i)} \tilde{\phi}_n^i(L)^{\vee} \\ & = Ber^{-n+i} \otimes \tilde{\phi}_n^i(L)^{\vee} = (Ber^{n-i})^{\vee} \otimes \tilde{\phi}_n^i(L)^{\vee} = (Ber^{n-i} \otimes \tilde{\phi}_n^i(L))^{\vee} \\ & = \phi_n^i(L)^{\vee}. \end{align*} \qed \begin{lem} \label{basic} For maximal atypical irreducible $L=[\lambda_1,...,\lambda_n]$ such that $\lambda_n=0$ the following assertions are equivalent. \begin{enumerate} \item $L^\vee \cong [\rho_1,...,\rho_n]$ holds such that $\rho_n\geq 0$. \item $L$ is basic, i.e. $\lambda_1 \geq \ldots \geq \lambda_n \geq 0$ and $\lambda_i \leq n-i$ holds for all $i=1,...,n$. \item $\lambda_1 \leq n-1$ and $L^\vee \cong [\lambda^*_1,...,\lambda^*_n]$ holds for the transposed partition $\lambda^*=(\lambda^*_1,...,\lambda^*_n)$ of the partition $\lambda=(\lambda_1,...,\lambda_n)$. 
\end{enumerate} \end{lem} {\bf Remark}. The number of {\it basic} maximal atypical weights in $X^+(n)$ is equal to the Catalan number $C_n$. {\it Proof}. i) implies ii): If $\rho_n = 0$ the leftmost $\vee$ in the weight diagram of $[\rho]$ is at position $-n+1$. Then the smallest $\wedge$ bound in a cup is at a position $\leq 1$ and $\geq 1-n$. After the change $(I,K) \to (I,I-K)$ and the reflection $s \mapsto 1-s$ this means that the rightmost $\vee$ in $[\rho]^{\vee}$ is at position $\leq n-1$ and $\geq 0$ which is equivalent to $0 \leq \lambda_1 \leq n-1$. Likewise the $i$-th leftmost $\wedge$ bound in a cup is at a position $ \geq -n + i + 1$ and $\leq n$. It will give the $i$-th largest $\vee$ in the weight diagram of $[\lambda]$. After the change $(I,K) \mapsto (I,I-K)$ and the reflection the $i$-th largest $\vee$ is at a position $\leq n- 2i +1$ which is equivalent to $\lambda_i \leq n-i$. ii) implies i): If $\lambda$ is basic the largest $\vee$ is at position $\leq n-1$, hence the largest $\wedge$ bound in a cup is at position $\leq n$. It gives the smallest $\vee$ of $[\lambda]^{\vee}$. Hence the smallest $\vee$ of $[\lambda]^{\vee}$ is at a position $\geq 1-n$ which is equivalent to $\lambda_n^{\vee} \geq 0$. ii) implies iii): If $\lambda$ is basic, the $2n$ vertices in cups form the interval $J := [-n+1,n]$ of length $2n$. If $J_{\vee}$ is the subset of vertices labelled by $\vee$, the subset $J \setminus J_{\vee}$ is the subset of vertices labelled by $\wedge$. The interval $J$ is preserved by the reflection $s \mapsto 1-s$. If $\lambda$ is basic, so is $\lambda^*$. We use the following notation: If \[ \lambda_1 = \ldots = \lambda_{s_1} > \lambda_{s_1 + 1} = \ \ldots \ = \lambda_{s_2} > \lambda_{s_2 + 1} = \ldots = \lambda_{s_r } > \lambda_{s_r + 1} = 0 \] put $\delta_1 = s_1$ and $\delta_i = s_i - s_{i-1}$ and $\Delta_i = \lambda_{s_i}- \lambda_{s_i + 1}$. Likewise for $\lambda^*$ with $\delta_i^*$ and $\Delta_i^*$. 
Then \[ \delta_i = \Delta_{r+1-i}^*, \ \Delta_i = \delta_{r+1-i}^*.\] Then the weight diagram of $ [\lambda^*]$ looks, starting from $n$ and going to the left, like \[ \ldots \overbrace{\vee \ldots \vee}^{\delta_3^*} \overbrace{\wedge \ldots \wedge}^{\Delta_2^*} \overbrace{\vee \ldots \vee}^{\delta_2^*} \overbrace{\wedge \ldots \wedge}^{\Delta_1^*} \overbrace{\vee \ldots \vee}^{\delta_1^*} \wedge \ldots \wedge\] and the weight diagram of $[\lambda]$ looks, starting from $-n +1$ and going to the right, like\[ \vee \ldots \vee \overbrace{\wedge \ldots \wedge}^{\Delta_r = \delta_1^*} \overbrace{\vee \ldots \vee}^{\delta_r = \Delta_1^*} \overbrace{\wedge \ldots \wedge}^{\Delta_{r-1} = \delta_2^*} \overbrace{\vee \ldots \vee}^{\delta_{r-1} = \Delta_2^*} \ldots.\] The two weight diagrams are mirror images of each other and the rule for the $\vee$'s in cups in one is the same as the rule for the $\wedge$'s in the cups of the other. Hence after the change $(I,K) \mapsto (I,I-K)$ and the reflection $s \mapsto 1-s$ the two weight diagrams agree. iii) implies i): trivial. \qed {\bf Example 3}. Duals in the ${{\mathcal R}}_3$-case. If $a>b>0$, then $[a,b,0]^{\vee} = [2,2-b,2-a] = Ber^{2-a} [a,a-b,0]$. If $a \geq 1$ then $[a,a,0]^{\vee} = [2,1-a,1-a] = Ber^{1-a}[a+1,0,0] = Ber^{1-a} S^{a+1}$. {\it A better description.} If $L=L(\lambda)$ is an irreducible maximal atypical representation in ${\mathcal R}_n$, its weight $\lambda$ is uniquely determined by its plot. Let $S_1 S_2 \cdots S_k$ denote the segments of this plot. Each segment $S_\nu$ has even cardinality $2r(S_\nu)$, and can be identified up to a translation with a unique basic weight of rank $r(S_\nu) = r_{\nu}$ and a partition in the sense of lemma \ref{basic}. For the rest of this section we denote the segment of rank $r(S_\nu)$ attached to the dual partition by $S_\nu^*$, hoping that this will not be confused with the contravariant functor $*$. Using this notation, Tannaka duality maps the plot $S_1 .. S_2 ... 
S_k$ to the plot $S_k^* ... S^*_2 .. S_1^*$ so that the distances $d_i$ between $S_i$ and $S_{i+1}$ coincide with the distances between $S^*_{i+1}$ and $S^*_i$. This follows from proposition \ref{dual-plot} and determines the Tannaka dual $L^\vee$ of $L$ up to a Berezin twist. {\it The dual forest}. If we identify the basic plots with rooted trees $S_i \leftrightarrow {\mathcal T}_i$, we can describe a weight by a \textit{spaced} forest \[ {\mathcal F} = (d_0, {\mathcal T}_1, d_1, {\mathcal T}_2, \ldots, d_{k-1}, {\mathcal T}_k)\] where $d_0 = \lambda_n$. We describe the dual in this language. {\it Grafting}. Given a planar forest ${\mathcal F} = {\mathcal T}_1 \ldots {\mathcal T}_n$ of planar rooted trees, we can introduce a new $n$-ary root and graft the trees ${\mathcal T}_i$ onto this root. This new tree is called the grafting product $\vee({\mathcal T}_1 \ldots {\mathcal T}_n)$ of ${\mathcal T}_1 \ldots {\mathcal T}_n$. The grafting product of the trees in a spaced forest is obtained by forgetting the distances and simply taking the grafting product of the trees. {\bf Example}. Consider the forest of two rooted planar trees \[ \xymatrix@R-0.4cm@C-0.2cm{ & \bullet \ar@{-}[d] & & & \bullet \ar@{-}[dl] \ar@{-}[d] \ar@{-}[dr] & \\ & \bullet \ar@{-}[dl] \ar@{-}[dr] & & \bullet & \bullet & \bullet \\ \bullet & & \bullet & & & } \] Grafting this planar forest gives the forest with the single tree \[ \xymatrix@R-0.4cm@C-0.2cm{ & & & \bullet \ar@{-}[dr] \ar@{-}[dl] & & \\ & & \bullet \ar@{-}[dl] & & \bullet \ar@{-}[dl] \ar@{-}[d] \ar@{-}[dr] & \\ & \bullet \ar@{-}[dl] \ar@{-}[dr] & & \bullet & \bullet & \bullet \\ \bullet & & \bullet & & & } \] {\it Mirror tree}. 
If ${\mathcal T}$ is a planar rooted tree, then the mirror image ${\mathcal T}^*$ of ${\mathcal T}$ along the root axis is recursively defined as follows: Put $(\vee(\emptyset))^* = \vee(\emptyset), \ \emptyset^* = \emptyset$ where $\emptyset$ is the empty tree and extend via \[ (\vee({\mathcal T}_1 \ldots {\mathcal T}_n))^* = \vee({\mathcal T}_n^* \ldots {\mathcal T}_1^*).\] {\bf Example}. The mirror image of the grafted planar tree above is \[ \reflectbox{ \xymatrix@R-0.4cm@C-0.2cm{ & & & \bullet \ar@{-}[dr] \ar@{-}[dl] & & \\ & & \bullet \ar@{-}[dl] & & \bullet \ar@{-}[dl] \ar@{-}[d] \ar@{-}[dr] & \\ & \bullet \ar@{-}[dl] \ar@{-}[dr] & & \bullet & \bullet & \bullet \\ \bullet & & \bullet & & & } } \] \begin{lem} The weight of the dual representation corresponds to the spaced forest \[ {\mathcal F}^{\vee} = (d_0^*, {\mathcal T}_k^*, d_1^*, {\mathcal T}_{k-1}^*, d_2^*, \ldots, d_{k-1}^*, {\mathcal T}_1^*)\] where $d_i^* := d_{k-i}$ for $i=1,\ldots,k-1$ and $d_0^* = - d_0 - d_1 - \ldots - d_{k-1}$ and ${\mathcal T}_i^{*}$ denotes the mirror image (along the root axis) of the planar tree ${\mathcal T}_i$. \end{lem} {\it Proof}. The claim about the distances $d_1^*,\ldots,d_{k-1}^*$ follows from the description of the dual plots. We first prove the claim about $d_0^*$. Now $d_0^* = (1- b) + n-1$ where $b$ is the last point of the rightmost sector of rank $r_k$: \[ b = \lambda_1 + (2r_k - 1).\] Hence $d_0^* =-b + n= -\lambda_1 - 2 r_k + n + 1$. Now use that $\lambda_1 = (\lambda_n - n+ 1) + 2 r_1 + \ldots + 2r_{k-1} + d_1 + \ldots + d_{k-1}$, hence \begin{align*} d_0^* & = [- \lambda_n + n - 1 - 2(r_1 + \ldots + r_{k-1}) - (d_1+ \ldots + d_{k-1})] - 2r_k + n + 1 \\ & = - \lambda_n - (d_1+ \ldots + d_{k-1}) = -d_0 - d_1 - \ldots - d_{k-1},\end{align*} using $r_1 + \ldots + r_k = n$ and $d_0 = \lambda_n$. It remains to prove that if $S_i$ corresponds to ${\mathcal T}_i$, then the dual plot $S_i^*$ corresponds to the mirrored tree ${\mathcal T}_i^*$. We induct on the rank of the sector. The case $r_k =1$ is clear. 
If $[a,b]$ is a sector, then $\lambda(a) = \boxplus$ and $\lambda(b) = \boxminus$. According to proposition \ref{dual-plot} the dual plot is obtained by first exchanging $\boxplus$ and $\boxminus$ and then reflecting $s \mapsto 1-s$. Hence the dual plot $S_i^*$ is obtained (ignoring distances) by keeping the outer labels $\boxplus$ and $\boxminus$ of the sector and dualising the plot of the inner segment $[a+1,b-1]$. This corresponds to keeping the root of the tree ${\mathcal T}_i$ and calculating the dual of the forest of the inner trees obtained from ${\mathcal T}_i$ by removing the root of ${\mathcal T}_i$. More precisely: The tree corresponding to the dual plot $S_i^*$ is obtained by taking the grafting product of the inner subtrees corresponding to the dual of the plot of the inner segment. The interval $[a+1,b-1]$ is a segment consisting of sectors $\tilde{S}_1\ldots \tilde{S}_l$ corresponding to the trees $\tilde{{\mathcal T}}_1 \ldots\tilde{{\mathcal T}}_l$. Dualising the inner segment yields by induction the forest $\tilde{{\mathcal T}}_l^* \ldots \tilde{{\mathcal T}}_1^*$ since the ranks of inner sectors are smaller than the rank of $S_i$. Hence the tree corresponding to $S_i^*$ is obtained by grafting the forest $\tilde{{\mathcal T}}_l^* \ldots \tilde{{\mathcal T}}_1^*$. This is just the definition of the mirror image of ${\mathcal T}_i$.\qed {\bf Example}. Consider the irreducible representation $[11,9,9,5,3,3,3]$ in ${\mathcal R}_7$. It has sector structure $S_1 = [-3,4]$, $S_2 = [7,10]$ and $S_3 = [11,12]$ with distances $d_0 = 3$, $d_1 = 2$ and $d_2 = 0$. 
The associated spaced forest is \[ \xymatrix@R-0.4cm@C-0.2cm{ d_0 = 3 & & & \bullet \ar@{-}[dl] \ar@{-}[dr] & & d_1 = 2 & \bullet \ar@{-}[d] & d_2 = 0 & \bullet \\ & & \bullet \ar@{-}[dl] & & \bullet & & \bullet & & \\ & \bullet & & & & & & & } \] The dual is the representation $[1,1,0,0,-4,-4,-5]$ with sectors (from left to right) $S_3^* = [-11,-10]$, $S_2^* = [-9,-6]$ and $S_1^* = [-3,4]$ with associated spaced forest \[ \xymatrix@R-0.4cm@C-0.2cm{ d_0^* = -5 & \bullet & d_1^* = 0 & \bullet \ar@{-}[d] & d_2^* = 2 & & \bullet \ar@{-}[dl] \ar@{-}[dr] & & \\ & & & \bullet & & \bullet & & \bullet \ar@{-}[dr] & \\ & & & & & & & & \bullet } \] \section{Cohomology I} \label{kohl} In corollary \ref{cohomII} we have seen that in the situation of the melting algorithm one obtains surjective maps $H^i(p): H^i({\mathbb A}) \to H^i(L)$ for all $i\in\mathbb Z$. For $K=Ker(p: {\mathbb A} \to L)$ we therefore get exact sequences $$ 0 \to H^i(K) \to H^i({\mathbb A}) \to H^i(L) \to 0 $$ for all integers $i$. Hence, if in addition $H^i({\mathbb A})$ and $H^i(L)$ vanish for all $i\neq 0$, then $H^i(K)=0$ holds for all $i\neq 0$. Then $K/L \cong A$ implies $H^i(A)=0$ for $i\neq -1,0$. Suppose the same conditions are satisfied for ${\mathbb A}^\vee$ as well. Then also $H^i(A^\vee)=0$ holds for $i\neq -1,0$. Then, by duality $H^i(A)^\vee \cong H^{-i}(A^\vee)$, the cohomology modules $H^i(A)$ vanish for $i\neq 0$. This proves \begin{prop} \label{cohomvan} For irreducible basic modules $V=[\lambda_1,...,\lambda_{n-1},0]$ in ${\mathcal R}_n$ the cohomology modules $H^i(V)$ vanish for all $i\neq 0$. \end{prop} {\it Proof}. We use induction with respect to the degree $p = p(\lambda)=\sum_i \lambda_i$, where $\lambda_i$ for $i=1,..,n$ denote the coefficients of the weight vector. By induction assume the assertion holds for all irreducible basic modules of degree $<p$. 
For $V$ of degree $p$ by the melting algorithm there exists an irreducible basic module $L$ of degree $p-1$ and ${\mathbb A}$ with layer structure $(L,A,L)$ such that $A=V \oplus A'$, where $A'$ is a direct sum of irreducible basic modules of degree $<p$. Since $H^i(L)=0$ for $i\neq 0$, $H^i({\mathbb A})=0$ for $i\neq 0$ now follows from lemma \ref{tildeLII}. The same applies for the dual modules ${\mathbb A}^\vee$ and $L^\vee$. Indeed the dual of a basic irreducible module is again a basic irreducible module of the same degree (lemma \ref{basic}), using $\sum_i \lambda_i = \sum_i \lambda^*_i$. Hence the remarks preceding proposition \ref{cohomvan} imply $H^i(A)=0$ for $i\neq 0$. Since $V$ is a direct summand of $A$, this proves our assertion. \qed \section{Cohomology II}\label{kohl-2} We calculate the ${\mathbf{Z}}$-grading of $DS(L)$ for maximal atypical irreducible $L$. The case of general $L$ is treated in section \ref{koh3}. \begin{prop}\label{hproof} For maximal atypical irreducible $L(\lambda)$ in ${\mathcal R}_n$ with weight $\lambda$, normalized so that $\lambda_n=0$, suppose $\lambda$ has sectors $S_1,.., S_i,..,S_k$ (from left to right). Then the constituents $L(\lambda_i)$ of $DS(L(\lambda))$ for $i=1,...,k$ have sectors $S_1,.., \partial S_i , .. S_k$, and the cohomology of $L(\lambda)$ can be expressed in terms of the added distances $\delta_1,...,\delta_{k}$ between these sectors as follows: $$ \fbox{$ H^{\bullet}(L(\lambda)) \ = \ \bigoplus_{i=1}^k \ L(\lambda_i)\langle -\delta_i \rangle $} \ .$$ \end{prop} \textbf{Example.} We know by the main theorem that $DS([6,4,4,1]) = \Pi [3,3,0] \oplus \Pi[6,4,0] \oplus \Pi[6,4,4]$. The proposition above tells us the ${\mathbf{Z}}$-grading using \[DS(V) = \bigoplus_{\ell \in {\mathbf{Z}}} \ \Pi^\ell (H^\ell(V)).\] In this example $d_0 = 1, \ d_1 = 2$ and $d_2 = 0$. 
The summand $L(\lambda_i)$ is obtained by differentiating the $i$-th sector in the plot associated to $\lambda$, hence $L(\lambda_1) = [6,4,4], \ L(\lambda_2) = [6,4,0]$ and $L(\lambda_3) = [3,3,0]$. We obtain \[ H^{\bullet}([6,4,4,1]) = [6,4,4]\langle-1\rangle \oplus [6,4,0]\langle-3\rangle \oplus [3,3,0]\langle-3\rangle.\] {\it Proof}. In the special case where all distances vanish, $d_1=\cdots =d_k=0$, i.e. in the case where the plot of $\lambda$ has only one segment, the assertion of the proposition has been shown in proposition \ref{cohomvan}. We then prove the general case of nonvanishing distances by induction with respect to ($n$ and) the lexicographic ordering used for algorithm I. This means: We prove proposition \ref{hproof} recursively for $L^{up}$, thereby assuming that we already know the cohomology degrees of $L^{down}_r$, $L$ and $L^{aux}$ (using the notations of algorithm I). First recall the notations used for algorithm I: $$ L = (S_1\cdots S_{j-1}) \leftarrow \text{distance } d_{j-1} \to (S_jS_{j+1} \cdots S_k) \leftarrow \text{ distance } d_k \ \to ... $$ $$ L^{up} = (S_1\cdots S_{j-1}) \leftarrow \text{dist. } (d_{j-1} + 1) \to \int(S_{j+1} \cdots S_k) \leftarrow \text{dist. } (d_k-1) \to ... $$ for a sector $S_j$ with $r(S_j)=1$ supported at $i\in \mathbb Z$. Recall ${\mathbb A} = (L,A,L)$ with $$ A \ = \ L^{up}\ \oplus\ \bigoplus_{r=1}^k L^{down}_r \ .$$ Furthermore $DS(L)= \Pi^{m_{aux}} L^{aux} \oplus \bigoplus_{\mu=1}^s \Pi^{m_{\mu}} \tilde L_\mu$ for $DS({\mathbb A})= \bigoplus_{\mu=1}^s \tilde{\mathbb A}_\mu$ and $\tilde{\mathbb A}_\mu=(\tilde L_\mu,\tilde A_\mu,\tilde L_\mu)$ such that the derivative $d(A)$ of $A$ is \[ d(A) = \tilde A + 2(-1)^{i+n-1} L^{aux}\] in $K_0({\mathcal R}_n)$. Obviously $DS(L^{up})$ has the summands $$ \bigoplus_{\nu=1}^{j-1} (S_1 \cdots \partial S_\nu \cdots S_{j-1}) ... (d_{j-1}+1) ... S ... (d_k -1) ... S_{k+1} \cdots) $$ $$ \bigoplus_{\nu >k} (S_1 \cdots S_{j-1}) ... (d_{j-1}+1) ... S ... 
(d_k -1) ...S_{k+1} \cdots \partial S_\nu \cdots ) $$ and $$ L^{aux} = (S_1 \cdots S_{j-1}) ... (d_{j-1}+2) ... (S_{j+1} \cdots S_k) ... (d_k) ... S_{k+1} \cdots \ .$$ This immediately implies the next \begin{lem} \label{lastl} The following holds \begin{enumerate} \item $DS(L^{up}) \subseteq L^{aux} \oplus DS(L)^{up}$. \item None of the summands of $DS(L^{up})$ different from $L^{aux}$ is contained in $DS(L)$. \item $L^{aux}$ is not a summand of $\bigoplus_\mu \tilde A_\mu$. \end{enumerate} \end{lem} {\it Proof}. The last assertion holds, since the constituents of $\bigoplus_\mu \tilde A_\mu$ are obtained from $\tilde L_\mu$ by moves. It can be checked that $L^{aux}$ cannot be realized in this way. \qed {\it The $d_{j-1}\pm 1$ alternative}. By the induction assumption $H^\bullet(L)$ contains $L^{aux}$ with multiplicity 1, and $L^{aux}$ appears in cohomology in degree $d_{j-1}$. To determine $ H^i(L^{up}) \subseteq H^i(A) $ we may use step 11) of the proof of theorem \ref{3}. It easily implies by a small modification of the arguments that $$ H^i(A) = \bigoplus_{m_\mu =-i} \tilde A_\mu \oplus H^{i-1}(L)/H^{i-1}(\bigoplus_\mu\tilde L_\mu) \oplus Kern(H^{i+1}(L) \to H^{i+1}(K)) \ .$$ Since $ H^{\bullet}(L)/(\bigoplus_{\mu}\tilde L_\mu) \cong L^{aux}$ by lemma \ref{tildeL} and since \[ H^{i-1}(L)/(\bigoplus_{m_\mu=1-i}\tilde L_\mu) \ \cong \ L^{aux}\ \] for $i-1=d_{j-1}$ by the induction assumption, we get $$ Kern(H^{\bullet}(L) \to H^{\bullet}(K)) = L^{aux} \ ,$$ and this implies $$ Kern(H^{i+1}(L) \to H^{i+1}(K)) = L^{aux} \ $$ for $i+1=d_{j-1}$. In other words $DS(A) = \tilde A + 2\cdot L^{aux}$ and the two copies of $L^{aux}$ occur in the two possible cohomology degrees $$ d_{j-1} \pm 1\ .$$ {\it Continuation of the proof for proposition \ref{hproof}}. 
By lemma \ref{lastl} the cohomology degree of the constituents of $H^\bullet(L^{up})$ that appear in \[ \tilde A_\mu \subseteq \bigoplus_{m_\mu =-i} \tilde A_\mu \] can be immediately read off from the degrees $m_\mu$, i.e. from the cohomology degrees of $\tilde L_\mu$ in $H^\bullet(L)$. These degrees are known by the induction assumption. This easily proves proposition \ref{hproof} for all constituents $L(\lambda_i)$ of $H^\bullet(L^{up})$ that are not isomorphic to $L^{aux}$. Indeed, according to our claim the cohomological degrees for the constituents $L(\lambda_i)\not\cong L^{aux}$ of $H^\bullet(L^{up})$ are given by $$ 0,\cdots , 0, d_{j-1}+1,d_{j-1}+d_k, \cdots \ , $$ and the summand $L^{aux}$ should occur in degree $d_{j-1}+1$. The cohomology of $H^\bullet(L)$ on the other hand is concentrated in the degrees $$ 0,\cdots , 0, d_{j-1},d_{j-1}+d_k, \cdots \ $$ with the summand $L^{aux}$ corresponding to degree $d_{j-1}$. All summands $\not\cong L^{aux}$ precisely match, so this proves proposition \ref{hproof} for all constituents of $H^\bullet(L^{up})$ except for $L^{aux}$. It remains to determine the cohomology degree of $L^{aux} \subseteq H^\bullet(L^{up})$. As already explained, the summand $L^{aux}$ occurs in degree $d_{j-1}-1$ or $d_{j-1}+1$. So to show that $L^{aux}$ occurs in $H^\bullet(L^{up})$ in degree $d_{j-1}+1$, it now suffices by the $d_{j-1}\pm 1$ alternative to show that $L^{aux}$ occurs in $H^\nu(\bigoplus_r L^{down}_r)$ in the degree $\nu=d_{j-1}-1$. Indeed $L^{aux}$ appears in $DS(L^{down}) = \bigoplus_\nu H^\nu(L^{down})$ for $L^{down} := L^{down}_j$. This follows from the structure of the sectors of $$ L^{down} = \int(S_1\cdots S_{j-1}) \boxminus... (d_{j-1}-1) ... \boxminus_{i+1} S_{j+1} \cdots S_k \ $$ and the induction assumption. This gives the summand $L^{aux}$ in $H^\bullet(L^{down})$ in degree $d_{j-1}-1$ for $d_{j-1}\geq 1$, respectively in degree $d_{j-1}-1=-1$ for $d_{j-1}=0$. 
Hence $$ \fbox{$ L^{aux} \subseteq H^{d_{j-1} +1}(L^{up}) $}\ ,$$ which completes the proof of proposition \ref{hproof}. \qed \noindent \section{Cohomology III}\label{koh3} The cohomology of an $i$-atypical $L$ can be calculated in the same way using the normalised block equivalence $\phi_n^i$ of section \ref{signs}. We call an irreducible module $L$ of atypicality $i$ {\it $\phi$-basic} if $\phi_n^i (L)$ is basic in ${\mathcal R}_i$. These will replace the basic modules in the proof of proposition \ref{cohomvan}. The unique mixed tensor in a block of atypicality $i$ replaces the trivial representation. \begin{prop} \label{cohomvan-2} For irreducible $\phi$-basic modules $V$ in ${\mathcal R}_n$ the cohomology modules $H^i(V)$ vanish for all $i\neq 0$. \end{prop} {\it Proof}. The remarks preceding proposition \ref{cohomvan} remain valid. By lemma \ref{mixed-tensor-derivative} the cohomology of the mixed tensor $L(\lambda)$ is concentrated in one degree, and by lemma \ref{stable} this degree is zero since $\lambda_n = 0$. Since $\phi_n^i(L(\lambda)) = {\mathbf{1}}$, we induct as in the proof of proposition \ref{cohomvan} on the sum $p=\sum_i \lambda_i$ of the coefficients of $\phi_n^i(L)$. The rest of the proof works verbatim. Note that the dual of a $\phi$-basic module is $\phi$-basic again of the same degree using $\phi_n^i(L)^{\vee} = \phi_n^i(L^{\vee})$ of lemma \ref{phi-dual} and lemma \ref{basic}. \qed We can now copy the proof of proposition \ref{hproof} to obtain the next statement. Here the added distances $\delta_i$ are the distances in the plot $\phi(\lambda)$ associated to $\lambda$ in section \ref{sec:loewy-length}. \begin{prop}\label{hproof-2} For irreducible $L(\lambda)$ in ${\mathcal R}_n$ with weight $\lambda$, normalized so that $\phi_n^i(L(\lambda)) = [\lambda_1^{\phi}, \ldots, \lambda_i^{\phi}]$ satisfies $\lambda_i^{\phi}=0$, suppose $\lambda$ has sectors $S_1,.., S_j,..,S_k$ (from left to right). 
Then the constituents $L(\lambda_j)$ of $DS(L(\lambda))$ for $j=1,...,k$ have sectors $S_1,.., \partial S_j , .. S_k$, and the cohomology of $L(\lambda)$ can be expressed in terms of the added distances $\delta_1,...,\delta_{k}$ between these sectors as follows: $$ \fbox{$ H^{\bullet}(L(\lambda)) \ = \ \bigoplus_{j=1}^k \ L(\lambda_j)\langle -\delta_j \rangle $} \ .$$ \end{prop} \noindent \section{The forest formula} \label{sec:forest} Recall the functor $DS_{n,0}: T_n \to T_0=svec_k$ with its decomposition $DS_{n,0}(V) = \bigoplus_{\ell\in\mathbb Z} D_{n,0}^\ell(V)[-\ell]$ for objects $V$ in $T_n$ and objects $D_{n,0}^\ell(V)$ in $svec_k$. For $V\in T_n$ we define the Laurent polynomial $$ \omega(V,t) = \sum_{\ell\in\mathbb Z} sdim(D_{n,0}^\ell(V)) \cdot t^\ell \ $$ as the Hilbert polynomial of the graded module $DS^\bullet_{n,0}(V)= \bigoplus_{\ell\in\mathbb Z} D_{n,0}^\ell(V)$. Since $sdim(W[-\ell])=(-1)^\ell sdim(W)$ and $sdim(V)= sdim(DS_{n,0}(V))$ holds, the formula $$sdim(V) = \omega(V,-1)$$ follows. For $V= Ber_n^i$ we obtain $$ \omega(Ber_n^i,t) \ = \ t^{ni} \ .$$ Indeed, $H^\ell(Ber_n^i) =0$ for $\ell\neq i$ and $H^\ell(Ber_n^i) =Ber_{n-1}^i$ for $\ell=i$ implies $DS(Ber_n^i) = Ber_{n-1}^i[-i]$. If we apply this formula $n$ times and use $Ber_0^i={\bf 1}$, we obtain $DS_{n,0}(Ber_n^i)=DS^n(Ber_n^i) ={\bf 1}[-ni]$ from the fact that $DS_{n,0}(L)=DS^n(L)$ holds for simple objects $L$. This implies $D_{n,0}^{ni}(Ber_n^i)={\bf 1}$ and that $D_{n,0}^{\ell}(Ber_n^i)$ is zero otherwise. Since $DS_{n,0}$ is a tensor functor, $\omega(M\otimes L,t)= \omega(M,t)\omega(L,t)$ holds. Hence $$ \omega(Ber_n^i\otimes L,t) \ = \ t^{ni} \cdot \omega(L,t) \ .$$ Similarly as in the proof of lemma \ref{-ell} one shows $$\omega(V^\vee,t) = \omega(V,t^{-1})\ .$$ Let now $L=L(\lambda)$ be a maximal atypical irreducible representation in ${\mathcal R}_n$. Associated to its plot $\lambda$ we have the basic plot $\lambda_{basic}$ and the numbers $d_0,\ldots,d_{k-1}$. 
Furthermore, let $S_1S_2\cdots S_k$ be the sector structure of $\lambda_{basic}$. For the ranks $r_i=r(S_i)$ we define the number $$ D(\lambda) \ =\ \sum_{i=1}^k r_i \sum_{0 \leq j<i} d_j = \sum_{i=1}^k r_i \delta_i\ .$$ Recall that $\delta_i=\sum_{\nu=0}^{i-1} d_{\nu}$, which implies $\delta_1\leq \delta_2 \leq \cdots \leq \delta_k$. Consider the vector $D\in \mathbb Z^k$ with coordinates $\delta_1, \ldots,\delta_k$. Together with $\lambda_{basic}$ the knowledge of $D$ determines $\lambda$. For simplicity, we express this by writing $\lambda = D \times \lambda_{basic}$ in the following argument. With this notation, our proposition \ref{hproof} gives for $DS(L)$ the following element in the Grothendieck group $K_0({\mathcal R}_{n-1})\otimes k[t]$ $$ DS\bigl( \begin{pmatrix} \delta_1 \cr . \cr \delta_{i-1} \cr \delta_i \cr \delta_{i+1} \cr . \cr \delta_k \end{pmatrix} \times (S_1\cdots S_k)_{basic}\Bigr) \ = \ \sum_{i=1}^k t^{\delta_i} \cdot \begin{pmatrix} \delta_1 -1 \cr . \cr \delta_{i-1} - 1\cr \delta_i \cr \delta_{i+1}+1 \cr . \cr \delta_k +1 \end{pmatrix} \times (S_1 \cdots \partial S_i \cdots S_k)_{basic} \ $$ where formally (and without loss of information) we replace the shifts $[-\nu]$ by $t^\nu$. In the following, we refer to this formula as the {\it key formula}. Now $\partial S_i$ may introduce new sectors in $(S_1 \cdots \partial S_i \cdots S_k)_{basic}$. So if we want to treat everything on an equal footing, we had better count each sector $S_i$ with the multiplicity $r_i$. This amounts to considering, instead of the vector $D$, the new refined vector $\delta$ in $\mathbb Z^n$ with the coordinates \[ (\underbrace{\delta_1, \ldots,\delta_1}_{r_1}, \ldots, \underbrace{\delta_k, \ldots, \delta_k}_{r_k}).\] Then the number $D(\lambda)$ defined above is just the sum of the coordinates of this vector.
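For instance, if $\lambda$ has $k=2$ sectors of ranks $r_1=1$ and $r_2=2$ with $\delta_1=0$ and $\delta_2=3$, then $n=3$ and the refined vector is $\delta = (0,3,3)\in \mathbb Z^3$; indeed $D(\lambda) = r_1\delta_1 + r_2\delta_2 = 6$ is the sum of the coordinates of $\delta$.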
With this new vector we have an analogous formula expressing $DS$ as above, where for the $i$-th summand on the right side one of the entries $\delta_i$ of $\delta$ has to be removed to obtain a vector in $\mathbb Z^{n-1}$. The right side is now of the correct form to enable the application of the formula for $DS$ to the right side again. Inductively, after $n$ steps this gives a complicated expression with at most $n!$ summands. The number of summands depends only on $\lambda_{basic}$. Since the additional monomial term in $t$ obtained from each derivative is of the form $t^{\delta_\nu \pm s_\nu}$, for some shift factors $s_\nu$ not depending on $\delta$, and since in each summand all coordinate entries of $\delta$ are successively deleted in the course of applying $DS$ $n$ times, this vector disappears and each of these summands has the form $$ t^{\sum_{i=1}^k r_i\delta_i} \cdot P(t) \times \emptyset $$ for a certain Laurent polynomial $P(t)$ that depends on the specific summand and on $\lambda_{basic}$, but that does not depend on the coefficients $\delta_1,...,\delta_k$. If we compare with the case $\delta_1\!=\! ...\! = \! \delta_k\! =\! 0$, we therefore obtain the following {\it translation formula}: $$ \omega(L(\lambda),t) \ = \ t^{D(\lambda)} \cdot \omega(L(\lambda_{basic}),t) \ .$$ This being said, we use that the basic plots of rank $n$ are in 1-1 correspondence with planar forests ${\mathcal F}$ with $n$ nodes $x\in {\mathcal F}$ as in sections \ref{sec:main}, \ref{duals} and \cite{Weissauer-gl}. For a planar forest, let $\#{{\mathcal F}}$ denote the number of its nodes. We visualize each of the trees in a planar forest top down, i.e. with their root on the top of the tree. Then, for each node $x\in {\mathcal F}$ let ${\mathcal F}(x)$ denote the subtree of the tree containing $x$ with all nodes removed that are not below the node $x$. In this way the node $x$ becomes the root of the tree ${\mathcal F}(x)$ by definition.
For a forest ${\mathcal F}$ we recursively define the quantum forest factorials $$ [{\mathcal F}]_t ! = \prod_{x \in {\mathcal F}} [\#{\mathcal F}(x)]_t \ \in\ \mathbb Z[t] $$ using the following abbreviations: For the number $m=\#{\mathcal F}(x)$ of nodes in ${\mathcal F}(x)$ we define the quantum numbers $$ [m]_t \ := \ \frac{t^m - t^{-m}}{t - t^{-1}} \ .$$ Clearly $[m]_t = (-1)^{m-1}[m]_{-t} = [m]_{t^{-1}}$. Obviously the tree factorial ${\mathcal T} !$ of section \ref{sec:main} equals $[{\mathcal T}]_1!$. Example: For the forest ${\mathcal F}$ that consists of a single linear tree, the forest factorial $[{\mathcal F}]_t !$ specializes to the quantum factorial $[n]_t! = \prod_{m=1}^n [m]_t$. For a planar forest ${\mathcal F}$, given as the union of trees ${\mathcal T}_i$ for $i=1,\ldots,k$ with $r_i$ nodes respectively, one has $[{\mathcal F}]_t ! = \prod_{i=1}^k [{\mathcal T}_{i}]_t ! $ and hence $$(*) \quad \quad \frac{[\#{\mathcal F}]_t!}{[{\mathcal F}]_t !} = \frac{ [\sum_i r_i]_t! }{ [r_1]_t ! \cdots [r_k]_t!} \cdot \prod_{i=1}^k \frac{ [\#{\mathcal T}_i]_t ! }{[{\mathcal T}_i]_t ! }\ .$$ Observe that for a tree ${\mathcal T}$ the value $\frac{[\#{\mathcal T}]_t!}{[{\mathcal T}]_t !}$ does not change under grafting, i.e. replacing ${\mathcal T}$ by a new tree with $\#{\mathcal T} +1$ nodes by putting a new root on top. Similarly, $\frac{[\#{\mathcal F}]_t!}{[{\mathcal F}]_t !}$ does not change under the grafting of the planar forest ${{\mathcal F}}$, which replaces ${\mathcal F}$ by a forest with a single tree with $\#{\mathcal F} +1$ nodes obtained by putting a new root on top of all trees connected to the old roots of the trees of ${\mathcal F}$. \begin{lem} \label{thm:forest-formula} For irreducible maximal atypical representations $L=L(\lambda)$ in ${\mathcal R}_n$ we have the forest formula $$ \fbox{$ \omega(L,t) \ =\ t^{D(\lambda)} \cdot \frac{ [n]_t !}{[\lambda_{basic}]_t !} $}\ $$ where $\lambda_{basic}$ is viewed as the planar forest associated to $L$.
\end{lem} {\it Proof}. From the translation formula we may assume $\lambda=\lambda_{basic}$. Let us first consider the simple case of basic representations $L$, where all sectors $S_i$ for $i=1,..,k$ are intervals $I_i=[a_i,a_i +2r_i-1]$ where the support of the plot $S_i$ is $[a_i,...,a_i+r_i -1]$. The corresponding $\omega(L,t)$ then only depends on the ranks $r_1,...,r_k$ of the sectors $S_1,...,S_k$, hence will be denoted $\omega_{r_1,...,r_k}(t)$ in the following. From the key formula, for general basic $\lambda$ with sector structure $S_1 \cdots S_k$ and $r_i=r(S_i)$, similarly to the translation formula we easily obtain the following generalized {\it Leibniz formula} $$ \omega(L(\lambda),t ) = \omega_{r_1,...,r_k}(t) \cdot \prod_{i=1}^k \ \omega(L(S_i),t) \ ,$$ where $L(S_i)$ denotes the irreducible basic maximal atypical representation in ${\mathcal R}_{r_i}$ whose plot is $S_i$ (up to a translation on the number line). Now each of these $S_i$ has a unique sector. For basic plots $S$ with a unique sector (like the $S_i$) the key formula obviously implies the following {\it grafting formula} $$ \omega(L(S),t) \ = \ \omega(L(\partial S),t) \ ,$$ where $L(\partial S)$ denotes the unique maximal atypical basic representation in ${\mathcal R}_{r(S) -1}$ whose plot is $\partial S$. So the forest attached to $L(S)$ is obtained by grafting the forest of $\partial S$. It is clear that inductively the translation formula, the generalized Leibniz formula and the grafting formula determine the Laurent polynomials $\omega(L,t)$ for irreducible maximal atypical $L \in {{\mathcal R}}_n$ uniquely. Hence for the proof it suffices to show that the expression on the right side of the identity stated in lemma \ref{thm:forest-formula} satisfies the analogous formulas and that the identity holds for $n=1$. Indeed, our assertion is obvious for $n=1$. The translation formula and the grafting formula for the right side are also obvious.
To check the generalized Leibniz formula, by the formula (*) from above it suffices to prove $$ \omega_{r_1,...,r_k}(t) = \frac{ [\sum_i r_i]_t !}{ \prod_{i=1}^k [r_i]_t! } \ .$$ To this end it is helpful to note that for $a=r_1+\cdots + r_i$ and $b=r_{i+1} + \cdots + r_k$, by the key formula, also the following version of the generalized Leibniz formula holds $$ \omega_{r_1,...,r_k}(t) \ = \ \omega_{a,b}(t) \omega_{r_1,...,r_i}(t) \omega_{r_{i+1},...,r_k}(t) \ .$$ So it suffices to verify that $\omega_{a,b}(t) [a]_t! [b]_t! = [a+b]_t !$, which finally is proved by induction on $n=a+b$. For this notice that the key formula immediately implies the following generalized {\it Pascal rule} $$\omega_{a,b}(t) = t^b\cdot \omega_{a-1,b}(t) + t^{-a}\cdot \omega_{a,b-1}(t) $$ for the generalized binomial coefficients $\omega_{a,b}(t)$. Indeed, the derivatives of the two sectors $S_1S_2$ give $\partial S_1 S_2$ with $d_0=0$, $d_1=1$ respectively $S_1 \partial S_2$ with $d_0=-1$, $d_1=1$. Hence $D(\partial S_1 S_2)= 0\cdot (a-1) + 1 \cdot b=b$ and $D(S_1 \partial S_2)= -a + 0\cdot (b-1) =-a$. Hence, using the induction assumption, we already know $\omega_{a-1,b}(t) [a-1]_t! [b]_t! = [a+b-1]_t !$ and $\omega_{a,b-1}(t) [a]_t! [b-1]_t! = [a+b-1]_t !$. The proof of the induction step therefore finally reduces to the following generalized {\it additivity} for the quantum numbers $[m]_t$ $$ [a+b]_t = t^b \cdot [a]_t + t^{-a} \cdot [b]_t \ ,$$ which is easily verified. This completes the proof. \qed Since $[n]_t!$ and $[\lambda_{basic}]_t!$ are products of certain Laurent polynomials $[m]_t$ for integers $m$, the forest formula implies $\omega(L,-t) = \pm\ \omega(L,t)$. The forest formula also gives $\omega(L,1) >0$. Since $\omega(L,-1)=sdim(L)$, it follows that $\omega(L,-t) = sign(sdim(L))\cdot \omega(L,t)$.
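For the reader's convenience we note that the generalized additivity used in the last step of the induction can be checked directly from the definition of the quantum numbers: $$ t^b \cdot [a]_t \ +\ t^{-a} \cdot [b]_t \ = \ \frac{t^{a+b} - t^{b-a}}{t-t^{-1}} \ + \ \frac{t^{b-a} - t^{-(a+b)}}{t-t^{-1}} \ = \ [a+b]_t\ .$$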
Recall that for irreducible $L$ in ${\mathcal R}_n$ we defined a sign $\varepsilon(L)$ and that the sign of $sdim(L)$ is $\varepsilon(L)$, as shown in section \ref{sec:main} and also in \cite{Weissauer-gl}. Finally, the forest formula also implies $\omega(L,t^{-1}) /\omega(L,t)= t^{-2D(\lambda)}$ for $L=L(\lambda)$. Hence we obtain \begin{lem}\label{ev} For irreducible (maximal atypical) $L=L(\lambda)$ in ${\mathcal R}_n$ one has the formulas $\omega(L^\vee,t)=\omega(L,t^{-1}) = t^{-2D(\lambda)} \omega(L,t)$ and $$\omega(L,-t) = \varepsilon(L) \cdot \omega(L,t)\ .$$ \end{lem} {\bf Example}. For $S^{n-1+d}$ in ${\mathcal R}_n$ and for integers $d\geq 0$ $$ \omega(S^{n-1+d},t) = t^{d-n+1} + t^{d-n+3} + \cdots + t^{d+n-1} = t^d\cdot \omega(S^{n-1},t) \ .$$ \begin{lem}\label{maxde} For irreducible maximal atypical representations $L=L(\lambda)$ in ${\mathcal R}_n$ the Laurent polynomial $\omega(L,t)$ has degree $p(\lambda)=\sum_{i=1}^n \lambda_i $ in the sense that $$ \omega(L,t) = t^{p(\lambda)} + \sum_{\ell<p(\lambda)} a_\ell \cdot t^\ell \ .$$ \end{lem} {\it Proof}. This follows from the key formula. Indeed, its $i$-th summand gives rise to shifts by $\delta_{i+1} +1, ... , \delta_k +1$. To determine the highest cohomology degree of $L(\lambda)$ one has to look for the maximal contributions from all these shifts. Each time we apply $DS$, the maximal contribution is obtained from the first summand $i=1$. Hence the highest $t$-power arises from the first summands of the key formula each time we apply $DS$ ($n$ times), in other words by applying the derivative $\partial$ each time to the leftmost sector. In particular the highest $t$-power of $\omega(L(\lambda),t)$ is $t^{\delta_1}$ times the highest $t$-power of $\omega(\overline L,t)$ for the representation $\overline L\in T_{n-1}$ associated to the plot $(\partial S_1)S_2\cdots S_k$ with the new vector $\delta=(\delta_1,...,\delta_1, \delta_2 +1,\cdots, \delta_k+1)$ with one copy of $\delta_1$ deleted.
Now it is not hard to see, by unraveling the weight associated to this spaced forest, that the associated representation $\overline L$ is the highest weight module $L(\overline \lambda)$ in the sense of Lemma \ref{stable}. Therefore, the highest $t$-power of $\omega(L,t)$ is $t^{\lambda_n}$ times the highest $t$-power of $\omega(\overline L,t)$. In other words $deg_t(\omega(L,t))= \lambda_n + deg_t(\omega(\overline L,t))$. By induction on $n$ we hence obtain $deg_t(\omega(L,t))= \lambda_n + p(\overline \lambda) = p(\lambda)$. \qed Therefore the forest formula implies \begin{cor}\label{corforest} For irreducible maximal atypical representations $L=L(\lambda)$ in ${\mathcal R}_n$ one has the formula $D(\lambda)= p(\lambda) - p(\lambda_{basic})$. \end{cor} By lemmas \ref{maxde} and \ref{ev} we furthermore obtain \begin{cor} \label{lem:top-degree} For irreducible maximal atypical representations $L=L(\lambda)$ in ${\mathcal R}_n$ $$ \omega(L,t) = t^{p(\lambda)} + \sum_{q(\lambda) <\ell<p(\lambda)} a_\ell \cdot t^\ell \ \ + \ t^{q(\lambda)} \ $$ holds with $p(\lambda) - q(\lambda) = p(\lambda) + p(\lambda^\vee) = 2\cdot p(\lambda_{basic})$. \end{cor} {\it Proof}. From the forest formula and lemma \ref{ev} we obtain $$ q(\lambda)= D(\lambda) + q(\lambda_{basic}) = D(\lambda) - p(\lambda_{basic})\ .$$ Hence $p(\lambda) - q(\lambda) = 2 p(\lambda_{basic})$. Since $\omega(L^\vee,t)= \omega(L,t^{-1})$, we obtain $p(\lambda^\vee)= -q(\lambda)= - D(\lambda) + p(\lambda_{basic})$. Combined with corollary \ref{corforest} this last formula gives $ p(\lambda) + p(\lambda^\vee) = 2 p(\lambda_{basic})$. \qed \section{$I$-module structure on the cohomology $H^\bullet_{DS_n}$} \label{sec:chevalley-eilenberg} In this section we show that the cohomology of the operator $DS_{n,0}$ is a graded module under the invariant algebra $I = \Lambda^{\bullet}(\mathfrak{p}_{-})^H$ defined below.
As an application we compute the cohomology and the Hilbert polynomial of a maximal atypical Kac module $V(\lambda)$ for the operator $DS_{n,0}$. We also show that the projection of $V(\lambda)$ to $L(\lambda)$ induces a map on the $DS_{n,0}$-cohomology which vanishes except in the top degree $p(\lambda)$. Note that it does not make sense to consider the Hilbert polynomial $\omega(V,t)$ for $V = V(\lambda)$ and the Dirac operator $D$ since any Kac module is in the kernel of $H_D$. \noindent The tensor functor associated to an element $x$ in $X = \{x \in {\mathfrak g}_1 \ | \ [x,x] = 0\}$ only depends, by \cite{Duflo-Serganova}, on the $G_0$-orbit in $X$. We therefore work in this section with the operator $DS_n$ associated to the action of the element \[ \mathbb{D} = \begin{pmatrix} 0 & id_n \\ 0 & 0 \end{pmatrix} \] which is clearly in the same $G_0$-orbit as our usual choice of $x \in {\mathfrak g}_1$ with $1$'s on the anti-diagonal. It defines a (graded) tensor functor $DS_n\!:\! T_n \to T_0$ which is isomorphic to $DS_{n,0}$. \noindent {\it Notations and conventions}. \begin{itemize} \item Let $H$ denote $Gl(n)$ diagonally embedded into $G_0\! =\! Gl(n) \times Gl(n)$ via $g \!\mapsto\! diag(g,g) \in G_0$. Then $Lie(H) \cong \mathfrak{gl}(n)$. \item We consider the subalgebra ${\mathfrak p} \subset {\mathfrak g}$ with grading ${\mathfrak p} = {\mathfrak p}_{-} \oplus {\mathfrak p}_0 \oplus {\mathfrak p}_+$ \begin{align*} {\mathfrak p}_{-} & = \{ \begin{pmatrix} 0 & 0 \\ x & 0 \end{pmatrix} \ | \ x \in \mathfrak{gl}(n) \} \\ {\mathfrak p}_0 & = \{ \begin{pmatrix} x & 0 \\ 0 & x \end{pmatrix} \ | \ x \in \mathfrak{gl}(n) \} = Lie(H) \\ {\mathfrak p}_+ & = \{ \begin{pmatrix} 0 & id_n \\ 0 & 0 \end{pmatrix} \}.
\end{align*} \item Recall that the restriction of the Kac module $V(\lambda)$ to $\frak p$ is given by \[ V(\lambda)\vert_{\frak p} = \Lambda^{\bullet}({\mathfrak p}_{-}) \otimes L_0(\lambda) \] where $L_0(\lambda)$ is the irreducible ${\mathfrak g}_0$-module, trivially extended to the parabolic subalgebra of upper triangular block matrices. \item We write in this section $\rho^{\vee} \boxtimes \rho$ for the irreducible representation $L_0(\lambda)$ of ${\mathfrak g}_0$ which is given by the external tensor product of the irreducible ${\mathfrak{gl}}(n)$-representation $\rho^{\vee}$ of weight $(\lambda_1,\ldots,\lambda_n)$ with its dual of weight $(-\lambda_n, \ldots, - \lambda_1)$. If viewed as a representation of $H \subset G_0$ this becomes $\rho^{\vee} \otimes \rho \cong End(\rho)$. In this notation $V(\lambda) = V(\rho^{\vee} \boxtimes \rho)$ and $L(\lambda) = L(\rho^{\vee} \boxtimes \rho)$. \item The tensor product $\rho^{\vee} \boxtimes \rho$ contains the trivial representation with multiplicity 1. We call a vector in this subspace an $H$-\textit{spherical} vector. In this sense $L_0(\lambda)$ has an $H$-spherical vector if and only if $\lambda$ is maximal atypical. \end{itemize} The action of the generator $\mathbb{D}$ of $\frak p_+$ on $\frak p$-modules induces the operator denoted $DS_n$ in the following. In particular $\mathbb{D}$ acts on ${\frak p}$ and the ideal ${\frak p}_- \oplus {\frak p}_0$ via the adjoint representation. Notice that $Lie(H)$ acts on ${\frak p}_0 \cong Lie(H)$ by the adjoint representation of $Lie(H)$ and on ${\frak p}_{-}$ by the adjoint action of ${\frak p}$ such that the map $$ \begin{pmatrix} 0 & 0 \\ x & 0 \end{pmatrix} \ \mapsto \ \begin{pmatrix} x & 0 \\ 0 & x \end{pmatrix} $$ is a $Lie(H)$-linear isomorphism ${\frak p}_{-} \cong {\frak p}_0$ inducing a canonical identification $\Lambda^\bullet({\frak p}_{-}) \cong \Lambda^\bullet({\frak p}_0)$ of $H$-modules.
\noindent The universal enveloping algebra $U({\frak p})$ of $\frak p$ contains the universal enveloping algebras $U({\frak p}_{-}\oplus {\frak p}_0)$ and $U({\frak p}_{-})$ as subalgebras. For $\theta\in \Lambda^\bullet({\frak p}_{-}) \cong U({\frak p}_{-})$ the supercommutator $[ \mathbb{D}, \theta]$ is contained in $U({\frak p}_{-}\oplus {\frak p}_0)$. For a basis $x_{ij}$ of $Lie(H)$ it has the form $ [ \mathbb{D}, \theta] \ = \ \theta_0 + \theta_1$ with $\theta_1= \sum_{i,j=1}^n \theta_{ij}\cdot x_{ij} $ for uniquely defined elements $\theta_0, \theta_{ij} \in \Lambda^\bullet({\frak p}_{-}) \cong U({\frak p}_{-})$. \noindent We now consider $V(\lambda)$ as a ${\mathfrak p}$-module \[ V(\rho^{\vee} \boxtimes \rho)|_{{\mathfrak p}} = \Lambda^{\bullet}({\mathfrak p}_{-}) \otimes End(\rho).\] \begin{lem} \label{lem:CE} The operator $DS_n$ induces the Lie algebra homology differential $\delta$ on the Chevalley-Eilenberg complex $\Lambda^{\bullet}({\mathfrak p}_{-}) \otimes End(\rho)$. \end{lem} We briefly recall the definition. For a Lie algebra ${\mathfrak g}$ and a ${\mathfrak g}$-module $V$ we consider the complex with $p$-th entry $V_p({\mathfrak g},V) = \Lambda^p({\mathfrak g}) \otimes V$ and differential $\delta: V_{p}({\mathfrak g},V) \to V_{p-1}({\mathfrak g},V)$ given for $p \geq 2$ by $\delta(x_1 \wedge \ldots \wedge x_p \otimes v) = \theta_0\otimes v + \theta_1(v) $ for $x_1,...,x_p\in \frak g$ where \begin{align*} \theta_0\otimes v & = \Bigl(\sum_{\mu < \nu} (-1)^{\mu + \nu} [x_{\mu},x_{\nu}] \otimes x_1 \wedge \ldots \wedge \hat{x}_{\mu} \wedge \ldots \wedge \hat{x}_{\nu} \wedge \ldots \wedge x_p\Bigr) \otimes v \\ \theta_1(v) & = \sum_{\nu=1}^p \Bigl( (-1)^{\nu+1} x_1 \wedge \ldots \wedge \hat{x}_{\nu} \wedge \ldots \wedge x_p\Bigr) \otimes x_{\nu}(v) \ .\end{align*} \noindent {\it Proof}.
$\mathbb{D}$ acts on an element $x_1 \wedge \ldots \wedge x_r\otimes \varphi$ in $\Lambda^{\bullet}({\mathfrak p}_{-}) \otimes End(\rho)$ for $x_1,...,x_r\in {\frak p}_{-}$ and $\varphi\in End(\rho)$ as \begin{align*} & \mathbb{D}(x_1 \wedge \ldots \wedge x_r \otimes \varphi) \\& = \sum \pm x_1 \wedge \ldots \mathbb{D}(x_i) \wedge \ldots x_r \otimes \varphi \pm \sum x_1 \wedge \ldots \wedge x_r \otimes \mathbb{D}(\varphi) \end{align*} with $\mathbb{D}(x_i) \in {\mathfrak p}_0$. The second sum vanishes since $\mathbb{D}(\varphi) = 0$ for all $\varphi \in End(\rho)$ by definition of the Kac module. We now evaluate the first sum. $\mathbb{D}$ acts on an element in ${\mathfrak p}_{-1}$ by the supercommutator \[ \left[\begin{pmatrix} 0 & id_n \\ 0 & 0 \end{pmatrix},\begin{pmatrix} 0 & 0 \\ x & 0 \end{pmatrix} \right] \ =\ \begin{pmatrix} x & 0 \\ 0 & x \end{pmatrix} \ \in {\mathfrak p}_0.\] Therefore $\mathbb{D}$ acts on an element in $V(\lambda)$ as \begin{align*} & \mathbb{D}(x_1 \wedge \ldots \wedge x_r \otimes \varphi) \\ & = [\mathbb{D},x_1]x_2 \wedge \ldots \wedge x_r \otimes \varphi - x_1 \wedge [\mathbb{D},x_2]x_3 \wedge \ldots \wedge x_r \otimes \varphi \ldots \\ & + (-1)^{r+1} x_1 \wedge \ldots [\mathbb{D},x_r]\otimes \varphi \\ & = \begin{pmatrix} x_1 & 0 \\ 0 & x_1 \end{pmatrix} (x_2 \wedge \ldots \wedge x_r \otimes \varphi) - x_1 \wedge \begin{pmatrix} x_2 & 0 \\ 0 & x_2 \end{pmatrix} (x_3 \wedge \ldots \wedge x_r \otimes \varphi) + \ldots \\ & + (-1)^{r+1} x_1 \wedge \ldots \wedge x_{r-1}\otimes \begin{pmatrix} x_r & 0 \\ 0 & x_r \end{pmatrix} (\varphi) \end{align*} where the derivations $[\mathbb{D},x_{\nu}] \in {\mathfrak p}_0$ act on all terms to the right. The $\theta_1$-term arises from the action of $[\mathbb{D},x_{\nu}] $ on the last term $\varphi$ to the right, the remaining terms lead to a sum with the $\sum_{\mu < \nu}$-condition defining the $\theta_0$-term. 
\qed Viewing $\theta:= x_1\wedge \cdots \wedge x_r$ as an element in the universal enveloping algebra $U({\frak p}_{-})$ of ${\frak p}_{-}$, the supercommutator $[\mathbb{D}, \theta]$ in the universal enveloping algebra $U({\frak p})$ of $\frak p$ is $[\mathbb{D}, \theta] = \theta_0 + \theta_1$ with $\theta_1$ in the universal enveloping algebra $U({\frak p}_{-}\oplus {\frak p}_0)$ of ${\frak p}_{-}\oplus {\frak p}_0$ and $\theta_0$ in the universal enveloping algebra $U({\frak p}_{-})$ of ${\frak p}_{-}$ as defined above, but viewed as an element in the universal enveloping algebra of $\frak p$. Furthermore $\theta_1$ annihilates $H$-invariant vectors in any ${\frak p}$-module. Stated in this form, the assertion obviously holds for arbitrary elements $\theta$ in the universal enveloping algebra of ${\frak p}_{-}$. \noindent {\it $H^{\bullet}(V(\lambda))$ and the theorem of Hopf}. Lemma \ref{lem:CE} identifies $H_{DS_n}^{\bullet}(V(\lambda))$ with the Lie algebra homology ring $H_{\bullet}({\mathfrak p},End(\rho))$. We recall some facts about Lie algebra (co)homology. Note that $H_{\bullet}({\mathfrak g}) = H^{\bullet}({\mathfrak g})$. \noindent Let ${\mathfrak g}$ be a reductive Lie algebra and $\Lambda^{\bullet}({\mathfrak g})^{{\mathfrak g}}$ the space of invariants under the adjoint action of ${\mathfrak g}$. It has the structure of a graded super Hopf algebra. Let $P({\mathfrak g})$ denote the space of primitive elements, i.e. \[ P({\mathfrak g}) = \{ x \in \Lambda^{\bullet}({\mathfrak g})^{{\mathfrak g}} \ | \ \Delta(x) = x \otimes 1 + 1 \otimes x \} \] where $\Delta$ denotes the comultiplication. Define a grading on $P({\mathfrak g})$ by requiring that the inclusion $P({\mathfrak g}) \to \Lambda^{\bullet}({\mathfrak g})$ preserves degrees.
\begin{thm} \cite[Theorem 10.2, Corollary 10.2, Corollary 10.3]{Meinrenken} (Hopf-Koszul-Samelson) \label{thm:hopf} \begin{enumerate} \item The inclusion of $P({\mathfrak g})$ in $\Lambda^{\bullet}({\mathfrak g})^{{\mathfrak g}}$ extends to an isomorphism of graded super Hopf algebras $\Lambda^{\bullet}(P({\mathfrak g})) \cong \Lambda^{\bullet}({\mathfrak g})^{{\mathfrak g}}$. \item There is an isomorphism $H^{\bullet}({\mathfrak g}) \cong \Lambda^{\bullet}(P({\mathfrak g}))$ of graded super Hopf algebras, i.e. the cohomology ring is an exterior algebra over the primitive elements. In particular the elements in $\Lambda^{\bullet}(P({\mathfrak g}))$ are closed. \item The space of primitive elements has dimension $rank({\mathfrak g})$. For $\mathfrak{gl}(n)$ the basis elements $f_1,f_3, .. , f_{2n-1}\in P({\mathfrak g})$ have degrees $1, 3, .. , 2n-1$. \end{enumerate} \end{thm} We now apply this theorem to the Lie algebra ${\mathfrak g}$ of $H$ and the $H$-invariant ring $I$ in the universal enveloping algebra of ${\frak p}_-$ using the following identifications $I \cong \Lambda^\bullet({\frak p}_{-})^{H}\cong \Lambda^{\bullet}({\frak p}_0)^{H}\cong \Lambda^{\bullet}({\mathfrak g})^{{\mathfrak g}} \cong V({\mathbf{1}})^H$ for the invariant ring $$ I:= U({\frak p}_{-})^{H} \ .$$ From theorem \ref{thm:hopf} we obtain the following corollary. \begin{cor} The cohomology $H_{DS_n}^{\bullet}(V({\mathbf{1}}))$ is isomorphic to $I\cong V({\mathbf{1}})^H$ and $I$ has the structure of a supercommutative polynomial ring $\mathbb C\{ f_1,..,f_{2n-1}\}$ generated by elements $f_\nu$ in the degrees $1-2\nu$ for $\nu=1,..,n$. In particular $$\omega_{DS_n}(t) = \prod_{\nu=1}^n (1 + t^{1-2\nu})\ .$$ \end{cor} \begin{lem} \label{HOPF} For any ${\mathfrak p}$-module $V$, the cohomology group $H^{\bullet}_{DS_n}(V^H)$ is a graded $I$-module. \end{lem} {\it Proof}. By theorem \ref{thm:hopf} we have $\mathbb{D}(v)=\theta_0(v) + \theta_1(v)=0$ for every element $v\in V({\mathbf{1}})^H$.
Since $\theta_1(v)=0$ holds for $H$-invariant vectors, we get $\theta_0(v)=0$ and hence $\theta_0=0$ holds in $U({\frak p}_{-})$. This implies $[\mathbb{D},\theta] = \theta_1$ for all $\theta\in U({\frak p}_{-})^H \cong \Lambda^\bullet({\frak p}_{-})^H = I$. For any $P\in I$, hence $[\mathbb{D},P] = P_1 \in U({\frak p})$ annihilates $H$-invariant vectors. Any finite dimensional algebraic ${\frak p}$-module $V$ is in particular a $U({\frak p}_{-})$-module, and the subspace $V^H$ obviously is an $I$-module. Since $\mathbb{D}$ commutes with $H$, we obtain a linear map $\mathbb{D}: V^H \to V^H$. For $P\in I$ and $v\in V^H$ the formulas $\mathbb{D}(P v)= [\mathbb{D},P ]v + P \mathbb{D}(v)$ and $[\mathbb{D},P] = P_1$ and $P_1v=0$ imply $\mathbb{D}(Pv)=P\mathbb{D}(v)$ for all $v\in V^H$. Hence the subspaces of $\mathbb{D}$-coboundaries resp. $\mathbb{D}$-closed elements in $V^H$ are both $I$-modules. \qed \begin{lem} \label{lem:cohom-invar0} For finite-dimensional $\mathfrak{gl}(n\vert n)$-modules $M$ the following holds: $H_{DS_n}^{\bullet} (M) \cong H_{DS_n}^{\bullet} (M^H)$. \end{lem} {\it Proof}. $H$ commutes with $\mathbb{D}$ and therefore operates on the cohomology $H_{DS_n}^{\bullet} (M)$. Since $H$ is reductive, a finite-dimensional representation of $H$ is trivial if and only if its restriction to a Cartan subgroup is trivial. We therefore show that the diagonal torus $T \subset H$ acts trivially on the cohomology. By the Leray spectral sequence \[ DS_{n,n-1} \circ DS_{n-1,n-2} \circ \ldots \circ DS_{1,0} \Longrightarrow DS_{n,0} = DS_n.\] By section \ref{DDirac} $DS_{n,n-1}$ is invariant under \[ H_{n,n-1} = \begin{pmatrix} 0 & & & &\\ & \ldots & & & \\ & & 0 & \\ & & & 1 \end{pmatrix}, \] $DS_{n-1,n-2}$ is invariant under \[ H_{n-1,n-2} = \begin{pmatrix} 0 & & & &\\ & \ldots & & & \\ & & 1 & \\ & & & 0 \end{pmatrix} \] and so on. Hence $H_{DS_n}^{\nu}(M)$ has a filtration which is respected by $T$ such that $T$ acts trivially on the graded pieces.
Since $T$ acts in a semisimple way, this implies that the operation of $T$, and therefore of $H$, is trivial. \qed \begin{prop} \label{lem:cohom-invar} For $M \in \mathcal{R}_n$ the cohomology $H_{DS_n}^\bullet(M)$ is a graded $I$-module for the graded polynomial ring $I$. For morphisms $f: M \to M'$ in $\mathcal{R}_n$ the induced map $H^\bullet_{DS_n}(M) \to H^\bullet_{DS_n}(M')$ is graded $I$-linear. \end{prop} {\it Proof}. This follows from the lemmas \ref{HOPF} and \ref{lem:cohom-invar0}. \qed \begin{lem} Let $\rho$ be an irreducible representation of $Gl(n)$. Then the map \[ \varphi: V({\mathbf{1}})|_{{\mathfrak p}} \to V(\rho^{\vee} \boxtimes \rho)|_{{\mathfrak p}}\ \ , \ \ v \otimes 1 \mapsto v \otimes id_{\rho} \] is a ${\mathfrak p}$-linear inclusion. \end{lem} {\it Proof}. We have $x_{\nu} (\varphi) = 0$ for all $\nu$ if $\varphi \in End_H(\rho) = \mathbb{C} id_{\rho}$. \qed \noindent {\it Remark.} Every maximal atypical ${\mathfrak g}$-module $V$, when restricted to ${\mathfrak g}_0$, has the form $V|_{{\mathfrak g}_0} = \bigoplus \rho_{\nu} \boxtimes \rho_{\mu}$ with $deg(\rho_{\nu}) = deg(\rho_{\mu})$. This degree makes $V$ into a graded ${\mathfrak p}$-module. For $V({\mathbf{1}})$ we obtain the degree defined previously. For $V(\rho^{\vee} \boxtimes \rho)$, $id_{\rho}$ has degree $deg(\rho)$. Therefore $\varphi$ shifts the degrees by $deg(\rho)$. The degree $deg(\rho)$ coincides with $p(\lambda)$ for $\rho$ with highest weight $\lambda = (\lambda_1,\ldots,\lambda_n)$. \begin{lem} \label{lem:kac=kac} The induced morphism \[ \xymatrix{ H^{\bullet}_{DS_n}(V({\mathbf{1}})) \ \ar[r]^-{H_{DS_n}^{\bullet}(\varphi)} & \ H_{DS_n}^{\bullet + deg(\rho)} (V(\rho^{\vee} \boxtimes \rho)) } \] is a graded isomorphism on the cohomology. Hence $ H_{DS_n}^{\bullet + deg(\rho)} (V(\rho^{\vee} \boxtimes \rho)) $ is the free $I$-module of rank one generated by the top cohomology. \end{lem} {\it Proof}. 
As a ${\mathfrak p}$-module \[ V(\rho^{\vee} \boxtimes \rho)|_{{\mathfrak p}} \cong \varphi(V({\mathbf{1}})) \oplus (\Lambda^{\bullet}({\mathfrak p}_-) \otimes End^0(\rho)) \] where \[ End^0(\rho) = \{ \phi \in End(\rho) \ | \ Tr(\phi) = 0 \}. \] Since $\mathbb{D} \in {\mathfrak p}$, we obtain \[ H_{DS_n}^{\bullet} (V(\rho^{\vee} \boxtimes \rho))\ =\ H_{DS_n}^{\bullet} (\varphi( V({\mathbf{1}}))) \ \oplus\ H^{\bullet}_{DS_n} (\Lambda^{\bullet}({\mathfrak p}_-) \otimes End^0(\rho)). \] By \cite{Hochschild-Serre} the Lie algebra cohomology for reductive $H$ with coefficients in a representation $W$ is trivial except for the trivial representation \[ H^{\bullet}(Lie(H),W) \ \cong\ H^{\bullet}(Lie(H),{\mathbf{1}}) \otimes W^H.\] Since the Lie algebra cohomology is dual to the homology, this shows \[ H^{\bullet}_{DS_n} (\Lambda^{\bullet}({\mathfrak p}_-) \otimes End^0(\rho)) = 0. \] Since $\delta = ad_{\mathbb{D}}$ commutes with $\varphi$, we get \[ \xymatrix{ H^{\bullet}_{DS_n}(V({\mathbf{1}})) \ \ar[r]_-{H^{\bullet}_{DS_n}(\varphi)}^-{\cong} & \ H^{\bullet}_{DS_n}(\Lambda^{\bullet}({\mathfrak p}_-) \otimes id_{\rho}) } \] up to the degree shift by $deg(\rho)$. \qed \begin{cor} For the Hilbert polynomial of $V(\lambda)$ relative to $\mathbb{D}$ we obtain \[ \omega_{DS_n}(V(\lambda),t) = t^{p(\lambda)}\cdot \omega_{DS_n} (V({\mathbf{1}}),t) = t^{p(\lambda)}\cdot \prod_{\nu=1}^n (1 + t^{1-2\nu}) .\] \end{cor} \begin{thm}\label{cohom-proj} Let $L(\lambda)$ be an irreducible and maximal atypical representation and $pr: V(\lambda) \to L(\lambda)$ be a projection onto the top. Then the induced homomorphism \[ H^{\nu}_{DS_n}(pr): H_{DS_n}^{\nu}(V(\lambda)) \to H_{DS_n}^{\nu}(L(\lambda)) \] is zero in degrees $\nu < p(\lambda)$ and an isomorphism for $\nu = p(\lambda)$. \end{thm} {\it Proof}. As a graded $I$-module $H_{DS_n}^{\nu}(V(\rho^{\vee} \boxtimes \rho))$ is the free $I$-module generated by the cohomology in the top degree.
To prove our claim it suffices to show that the primitive elements $f_1, f_3,\ldots, f_{2n-1}\in I$ act trivially on $H_{DS_n}^{\nu}(L(\rho^{\vee} \boxtimes \rho)) $. This follows from the discussion in section \ref{sec:forest}, lemma \ref{ev}, which shows for $\nu = 1,\ldots,n$ \[ H_{DS_n}^{deg(\rho) - 2\nu +1} (L(\rho^{\vee} \boxtimes \rho)) = 0 \ .\] \qed \section{Primitive elements of $H^{\bullet}_{DS_n}(V({\mathbf{1}}))$} \label{sec:primitive} We will now describe the primitive elements of $H^{\bullet}_{DS_n}(V({\mathbf{1}}))$ in terms of the representation theory of the superlinear group $Gl(n\vert n)$. The radical filtration on $V({\mathbf{1}})$ defines a decreasing filtration $F_i$ of $V({\mathbf{1}})$. The $H$-invariants $F_i^H$ coincide with the powers $(I^+)^i$ of the augmentation ideal $I^+$ of the invariant ring $I = V({\mathbf{1}})^H$. In this way monomials of degree $i$ in the primitive generators $f_1,...,f_{2n-1}$ can be identified with the generators of the cohomology $H^\bullet(F_i^H/F_{i+1}^H)$. \noindent {\it The Murnaghan-Nakayama rule}. Let $\lambda=(\lambda_1,...,\lambda_r)$ with $\lambda_1\geq \lambda_2\geq \cdots \geq \lambda_r$ be a partition of degree $n=deg(\lambda)$. For partitions $\nu$ and $\mu$ of $m$ and $n-m$ let $c_{\mu\nu}^\lambda$ denote the {\it Littlewood-Richardson} coefficient. Assume that $\nu$ is a {\it hook}, i.e.\ a partition of type $\nu_1=r$, $\nu_2=\cdots =\nu_{m-r+1}=1$ and $\nu_i=0$ for $i>m-r+1$. Recall that a hook is a special case of a {\it rim hook} (also called {\it skew hook}). We say $\nu$ is a {\it symmetric hook} if $m=2r-1$. According to \cite[Section 4.10]{Sagan} we have \begin{prop} \label{murnaghan-nakayama} Suppose $\nu$ is a hook. Then $c_{\mu\nu}^\lambda=0$ unless the Young diagram of $\mu$ is contained in the Young diagram of $\lambda$ and the complement $\lambda/\mu$ is a union of $k$ edgewise connected rim hooks.
If this is the case, then $$ c_{\mu\nu}^\lambda \ = \ {k-1 \choose c - r } $$ where $r=\nu_1$ and $c$ is the number of rows spread by the rim hooks contained in $\lambda/\mu$. \end{prop} We remark that in \cite{Sagan} $c$ denotes the number of columns instead of rows, since Young diagrams in \cite{Sagan} are written top down instead of being written from left to right, as with our conventions. \begin{cor} \label{cor:LRR} Suppose $\lambda$ and $\nu$ are symmetric hooks. Then $c_{\mu\nu}^\lambda\!=\!0$ unless $\mu\!=\!\nu$ or $\nu\!=\!0$. \end{cor} {\it Proof}. Suppose $c_{\mu\nu}^\lambda\neq 0$. By the proposition the edgewise connected components of $\lambda/\mu$ are rim hooks, hence $\#(\lambda/\mu) \leq 2$. Since $deg(\nu)$ is odd for symmetric hooks and $ \#(\lambda/\mu) =deg(\nu)$, we may assume without loss of generality that $\#(\lambda/\mu) =1$. But this gives a contradiction since $deg(\lambda) - deg(\mu) =1$ would be the difference of two odd numbers. \qed \noindent Let $\rho^\vee$ denote the dual representation of $\rho$. If $\lambda$ is a partition of $n$ and $\lambda^*$ denotes its dual (transposed) partition, define $(\rho_\lambda)^* := \rho_{\lambda^*}$ for the representations $\rho=\rho_\lambda$ of $GL(n)$ with highest weight $\lambda$. \begin{cor} Suppose that $\nu$ is a symmetric hook of degree $2r-1$ and suppose $k=1$ (in the notation of proposition \ref{murnaghan-nakayama}). Suppose the rim hook $\lambda/\mu$ reaches from $(i, \lambda_i)$ to $(j,\lambda_j)$ where $i>j$. Then $c_{\mu\nu}^\lambda\!=\!0$ holds unless $\lambda_i - \lambda_j \! =\! i - j\!=\! r$. \end{cor} {\it Proof}. Since $k=1$, ${k-1 \choose c - r }\neq 0$ if and only if $\lambda_i-\lambda_j=c=r$. Since $\nu$ is a rim hook, we have $2r-1=deg(\nu)= (\lambda_i-\lambda_j)+ (j-i) - 1$. Hence $\lambda_i-\lambda_j=r$ implies $j-i=r$. \qed \noindent {\it The Lie superalgebra $\mathfrak{gl}(n\vert n)$ and primitive elements of $\mathfrak{gl}(n)$}.
The following proposition is a well-known consequence of the dual Cauchy identity. \begin{prop} \cite[Theorem B.17]{Bump-Schilling} The space of matrices $M_{nn}(k)$ is a $Gl(n,k)\times Gl(n,k)$-module in a natural way by left and right multiplication, hence also the Gra\ss mann algebra $\Lambda:=\Lambda^\bullet(M_{nn}(k))$. As a representation of $Gl(n,k)\times Gl(n,k)$ we have $$ \Lambda^\bullet(M_{nn}(k)) \ \cong \ \bigoplus_{\rho} \rho^\vee \boxtimes \rho^*$$ where $\rho=\rho_\lambda$ runs over all partitions in \[ P(n,n)= \{ \lambda \in {\bf Z}^n\ \vert \ n \geq \lambda_1 \geq \lambda_2 \geq ... \geq \lambda_n \geq 0 \} \ .\] \end{prop} \noindent Warning: The degree $deg(\rho^\vee)$ is the negative of the degree in the Gra\ss mann algebra $\Lambda$! \begin{cor} Let $H=GL(n,k)$ be embedded diagonally. Then \[ I:=\Lambda^\bullet(M_{nn}(k))^H \cong \bigoplus_{\rho} (\rho^\vee \boxtimes \rho^*)^H \ , \] and $ (\rho^\vee \boxtimes \rho^*)^H\neq 0$ if and only if $\rho=\rho_\lambda$ for a symmetric Young diagram $\lambda=\lambda^*$. There exist $2^n$ symmetric Young diagrams with $\lambda=(\lambda_1,...,\lambda_n)$ and $\lambda_1\leq n$. \end{cor} \noindent The space $I$ is an algebra with respect to the wedge product. The subspace $I^+\subseteq I$ of elements of degree $\geq 1$ is an ideal (the augmentation ideal). \begin{prop} \cite[Proposition 10.11]{Meinrenken} $I^+$ decomposes as \[ I^+ \ = \ P(H) \oplus (I^+)^2 \ .\] \end{prop} \begin{cor} \label{prim-hooks} With summation over all $\rho\!=\!\rho_\lambda$ for symmetric hook diagrams $\lambda$ of degrees $deg(\lambda)\!=\!1,3,5,....,2n-1$ the space of primitive elements is $$ P(H) \ = \ \bigoplus_{\rho} \ (\rho^\vee\boxtimes \rho^*)^H \ .$$ \end{cor} \noindent {\it Proof}.
This follows from the fact that for hook diagrams $\lambda$ the space $\rho_\lambda $ cannot be a constituent of $\rho_\mu \otimes\rho_\nu$ for $\mu=\mu^*$ and $\nu=\nu^*$ where $(\rho_\mu^\vee \boxtimes \rho_\mu^*)^H$ and $(\rho_\nu^\vee \boxtimes \rho^*_\nu)^H$ are constituents of $I^+$. Hence $(\rho_\lambda^\vee \boxtimes \rho^*_\lambda)$ cannot be contained in $(I^+)^2$. \qed \noindent {\it The index}. The selftransposed weights $\lambda_{(i)}=(i,..,i,0,..,0)$ for $i=0,..,n$ in $P(n,n)$ are called the {\it basic selftransposed weights}. The index $ind(\lambda)$ of a selftransposed $\lambda$ in $P(n,n)$ is the maximal index $i$ of a basic selftransposed $\lambda_{(i)}$ whose Young diagram is contained in the Young diagram of $\lambda$. Equivalently, the index of $\lambda$ is the unique $i$ with $0 \leq i \leq n$ such that $\lambda_i \geq i$ and $\lambda_{i+1} \leq i$ (with the conventions $\lambda_0:=n$ and $\lambda_{n+1}:=0$). We denote by $P_i(n,n)$ the set of all weights in $P(n,n)$ with index $i$. \begin{prop} Using that $\Lambda \cong V({\mathbf{1}})$, the canonical filtration defined by the radical filtration of $V({\mathbf{1}})$ in the category of $\mathfrak{gl}(n\vert n)$-modules gives a filtration $F_i$ on $\Lambda$ such that $$ F_i \ = \ \bigoplus_{\rho} \ (\rho^\vee \boxtimes \rho^*) $$ for all $\rho=\rho_\lambda$ running over all partitions $\lambda$ containing the partition $(i^i)$ of degree $i^2$. \end{prop} Before the proof we recall that $V({\mathbf{1}})$ has a decreasing filtration (the radical filtration) of $Gl(n\vert n)$-subrepresentations with $n+1$ irreducible graded pieces $L_i$ such that $L_0=k$ is the maximal irreducible quotient representation. The highest weights of the $L_i$ can be computed from \cite[Theorem 5.2]{Brundan-Stroppel-1} to be the duals $$ \lambda_{(i)}^\vee = (0,\cdots,0,-i,...,-i) \quad , \quad \mbox{ for }\ i=0,...,n \ $$ of the basic selftransposed weights $\lambda_{(i)}$ in $P(n,n)$. \noindent {\it Proof}.
We need to show that the representation $L_i$, considered as a representation of $G \subset Gl(n\vert n)$, decomposes into a direct sum over the duals of all irreducible representations $\rho(\lambda)\boxtimes \rho(\lambda^*)$ for which $$\lambda \in P_i(n,n) \ .$$ Consider the decomposition of $L_i^\vee$ under $G=Gl(n)\times Gl(n)$. Let $\lambda=(\lambda_1,..,\lambda_n)$ be a corresponding highest weight of $G$ in $L_i$. We then claim $ind(\lambda)=i$. Obviously $$ \lambda \geq \lambda_{(i)} = (i,...,i,0,...,0)\ .$$ On the other hand we have $$ V({\mathbf{1}})^\vee \cong (det^n\boxtimes det^n) \otimes V({\mathbf{1}}) \ $$ since the dual of a Kac module is a Kac module. Hence, since the order of the socle layers in the dual Kac module is reversed and since the Loewy length of $V({\mathbf{1}})$ is $n+1$ \cite[Theorem 5.2]{Brundan-Stroppel-1}, this implies $$ L_i^\vee \cong (det^n\boxtimes det^n) \otimes L_{n-i} \ .$$ This in turn implies $$ \lambda \leq (n,...,n) + \lambda_{(n-i)}^\vee = (n,...,n,i,...,i) \ $$ with $i$ copies of $n$ and $n-i$ copies of $i$. Both estimates together force $ \lambda_i \geq i$ and $\lambda_{i+1}\leq i$, hence $ind(\lambda)=i$. This proves our claim. Since any $\lambda$ in $P(n,n)$ appears in one of the $L_i^\vee$, $L_i^\vee$ then consists precisely of the $G$-constituents $\rho(\lambda)\boxtimes \rho(\lambda^*)$ for $\lambda$ in $P_i(n,n)$. \qed \begin{cor} $F_i^H = (I^+)^i$, hence we can identify monomials of degree $i$ in the primitive generators $f_1,...,f_{2n-1}$ with the generators of the cohomology $H^\bullet(F_i^H/F_{i+1}^H)$. \end{cor} We introduce the notation $Prim_i \subseteq I^+$ for the space that is spanned by monomials in the primitive elements $f_{2\nu-1}$ with exactly $i$ factors. In this notation $Prim_1 = P(H)$. \noindent {\it Proof}. By corollary \ref{prim-hooks} $Prim_1$ is a complement to $F_2^H$. We now show $Prim_i \cap F_{i+1}^H = 0$ by induction on $i$. 
Using the induction assumption and $Prim_i = Prim_{i-1} \cdot Prim_1$, the space $Prim_i$ only gives rise to Young diagrams $\lambda$ that occur in the tensor product of some $\mu \in P_{j}(n,n)$ for $j< i$ and a symmetric hook $\nu\in P_1(n,n)$. By proposition \ref{murnaghan-nakayama}, $\mu$ is obtained from $\lambda$ by removing a (possibly disconnected) rim hook. If $\lambda\in P_k(n,n)$, this implies $j=k$ or $j=k-1$ and hence $k\leq j+1 \leq i$. This proves $Prim_i \cap F_{i+1}^H = 0$ since all selftransposed weights $\lambda$ in $F_{i+1}^H$ are contained in $P_k(n,n)$ for $k \geq i+1$. \noindent This implies $Prim_\nu \cap F_{j}^H = 0$ for $\nu<j$. Since $I^+$ is the direct sum of $ \bigoplus_{\nu =0}^i Prim_\nu$ and $(I^+)^{i+1}$, this implies that $F_{i+1}^H$ is in the complement of $\bigoplus_{\nu =0}^i Prim_\nu$ and therefore $F_{i+1}^H \subseteq (I^+)^{i+1}$. There are ${n \choose i}$ selftransposed weights $\lambda\in P_i(n,n)$ and all of them occur in $V({\mathbf{1}})$ with multiplicity one. On the other hand, the space $Prim_i \subseteq I^+$ that is spanned by monomials in the primitive elements $f_{2\nu-1}$ with exactly $i$ factors also has dimension ${n \choose i}$. Since the dimensions agree, this implies $(I^+)^{i+1} = F_{i+1}^H$ and $Prim_i$ is therefore represented by $(F_i/F_{i+1})^H \cong H^\bullet_{DS_n}(F_i/F_{i+1})$. \qed \section{Kac module of ${\mathbf{1}}$}\label{kac-module-of-one} \noindent \textit{Overview.} We now study the effect of $DS$ on indecomposable modules in the remaining sections \ref{kac-module-of-one} - \ref{hooks}. The easiest examples are perhaps the extensions of two irreducible modules, and we focus here on the case of extensions of the trivial representation ${\mathbf{1}}$ by another irreducible module. 
Our main result in these sections is corollary \ref{splitting1}: a representation $Z$ for which the projection onto its cosocle ${\mathbf{1}}$ induces a surjection $\omega(Z) \to \omega({\mathbf{1}})$ is equal to the trivial representation. Such a representation $Z$ contains extensions of the trivial representation ${\mathbf{1}}$ with other irreducible representations. We show in the resum\'{e} of section \ref{strictmorphisms} that if $Z$ is not irreducible, we obtain extensions $V$ of ${\mathbf{1}}$ by an irreducible representation $S$ such that the induced morphism $\omega(V) \to \omega({\mathbf{1}})$ is surjective. Since $Ext^1(S,{\mathbf{1}})$ is at most one-dimensional, any two such extensions are isomorphic. Hence we can study them by realizing them as quotients of modules whose cohomology is sufficiently understood. A typical example occurs in the current section \ref{kac-module-of-one}: The Kac module $V({\mathbf{1}})$ of ${\mathbf{1}}$ contains an extension of ${\mathbf{1}}$ with the irreducible representation $[0,\ldots,0,-1]$; and by considering the cohomology of the Kac module we are able to compute the cohomology of this extension and its dual in lemma \ref{Ia} and lemma \ref{aKac}. We also show in corollary \ref{aKac2} that $\omega^0(V) = 0$ for the extension $V$ between ${\mathbf{1}}$ and $Ber \otimes S^{n-1}$. The other $n$ nontrivial extensions of ${\mathbf{1}}$ (listed in lemma \ref{ext}) are studied in section \ref{hooks}. We realize these extensions as quotients of the mixed tensor $R(n^n)$ studied in section \ref{sec:n^n}. The key proposition \ref{trivialextension} shows that for any of our nontrivial extensions $V$ the zero degree part $\omega^0$ of the induced map $\omega(q_V): \omega(V) \to \omega({\mathbf{1}})$ vanishes, contradicting our analysis in the resum\'{e} of section \ref{strictmorphisms}, hence $Z \simeq {\mathbf{1}}$.
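As an aside before turning to Kac modules: the combinatorial counts used in section \ref{sec:primitive} can all be checked by brute force for small $n$, namely the dimension identity behind the dual Cauchy decomposition of $\Lambda^\bullet(M_{nn}(k))$, the $2^n$ symmetric diagrams in the $n\times n$ box, the ${n\choose i}$ self-transposed weights of index $i$, and the Hilbert series $\prod_{\nu=1}^n(1+t^{1-2\nu})$. The following Python sketch is our own illustration (all function names are ad hoc; the index is computed as the size of the Durfee square, with $ind(0)=0$):

```python
from fractions import Fraction
from itertools import combinations, combinations_with_replacement
from math import comb

def hilbert_poly(n):
    """Expand prod_{nu=1}^{n} (1 + t^{1-2*nu}) as {exponent: coefficient}."""
    coeffs = {0: 1}
    for nu in range(1, n + 1):
        new = {}
        for e, c in coeffs.items():
            for shift in (0, 1 - 2 * nu):
                new[e + shift] = new.get(e + shift, 0) + c
        coeffs = new
    return coeffs

def box_partitions(n):
    """All partitions lambda with n >= lambda_1 >= ... >= lambda_n >= 0."""
    return [tuple(c) for c in combinations_with_replacement(range(n, -1, -1), n)]

def conjugate(lam, n):
    """Transposed partition, padded to n parts (valid since lambda_1 <= n)."""
    return tuple(sum(1 for part in lam if part >= j) for j in range(1, n + 1))

def gl_dim(lam, n):
    """Weyl dimension formula for the irreducible GL(n)-module with highest weight lam."""
    d = Fraction(1)
    for i in range(n):
        for j in range(i + 1, n):
            d *= Fraction(lam[i] - lam[j] + j - i, j - i)
    return int(d)

def index(lam):
    """ind(lambda): size of the Durfee square, with ind(0) = 0."""
    return max([i for i in range(1, len(lam) + 1) if lam[i - 1] >= i], default=0)

for n in range(1, 5):
    P = box_partitions(n)
    sym = [lam for lam in P if lam == conjugate(lam, n)]
    # dual Cauchy: sum of dim(rho_lambda) * dim(rho_{lambda*}) = dim of the
    # Grassmann algebra of the n x n matrices
    assert sum(gl_dim(l, n) * gl_dim(conjugate(l, n), n) for l in P) == 2 ** (n * n)
    # exactly 2^n symmetric diagrams fit into the n x n box
    assert len(sym) == 2 ** n
    # binom(n, i) self-transposed weights of index i, matching the number of
    # monomials with i distinct factors among f_1, f_3, ..., f_{2n-1}
    for i in range(n + 1):
        assert sum(1 for lam in sym if index(lam) == i) == comb(n, i)
        assert len(list(combinations(range(1, 2 * n, 2), i))) == comb(n, i)
    # the Hilbert series prod (1 + t^{1-2 nu}) has 2^n monomials counted
    # with multiplicity (set t = 1), one for each symmetric diagram
    assert sum(hilbert_poly(n).values()) == 2 ** n
```

Each assertion encodes one of the counting statements proved above; the only external input is the Weyl dimension formula for $\dim \rho_\lambda$.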
The constituents of the Kac module $V({\bf 1}) \in {\mathcal R}_n$ are \cite{Brundan-Stroppel-1}, thm. 5.2, $$L_a = Ber^{-a} \otimes [a,...,a,0,...,0] \ \text{ for } \ a=0,...,n \ ,$$ where the last entry equal to $a$ is at position $i=n-a$. Therefore $Ber^a \otimes L_a$ is basic and thus has cohomology concentrated in degree zero, hence the cohomology of $L_a$ is concentrated in degree $-a$ and $$ H^{-a}(L_a) \ \cong \ I_a \ \oplus \ I_{a-1} \quad , \quad a=0, 1, \ ...\ , n $$ where $I_{-1}:=I_n:=0$ and $$ I_a \ := \ Ber^{-a-1} \otimes [a+1,...,a+1,0,...,0] $$ (with $n-a-1$ entries $a+1$ and $a$ entries $0$). Notice $I_1^\vee \cong Ber \otimes S^{n-1}$ and $I_0={\bf 1}$, $I_1=[0,..,0,-2], ... , I_{n-1}=Ber^{-n}$. For the cyclic quotient $Q_a$ of $V({\bf 1})$ with socle $L_a$ this implies inductively \begin{lem}\label{Qa} The natural quotient map $Q_a \to {\bf 1}$ induces an isomorphism $H^0(Q_a)\cong H^0({\bf 1}) \cong {\bf 1}$ and $$ H^{-\nu}(Q_a) = \begin{cases} I_\nu \oplus I_{\nu -1} & \text{$\nu=0,...,a$}, \\ 0 & \text{otherwise}. \end{cases} \ .$$ \end{lem} Notice $Q_a=V({\bf 1})$ for $a=n$ and $Q_a={\bf 1}$ for $a=0$. Similarly to the proof of the last lemma, for $K_a=Ker(V({\bf 1})\to Q_a)$ and $a \leq n-1$ we obtain exact sequences $$ 0 \to H^\bullet(K_a) \to H^\bullet(V({\bf 1}) ) \to H^\bullet(Q_a ) \to 0 \ .$$ Indeed, the cohomology $H^\bullet(K_a)$ is concentrated in degrees $\leq -a-1$, whereas the cohomology $H^\bullet(Q_a )$ is concentrated in degrees $\geq -a$.
We can view these as short exact sequences of homology complexes $$ 0 \to (H^\bullet(K_a),\overline\partial) \to (H^\bullet(V({\bf 1}) ),\overline\partial) \to (H^\bullet(Q_a ),\overline\partial) \to 0 \ .$$ The long exact homology sequence for the $H_{\overline\partial}$-homology together with $H_{\overline\partial}(H^{\nu}(V)) = H_D^{\nu}(V)$ (lemma \ref{abutment}) implies $$ \xymatrix{ H_D^{-\nu}(K_a) \ar[r] & H_D^{-\nu}(V({\bf 1})) \ar[r] & H_D^{-\nu}(Q_a) \ar[r]^-\delta & H_D^{-\nu-1}(K_a) \ar[r] & H_D^{-\nu-1}(V({\bf 1}))} \ $$ Since $H^\nu_D(V({\bf 1}))=0$ for all $\nu$, this gives $H^ \nu_D(Q_a) \cong H_D^{\nu -1}(K_a)$. Now $H_D^{\nu}(Q_a)$ vanishes unless $\nu \geq -a$ by lemma \ref{Qa}. The right hand side $H_D^{\nu - 1}(K_a)$ is concentrated in degrees $\nu \leq -a$. Hence the long exact homology sequence has at most one nonvanishing connecting morphism $\delta$, namely $\delta: H_D^{-a}(Q_a) \to H_D^{-a-1}(K_a)$ in degree $-a$. Hence $H_D^\nu(Q_a)=0$ for $\nu\neq {-a}$. Since there is a unique common irreducible module $I_{a}$ in the cohomology $H^{-1-a}(K_a)$ and $H^{-a}(Q_a)$ such that $d(Q_a) = \pm I_a$, we conclude \begin{lem}\label{Ia} For $0 \leq a \leq n-1$ we get $$ H_D^\nu(Q_a) = \begin{cases} I_{a} & \text{$\nu=-a$ }, \\ 0 & \text{otherwise}. \end{cases}$$ \end{lem} {\bf Remark}. This result shows that for the $H_D^{\nu}$-cohomology there do not exist long exact sequences attached to short exact sequences in ${\mathcal R}_n$. If they existed, then $Q_1/L_1 \cong {\bf 1}$ would imply $H_D^{-1}(L_1)\cong H_D^{-1}(Q_1)$, in contrast to $H_D^{-1}(L_1) \cong I_1 \oplus {\bf 1}$ and $H^{-1}_D(Q_1) \cong I_1$. \begin{cor}\label{killing} $H_D^0(V)=0$ for $V=Q_a$ and $(Q_a^*)^\vee$ for $1 \leq a\leq n-1$.
\end{cor} We now analyse, for $a=1$, the nontrivial extension $$ 0 \to [0,...,0,-1] \to Q_1 \to {\bf 1} \to 0 \ .$$ Since $L_1^\vee \cong [0,...,0,-1]^\vee \cong Ber \otimes S^{n-1}$, also $V=(Q_1^*)^\vee$ defines a nontrivial extension $$ 0 \to Ber\otimes S^{n-1} \to V \to {\bf 1} \to 0 \ .$$ \begin{lem} \label{aKac} $V= (Q_1^*)^\vee$ defines a nontrivial extension between ${\bf 1}$ and $Ber\otimes S^{n-1}$ in ${\mathcal R}_n$ such that in ${\mathcal R}_{n-1}$ the following holds $$ H_D^\nu(V) \cong H^\nu(V) = \begin{cases} Ber\otimes S^{n-1} & \text{$\nu=1$}, \\ 0 & \text{otherwise}. \end{cases} $$ \end{lem} {\it Proof}. The statement about $H_D^\nu(V)$ follows immediately from lemma \ref{Ia}. We now calculate $H^{\nu}(V)$. Since the cohomology of the anti-Kac module $(V({\bf 1})^*)^\vee$ vanishes, $0\to (K_1^*)^\vee \to (V({\bf 1})^*)^\vee \to V \to 0$ gives $$ H^{\ell -1}(V) \cong H^{\ell}((K_1^*)^\vee) \cong H^{-\ell}(K_1^*)^\vee \quad , \quad \text{ for all } \ \ell \ .$$ $K_1^*$ is filtered with graded components $L_2,...,L_n$ so that the cohomology of $K_1^*$ vanishes if the cohomology of the $L_i$ vanishes. Hence $H^{-\ell}(K_1^*)=0$ for $-\ell \notin \{-2,-3,...,-n\}$, and therefore $ H^\nu(V) = 0$ for all $ \nu \leq 0$ and all $\nu\geq n $. On the other hand $H^\nu(Ber\otimes S^{n-1})=0 $ for $\nu\neq 1$ and $H^1(Ber\otimes S^{n-1})= {\bf 1} \oplus (Ber\otimes S^{n-1})$. Since $H^\nu(V)=0$ whenever $H^\nu({\bf 1})=0$ and $H^\nu(Ber\otimes S^{n-1})=0$, we conclude $H^\nu(V)=0$ unless $\nu=1$. \qed Applying $(n-1)$ times the functor $DS$ to $DS(V)\in {\mathcal R}_{n-1}$, the last lemma gives \begin{lem} If we apply $n$ times the functor $DS$ to $V= (Q_1^*)^\vee$ in ${\mathcal R}_n$, we obtain that $$ DS\circ DS \circ \cdots \circ DS(V) \ = \ \bigoplus_{\nu=0}^{n-2} \ k[-1-2\nu] \ $$ in ${\mathcal R}_0$ is concentrated in the degrees $1,3,\cdots, 2n-3$.
\end{lem} The Leray type spectral sequences therefore imply the following result \begin{cor} \label{aKac2} For the module $V= (Q_1^*)^\vee$ in ${\mathcal R}_n$, defining a nontrivial extension between ${\bf 1}$ and $Ber\otimes S^{n-1}$, we have $$ \fbox{$ DS_{n,0}^\ell(V) \ = \ 0 \ \text{ and }\ \omega_{n,0}^\ell(V) \ = \ 0 \quad \text{ for } \ell \leq 0 $} \ \ .$$ \end{cor} \section{Strict morphisms}\label{strictmorphisms} \noindent Recall the functor $\omega: T_n \to svec_k$ defined by $\omega=\omega_{n,0}$. A morphism $q: V \to W$ in $T_n$ will be called a {\it strict epimorphism}, if the following holds \begin{enumerate} \item {\it $q$ is surjective}. \item {\it $\omega(q)$ is surjective}. \end{enumerate} For a module $Z$ in $T_n$, a semisimple module $L$ and a surjection $$ q: Z \twoheadrightarrow L $$ we make the following {\bf Assumption (S)}. {\it The induced morphism $$ \omega(q): \omega(Z) \to \omega(L) $$ is surjective, i.e. $q$ is a strict epimorphism.} \noindent Of course (S) holds for irreducible $Z$. In the special case $L={\bf 1}$ condition (S) is equivalent to $\omega(q)\neq 0$. We denote the cosocle of $Z$ by $C$. For any submodule $U\subseteq Kern(q)$ the map $q: Z\to L$ factorizes over the quotient $p: Z\to V=Z/U$ and induces the analogous morphism $q_V: V \to L \hookrightarrow cosocle(Z/U)$. Hence $$ q = q_V \circ p \quad , \quad \omega_{n,i}(q) = \omega_{n,i}(q_V) \circ \omega_{n,i}(p) \ ,$$ which implies: $\omega_{n,i}(q)$ is surjective $\Longrightarrow \omega_{n,i}(q_V)$ is surjective. For $i=0$ thus \begin{itemize} \item {\it If $Z$ is indecomposable, then $V$ is indecomposable}. \item {\it Condition (S) for $q$ implies condition (S) for $q_V$}. \item {\it $\omega(q_V)=0$ implies $\omega(q)=0$}. \end{itemize} {\it Indecomposable $Z$}. Now assume $Z$ is indecomposable and has upper Loewy length $m\geq 2$. If $m\geq 3$, there exists a submodule $U \subset Z$ such that $V=Z/U$ has Loewy length 2 and such that $V$ again is indecomposable and satisfies assumption (S).
So $V$ has Loewy length two and is indecomposable with cosocle $C$. Then $(V,q_V)$ is a nontrivial extension $$ 0 \to S \to V \to C \to 0 \ $$ with semisimple socle $S$ decomposing into irreducible summands $S_\nu$ and cosocle $C$. The map $q$ is obtained from a projection map $pr_L:C \to L$ by composition with the canonical map $V \to C$. Since $V$ is indecomposable with cosocle $C$, all extensions $(V_\nu,q_\nu)$ obtained as pushouts $$ \xymatrix{0\ar[r] & \oplus_\nu S_\nu \ar@{->>}[d]\ar[r] & V \ar@{->>}[d]_{\pi_\nu}\ar[r]^p & C \ar@{=}[d]\ar[r] & 0 \cr 0\ar[r] & S_\nu \ar[r] & V_\nu \ar[r]^{p_\nu} & C \ar[r] & 0 } $$ must be nontrivial extensions. All $V_\nu$ again satisfy condition (S): Indeed $Im(\omega(q)) \subseteq Im(\omega(q_\nu))$, as the following diagram shows $$ \xymatrix@+0.5cm{\omega(V)\ar[rrd]^{\omega(q)}\ar@{->>}[rd]_{\omega(p)} \ar[dd]_{\omega(\pi_\nu)}& & \cr & \omega(C)\ar[r]^-{\omega(pr_L)} & \omega(L)\cr \omega(V_\nu) \ar[rru]_{\omega(q_\nu)}\ar@{->}[ru]^{\omega(p_\nu)} & &} $$ The projection $pr_L: C \to L$ splits by an inclusion $i_L:L \to C$, since $C$ is semisimple. Hence $C \cong L \oplus L'$ so that $pr_L$ and $i_L$ are considered as the canonical projection resp. inclusion for the first summand. Since $V$ is indecomposable, $Ext^1(L,S_\nu)\neq 0$ holds for at least one $S_\nu$. Now divide by the submodule $U' \subset S$ generated by all $S_\nu$ with the property $Ext^1(L,S_\nu)=0$ and obtain $V'=V/U'$. Then divide by the maximal submodule $U''$ of $L'$ that splits in $V'$. Then $V'/U''$ is indecomposable and the map $q$ factorizes over this quotient and satisfies condition (S). {\it Resum\'e}. Suppose $Z$ is indecomposable but not irreducible, $q:Z\to L$ satisfies condition (S), the cosocle of $Z$ is $C=L \oplus L'$.
Then there exists a quotient $V$ of $Z$ and a quotient $\tilde L$ of $L'$ such that $$ \xymatrix{ 0 \ar[r] & S \ar[r] & V \ar[r]^-p & L \oplus \tilde L \ar[r] & 0 } $$ with \begin{itemize} \item $V$ is indecomposable, \item $S$ is irreducible such that $Ext^1(L,S)\neq 0$ and $Ext^1(\tilde L,S)\neq 0$, \item the map $q=pr_L \circ p$ satisfies condition (S). \end{itemize} The irreducible representations $X\not\cong {\bf 1}$ with the property $Ext^1(X,S)\neq 0$ will be called descendants of $S$. In the situation of the resum\'e we get the extensions $E=E_S^L$ and $\tilde E= E_S^{\tilde L}$ defined by submodules of $V$. Hence $V/E_S^L \cong \tilde L$ and $V/E_S^{\tilde L} \cong L$ and we get the following exact sequences $$ \xymatrix{ \tilde L \ar@{=}[r] & \tilde L & \cr \tilde E \ar[u]\ar[r] & V \ar[u]\ar[r] & L \cr S \ar[r]\ar[u] & E \ar[r]\ar[u] & L \ar@{=}[u] } $$ One of the potential candidates for $S_\nu$ is the irreducible representation $L(\lambda -\mu)$ that appears in the second upper Loewy level of the Kac module $V(\lambda)$. Indeed this follows from lemma \ref{KAC}, since $H_D(V(\lambda))=0$. Since $Z_\nu$ is indecomposable, $Z_\nu$ is in this case a highest weight representation of weight $\lambda$. This is clear, because all weights of $Z_\nu$ are in $\lambda - \sum_{\alpha\in\Delta_n} \mathbb Z \cdot \alpha$. By corollary \ref{companion2} a highest weight representation $V$ contains a (nontrivial) highest weight subrepresentation $W$ of weight $\lambda-\mu$ only if $H_D(V)$ has trivial weight space $H_D(V)_{\overline \lambda}$. For $V=Z_\nu$ as above this gives a contradiction, if $S_\nu = L(\lambda-\mu)$ occurs in the socle of $Z$. Indeed, notice that $H_D(L)$ contains $L(\overline\lambda)$ by lemma \ref{stable}. By condition (S) then also $H_D(Z)$ contains $L(\overline\lambda)$. So by corollary \ref{companion2} $L(\lambda-\mu)$ is not contained in $Z_\nu$.
This proves \begin{lem}\label{minusmu} Suppose $Z$ is an (indecomposable) module with irreducible maximal atypical cosocle $L=L(\lambda)$. If $Z$ satisfies condition (S), then the second layer of the upper Loewy filtration of $Z$ does not contain the irreducible module $L(\lambda-\mu)$. \end{lem} A case of particular interest is $L={\bf 1}$. Fix some irreducible $S$ with the property $Ext^1_{{\mathcal R}_n}(S,{\bf 1})\neq 0$. In section \ref{hooks} we will show for $L={\bf 1}$ that $\omega^0(q_E)=0$ (lemma \ref{omeganull}). \section{The module $R((n)^{n})$}\label{sec:n^n} \noindent We describe a certain maximal atypical mixed tensor for $n\geq 2$. We recall some terminology from \cite{Brundan-Stroppel-1}. Given weights $\lambda, \mu \sim \alpha$ in the same block one can label the cup diagram $\lambda$ resp. the cap diagram $\mu$ with $\alpha$ to obtain $\underline{\lambda}\alpha$ resp. $\alpha\overline{\mu}$. These diagrams are by definition consistently oriented if and only if each cup resp.\ cap has exactly one $\vee$ and one $\wedge$ and all the rays labelled $\wedge$ are to the left of all rays labelled $\vee$. Set $\lambda \subset \alpha$ iff $\lambda \sim \alpha$ and $\underline{\lambda} \alpha$ is consistently oriented. A crossingless matching is a diagram obtained by drawing a cap diagram underneath a cup diagram and then joining rays according to some order-preserving bijection between the vertices. Given blocks $\Delta, \Gamma$ a $\Delta \Gamma$-matching is a crossingless matching $t$ such that the free vertices (not part of cups, caps or lines) at the bottom are exactly at the positions of the vertices labelled $\circ$ or $\times$ in $\Delta$; and similarly for the top with $\Gamma$. Given a $\Delta \Gamma$-matching $t$ and $\alpha \in \Delta$ and $\beta \in \Gamma$, one can label the bottom line with $\alpha$ and the upper line with $\beta$ to obtain $\alpha t \beta$.
$\alpha t \beta$ is consistently oriented if each cup resp.\ cap has exactly one $\vee$ and one $\wedge$ and the endpoints of each line segment are labelled by the same symbol. Notation: $\alpha \rightarrow^t \beta$. For $t$ a crossingless $\Delta \Gamma$-matching and $\lambda \in \Delta, \ \mu \in \Gamma$ label the bottom and the upper line as usual. The \textit{lower reduction} $red(\underline{\lambda}t)$ is the cup diagram obtained from $\underline{\lambda}t$ by removing the bottom number line and all connected components that do not extend up to the top number line. \begin{thm} \cite{Brundan-Stroppel-5}, Thm 3.4. and \cite{Brundan-Stroppel-2}, Thm 4.11: In $K_0({\mathcal R}_n)$ the mixed tensor $R(\lambda)$ attached to the bipartition $\lambda$ satisfies \[ [ R(\lambda)] = \sum_{ \mu \subset \alpha \rightarrow^t {\mathbf{1}}, \ red(\underline{\mu}t) = \underline{{\mathbf{1}}} } [ L(\mu) ]\] where $t$ is a fixed matching determined by $\lambda$ between the block $\Gamma$ of ${\mathbf{1}}$ and the block $\Delta$ of $\lambda^{\dagger}$ \cite{Brundan-Stroppel-5}, 8.18. If $L(\mu)$ is a composition factor of $R(\lambda)$, its graded composition multiplicities are given by \[ \sum_{\mu} (q + q^{-1})^{n_{\mu}} [L(\mu)]\] where $n_{\mu}$ is the number of lower circles in $\underline{\mu}t$. \end{thm} \begin{lem} The module $R= R((n)^{n})$ in ${{\mathcal R}}_{n+r+1}$, $r \geq 0$, has Loewy length $2n+1$ with socle and cosocle equal to ${\mathbf{1}}$. We have $DS(R(n^n)) = R(n^n)$ for $r \geq 1$. If $r=0$, $DS(R) = P({\mathbf{1}})$. $R$ contains ${\mathbf{1}}$ with multiplicity $2^{2n}$. It contains the irreducible module $L(h) = [n,1,\ldots,1,0,\ldots,0]$ (with $1$ occurring $n-1$ times) in the second Loewy layer. The multiplicity of $L(h)$ in $R$ is $2^{2(n-1)}$. It contains the module $[n,n,\ldots,n,0,\ldots,0]$ as the constituent of highest weight in the middle Loewy layer with multiplicity 1.
It does not contain the modules $BS^{n-1} = [n,1,\ldots,1]$, $BS^n = [n+1,1,\ldots,1]$, $[n,1,\ldots,1,-1]$ and $[n,1,\ldots,1,-1,\ldots,-1]$ (with $1$ occurring $n-1$ times) as composition factors. \end{lem} {\it Proof}. The Loewy length of a mixed tensor is $2d(\lambda)+1$ (where $d(\lambda)$ is the number of caps) and $d((n-1)^{n-1}) = n-1$ \cite{Heidersdorf-mixed-tensors}. The composition factors of $R$ are given as a sum $\sum_{\mu} (q + q^{-1})^{n_{\mu}} [L(\mu)]$. For our choice of $\lambda = (n-1)^{n-1}$ the matching is given by \cite{Heidersdorf-mixed-tensors} (picture for $n=4$) \begin{center} \begin{tikzpicture} \draw (-6,0) -- (6,0); \foreach \x in {} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \begin{scope} [yshift = -3 cm] \draw (-6,0) -- (6,0); \foreach \x in {} \draw (\x-.1, .2) -- (\x,0) -- (\x +.1, .2); \foreach \x in {} \draw (\x-.1, -.2) -- (\x,0) -- (\x +.1, -.2); \foreach \x in {} \draw (\x-.1, .1) -- (\x +.1, -.1) (\x-.1, -.1) -- (\x +.1, .1); \end{scope} \draw [-,black,out=270,in=270](-1,0) to (0,0); \draw [-,black,out=270,in=270](-2,0) to (1,0); \draw [-,black,out=270,in=270](-3,0) to (2,0); \draw [-,black,out=90,in=90](-1,-3) to (0,-3); \draw [-,black,out=90,in=90](-2,-3) to (1,-3); \draw [-,black,out=90,in=90](-3,-3) to (2,-3); \draw [-,black,out=270,in=90](-4,0) to (-4,-3); \draw [-,black,out=270,in=90](-5,0) to (-5,-3); \draw [-,black,out=270,in=90](3,0) to (3,-3); \draw [-,black,out=270,in=90](4,0) to (4,-3); \end{tikzpicture} \end{center} with $n$ caps and where the rightmost vertex in a cap is at position $n$. The irreducible module in the socle and cosocle is easily computed from the rules of section \ref{stable0}.
The weight \[h =(n,1,\ldots,1,0|0,-1,\ldots,-1,n-n)\] is easily seen to satisfy $h \rightarrow^t {\mathbf{1}}, \ red(\underline{h}t) = \underline{{\mathbf{1}}}$, hence occurs as a composition factor. The number of lower circles in the lower reduction $\underline{h}t$ is $n-1$, hence $L(h)$ occurs with multiplicity $2^{2(n-1)}$. If we number the Loewy layers starting with the socle by $1,\ldots,2n+1$, $L(h)$ occurs in the $2k$-th Loewy layer ($k=1,\ldots,n$) with multiplicity $\binom{n-1}{k-1}$. Likewise for ${\mathbf{1}}$ with $n_{{\mathbf{1}}} = n-1$. We note: A weight $\mu$ can only satisfy $red(\underline{\mu}t) = \underline{{\mathbf{1}}}$ if the vertices $-n, -n-1,\ldots, -n-r$ (the first vertices left of the caps) are labelled by $\vee$. Hence: \begin{itemize} \item $BS^{n-1}$ does not occur as a composition factor. The vertex $-n$ is labelled by $\wedge$. \item $[n,1,\ldots,1,-1]$ does not occur as a composition factor. The vertex $-n$ is labelled by $\wedge$. \item $[n+1,1,\ldots,1]$ does not occur as a composition factor since all composition factors $[\mu_1,\ldots,\mu_n]$ satisfy $\mu_1 \leq n$ since $[n,\ldots,n,0,\ldots,0]$ is the constituent of highest weight. \end{itemize} \qed \textbf{Remark}. In particular the constituent ${\mathbf{1}}$ occurs with the same multiplicity as in $P({\mathbf{1}}) \in {\mathcal R}_n$. \textbf{Remark}. The module $R(n^n)$ can be obtained as follows. Let $\{ n^n \}$ be the covariant module to the partition $(n^n)$ and $\{ n^n\}^{\vee}$ its dual. Then $R(n^n)$ is the projection onto the maximal atypical block of $\{ n^n \} \otimes \{ n^n\}^{\vee}$. \textbf{Example}. For $Gl(3|3)$ the Loewy structure of the module $R(2^2)$ is \[ \begin{pmatrix} [0,0,0] \\ [1,0,0] \oplus [2,1,0] \\ [2,0,0] \oplus [2,-1,-1] \oplus [0,0,0] \oplus [0,0,0] \oplus [1,1,0] \oplus [2,2,0] \\ [1,0,0] \oplus [2,1,0] \\ [0,0,0] \end{pmatrix} .\] \section{The basic hook representations $S$}\label{hooks} {\it The case $L={\bf 1}$}.
Suppose $Z$ has cosocle ${\bf 1}$ and the projection $q:Z\to {\bf 1}$ satisfies condition (S). If $Z$ is not simple, we constructed objects $V_\nu$ with cosocle ${\bf 1}$ and simple socle $S_\nu=Ker(q_\nu)$. In this situation $Ext^1_{{\mathcal R}_n}({\bf 1}, S_\nu)\neq 0$. \begin{prop}\label{trivialextension} For any nontrivial extension $$ \xymatrix{ 0 \ar[r] & S_\nu \ar[r] & V \ar[r] & {\bf 1} \ar[r] & 0} $$ the vector space $ \omega_{n,0}^0(V)$ is zero (for simple $S_\nu$). Hence $\omega(q): \omega(V) \to \omega({\bf 1})$ is the zero map. \end{prop} For the proof we use several lemmas. Finally lemma \ref{omeganull} proves the proposition. \begin{lem}\label{ext} Up to isomorphism there are $n+1$ irreducible modules $L$ in ${\mathcal R}_n$ such that $Ext^1({\bf 1},L)\neq 0$. They are \begin{enumerate} \item $L_n(n)=Ber_n\otimes S^{n-1}$ and \item its dual $L_n(n)^\vee \cong [0,..,0,-1]$, and for \item $i=1,..,n-1$ the basic selfdual representations $$L_n(i)=[i,1,\cdots,1,0,\cdots,0]$$ (with $n-i$ entries $0$). \end{enumerate} In all cases $\dim(Ext^1(L,{\bf 1}))=1$. Furthermore $$DS_{n,j}(L_n(i)) = L_{j}(i)$$ holds for $i < j \leq n$ and $$ \fbox{$ DS_{n,i}(L_n(i)) = L_{i}(i) \oplus L_{i}(i)^\vee \oplus Y $} \ $$ where $Y\not\cong {\bf 1}$ is an irreducible module with $Ext^1({\bf 1},Y)=0$ and sector structure $$ [\vee_{-n},\wedge_{-n+1}]\wedge_{-n+2} [-n+3,...,n-2]\wedge_{n-1} [\vee_n,\wedge_{n+1}] \ .$$ \end{lem} {\bf Example}. $L_n(1)=S^1$. {\it Proof}. $L^*\cong L$ for irreducible objects $L$ implies $Ext^1({\bf 1},L) \cong Ext^1(L,{\bf 1})$. Furthermore $Ext^1(L,{\bf 1}) \cong Ext^1((L^*)^\vee,{\bf 1})$ and $L=L^*$, hence $$Ext^1(L,{\bf 1}) \cong Ext^1(L^\vee,{\bf 1})\ .$$ By \cite{Brundan-Stroppel-2}, cor.\ 5.15, for $L=L(\lambda)$ $$\dim Ext^1(L(\lambda),{\bf 1}) = \dim Ext^1(V(\lambda),{\bf 1}) + \dim Ext^1(V(0),L(\lambda)) \ $$ holds.
Since ${\bf 1}$ is a Kostant weight, there exists a unique weight $\lambda$ characterized by $\lambda \leq 0$ (Bruhat ordering) and $l(\lambda,0)=1$ in the notations of loc. cit. lemma 7.2, such that $\dim Ext^1(V(\lambda),{\bf 1})\neq 0$. One easily shows $L(\lambda) \cong [0,..,0,-1]$. On the other hand $ \dim Ext^1(V(0),L(\lambda)) \neq 0$ implies $0 < \lambda$ (see the explanations preceding loc. cit. (5.3) and loc. cit. lemma 5.2.(i)). Given a pair of adjacent positions $i,i+1$, we write a weight $\rho$ in $\Lambda^{\vee,\wedge}$ if the labels of $\rho$ at $i,i+1$ are $\vee$ at $i$ and $\wedge$ at $i+1$. Then lemma 5.2(ii) of loc. cit. gives $$ \dim Ext^1_{{\mathcal R}_n}(V({\rho}),L(\lambda)) = \begin{cases} \dim Ext^1_{{\mathcal R}_{n-1}}(V(\rho'),L(\lambda')) & \text{ if } \lambda \in \Lambda^{\vee,\wedge} \cr \dim Hom_{{\mathcal R}_n}(V(\rho''),L(\lambda)) & \text{ otherwise } \cr \end{cases} $$ Here $\rho',\lambda'$ are obtained from $\rho,\lambda$ by deleting the labels at $i,i+1$, and $\rho''$ is obtained by transposing the labels at $i,i+1$. This shows our assertion, since for $$L(\rho)={\bf 1}$$ there is a unique pair of such neighbouring indices for $$ [\vee_{-n+1},...,\vee_0,\wedge_1,...,\wedge_n] \ ,$$ namely at the position $(i,i+1)=(0,1)$. We now assume $n\geq 2$. Then switching this pair gives $L_n(1)$ below. Freezing then also $(-1,.,.,2)$ gives $L_n(2)$ and so on. Hence applying this lemma of loc. cit. several times will prove our first claim. Indeed, as long as we freeze fewer than $n-2$ pairs, we end up for every $j$ from $1,...,n-1$ with a representation $L_n(j)$.
It has only one sector $$ [\vee_{1-n},...,\vee_{-j-1}[\vee_{-j}\wedge_{-j+1}][\vee_{-j+2},...,\vee_0,\wedge_1,...,\wedge_{j-1}][\vee_j,\wedge_{j+1}]\wedge_{j+2},...,\wedge_n] \ .$$ In addition, if we freeze $n-1$ pairs we end up with $L_n(n)$ with the sector structure $$ [\vee_{2-n},\vee_{3-n}, ...., \wedge_{n-2},\wedge_{n-1}][\vee_n,\wedge_{n+1}] \ .$$ Indeed $L_n(n) \cong Ber_n \otimes S^{n-1}$. The remaining assertions now follow from theorem \ref{mainthm}, since $L_{n+1}(n)$ has sectors $$S_1S_2S_3 =[\vee_{-n},\wedge_{-n+1}][-n+2,...,n-1][\vee_n,\wedge_{n+1}]\ .$$ Hence $DS(L_{n+1}(n)) = (Ber\otimes S^{n-1}) \oplus (Ber\otimes S^{n-1})^\vee \oplus Y$ for $Y$ with sector structure $[\vee_{-n},\wedge_{-n+1}][-n+3,...,n-2]\wedge_{n-1}[\vee_n,\wedge_{n+1}]$. \qed {\it Basic cases}. For a nontrivial extension $$ \xymatrix{ 0 \ar[r] & L_n(i) \ar[r] & V \ar[r] & {\bf 1} \ar[r] & 0 } $$ first suppose $S=L_n(i)$ is {\it basic}, so $i\in \{1,...,n-1\}$. Since $(L_n(i)^*)^\vee \cong L_n(i)^\vee \cong L_n(i)$ for $i<n$, $(V^*)^\vee$ again defines a nontrivial extension $$ \xymatrix{ 0 \ar[r] & L_n(i) \ar[r] & (V^*)^\vee \ar[r] & {\bf 1} \ar[r] & 0 } \ .$$ We now use $DS(L_n(i))=L_{n-1}(i)$ for $1\leq i<n-1$. In $T_{n-1}$ the induced long exact sequence $$ \xymatrix{ H^{-1}({\bf 1}) \ar[r] & H^0(S) \ar[r] & H^0(V) \ar[r] & H^0({\bf 1}) \ar[r] & H^1(S) } $$ remains exact, since $H^\ell(S)=L_{n-1}(i)$ for $\ell=0$ and is zero otherwise and similarly $H^\ell({\bf 1})={\bf 1}$ for $\ell=0$ and is zero otherwise. 
In other words for {\it basic} $S$ we obtain from the given extension in ${\mathcal R}_n$ an exact sequence in ${\mathcal R}_{n-1}$ $$ \xymatrix{ 0 \ar[r] & L_{n-1}(i) \ar[r] & DS(V) \ar[r] & {\bf 1} \ar[r] & 0 } \ .$$ Repeating this $n-i$ times we obtain an exact sequence $$ \xymatrix{ 0 \ar[r] & L_{i}(i) \oplus L_i(i)^\vee \oplus Y \ar[r] & DS_{n,i}(V) \ar[r] & {\bf 1} \ar[r] & 0 } \ .$$ Since $Ext^1({\bf 1},Y)=0$ this implies $$ DS_{n,i}(V) = E \oplus Y $$ for some selfdual module $E$ defining an extension between ${\bf 1}$ and $L_{i}(i) \oplus L_i(i)^\vee$. We claim that this exact sequence does not split in ${\mathcal T}_i$. \begin{prop} Suppose $r$ is an integer $\geq 0$. For an indecomposable module $V$ defining a nontrivial extension between ${\bf 1}$ and $L_{n+1+r}(n)$ in ${\mathcal R}_{n+r+1}$, the object $(DS)^{\circ r+1} (V)$ decomposes into the direct sum of the irreducible module $Y$ from above and an indecomposable extension module $E$ in ${\mathcal R}_n$. \end{prop} {\it Proof}. Note that any two such indecomposable extensions define isomorphic modules $V$, since the relevant $Ext$-groups are one-dimensional. We assume $r=0$ for simplicity. Since the constituents $L_{n+1}(n)$ and ${\bf 1}$ of $V$ are basic, this implies $DS(V) = H^0(V)=Y\oplus E$. If the module $E$ is not indecomposable, it is semisimple (for this use Tannaka duality). We proceed as follows: For the mixed tensor $R=R_{n^{n}}$ in ${\mathcal R}_{n+1}$ we know that its image $DS(R_{n^n})$ is the projective hull $P({\bf 1})$ of ${\bf 1}$ in ${\mathcal R}_n$ and $P({\bf 1})$ is an indecomposable module with top ${\bf 1}$. The module $R_{n^{n}}$ admits as quotient an indecomposable module $V$ defining a nontrivial extension between ${\bf 1}$ (the top of $R$) and the module $L_{n+1}(n)$ (which sits in the second layer of the Loewy filtration of $R$). Hence $R/K \cong V$ for some submodule $K$ of $R$. 
We claim that $$ \xymatrix{ 0 \ar[r] & H^0(K) \ar[r]^i & H^0(R) \ar[r]^p & E \oplus Y \ar[r] & H^1(K) \ar[r] & 0 } $$ is exact and $H^\nu(K)=0$ for $\nu\neq 0,1$. For this use $H^\bullet(V)=H^0(V)$ and $H^\nu(R)=P({\bf 1})$ for a unique $\nu$. If $\nu\neq 0$, then $H^\nu(K) \to H^\nu(R)=P({\bf 1})$ would be surjective and therefore $H^\nu(K) = P({\bf 1}) \oplus ?$. We exclude this later. So suppose for the moment $\nu=0$. The image of $p$ cannot contain the irreducible module $Y\not\cong {\bf 1}$, since the top of $P({\bf 1})$ is ${\bf 1}$. If $E$ splits, it is semisimple. Then the image of $p$ cannot contain $E$ either, since again this would contradict that $P({\bf 1})$ has top ${\bf 1}$. Therefore the image of $p$ is ${\bf 1}$ or zero, if $E$ splits. This leads to a contradiction: Look at all constituents $X$ of $R$ with $Ber \otimes S^{n-1}$ in $H^\nu(X)$ for $\nu=-1,0,1$. These $X$ are isomorphic to the following irreducible modules $X_{-1},X_0,X_1$ with $Ber \otimes S^{n-1}$ occurring in $H^i(X_i)$ respectively: the basic module $X_0=L_{n+1}(n)$ with sector structure $[-n,-n+1][-n+2,...,n-1][n,n+1]$ and $X_1$ with sector structure $[1-n[2-n,...,n-1][n,n+1]n+2]$ and $X_{-1}$ with sector structure $[-n-1,-n]\boxminus [2-n,...,n-1][n,n+1]$. Then $Ber\otimes S^{n-1}$ occurs in $H^i(X_i)$ for $i=0,\pm 1$. Let $F^i(.)$ denote the descending Loewy filtration. For a module $Z$ let $m(Z)$ denote the number of Jordan-H\"older constituents of $Z$ that are isomorphic to $Ber\otimes S^{n-1}$. Next we use that for all $i$ $$ \fbox{$ X_1,X_{-1}\ \text{ do not occur in }\ gr^i_F(R) $} \ .$$ Indeed according to section \ref{sec:n^n} all irreducible constituents $[\lambda]$ satisfy the property $\lambda_{n+1}=0$ except for one given by $[n,-n+1,...,-n+1]$. Therefore $m(H^{\pm 1}(gr^i_F(R)))=0$ and hence $m(H^1(F^i(R)))=0$.
Since also $m(H^{-1}(gr^i_F(R)))=0$, the exact sequence $$ H^{-1}(gr^i_F(R)) \to H^0(F^{i}(R)) \to H^0(F^{i-1}(R)) \to H^0(gr^i_F(R)) \to H^1(F^{i}(R)) $$ implies $ m(H^0(F^{i}(R))) = m(H^0(F^{i-1}(R))) + m(H^0(gr^i_F(R)))$. For small $i$ we have $F^i(R)=R$ and therefore $$ m(H^0(R)) = \sum_i m(H^0(gr^i_F(R))) \ .$$ The same argument then applies for the submodule $K$ of $R$. Hence $$ m(H^0(K)) = m(H^0(R)) - 1$$ by counting the multiplicities of $X_0$ in $K$ resp. $R$. Hence the image of $p$ must contain $Ber\otimes S^{n-1}$ and hence $E$ is an indecomposable quotient of $P({\bf 1})$. Now let us address the assertion $\nu=0$ from above. If $\nu\neq 0$, then $H^\nu(K) \cong P({\bf 1}) \oplus ?$ gives a contradiction using the same counting argument. In the case $r>0$ one uses the same kind of argument. Again the extension defined by $V$ in ${\mathcal R}_{n+r+1}$ can be realized as a quotient of $R=R_{n^{n}}$ in ${\mathcal R}_{n+r+1}$. The argument is, mutatis mutandis, the same. {\it The non-basic cases}. For a nontrivial extension in $T_n$ of the form $$ \xymatrix{ 0 \ar[r] & L_n(n) \ar[r] & V \ar[r] & {\bf 1} \ar[r] & 0 } $$ we get a dual nontrivial extension $$ \xymatrix{ 0 \ar[r] & [0,...,0,-1] \ar[r] & (V^*)^\vee \ar[r] & {\bf 1} \ar[r] & 0 } \ .$$ In lemma \ref{Qa} and lemma \ref{Ia} we defined $Q_a$, which for $a=1$ defines a nontrivial extension between ${\bf 1}$ and $L_n(n)^\vee$. Since $\dim(Ext^1({\bf 1}, L_n(n)^\vee))=1$, we get $$ (V^*)^{\vee} \cong Q_1 \ .$$ By corollary \ref{aKac2} we get $ DS_{n,0}^\ell(V) = 0$ and $\omega_{n,0}^\ell(V) = 0$ for all $\ell \leq 0$. Similarly by duality $DS_{n,0}^{\ell}((V^*)^\vee)=0$ for $\ell \geq 0$. This implies $\omega_{n,0}^{\ell}((V^*)^\vee)=0$ for $\ell \geq 0$. Finally, consider the nontrivial extension $V_i$ between $\bf 1$ and $L_n(i)$ in ${\mathcal R}_n$ and the nontrivial extension $DS_{n,i}(V_i)$ in ${\mathcal R}_i$ from above.
It has the form $DS_{n,i}(V_i) = E \oplus Y$ for $$ 0 \to L_i(i) \oplus L_i(i)^\vee \to E \to {\bf 1} \to 0 \ .$$ The module $E = DS_{n,i}(V_i)/Y$ is the pullback of a nontrivial extension of ${\bf 1}$ by $L_i(i)$ $$ 0 \to L_i(i) \to E_1 \to {\bf 1} \to 0 $$ and of a nontrivial extension of ${\bf 1}$ by $(L_i(i))^{\vee}$ $$ 0 \to (L_i(i))^{\vee} \to E_2 \to {\bf 1} \to 0 \ .$$ Hence there exists an exact sequence $$ 0 \to DS_{n,i}(V_i)/Y \to E_1 \oplus E_2 \to {\bf 1} \to 0 \ $$ so that $$ \to DS_{i,0}^{-1}({\bf 1}) \to DS_{i,0}^0(DS_{n,i}(V_i)/Y) \to DS_{i,0}^0(E_1) \oplus DS_{i,0}^0(E_2) \to $$ is exact. Since $DS_{i,0}^0(E_1)=0$ and $DS_{i,0}^0(E_2)=0$ by corollary \ref{aKac2} and since $DS_{i,0}^{-1}({\bf 1})=0$, it follows that $DS_{i,0}^0(DS_{n,i}(V_i)/Y)=0$. Hence $\omega_{i,0}^0(DS_{n,i}(V_i))=\omega_{i,0}^0(Y)$. The Leray type spectral sequence $$ DS_{i,0}^p(DS_{n,i}^q(V_i)) \Longrightarrow DS_{n,0}^{p+q}(V_i) $$ degenerates, since $DS_{n,i}^q(V_i)=0$ for $q\neq 0$. Therefore also $\omega^{\bullet}_{n,i}(V_i) \cong DS^{\bullet}_{n,i}(V_i)$. One can now argue as in the proof of lemma \ref{insteps} to show $$ \omega_{n,0}^0(V_i) = \omega_{i,0}^0(DS_{n,i}(V_i))=\omega_{i,0}^0(Y)\ .$$ Since the map $$ \omega_{i,0}(q) : E \oplus Y \longrightarrow {\bf 1} $$ is trivial on the simple summand $Y\not\cong {\bf 1}$, the next lemma follows. \begin{lem}\label{omeganull} For every nontrivial extension $$\xymatrix{ 0\ar[r] & S \ar[r] & V \ar[r]^{q_V} & {\bf 1} \ar[r] & 0 }$$ of ${\bf 1}$ by a simple object $S$ in ${\mathcal R}_n$ the map $\omega^0(q_V)$ vanishes. \end{lem} The last lemma completes the proof of proposition \ref{trivialextension}. This implies the following main result. \begin{cor}\label{splitting1} Suppose $Z$ is indecomposable and $cosocle(Z)\cong {\bf 1}$. If the quotient map $q:Z \to {\bf 1}$ is strict, then $q: Z\cong {\bf 1}$.
\end{cor} {\it Remark.} A symmetric abelian tensor category in the sense of Deligne is semisimple if and only if every such quotient map $q:Z \to {\bf 1}$, for $Z$ with cosocle isomorphic to ${\bf 1}$, is an isomorphism. \begin{cor} For a nontrivial extension $V$ between ${\bf 1}$ and $L_n(n)$ or its dual $L_n(n)^\vee$ $$H_D^\nu(V)=0 \quad , \quad \nu\neq 1\ $$ holds, and hence the induced map $H_D(q): H_D(V) \to H_D({\bf 1})$ is trivial. \end{cor} \end{document}
Estimating genome-wide regulatory activity from multi-omics data sets using mathematical optimization
Saskia Trescher, Jannes Münchmeyer & Ulf Leser
BMC Systems Biology, volume 11, Article number: 41 (2017)
Gene regulation is one of the most important cellular processes, indispensable for the adaptability of organisms and closely interlinked with several classes of pathogenesis and their progression. Elucidation of regulatory mechanisms can be approached by a multitude of experimental methods, yet integration of the resulting heterogeneous, large, and noisy data sets into comprehensive and tissue or disease-specific cellular models requires rigorous computational methods. Recently, several algorithms have been proposed which model genome-wide gene regulation as sets of (linear) equations over the activity and relationships of transcription factors, genes and other factors. Subsequent optimization finds those parameters that minimize the divergence of predicted and measured expression intensities. In various settings, these methods produced promising results in terms of estimating transcription factor activity and identifying key biomarkers for specific phenotypes. However, despite their common root in mathematical optimization, they vastly differ in the types of experimental data being integrated, the background knowledge necessary for their application, the granularity of their regulatory model, the concrete paradigm used for solving the optimization problem and the data sets used for evaluation. Here, we review five recent methods of this class in detail and compare them with respect to several key properties. Furthermore, we quantitatively compare the results of four of the presented methods based on publicly available data sets. The results show that all methods seem to find biologically relevant information.
However, we also observe that the mutual result overlaps are very low, which contradicts biological intuition. Our aim is to raise further awareness of the power of these methods, yet also to identify common shortcomings and necessary extensions enabling focused research on the critical points. Gene regulation is one of the most important biological processes in living cells. It is indispensable for adapting to changing environments, stimuli, and developmental stage and plays an essential role in the pathogenesis and course of diseases. Mechanistically, the transcription of DNA into RNA is predominantly controlled by a complex network of transcription factors (TFs) (see Fig. 1). These proteins bind to enhancer or promoter regions adjacent to the genes they regulate [1], which may enhance or inhibit the recruitment of RNA polymerase and thereby activate or repress gene transcription [2]. Gene products can also be modified post-translationally via microRNAs (miRNAs) degrading the transcript or inhibiting their translation [3]. Besides, a multitude of other mechanisms influence gene regulation, such as chromatin remodelling [4], epigenetic effects [5], and compound-building of transcription factors [2]. Distortion of regulatory processes is implicated in various diseases [6, 7], especially in cancer [8, 9].
Fig. 1: Transcription of DNA into RNA. Transcription factors (TFs) bind to distal or proximal TF binding sites (TFBS), enhancing the binding of RNA polymerase and activating the transcription of DNA into RNA.
Due to this importance, many efforts have been devoted to the elucidation of human regulatory relationships and networks. Wide-spread experimental techniques are transcriptome measurements to quantify gene and transcription factor co-expression [10], chromatin immunoprecipitation (ChIP) on chips or followed by sequencing for identifying binding patterns of specific TFs [11], and bisulfite sequencing to find epigenetic signals of regulation [12].
Many large-scale datasets of such experiments have been published and are available in public repositories such as the Gene Expression Omnibus (GEO) [13], the Cancer Genome Atlas (TCGA) [14] or the Encyclopedia of DNA Elements (ENCODE) [15]. Computational methods are also used, for instance, to identify transcription factor binding sites (TFBS) [16] or to find known TFBS within the genome (e.g., [17, 18]). Several databases have been created which store relevant information, such as lists of binding motifs (TRANSFAC [19] or JASPAR [20]) or targets of regulatory miRNAs [21]. Such measurements and predictions are used by network reconstruction algorithms to predict regulatory relationships and regulatory networks [22]. A plethora of different methods have been proposed, ranging from purely qualitative methods [23] over simple statistical approaches [24] to more advanced probabilistic frameworks [25]. Early methods were plagued by insufficient data and a general scarcity of background knowledge, which led to rather unstable results [26]. This situation has changed dramatically over the last years, as results of more and more large screens have been made publicly available [27] and also the knowledge on principal regulatory relationships has increased [28, 29]. This, in turn, has increased the interest in methods which predict genome-wide networks using a systematic, unified, mathematical framework. Here, we review five rather recent methods and conduct a quantitative comparison of their results with the goal to identify their mutual strengths and weaknesses. They all have in common that they assume both the set of regulators (transcription factors or micro RNAs) to be known and the topology of the regulatory network to be given. By combining this background knowledge with specific omics data sets, especially transcriptome data, they try to infer the activity of regulators in a certain experimental condition or disease using mathematical optimization. 
All presented methods are global methods in the sense that they compute activities genome-wide (as much as represented by the underlying network), thus removing the shortcomings of local methods which ignore cross-talk between sub-models and global effects within samples. The methods predominantly produce a ranked list of regulators, sorted by their activity in a given group of samples; given that a multitude of biological influences is ignored during inference, especially kinetic and temporal effects, their goal cannot be to produce absolute snapshots of regulatory activity. We describe each method in detail and compare them with respect to the most important properties, such as the data being used, the method applied for deriving optimized activity values, or the evaluation performed to show effectiveness. We further implemented a quantitative comparison including four of the presented methods to objectively analyze their results. By contrast, we also include ARACNE [30] as a sixth method; this algorithm uses only local reasoning and requires no background knowledge, but is still rather popular. We describe in detail five methods which infer transcription factor activity from omics data sets using a background network of transcription factors and the genes they regulate. All use some form of mathematical optimization. To emphasize the common ground of these at-first-sight rather different methods, we explain their underlying models using a simple framework for defining the relationships of transcription factors and genes. This framework is presented first; it should be understood as a least common denominator, not as a proper method for network inference by itself. We then describe five recently published methods for genome-wide TF activity estimation as extensions or constraints to this general framework, namely the approach by Schacht et al. [31] (estimation of TF activity by the effect on their target genes), RACER [32], RABIT [33], ISMARA [34] and biRte [35].
Additionally, we contrast these more comprehensive methods with the local inference algorithm ARACNE [30], a popular tool for the de-novo reconstruction of gene regulatory networks. Key properties of all methods (input, mathematical model, computation, output) are summarized in Table 1.
Table 1: Overview of methods for estimating regulatory activity from transcriptome data comparing input data, modelling, computational aspects and outcome variables.
Mathematical framework
To combine regulatory networks and quantitative omics data and to thereby deduce regulatory activity, all methods described here use a genome-wide mathematical model. Sample-specific gene expression values g i,s , derived from one biological condition, i.e., grouped into a single class, for in total G genes and S samples need to be provided as input. The background regulatory network is represented as a directed graph where the nodes designate regulators and regulated entities (mostly TFs and genes, but also miRNAs, regulatory sites, or TF complexes) and directed edges indicate a regulatory relationship between the two connected nodes, for example the influence of a TF on the expression of a gene (see Fig. 2).
Fig. 2: General scheme of a TF – gene network where all T TFs are connected to each other and can regulate all of the G genes.
We will use the variable t for regulators, i for regulated entities, and b t,i for the strength of an edge from a TF/miRNA t to a gene i representing, for instance, a binding affinity. As an abstract framework for explaining the different methods we propose a simple linear model predicting gene expression \( \widehat{g_{i, s}} \) of gene i in sample s in terms of the activity of all T transcription factors β t,s , which regulate i, and the binding affinities b t,i . In contrast to Fig.
2, where TFs can influence each other, this model ignores TF – TF relations and feedback loops: $$ \widehat{g_{i, s}}={\displaystyle \sum_{t=1}^T}{\beta}_{t, s}{b}_{t, i} $$ Given this model and a set of quantitative measurements of gene expression g i,s , the goal of the mathematical optimization is to find parameters β such that the sum of squared errors of measured vs predicted gene expression over all genes and samples is minimized using a certain norm, for example the L2 norm: $$ \min {\displaystyle \sum_{i=1}^G}{\displaystyle \sum_{s=1}^S}{\left({g}_{i, s}-\widehat{g_{i, s}}\right)}^2 $$
Estimation of TF activity by the effect on their target genes [31]
The idea of this method is to use the expression levels of a TF's target genes to infer its integrated effect (see Fig. 3). The method uses expression data and database curated TF binding information as input whereby the TF – gene network is restricted to genes regulated by more than 10 TFs and TFs with at least 5 target genes. The model is closely related to the abovementioned general framework, only adding a term for the sample-specific effect of a TF. Specifically, the activity of a TF is modelled linearly by its cumulative effect on its target genes normalized by the sum of target genes or the TF's gene expression level:
Fig. 3: Flow chart of the approach by Schacht et al. The input data sets (marked in blue) are partly filtered and passed to a linear regression model (yellow) which calculates an activity value for each TF (green).
$$ \widehat{g_{i, s}}= c+{\displaystyle \sum_t}{\beta}_t{b}_{t, i}\left({\theta}_{a, t}{act}_{t, s}+{\theta}_{g, t}{g}_{t, s}\right) $$ where \( \widehat{g_{i, s}} \) denotes the predicted gene expression of gene i in sample s, c is an additive offset, β t describes the estimated activity of TF t and b t,i refers to the underlying strength of the relation between TF t and gene i reflecting the binding affinity.
The estimated effect of a TF in a certain sample is calculated via the switch-like term in parentheses, where either the activity definition \( {act}_{t, s}=\frac{{\displaystyle {\sum}_i}{b}_{t, i}{g}_{i, s}}{{\displaystyle {\sum}_i}{b}_{t, i}} \) or the gene expression of the TF itself g t,s is taken into account using the restrictions θ a,t , θ g,t ∈ {0, 1} and θ a,t + θ g,t = 1. This switch term represents a meta-parameter to find the best model and has no biological interpretation. The model outputs an activity value and the information which switch parameter is chosen for each TF of the reduced network. During the optimization, the sum of error terms (absolute value of the difference between predicted and measured gene expression) is minimized, which is achieved via mixed-integer linear programming using the Gurobi 5.5 optimizer. The authors of this method state that the activity definition (see above) was used in 95% of their test cases, but the switch-like combination of both terms yielded still better optimization results. In the paper, the optimization task is greatly simplified as the model is computed for each gene separately and allows only a maximum number of 6 regulating TFs. The TF – gene network indicating the strength of a relation between a TF and a gene is created for 1120 TFs using knowledge from the commercial MetaCore™ database, ChEA [36] and ENCODE [15]. Due to the restriction of the network mentioned above, the actual model is then based on 521 TFs and 636 target genes only. Evaluation of the results was performed using expression data from 59 cell lines of the NCI-60 panel [37, 38] and from melanoma cell lines ("Mannheim cohort") [39]. A sample-based leave-one-out and 10-fold cross validation of predicted and measured gene expression yielded Pearson correlation scores of about 0.6 for both data sets.
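The activity definition above is simply a binding-affinity-weighted average of target-gene expression. A minimal NumPy sketch on invented toy data (array names and values are ours, not taken from the paper's implementation):

```python
import numpy as np

# Toy data: 3 TFs, 4 genes, 2 samples (all values hypothetical).
b = np.array([[1.0, 0.5, 0.0, 0.2],   # b[t, i]: binding affinity of TF t for gene i
              [0.0, 1.0, 1.0, 0.0],
              [0.3, 0.0, 0.5, 1.0]])
g = np.array([[2.0, 1.5],             # g[i, s]: expression of gene i in sample s
              [1.0, 0.5],
              [0.5, 2.0],
              [1.5, 1.0]])

# act[t, s] = sum_i b[t, i] * g[i, s] / sum_i b[t, i]
act = (b @ g) / b.sum(axis=1, keepdims=True)   # one activity value per TF and sample
```

For TF 0 in sample 0 this computes (1.0·2.0 + 0.5·1.0 + 0.2·1.5) / 1.7, i.e. the expression of its targets weighted by how strongly it binds them.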
A gene set enrichment analysis of the target genes for TFs modelled by the activity definition yielded 64 significantly enriched concepts including cell cycle, immune response and cell growth for the data from the NCI-60 panel. Additionally, a t-test was computed between melanoma and other cell lines of the NCI-60 panel to find differentially expressed genes of melanogenesis. For the resulting genes, regulation models were built and used to predict gene expression in the melanoma cell line data set yielding good prediction performances.
RACER [32]
RACER (Regression Analysis of Combined Expression Regulation) aims to integrate generic cell-line data with sample-specific measurements using a two-stage regression (see Fig. 4). Firstly, sample-specific regulatory activities for TFs and miRNAs are calculated. Subsequently, general TF/miRNA – gene interactions are derived.
Fig. 4: Scheme of the RACER method. The input data sets (marked in blue) are passed to a two-step linear regression model (yellow) which calculates sample-specific activity values for each regulator and determines the most predominant regulators (green).
Compared to our general framework, RACER additionally includes miRNA binding information. It assumes a linear combination, which is not further justified, of the regulatory effects of TFs and miRNAs on the mRNA level. RACER can incorporate a variety of sample-specific data including mRNA and miRNA expression values, CNV and DNA methylation. Optimization is applied twice to reduce model complexity, where the method first infers sample-specific TF and miRNA activities and uses these, in a second step, to compute general TF/miRNA – gene interactions.
In the first regression step, mRNA, miRNA, CNV and DNA methylation data are used to calculate the sample specific activities: $$ \widehat{g_{i, s}}= c+{\theta}_{CNV, s} C N{V}_{i, s}+{\theta}_{DM, s} D{M}_{i, s}+{\displaystyle \sum_{t\ }}{\beta}_{t, s}\ {b}_{t, i} + {\displaystyle \sum_{mi\ }}{\beta}_{mi, s}\ {c}_{i, mi} miRN{A}_{mi, s} $$ where \( \widehat{g_{i, s}} \) denotes the predicted gene expression of gene i in sample s, c is an intercept, β t,s describes the estimated activity of TF t in sample s and b t,i is the TF – gene binding score for TF t and gene i. The parameter β mi,s stands for the estimated activity of miRNA mi in sample s and is multiplied by c i,mi , the number of conserved target sites on 3'UTR of the target gene i for miRNA mi, and by the expression level of miRNA mi in sample s. θ CNV,s (respectively θ DM,s ) are the regression parameters for CNV signals CNV i,s (respectively DNA methylation data DM i,s ). Using β t,s and β mi,s from the first regression step, TF – gene and miRNA – gene interactions across all samples are calculated in a second model: $$ \widehat{g_{i, s}} = \tilde{c}+{\tilde{\theta}}_{i, CNV} C N{V}_{i, s}+{\tilde{\theta}}_{i, DM} D{M}_{i, s}+{\displaystyle \sum_{t\ }}{\gamma}_{i, t}\ {\beta}_{t, s} + {\displaystyle \sum_{mi\ }}{\gamma}_{i, mi}\ {\beta}_{mi, s} $$ where the sums apply only to a number of selected TFs and miRNAs with nonzero binding signals b t,i > 0 and conserved target sites c i,mi > 0. The resulting parameters γ i,t and γ i,mi indicate the strength of a TF/miRNA – gene relationship across all samples. To obtain robust estimates, γ i,mi is additionally weighted by the averaged activities of the miRNA. In each of the two regression steps, the optimization criterion is to minimize the sum of squared errors with L1 penalty on the linear coefficients to induce a sparse solution and to set irrelevant parameters to zero after the fitting. 
This sparse LASSO solution is obtained through elastic-net regularized generalized linear models. A supplementary feature selection procedure comparing the full model to a restricted model leaving one TF or miRNA out provides the most predominant TF/miRNA regulators. TF binding scores are collected from the generic cell line of erythroleukemia cells K562 from ENCODE for 97 TFs and 16653 genes. Further, the number of conserved target sites on 3'UTR is taken from sequence-based information from TargetScan for 470 miRNAs and 16653 genes. The RACER method is implemented in R and publicly available under http://www.cs.utoronto.ca/~yueli/racer.html. The method was evaluated using expression data from an acute myeloid leukemia (AML) data set from TCGA with 173 samples [40] via a sample based 10-fold cross validation on the prediction of gene expression. To assess the quality of predictions, the Spearman rank correlation was calculated resulting in a reassuring value of approximately 0.6. Further, the full model was compared to models excluding one type of the input variables. The full model performed best and a substantial reduction of Spearman correlation was observed by omitting TF regulation (20%) and DNA methylation (5%). RACER also performed with competitive accuracy in predicting known miRNA – mRNA and TF – gene relationships compared to other methods like GenMiR++ [41] or ENCODE TF binding scores [15] using e.g., validated interactions from the MirTarBase [42] and knockdown studies. The feature selection procedure revealed 18 predominant transcriptional regulators in the AML dataset. Using their associated targets, a functional enrichment analysis showed that DNA repair and the tumor necrosis factor pathway were enriched. When applying this panel to cluster patients at different cytogenetic risks, the clustering pattern of the regulatory activities was largely consistent with the risk groups. 
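The L1-penalized least-squares fit at the heart of RACER's regression steps can be illustrated with a small coordinate-descent LASSO solver. This is a sketch on synthetic data (RACER itself uses elastic-net regularized generalized linear models in R; the data here are invented):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Minimize ||y - X w||^2 / (2n) + lam * ||w||_1 by cyclic coordinate descent."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # residual with feature j's own contribution removed
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r / n
            # soft-thresholding sets weakly supported coefficients exactly to zero
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

# Synthetic example: 200 "genes", 5 regressors, only two truly active.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0])
y = X @ w_true + 0.01 * rng.normal(size=200)
w = lasso_cd(X, y, lam=0.1)
```

The recovered `w` is sparse: the inactive regressors are driven to (near) zero, mirroring how RACER prunes irrelevant TF/miRNA parameters after fitting.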
Further, a literature survey on AML showed that many TF regulators among the top predictions had a role in leukemogenesis.
RABIT [33]
Regression Analysis with Background Integration (RABIT) is a method for finding expression regulators in cancer by a large-scale analysis across diverse cancer types. It integrates TF binding information with tumor profiling data to search for TFs driving tumor-specific gene expression patterns (see Fig. 5). It can be applied to predict cancer-associated RNA-binding protein (RBP) recognition motifs which are key components in the determination of miRNA function [43].
Fig. 5: Flow chart of the RABIT method. The input data sets (marked in blue) are passed to a linear regression model (yellow) which calculates sample-specific activity values for each regulator and determines general regulatory activities (green).
In contrast to our general framework, RABIT can, like RACER, make use of CNV and DNA methylation data, additionally integrating promoter CpG content and promoter degree information (total number of ChIP-seq peaks near the gene transcription start site), and takes RBP or TF binding information as regulatory input. The computational model consists of three steps (see Fig. 5). First, RABIT tests in each tumor whether the target genes, identified by the BETA method [44], show differential expression compared to the normal controls including a control for background effects from CNVs, promoter DNA methylation, promoter CpG content and promoter degree: $$ \widehat{g_i} = {\displaystyle \sum_f}{\theta}_f{B}_{f, i} + {\displaystyle \sum_t}{\beta}_t{b}_{t, i} $$ where \( \widehat{g_i} \) represents the predicted differential gene expression between tumor and normal samples in gene i, B includes values of the f different background factors for gene i, b contains RBP or TF binding information and θ and β are the respective regression parameter vectors.
The regression coefficients β are estimated by minimizing the squared difference between measured and predicted gene expression. The regulatory activity score for each TF/RBP is defined by a t-value (regression coefficient divided by standard error) and its significance by the corresponding t-test. If multiple profiles exist for the same TF from different conditions or cell lines, the profile with the highest absolute value of TF regulatory activity score is selected. In a second step, a stepwise forward selection is applied to find a subset of TFs among those screened in step one optimizing the model error. Lastly, TFs with insignificant cross-tumor correlation are removed from the results. Computationally, the regression coefficients are calculated via the efficient Frisch-Waugh-Lovell method. TF binding information is taken from 686 TF ChIP-seq profiles from ENCODE representing 150 TFs and 90 cell types. Additionally, recognition motifs for 133 RBPs and their putative targets are collected by searching recognition motifs over the 3'UTR regions [45]. An implementation of the RABIT method can be downloaded from http://rabit.dfci.harvard.edu/download. RABIT was applied to 7484 tumor profiles of 18 cancer types from TCGA using gene expression, somatic mutation, CNV and DNA methylation data. To systematically assess the results, the cancer relevance level of a TF was calculated as percentage of tumors with the TF target genes differentially regulated (averaged across all TCGA cancer types). A comparison to cancer gene databases, i.e., the NCI cancer gene index project [46], the Bushman Laboratory cancer driver gene list [47, 48], the COSMIC somatic mutation catalog [49] and the CCGD mouse cancer driver genes [50], showed a consistent picture. 
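RABIT's activity score for a regulator is a t-value, i.e. the regression coefficient divided by its standard error. A self-contained NumPy sketch on synthetic data (the design matrix, background factors and effect sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes = 300
# Design matrix: intercept, two background factors (e.g. CNV, CpG content),
# and one TF binding profile (all values hypothetical).
X = np.column_stack([np.ones(n_genes),
                     rng.normal(size=(n_genes, 2)),
                     rng.normal(size=n_genes)])
beta_true = np.array([0.1, 0.5, -0.3, 1.2])
y = X @ beta_true + rng.normal(scale=0.5, size=n_genes)

# Ordinary least squares, then coefficient / standard error per regressor.
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
dof = n_genes - X.shape[1]
sigma2 = ((y - X @ beta) ** 2).sum() / dof
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
t_scores = beta / se   # the last entry is the "activity score" of the TF
```

With the TF's true effect set to 1.2 and noise of 0.5, the TF's t-score comes out far above the background factors' noise level, which is the signal RABIT thresholds on.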
Further, RABIT's performance was compared to other regression models like LAR or LASSO, where RABIT had the best classification results when classifying all TFs into three categories by NCI cancer index and achieved a better cross-validation error and a shorter running time. The regulatory activity of RBPs showed that some alternative splicing factors could affect tumor-specific gene expression by binding to target gene 3'UTR regions. ISMARA [34] In contrast to the previous three methods and to our general framework, which directly score TFs or other regulators, ISMARA (Integrated System for Motif Activity Response Analysis) infers the activity of regulatory motifs (short nucleotide sequences) and thereby indirectly deduces the effects of TFs and miRNAs (see Fig. 6). ISMARA is a web service for which no parameter settings or specific processing of the input data (gene expression or ChIP-seq data) are necessary. It can also be used to calculate regulatory activity differences between samples and to consider replicates or data from time series. Fig. 6: ISMARA model scheme. The input data sets (marked in blue) are passed to a linear regression model (yellow) which calculates motif activities and determines associated regulators (green). ISMARA takes sample-specific measurements and information about regulatory motifs for TFs and miRNAs into account. Based on the input of gene expression data or chromatin state measurements, the input signal is calculated for each promoter in each sample.
The input signal levels are modelled linearly in terms of the binding site predictions and unknown motif activities: $$ \widehat{g_{p,s}} = c_p + c_s + \sum_m N_{p,m}\,\beta_{m,s} $$ where \( \widehat{g_{p,s}} \) refers to the input signal for a promoter p in sample s, \( c_p \) and \( c_s \) are intercepts for each promoter and sample, \( N_{p,m} \) summarizes the TF/miRNA binding site predictions (sum of the posterior probabilities of all predicted TF/miRNA binding sites for motif m in promoter p) and \( \beta_{m,s} \) stands for the estimated motif activities. As in the other presented methods, the optimization criterion is to minimize the sum of squared error terms between predicted and measured gene expression. Primarily, ISMARA provides the inferred motif activity profiles \( \beta_{m,s} \) sorted by significance and a set of TFs and miRNAs that bind to these motifs, representing the key regulators. Further, a list containing their predicted target promoters, associated transcripts and genes, a network of known interactions between these targets and a list of enriched gene ontology categories is displayed. The web service ISMARA is available under http://ismara.unibas.ch. ISMARA employs a Bayesian procedure with a Gaussian likelihood model and a Gaussian prior distribution for \( \beta_{m,s} \) to avoid overfitting. Information about regulatory motifs is provided via the annotation of promoters based on deep sequencing data of transcription start sites. To obtain a set of promoters and their associated transcripts, the 5' ends of mRNA mappings from the UCSC genome database are clustered with the promoters. TF binding site predictions in the proximal promoter region are collected using 190 position weight matrices representing 350 TFs from JASPAR, TRANSFAC, motifs from the literature and the authors' own analyses of ChIP-seq and ChIP-chip data. Additionally, miRNA target sites for about 100 seed families are annotated in the 3'UTRs of transcripts associated with each promoter.
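A toy version of this model can be fitted as ridge regression, which corresponds to the Gaussian prior on the motif activities; the double-centering used below to absorb the promoter and sample intercepts, and all data sizes, are our own illustrative choices, not ISMARA's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_prom, n_motif, n_samp = 300, 8, 5  # illustrative toy sizes

# N_{p,m}: binding-site counts; beta_true: synthetic motif activities
N = rng.poisson(1.0, size=(n_prom, n_motif)).astype(float)
beta_true = rng.normal(scale=0.5, size=(n_motif, n_samp))
c_p = rng.normal(size=(n_prom, 1))   # promoter intercepts
c_s = rng.normal(size=(1, n_samp))   # sample intercepts
E = c_p + c_s + N @ beta_true + 0.05 * rng.normal(size=(n_prom, n_samp))

# Double-center the signal to remove both intercepts; column-center N.
# The remaining structure is Nc times the sample-centered activities.
Ec = E - E.mean(axis=0, keepdims=True) - E.mean(axis=1, keepdims=True) + E.mean()
Nc = N - N.mean(axis=0, keepdims=True)

# Ridge solution = posterior mean under a Gaussian prior on the activities
lam = 1.0
beta_hat = np.linalg.solve(Nc.T @ Nc + lam * np.eye(n_motif), Nc.T @ Ec)
```

The recovered `beta_hat` approximates the true activities up to their per-motif mean across samples, matching the fact that such models report activity profiles relative to a baseline.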
For evaluation, ISMARA was applied to data from well-studied systems and results were compared to the literature. Inferred motif activities were highly reproducible and even more robust than the expression profiles from which the motif activities were derived. When comparing samples from 16 human cell types (GEO accession number GSE30611) from younger and older donors, ISMARA was able to identify a key regulator of aging-related changes in the expression of lysosomal genes. A joint analysis of the human GNF atlas of 79 tissues and cell lines [51] and the NCI-60 reference cancer cell lines [52] revealed that many of the top dysregulated motifs were well known in cancer biology, like HIF1A and the hsa-miR-205 miRNA. The authors also suggested novel predictions for regulating TFs in innate immunity, mucociliary differentiation and cancer. biRte [35] BiRte (Bayesian inference of context-specific regulator activities and transcriptional networks) takes a mathematically different approach compared to the abovementioned methods, integrating TF/miRNA target gene predictions with sample-specific expression data into a joint probabilistic framework (see Fig. 7). Compared to our general scheme of a TF – gene network (Fig. 2), biRte takes the TF/miRNA – gene network without the interactions between regulators to estimate regulatory activities and infers the network between regulators in a second step. Fig. 7: Scheme of the biRte method. The input data sets (marked in blue) are passed to a likelihood model (yellow) which determines active regulators (green). BiRte takes as input differential gene expression data (mRNA), an underlying regulatory network including TF/miRNA – target gene binding information and, optionally, CNV data, miRNA and TF expression measurements.
As opposed to our general framework, biRte defines a likelihood model for the set of active TFs/miRNAs (called regulators R, which can be seen as hidden variables) based on the entire gene expression data D and certain model parameters θ: $$ L_{D,\theta}(R) = p\left( D \mid R, \theta \right) = \prod_{\widehat{D}} p\left( \widehat{D} \mid R, \theta \right) = \prod_{\widehat{D}} \prod_c \prod_i p\left( \widehat{D}_{ic} \mid R_c, \theta \right) $$ Here D represents the set of all available experimental data including mRNA, CNV, miRNA and TF expression data, and \( \widehat{D}_{ic} \) refers to the ith feature of data set \( \widehat{D} \) measured under experimental condition c. The condition-specific hidden state variables \( R_c \) are estimated with the help of the Markov Chain Monte Carlo (MCMC) method, where a regulator can switch from an active to an inactive state (switch) or an inactive and an active regulator can exchange their activity states (swap). Thereby, the posterior probability for each regulator and condition to influence the expression of its target genes is estimated. Simultaneously, a variable selection procedure is applied to achieve sparsity of the model. The optimization goal is not, as one might expect, to return the configuration with the highest posterior probability among all sampled ones, but to take marginal selection frequencies during sampling into account and filter those above a defined cutoff. After the determination of active regulators, the associated transcriptional network containing TFs and miRNAs is inferred from the observable differential expression of target genes and target gene predictions for individual regulators. In practice, the stochastic sampling scheme based on MCMC allows swap operations only when regulators show a significant overlap of regulated targets. The variable selection procedure is implemented via a spike and slab prior [53], which can integrate prior knowledge about the activity of regulators.
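A minimal sketch of such a sampler over binary regulator states, run on fully synthetic data and using only switch moves (biRte's swap moves, spike-and-slab prior and variable selection are omitted here):

```python
import numpy as np

rng = np.random.default_rng(2)
n_reg, n_gene = 5, 200  # illustrative toy sizes

# Regulator -> target-gene predictions (0/1) and simulated differential expression:
# genes hit by an active regulator are shifted by +2 on average
T = (rng.random((n_reg, n_gene)) < 0.15).astype(float)
active_true = np.array([1.0, 0.0, 1.0, 0.0, 0.0])
hit = active_true @ T > 0
D = rng.normal(np.where(hit, 2.0, 0.0), 1.0)

def log_lik(state):
    """Gaussian log-likelihood of the data given a set of active regulators."""
    mu = np.where(state @ T > 0, 2.0, 0.0)
    return -0.5 * np.sum((D - mu) ** 2)

# Metropolis sampler with 'switch' moves (flip one regulator's state)
state = np.zeros(n_reg)
freq = np.zeros(n_reg)
n_iter = 4000
for _ in range(n_iter):
    prop = state.copy()
    j = rng.integers(n_reg)
    prop[j] = 1.0 - prop[j]  # switch move
    if np.log(rng.random()) < log_lik(prop) - log_lik(state):
        state = prop
    freq += state  # marginal selection frequencies, as used for filtering

posterior = freq / n_iter  # per-regulator activity estimate
```

As in the described procedure, the output is not a single best configuration but the marginal frequency with which each regulator was active during sampling.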
To infer the associated transcriptional network, Nested Effects Model (NEM) [54] structure learning is applied. An input miRNA – gene network is constructed based on MiRmap [55] for 356 miRNAs. The TF – target gene network with 344 TFs is compiled by computing TF binding affinities to promoter sequences according to the TRAP model [56] using data from ENSEMBL, TRANSFAC, JASPAR and MetaCore™. An implementation of biRte is available for R on Bioconductor under https://bioconductor.org/packages/release/bioc/html/birte.html. Several simulations were conducted to study model behavior. On the basis of a human regulatory sub-network and accordingly simulated expression data of 900 target genes, biRte was compared to BIRTA [57], GEMULA [58] and a hypergeometric test, and further to other network reconstruction algorithms like ARACNE [30], GENIE3 [59] and GeneNet [60]. BiRte performed best in regulator activity prediction, with a favorable computation time, and was robust against false positive and false negative target gene predictions. Additionally, biRte was applied to an E.coli growth control data set and to a prostate cancer data set including 44 normal and 47 cancer samples from GEO (GSE29079) with corresponding array data from 464 human miRNAs (GSE54516), and the results showed a principal agreement with the biological literature. ARACNE [30] We compare ARACNE (Algorithm for the Reconstruction of Accurate Cellular Networks) [30], an established, yet local, tool for the reconstruction of gene regulatory networks, to the previous five recent genome-wide approaches. The algorithm is background knowledge-free and identifies transcriptional interactions based on mutual information, including non-linear and non-monotonic relationships, and distinguishes between direct and indirect relationships (see Fig. 8). ARACNE is a free tool available under http://califano.c2b2.columbia.edu/aracne. Fig. 8: ARACNE flow chart.
The input data set (marked in blue) is used to calculate pairwise mutual information, from which indirect interactions are removed (yellow), allowing a reconstruction of the gene regulatory network (green). ARACNE uses as input only microarray expression profiles and estimates candidate interactions by calculating the pairwise gene expression profile mutual information I, defined as $$ I(g_i, g_j) = I_{i,j} = S(g_i) + S(g_j) - S(g_i, g_j) $$ where S denotes the entropy. \( I_{i,j} \) measures the relatedness of genes \( g_i \) and \( g_j \) and equals zero if both are independent. In a second step, the mutual information values are filtered using an appropriate threshold, which depends on the distribution of the mutual information values between random permutations of the original data set. Computationally, a Gaussian kernel operator is used to calculate the mutual information scores. In a subsequent step, the data processing inequality (DPI) [61] is applied to remove candidate interactions that are probably indirect. The DPI states that if the genes \( g_i \) and \( g_k \) interact only through a third gene \( g_j \), then $$ I(g_i, g_k) \le \min\left( I(g_i, g_j),\, I(g_j, g_k) \right) $$ Thus, the least of the three mutual information scores can come from indirect interactions only [30]. ARACNE's performance was evaluated on the reconstruction of realistic synthetic datasets [62] and on an expression profile dataset consisting of about 340 B lymphocytes derived from normal, tumor-related and experimentally manipulated populations [63], against Relevance Networks and Bayesian networks. Regarding the synthetic networks, ARACNE had consistently better precision and recall values compared to the two other algorithms and reached very good precision at significant recall levels.
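Both the mutual information estimate and the DPI pruning can be illustrated on a toy three-gene chain; the simple histogram-based estimator below is an illustrative stand-in for ARACNE's Gaussian kernel estimator, and all parameters are made-up choices.

```python
import numpy as np

def mutual_info(x, y, bins=8):
    """Histogram estimate of I(x, y) = S(x) + S(y) - S(x, y), in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

rng = np.random.default_rng(3)
n = 2000
# Toy chain g_i -> g_j -> g_k: genes i and k interact only through j
g_i = rng.normal(size=n)
g_j = g_i + 0.5 * rng.normal(size=n)
g_k = g_j + 0.5 * rng.normal(size=n)

I_ij = mutual_info(g_i, g_j)
I_jk = mutual_info(g_j, g_k)
I_ik = mutual_info(g_i, g_k)

# DPI: the indirect edge carries the least mutual information and is pruned
dpi_holds = I_ik <= min(I_ij, I_jk)
```

On this chain, the indirect pair (i, k) has the smallest score, so the DPI would remove exactly the spurious edge while keeping the two direct ones.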
It recovered far more true connections and fewer false connections than the other methods, with better performance on tree-like topologies than on scale-free topologies. A reconstructed B-cell specific regulatory network was found to be highly enriched in known c-MYC targets, and about 50% of the genes predicted to be first neighbors were reported in the literature. We described five recent methods for the genome-wide inference of regulatory activity, namely the approach by Schacht et al., RACER, RABIT, ISMARA, and biRte. They all assume the topology of the regulatory network to be known, cast activity estimation as an optimization problem regarding the difference between predicted and measured values, take different types of sample-specific omics data into account, and eventually produce a list of regulators like transcription factors or miRNAs, ranked by their estimated activities in the samples under study. We also included ARACNE, which is background knowledge-free and uses only local dependency measures to reconstruct a regulatory network and indirectly infer activities. All of the presented methods essentially follow the same goal, i.e., accurate ranking of regulatory activity, but differ in the types of measurements being integrated, the background knowledge necessary for their application, the complexity and refinement of the underlying model of gene regulation, and the concrete paradigm used for solving the optimization problem. Most of the methods, except for the approach by Schacht et al., are available online via a downloadable implementation, a web service, or an R package, providing an operable solution for the interested user. Whereas an overview of the main features of each method can be found in Table 1, we now first compare the algorithms regarding their general properties in a descriptive way. The data sets used for evaluation vary between all methods.
Therefore, we further implemented an evaluation framework to compare the method by Schacht et al., RACER, RABIT and biRte in an objective and quantitative way. We used experimental data of three publicly available data sets from TCGA [64] and a regulatory network as background knowledge. We first used only mRNA expression data as input to the four methods to ensure the results' comparability, whereas in a second evaluation step, other omics data sets were also included where possible. We further analyzed the relevance of regulators found by different methods using a literature search. Experimental data types included The methods differ in the types of measurements being integrated, which corresponds to the level of detail of their model of gene regulation. All six methods use mRNA as input. RACER, RABIT and biRte can also integrate CNV, DNA methylation, TF/miRNA expression data, or somatic mutations. ISMARA calculates an input signal from microarray, RNA-seq, or ChIP-seq data. Additionally, all presented methods use prior knowledge about the underlying regulatory network. These networks are extracted from different data sources and pre-processed in different manners. All methods require at least knowledge about TF – gene relationships, yet RACER, biRte and ISMARA also incorporate information about miRNAs. When using RABIT, the user can choose whether to provide TF or RNA-binding protein information. The approach of Schacht et al. and biRte extract regulatory information partly from the commercial MetaCore™ database, whereas the other methods use only publicly available databases, like ENCODE, JASPAR or TRANSFAC. The networks used for the evaluations published in the respective papers are publicly available in the case of RACER (network for 16653 genes, 97 TFs and 470 miRNAs), RABIT (predicted binding scores of 63 RBP motifs and 17463 genes) and biRte (network for E.coli including 160 TFs). Neither Schacht et al. nor ISMARA makes this data available.
Mathematical models of regulatory activity The methods use different mathematical models to infer regulatory activity. The approach by Schacht et al., RACER, RABIT and ISMARA use linear regression, whereas biRte applies a probabilistic framework. ARACNE, as a local method, is based on mutual information. RACER and RABIT can be seen as extensions of the approach by Schacht et al., since they essentially use the same model structure but incorporate more input data types and more classes of regulatory information. Further, RACER applies a two-stage regression to infer regulatory activity. Optimization frameworks For assessing regulator activities, Schacht et al., RACER, RABIT and ISMARA minimize the sum of squared error terms between measured and predicted gene expression. However, the methods use rather different algorithms for solving the resulting optimization problem, and also apply different constraints to achieve model sparsity, robustness of inference, and feature selection. In the approach by Schacht et al., the regression model is computed for each gene separately and allows only a maximum number of six regulating TFs. RACER uses a LASSO approach, while ISMARA follows a Bayesian model that infers regulator activities as posterior distributions. LASSO can be interpreted as a Bayesian model using Laplacian priors instead of Gaussian priors in the regression framework, obtaining point estimates of the regulatory activities and enforcing sparseness of the solution [32]. In contrast, biRte uses a likelihood model with a spike and slab prior to induce model sparsity. This approach implements a selective shrinkage of model coefficients such that estimates are less biased compared to a LASSO prior [65]. With the help of the spike and slab prior, sparsity can be controlled in a variable-dependent manner, allowing the inclusion of prior belief in the activity of each regulator [35]. Computed outputs Schacht et al.
and biRte determine the activity of regulators over all samples at once, whereas RACER and RABIT first infer sample-specific activities, which are combined into cross-tumor activities only in a second optimization step. In contrast, ISMARA first infers motif activities; these activities are then used to deduce the effects of TFs and miRNAs via their motif binding profiles. ISMARA primarily provides sample-specific TF and miRNA activity but also offers an option to group samples and compare average regulatory activity between different conditions. Like biRte and ARACNE, it also infers the network of the regulators themselves. Methods and data sets used for evaluation The type and extent of evaluation performed for the different methods vary greatly. They range from direct application to biological problems over the comparison of results to the biological literature to simulation studies. All methods published evaluation results on publicly available datasets, e.g., from the National Cancer Institute, TCGA or GEO, but unfortunately address different tissues and cancer types. Sample-based cross-validation is applied in the work by Schacht et al., RACER, RABIT and ISMARA. The first two of these methods use correlation coefficients between measured and predicted gene expression for assessing prediction quality. RACER, RABIT and biRte compare their results to the outcome of other algorithms and to those of restricted models, for example excluding one type of the input variables. All methods search the literature to compare their predictions to previously published studies on the respective biological question. Overall, ISMARA provides the most extensive biological evaluation, using a battery of relevant use cases, whereas biRte excels in systematic simulation studies.
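The sparsity-enforcing LASSO penalty discussed under the optimization frameworks can be illustrated with a minimal coordinate-descent implementation (soft thresholding) on synthetic data; this is a generic sketch of the technique, not RACER's actual solver, and all sizes and the penalty value are made-up choices.

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=200):
    """Minimal coordinate-descent LASSO: 0.5*||y - Xb||^2 + lam*||b||_1."""
    beta = np.zeros(X.shape[1])
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(X.shape[1]):
            # Partial residual with coordinate j removed, then soft-threshold
            r_j = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r_j
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

rng = np.random.default_rng(4)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[[1, 4]] = [2.0, -1.5]  # only two truly active regulators
y = X @ beta_true + 0.1 * rng.normal(size=n)

beta_hat = lasso_cd(X, y, lam=20.0)
```

With a sufficiently large penalty the inactive coefficients are driven exactly to zero, which is the point-estimate sparseness property contrasted above with the spike and slab prior.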
Unfortunately, there are very few works which compare any of the presented methods on the same problem; the only result we are aware of compared ARACNE and biRte regarding their performance in network reconstruction on simulated data, in which biRte attained higher robustness against false positive and false negative target gene predictions [35]. Quantitative comparison Although certain evaluation steps were carried out for all methods, the results in the original papers are not comparable, as they used different input datasets, different background regulatory networks, and different evaluation metrics. Therefore, in addition to the comparison of general properties of the methods, we implemented an evaluation framework using three independent and publicly available test data sets to compare the method by Schacht et al., RACER, RABIT and biRte in an objective and quantitative way. All evaluated methods were given the same regulatory network as input. For the evaluation we used experimental data from TCGA [64] for three cancer types: colon adenocarcinoma (COAD), liver hepatocellular carcinoma (LIHC) and pancreatic adenocarcinoma (PAAD). For all three cancer types, mRNA expression, CNV, DNA methylation and miRNA expression data are available for primary tumor and normal tissue samples. These data sets are openly accessible via the NCI Genomic Data Commons Data Portal or the NCI Genomic Data Commons Legacy Archive (DNA methylation data). For mRNA gene expression we used processed RNA-seq data in the form of FPKM (fragments per kilobase of exon per million mapped reads) values. The files included Ensembl Gene IDs, which were converted to HGNC symbols using the Ensembl [66] BioMart tool to match the IDs of the TF – gene network. In two cases, when multiple Ensembl Gene IDs mapped to one HGNC symbol, we chose the gene with the highest log2 fold change between case and control group. miRNA expression was given as RPM (reads per million miRNA mapped) measurements.
Both mRNA and miRNA data were centered using a weighted mean such that the mean of the case group equaled the negative mean of the control group, and normalized via a weighted standard deviation. CNV data was retrieved as masked copy number segments, where the Y chromosome and probe sets with frequent germline copy-number variation had already been removed. Chromosomal regions were mapped to genes using the R package biomaRt [67]. If multiple records mapped to one gene, the median of the segment mean values was calculated. For DNA methylation data we used the beta-values of Illumina Human Methylation 450 arrays as methylation scores. Multiple scores for the same gene were averaged within a sample. We restricted our analyses to the samples for which all four input data types were available. When multiple measurements for one sample and data type were available, we used only the first one in alphabetical order of the file name. After this selection procedure, 165 samples remained for COAD, 404 for LIHC and 180 for PAAD. A list including sample and file information is available in Additional file 1. Together with the experimental data, all evaluated methods were given the same regulatory network as input. We used a publicly available human TF – gene network [28] based on a text-mining approach and complemented it with TF – gene interactions from the public TRANSFAC database [19]. This network included 2894 interactions between 429 TFs and 1218 genes. The network is provided in Additional file 2. Evaluated methods We conducted the quantitative comparison for the method proposed by Schacht et al., RACER, RABIT and biRte. ISMARA was not included since it (a) is only available as a web service, (b) can only be used with its own, proprietary underlying regulatory network model, and (c) requires the upload of raw data, which is prohibited by TCGA's terms of use.
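One plausible reading of the weighted centering and scaling described for the mRNA and miRNA data, sketched on synthetic values; the equal weighting of the two group means and the pooled standard deviation are our own assumptions about the procedure.

```python
import numpy as np

def center_scale(expr, is_case):
    """Center so that mean(case) == -mean(control), then scale.

    The 'weighted mean' averages the two group means with equal weight
    regardless of group size, and the scale is the analogously pooled
    standard deviation (both are illustrative assumptions)."""
    case, ctrl = expr[is_case], expr[~is_case]
    m = 0.5 * (case.mean() + ctrl.mean())
    s = np.sqrt(0.5 * (case.var() + ctrl.var()))
    return (expr - m) / s

rng = np.random.default_rng(5)
is_case = np.arange(30) < 20  # deliberately unbalanced: 20 cases, 10 controls
expr = np.where(is_case, rng.normal(5, 1, 30), rng.normal(3, 1, 30))
z = center_scale(expr, is_case)
```

By construction the transformed case mean is exactly the negative of the transformed control mean, even when the groups have different sizes, which is the stated property of the centering.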
Also, ARACNE [30] was not included in the quantitative evaluation, since it does not use background knowledge and we therefore consider its results incomparable to those of the other methods. For the approach by Schacht et al. we re-implemented their method as closely as possible to the original design using Python and the Cuneiform workflow language [68, 69]. Due to the high number of integer parameters in the original method, the complexity of optimizing the whole network at once would have by far exceeded the available computational resources. Therefore, as in the original paper, we computed the model for each gene separately and restricted the number of regulating TFs per gene to six. We added a second step in which we used these TF – gene interactions to build a sub-network and optimize TF activity globally, describing the interplay of the TFs' effects on their target genes. As in the implementation of Schacht et al., we used the Gurobi Optimizer. For RACER we used the available R scripts and extracted the resulting sample-specific regulatory activities. RABIT published a C++ implementation, which is provided on their website and which we used with the FDR option set to 1. As RABIT takes differential expression into account, we used the difference of expression values between case and control group as input and ordered the TFs by t-value, as proposed in the RABIT paper. BiRte is available as a Bioconductor R package. We used R version 3.3.2 with biRte version 1.10.0 and applied the method "birteLimma" to estimate regulatory activities with the options niter and nburnin set to 10000. As biRte has a randomized component, the resulting TF activities are not exactly the same for different runs. We averaged the final activity scores over 1000 iterations of birteLimma. For our re-implemented method by Schacht et al. and for RACER we computed separate models for case and control group and ranked the TFs by their activity difference between the two groups.
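The per-gene restriction to at most six regulating TFs can be approximated by a simple greedy forward selection; the residual-sum-of-squares scoring below is our own illustrative choice, not the integer-programming formulation that was solved with the Gurobi Optimizer.

```python
import numpy as np

def forward_select(X, y, max_k=6):
    """Greedily pick at most max_k regulators for one gene, scored by the
    residual sum of squares of an OLS fit on the selected columns
    (an illustrative sketch, not the original integer program)."""
    chosen = []
    for _ in range(max_k):
        best_j, best_rss = None, np.inf
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            cols = X[:, chosen + [j]]
            beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
            rss = float(((y - cols @ beta) ** 2).sum())
            if rss < best_rss:
                best_j, best_rss = j, rss
        chosen.append(best_j)
    return chosen

rng = np.random.default_rng(6)
X = rng.normal(size=(150, 20))  # 20 candidate TFs for one target gene
y = 2 * X[:, 3] - 1.5 * X[:, 7] + 0.1 * rng.normal(size=150)
sel = forward_select(X, y)
```

On this synthetic gene, the two truly regulating TFs are picked first and the remaining slots are filled with noise variables, mirroring how a hard cap of six keeps the per-gene model small.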
To ensure the results' comparability, we first used only mRNA expression data as input to the four methods. In a second evaluation, we also included other omics data sets where possible. BiRte was evaluated on mRNA and CNV data, RABIT on mRNA, CNV and DNA methylation data, and RACER additionally used miRNA expression as input. We obtained lists with the regulators ranked according to the absolute value of their computed activity for each cancer type and method, with and without the use of additional inputs. For each cancer type we calculated the size of the overlaps in the four different results using the top 10 and top 100 regulators. The results for the top 10 regulators using either only mRNA or multiple omics data sets as input are shown in Table 2. Table 2: HGNC symbols of the top 10 regulators found by each method for COAD (using 165 samples), LIHC (404 samples) and PAAD (180 samples) and the use of only mRNA data as input (left panel) and multiple input data sets (RACER: mRNA, miRNA, CNV and DNA methylation; RABIT: mRNA, CNV and DNA methylation; biRte: mRNA and CNV; right panel). TFs with equal activity values are marked with *. TFs found in several methods' top 10 are marked in bold (when found by RACER, RABIT and biRte), blue (RACER and RABIT), red (RABIT and biRte) or yellow (RACER and biRte). Only mRNA as input When only mRNA is used as input, one TF is commonly found by the three methods RACER, RABIT and biRte in each data set, respectively: PHOX2B for COAD, EPAS1 for LIHC and ELF1 for PAAD. A literature search of these TFs and their targets revealed clear associations with the respective cancer type. The TF obtained commonly for COAD, PHOX2B, is related to TLX2, a gene which has been shown to play a role in the tumorigenesis of gastrointestinal stromal tumors [70].
EPAS1, which was found in the LIHC top 10 TFs of three methods, is linked to CXCL12, which plays an important role in metastasis formation of hepatocellular carcinoma by promoting the migration of tumor cells [71, 72]. For PAAD, three methods ranked the TF ELF1 high, which is related to 14 genes in our network, inter alia to BRCA2 and LYN. Mutations in the BRCA2 gene have been implicated in pancreatic cancer susceptibility [73, 74], whereas the knockdown of LYN reduced human pancreatic cancer cell proliferation, migration, and invasion [75]. These results underline that the methods are able to find biologically relevant information about regulation processes in cancer. Several TFs in the top 10 are found by two of the four methods. For instance, RACER and RABIT have four common top 10 TFs (CDX2, NRF1 and MYC, in addition to PHOX2B) in the COAD data set. However, the top 10 TFs found by the method by Schacht et al. do not overlap with any top 10 TFs of the other methods in any data set. The agreement of RACER, RABIT and biRte in the top 10 TFs hints at the biological importance of the found TFs, since this overlap is statistically significant: the probability of finding common TFs in three sets of ten randomly chosen ones out of 429 TFs (p-value) is below 0.006. Additionally, the methods do identify different TFs for different data sets, indicating the importance of the actual cancer-specific mRNA expression values and that results are not dictated by the background network. The results for the number of overlapping regulators in the top 100 between the four methods and the three different data sets are shown in Fig. 9. For RABIT, only 76 TFs for COAD (resp. 67 for LIHC and 57 for PAAD) could be ranked, since all other TFs had an activity value equal to zero.
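The quoted bound of 0.006 for the three-way top-10 overlap can be reproduced with a first-order inclusion-exclusion (union bound) calculation; treating the three top-10 lists as independent uniform draws from the 429 TFs is a simplifying assumption.

```python
# Probability that three independent random top-10 lists drawn from 429 TFs
# share at least one TF, bounded from above by summing over all TFs
n_tf, k = 429, 10
p_one = (k / n_tf) ** 3   # a fixed TF appears in all three top-10 lists
p_any = n_tf * p_one      # union bound; accurate here since overlaps are rare
# p_any ≈ 0.0054, consistent with the stated p-value below 0.006
```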
Fig. 9: Number of overlapping TFs in the top 100 of ranked TFs per method (for RABIT, the overlap with the top 76/67/57 TFs (having activity > 0) in COAD/LIHC/PAAD is shown). When looking at the overlap of three of the four methods, the number of overlapping TFs is still the highest for the triplet RACER, RABIT and biRte. For the LIHC dataset, two TFs are found in the top 100 of all four methods (E2F4 and SOX10). E2F4 is a downstream target of ZBTB7, which was associated with the expression of cell cycle-associated genes in liver cancer cells [76]. Two target genes of E2F4, CDK1 and TP73, were also involved in liver cancer development [77] and proposed as markers of poor patient survival in hepatocellular carcinoma [78]. Further, epigenetic alterations of the EDNRB gene, a target of SOX10, might play an important role in the pathogenesis of hepatocellular carcinoma [79]. Even if the result of four methods finding two common TFs is not statistically significant (p-value = 0.36), their association with liver hepatocellular carcinoma shows that the methods reach their goal of identifying relevant TFs. However, when comparing different data sets, the methods tend to rank the same TFs in the top 100 to a greater or lesser extent. For example, the overlap of all top 100 TFs of the three cancer types is only one TF for RABIT and nine TFs for biRte, but 16 TFs for the method by Schacht et al. and even 32 TFs for RACER. Therefore, the results from RABIT and biRte seem to be more cancer type-specific and less dependent on the regulatory network than the results from RACER. However, we did not specifically investigate the influence of the underlying network and its topology on the results, which would be an interesting point for further research.
Multi-omics data as input When not only taking mRNA into account but also miRNA, CNV and DNA methylation, the results are more difficult to compare between the methods, since each combines the different types of data in its own way, owing to the respective models and implementations. We are aware of the lower level of comparability of this approach regarding the multi-omics results, in contrast to a scenario where all methods are evaluated on the same set of input data. However, we intended to use the maximum set of input data for each method to capture the effect of using multiple omics data sets compared to only mRNA as input. BiRte was evaluated on mRNA and CNV data, RABIT on mRNA, CNV and DNA methylation data, and RACER additionally used miRNA expression as input. Whereas RACER and RABIT considered CNV or DNA methylation data as one background factor and computed only one activity value, biRte evaluated the influence of each CNV separately. The results (see Table 2, right panel) show that RACER exclusively ranks miRNAs high; not a single TF is found among the top 10 regulators. Also, the influence of CNVs was high in LIHC and PAAD. However, the TFs that RACER found in the top 10 when using only mRNA data as input are still ranked high in the multi-omics scenario, e.g., the COAD top three TFs of the mRNA results are ranked 13th, 16th and 14th in the results of the multi-omics input. The difference between the results from the two input types is smaller for RABIT: seven TFs are still in the top 10 for COAD (8 for LIHC and 6 for PAAD) when using CNV and DNA methylation in addition to mRNA data. Therefore, the contribution of additional input data seems not to be crucial for the performance of RABIT. BiRte considers each CNV as a potential regulator, which increases the total number of regulators enormously.
Still, two commonly present TFs in the top 10 of the COAD data set (even six for LIHC and one for PAAD) are found by both the sole-mRNA input and the multi-omics approach. The overlap of the top 10 of RABIT and biRte in the multi-omics case is considerable, with three TFs in LIHC (HNF4A, EGR1 and MTF1; p-value = 0.001) and one TF in PAAD (SPI1; p-value = 0.21). Three of them (HNF4A, MTF1 and SPI1) were already found when using only mRNA data as input. The results for the use of different input data sets show that the top ranked regulators change drastically when additionally using miRNA data in RACER, but change less when only CNV or DNA methylation data is provided in RABIT and biRte. However, the results from multi-omics analyses are difficult to compare, since the combination of input data sets is not consistent across the three different methods. Background networks A crucial input to the models is the underlying regulatory network, which is needed to reduce the search space for actual regulatory activity. However, the construction of comprehensive TF/miRNA – gene regulatory networks is difficult for various reasons. Firstly, a comprehensive characterization of the human regulatory repertoire is lacking, since only about half of the estimated 1,500–2,000 TFs in the mammalian genome are known [80]. ChIP experiments, prone to a high false positive rate [81], were used to identify TF binding patterns, but each assay is limited to the detection of one TF in one condition, and therefore TF binding has not been characterized for many TFs in most cell types. Further, the local proximity of a binding site to the transcriptional start site of a gene does not automatically implicate transcriptional regulation. With regard to posttranscriptional regulators, the functions of only a few of the around 1,200 different miRNAs have been experimentally determined, and current data on miRNA targets is mostly based on computational predictions [82].
Generally, the knowledge about TF and miRNA binding is scattered over the biological literature and different, partly commercial, databases, impeding the construction of comprehensive networks [28]. Therefore, any comparative evaluation of the methods presented here would have to make sure that the same background network is used for each computation. In addition, studies on the impact of network incompleteness or different error rates in networks would be important to assess the ability of the methods to cope with such common problems. Simulation studies will be vital in this regard.

The graph view on regulation

The modelling of regulatory networks as graphs, as used in all presented methods, is perhaps not the optimal representation of the underlying biological regulatory processes. A graph cannot easily account for important effects such as TF complex formation and the temporal and spatial synchronization of activities. Furthermore, TF binding is affected by chromatin state, and posttranslational modifications influence transcriptional activity; both effects are difficult to include in a graph view of regulation. The models' dependence on the topological structure and their robustness to changes in the underlying network have not been evaluated or discussed for any of the presented methods, even though these issues are known to have a severe influence in network analysis [83].

Underlying mathematical model

Linear models, widely used across different fields of science, provide a simple and easily understandable design but over-simplify the underlying biological processes. Nonlinear behavior, e.g., saturation effects, cannot be represented. Considering that the number of available samples is typically relatively small, the incorporation of many different data types and corresponding parameters into the model could result in excessively complex designs prone to overfitting, but awareness of this issue is generally lacking.
Only two of the presented methods incorporate parameter priors (ISMARA and biRte), and two apply cross-validation techniques to estimate prediction performance (the method by Schacht et al. and RACER). Further, the effect of temporal buffering between TF binding and the actual effect on gene expression is not included in any of the methods. All methods produce a ranked list of regulators. Comparing these results across different methods, even when applied to the same data set and using the same background network, is difficult since no generally accepted benchmarks are available. Therefore, there is currently no objective measure to designate a best method. The closest comparable evaluation effort we are aware of is implemented in the "DREAM5 – Network Inference" challenge [84], which targets gene regulatory network reconstruction. The invited participants reverse-engineered a network from gene expression data, including a simulated network, and evaluated the results on a subset of known interactions, or on the known network in the in-silico case. The approach of GENIE3 [59], which trains a random forest to predict target gene expression, performed best, and the integration of predictions from multiple inference methods showed robust and high performance across diverse data sets. However, an extensive competitive evaluation to determine active regulators based on a given regulatory network has, to the best of our knowledge, not been carried out yet. We therefore compared the results of four methods in a quantitative way. The experimental data and the regulatory network we used as input are publicly available to ensure transparency of our results. The results suggest that the methods are able to find biologically relevant information about regulation processes in cancer. However, the result overlaps are rather low (though sometimes statistically significant).
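The significance of such top-k overlaps can be quantified with a one-sided hypergeometric test, as is commonly done for gene-list comparisons. A minimal sketch (the universe of 500 candidate regulators is an illustrative assumption, not a value from this analysis):

```python
from math import comb

def hypergeom_overlap_pvalue(overlap, k1, k2, universe):
    """One-sided p-value: probability of observing at least
    `overlap` shared items between two lists of sizes k1 and k2
    drawn at random from `universe` candidates."""
    upper = min(k1, k2)
    total = comb(universe, k1)
    return sum(
        comb(k2, i) * comb(universe - k2, k1 - i)
        for i in range(overlap, upper + 1)
    ) / total

# illustrative: 3 shared TFs between two top-10 lists,
# assuming a universe of 500 candidate regulators
p = hypergeom_overlap_pvalue(3, 10, 10, 500)
```

With a universe of a few hundred regulators, an overlap of three items in two top-10 lists already yields a p-value on the order of 10^-3, so even small absolute overlaps can be statistically significant.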
This seems surprising as all methods essentially follow the same goal, i.e., the identification of the most differentially active TFs or genes. We think further research is necessary to exactly characterize the specific strengths of each method. Furthermore, we did not investigate the influence of the underlying network on the results, which is another topic for further research. Despite their often rather involved procedures and models, none of the presented methods adequately reflects the biological reality of regulatory activity in cells. A specific disease phenotype is rarely caused by a single gene but is rather a product of the interplay of genetic variability, epigenetic modifications and post-transcriptional regulation mechanisms [85]. The presented methods ignore a multitude of such factors, like the effects of chromatin state and alternative splicing, nonlinear relationships between regulatory activity and gene expression, or kinetic and temporal effects. Furthermore, TFs themselves regulate the expression of other TFs, forming feedback loops which are not considered in any of the presented methods. Nevertheless, the methods apparently are able to detect strong signals and produced promising results in terms of ranking transcription factors by their activity and are thus valuable tools for identifying biomarkers for specific phenotypes.

http://www.gurobi.com/products/gurobi-optimizer
http://lsresearch.thomsonreuters.com/pages/solutions/1/metacore
https://gdc-portal.nci.nih.gov
https://gdc-portal.nci.nih.gov/legacy-archive
http://www.ensembl.org/biomart/martview, release 87
http://www.gene-regulation.com/pub/databases.html, release 7.0
version 6.04, available under a free academic license
http://www.cs.utoronto.ca/~yueli/racer.html (accessed 17 October 2016)
http://rabit.dfci.harvard.edu (accessed 05 February 2016)

References

Lemon B, Tjian R. Orchestrated response: a symphony of transcription factors for gene control. Genes Dev. 2000;14(20):2551–69. Spitz F, Furlong EE.
Transcription factors: from enhancer binding to developmental control. Nat Rev Genet. 2012;13(9):613–26. Guo H, Ingolia NT, Weissman JS, Bartel DP. Mammalian microRNAs predominantly act to decrease target mRNA levels. Nature. 2010;466(7308):835–40. Clapier CR, Cairns BR. The biology of chromatin remodeling complexes. Annu Rev Biochem. 2009;78:273–304. Jaenisch R, Bird A. Epigenetic regulation of gene expression: how the genome integrates intrinsic and environmental signals. Nat Genet. 2003;33(Suppl):245–54. Gong X, Jia P, Zhao Z. Investigating microRNA-transcription factor mediated regulatory network in glioblastoma. 2010 IEEE International Conference on Bioinformatics and Biomedicine Workshops; 2010. p. 258–63. Jiang Q, Wang Y, Hao Y, Juan L, Teng M, Zhang X, Li M, Wang G, Liu Y. miR2Disease: a manually curated database for microRNA deregulation in human disease. Nucleic Acids Res. 2009;37:98–104. Mayo MW, Baldwin AS. The transcription factor NF-kappaB: control of oncogenesis and cancer therapy resistance. Biochim Biophys Acta. 2000;1470(2):M55–62. Esquela-Kerscher A, Slack FJ. Oncomirs - microRNAs with a role in cancer. Nat Rev Cancer. 2006;6(4):259–69. Allocco DJ, Kohane IS, Butte AJ. Quantifying the relationship between co-expression, co-regulation and gene function. BMC Bioinformatics. 2004;25:5–18. Johnson DS, Mortazavi A, Myers RM, Wold B. Genome-wide mapping of in vivo protein-DNA interactions. Science. 2007;316(5830):1497–502. Lou S, Lee H-M, Qin H, Li J-W, Gao Z, Liu X, Chan LL, Lam V, So W-Y, Wang Y, Lok S, Wang J, Ma RC, Tsui SK, Chan J, Chan T-F, Yip KY. Whole-genome bisulfite sequencing of multiple individuals reveals complementary roles of promoter and gene body methylation in transcriptional regulation. Genome Biol. 2014;15(7):408. Edgar R, Domrachev M, Lash AE. Gene expression omnibus: NCBI gene expression and hybridization array data repository. Nucleic Acids Res. 2002;30(1):207–10. The Cancer Genome Atlas Research Network. 
Comprehensive genomic characterization defines human glioblastoma genes and core pathways. Nature. 2008;455(7216):1061–8. Gerstein MB, Kundaje A, Hariharan M, Landt SG, Yan K-K, Cheng C, Mu XJ, Khurana E, Rozowsky J, Alexander R, Min R, Alves P, Abyzov A, Addleman N, Bhardwaj N, Boyle AP, Cayting P, Charos A, Chen DZ, Cheng Y, Clarke D, Eastman C, Euskirchen G, Frietze S, Fu Y, Gertz J, Grubert F, Harmanci A, Jain P, Kasowski M, Lacroute P, Leng J, Lian J, Monahan H, O'Geen H, Ouyang Z, Partridge EC, Patacsil D, Pauli F, Raha D, Ramirez L, Reddy TE, Reed B, Shi M, Slifer T, Wang J, Wu L, Yang X, Yip KY, Zilberman-Schapira G, Batzoglou S, Sidow A, Farnham PJ, Myers RM, Weissman SM, Snyder M. Architecture of the human regulatory network derived from ENCODE data. Nature. 2012;489(7414):91–100. Tompa M, Li N, Bailey TL, Church GM, De Moor B, Eskin E, Favorov AV, Frith MC, Fu Y, Kent WJ, Makeev VJ, Mironov AA, Noble WS, Pavesi G, Pesole G, Régnier M, Simonis N, Sinha S, Thijs G, van Helden J, Vandenbogaert M, Weng Z, Workman C, Ye C, Zhu Z. Assessing computational tools for the discovery of transcription factor binding sites. Nat Biotechnol. 2005;23(1):137–44. Elemento O, Tavazoie S. Fast and systematic genome-wide discovery of conserved regulatory elements using a non-alignment based approach. Genome Biol. 2005;6:R18. Ernst J, Plasterer HL, Simon I, Bar-Joseph Z. Integrating multiple evidence sources to predict transcription factor binding in the human genome. Genome Res. 2010;20(4):526–36. Wingender E, Dietze P, Karas H, Knüppel R. TRANSFAC: a database on transcription factors and their DNA binding sites. Nucleic Acids Res. 1996;24(1):238–41. Sandelin A, Alkema W, Engström P, Wasserman WW, Lenhard B. JASPAR: an open-access database for eukaryotic transcription factor binding profiles. Nucleic Acids Res. 2004;32:D91–4. Griffiths-Jones S, Grocock RJ, van Dongen S, Bateman A, Enright AJ. miRBase: microRNA sequences, targets and gene nomenclature. Nucleic Acids Res. 
2006;34:D140–4. Hecker M, Lambeck S, Toepfer S, van Someren E, Guthke R. Gene regulatory network inference: data integration in dynamic models-a review. Biosystems. 2009;96(1):86–103. Liang S, Fuhrman S, Somogyi R. Reveal, a general reverse engineering algorithm for inference of genetic network architectures. Pacific Symp Biocomput. 1998;18–29. Bansal M, Belcastro V, Ambesi-Impiombato A, di Bernardo D. How to infer gene networks from expression profiles. Mol Syst Biol. 2007;3:78. Li P, Zhang C, Perkins EJ, Gong P, Deng Y. Comparison of probabilistic Boolean network and dynamic Bayesian network approaches for inferring gene regulatory networks. BMC Bioinformatics. 2007;8 Suppl 7:S13. Markowetz F, Spang R. Inferring cellular networks-a review. BMC Bioinformatics. 2007;8 Suppl 6:S5. Rung J, Brazma A. Reuse of public genome-wide gene expression data. Nat Rev Genet. 2013;14:89–99. Thomas P, Durek P, Solt I, Klinger B, Witzel F, Schulthess P, Mayer Y, Tikk D, Blüthgen N, Leser U. Computer-assisted curation of a human regulatory core network from the biological literature. Bioinformatics. 2015;31(8):1258–66. Krämer A, Green J, Pollard J, Tugendreich S. Causal analysis approaches in ingenuity pathway analysis. Bioinformatics. 2014;30(4):523–30. Margolin AA, Nemenman I, Basso K, Wiggins C, Stolovitzky G, Dalla Favera R, Califano A. ARACNE: an algorithm for the reconstruction of gene regulatory networks in a mammalian cellular context. BMC Bioinformatics. 2006;7 Suppl 1:S7. Schacht T, Oswald M, Eils R, Eichmüller SB, König R. Estimating the activity of transcription factors by the effect on their target genes. Bioinformatics. 2014;30(17):i401–7. Li Y, Liang M, Zhang Z. Regression analysis of combined gene expression regulation in acute myeloid leukemia. PLoS Comput Biol. 2014;10(10):e1003908. Jiang P, Freedman ML, Liu JS, Liu XS. Inference of transcriptional regulation in cancers. Proc Natl Acad Sci. 2015;112(25):7731–6. 
Balwierz PJ, Pachkov M, Arnold P, Gruber AJ, Zavolan M, van Nimwegen E. ISMARA: automated modeling of genomic signals as a democracy of regulatory motifs. Genome Res. 2014;24(5):869–84. Fröhlich H. biRte: Bayesian inference of context-specific regulator activities and transcriptional networks. Bioinformatics. 2015;31(20):3290–8. Lachmann A, Xu H, Krishnan J, Berger SI, Mazloom AR, Ma'ayan A. ChEA: transcription factor regulation inferred from integrating genome-wide ChIP-X experiments. Bioinformatics. 2010;26(19):2438–44. Liu H, D'Andrade P, Fulmer-Smentek S, Lorenzi P, Kohn KW, Weinstein JN, Pommier Y, Reinhold WC. MRNA and microRNA expression profiles of the NCI-60 integrated with drug activities. Mol Cancer Ther. 2010;9(5):1080–91. Shoemaker RH. The NCI60 human tumour cell line anticancer drug screen. Nat Rev Cancer. 2006;6(10):813–23. Hoek KS, Schlegel NC, Brafford P, Sucker A, Ugurel S, Kumar R, Weber BL, Nathanson KL, Phillips DJ, Herlyn M, Schadendorf D, Dummer R. Metastatic potential of melanomas defined by specific gene expression profiles with no BRAF signature. Pigment Cell Res. 2006;19(4):290–302. The Cancer Genome Atlas Research Network. Genomic and epigenomic landscapes of adult de novo acute myeloid leukemia. N Engl J Med. 2013;368(22):2059–74. Huang JC, Babak T, Corson TW, Chua G, Khan S, Gallie BL, Hughes TR, Blencowe BJ, Frey BJ, Morris QD. Using expression profiling data to identify human microRNA targets. Nat Methods. 2007;4(12):1045–9. Hsu SD, Lin FM, Wu WY, Liang C, Huang WC, Chan WL, Tsai WT, Chen GZ, Lee CJ, Chiu CM, Chien CH, Wu MC, Huang CY, Tsou AP, Huang HD. MiRTarBase: a database curates experimentally validated microRNA-target interactions. Nucleic Acids Res. 2011;39:D163–9. van Kouwenhove M, Kedde M, Agami R. MicroRNA regulation by RNA-binding proteins and its implications for cancer. Nat Rev Cancer. 2011;11(9):644–56. Wang S, Sun H, Ma J, Zang C, Wang C, Wang J, Tang Q, Meyer CA, Zhang Y, Liu XS.
Target analysis by integration of transcriptome and ChIP-seq data with BETA. Nat Protoc. 2013;8(12):2502–15. Ray D, Kazan H, Cook KB, Weirauch MT, Najafabad HS, Gueroussov S, Albu M, Zheng H, Yang A, Na H, Irimia M, Matzat LH, Dale RK, Smith SA, Yarosh C, Kelly SM, Nabet B, Mecenas D, Li W, Laishram RS, Qiao M, Lipshitz HD, Piano F, Corbett AH, Carstens RP, Frey BJ, Anderson RA, Lynch KW, Penalva LO, Lei EP, Fraser AG, Blencowe BJ, Morris QD, Hughes TR. A compendium of RNA-binding motifs for decoding gene regulation. Nature. 2013;499(7457):172–7. National Cancer Institute Wiki. Cancer gene index End user documentation. 2014. Available: https://wiki.nci.nih.gov/x/hC5yAQ. [Accessed 14 Jul 2016]. Sadelain M, Papapetrou EP, Bushman FD. Safe harbours for the integration of new DNA in the human genome. Nat Rev Cancer. 2012;12(1):51–8. Vogelstein B, Papadopoulos N, Velculescu VE, Zhou S, Diaz Jr LA, Kinzler KW. Cancer genome landscapes. Science. 2013;339(6127):1546–58. Futreal PA, Coin L, Marshall M, Down T, Hubbard T, Wooster R, Rahman N, Stratton MR. A census of human cancer genes. Nat Rev Cancer. 2004;4(3):177–83. Abbott KL, Nyre ET, Abrahante J, Ho YY, Vogel RI, Starr TK. The candidate cancer gene database: a database of cancer driver genes from forward genetic screens in mice. Nucleic Acids Res. 2015;43:D844–8. Su A, Wiltshire T, Batalov S, Lapp H, Ching KA, Block D, Zhang J, Soden R, Hayakawa M, Kreiman G, Cooke MP, Walker JR, Hogenesch JB. A gene atlas of the mouse and human protein encoding transcriptomes. Proc Natl Acad Sci. 2004;101(16):6062–7. Ross DT, Scherf U, Eisen MB, Perou CM, Rees C, Spellman P, Iyer V, Jeffrey SS, Van de Rijn M, Waltham M, Pergamenschikov A, Lee JC, Lashkari D, Shalon D, Myers TG, Weinstein JN, Botstein D, Brown PO. Systematic variation in gene expression patterns in human cancer cell lines. Nat Genet. 2000;24(3):227–35. George EI, Mcculloch RE. Approaches for bayesian variable selection. Stat Sin. 1997;7:339–73. 
Markowetz F, Kostka D, Troyanskaya OG, Spang R. Nested effects models for high-dimensional phenotyping screens. Bioinformatics. 2007;23(13):i305–12. Vejnar CE, Zdobnov EM. MiRmap: comprehensive prediction of microRNA target repression strength. Nucleic Acids Res. 2012;40(22):11673–83. Roider HG, Kanhere A, Manke T, Vingron M. Predicting transcription factor affinities to DNA from a biophysical model. Bioinformatics. 2007;23(2):134–41. Zacher B, Abnaof K, Gade S, Younesi E, Tresch A, Fröhlich H. Joint bayesian inference of condition-specific miRNA and transcription factor activities from combined gene and microRNA expression data. Bioinformatics. 2012;28(13):1714–20. Geeven G, van Kesteren RE, Smit AB, de Gunst MC. Identification of context-specific gene regulatory networks with GEMULA-gene expression modeling using LAsso. Bioinformatics. 2012;28(2):214–21. Huynh-Thu VA, Irrthum A, Wehenkel L, Geurts P. Inferring regulatory networks from expression data using tree-based methods. PLoS One. 2010;5(9):e12776. Opgen-Rhein R, Strimmer K. From correlation to causation networks: a simple approximate learning algorithm and its application to high-dimensional plant gene expression data. BMC Syst Biol. 2007;1:37. Cover T, Thomas J. Elements of Information Theory. New York: Wiley; 1991. Mendes P, Sha W, Ye K. Artificial gene networks for objective comparison of analysis algorithms. Bioinformatics. 2003;19 suppl 2:ii122–9. Klein U, Tu Y, Stolovitzky GA, Mattioli M, Cattoretti G, Husson H, Freedman A, Inghirami G, Cro L, Baldini L, Neri A, Califano A, Dalla-Favera R. Gene expression profiling of B cell chronic lymphocytic leukemia reveals a homogeneous phenotype related to memory B cells. J Exp Med. 2001;194(11):1625–38. Weinstein JN, Collisson EA, Mills GB, Shaw KRM, Ozenberger BA, Ellrott K, Shmulevich I, Sander C, Stuart JM. The cancer genome atlas Pan-cancer analysis project. Nat Genet. 2013;45(10):1113–20. Hernández-Lobato D, Hernández-Lobato JM, Suárez A. 
Expectation propagation for microarray data classification. Pattern Recognit Lett. 2010;31(12):1618–26. Yates A, Akanni W, Amode MR, Barrell D, Billis K, Carvalho-Silva D, Cummins C, Clapham P, Fitzgerald S, Gil L, Girón CG, Gordon L, Hourlier T, Hunt SE, Janacek SH, Johnson N, Juettemann T, Keenan S, Lavidas I, Martin FJ, Maurel T, McLaren W, Murphy DN, Nag R, Nuhn M, Parker A, Patricio M, Pignatelli M, Rahtz M, Riat HS, Sheppard D, Taylor K, Thormann A, Vullo A, Wilder SP, Zadissa A, Birney E, Harrow J, Muffato M, Perry E, Ruffier M, Spudich G, Trevanion SJ, Cunningham F, Aken BL, Zerbino DR, Flicek P. Ensembl 2016. Nucleic Acids Res. 2016;44(D1):D710–6. Durinck S, Spellman PT, Birney E, Huber W. Mapping identifiers for the integration of genomic datasets with the R/Bioconductor package biomaRt. Nat Protoc. 2009;4(8):1184–91. Brandt J, Bux M, Leser U. Cuneiform: a functional language for large scale scientific data analysis. Proc Work EDBT/ICDT. 2015;1330:17–26. Bux M, Brandt J, Lipka C, Hakimazadeh K, Dowling J, Leser U. SAASFEE: scalable scientific workflow execution engine. Very Large Data Bases. 2015;8(12):1892–5. Naumov VA, Generozov EV, Zaharjevskaya NB, Matushkina DS, Larin AK, Chernyshov SV, Alekseev MV, Shelygin YA, Govorun VM. Genome-scale analysis of DNA methylation in colorectal cancer using infinium human methylation 450 bead chips. Epigenetics. 2013;8(9):921–34. Liu H, Pan Z, Li A, Fu S, Lei Y, Sun H, Wu M, Zhou W. Roles of chemokine receptor 4 (CXCR4) and chemokine ligand 12 (CXCL12) in metastasis of hepatocellular carcinoma cells. Cell Mol Immunol. 2008;5(5):373–8. Rubie C, Frick VO, Wagner M, Weber C, Kruse B, Kempf K, König J, Rau B, Schilling M. Chemokine expression in hepatocellular carcinoma versus colorectal liver metastases. World J Gastroenterol. 2006;12(41):6627–33. Couch FJ, Johnson MR, Rabe KG, Brune K, de Andrade M, Goggins M, Rothenmund H, Gallinger S, Klein A, Petersen GM, Hruban RH. 
The prevalence of BRCA2 mutations in familial pancreatic cancer. Cancer Epidemiol Biomarkers Prev. 2007;16(2):342–6. Greer JB, Whitcomb DC. Role of BRCA1 and BRCA2 mutations in pancreatic cancer. Gut. 2007;56(5):601–5. Je DW, O YM, Ji YG, Cho Y, Lee DH. The inhibition of SRC family kinase suppresses pancreatic cancer cell proliferation, migration, and invasion. Pancreas. 2014;43(5):768–76. Yang X, Zu X, Tang J, Xiong W, Zhang Y, Liu F, Jiang Y. Zbtb7 suppresses the expression of CDK2 and E2F4 in liver cancer cells: implications for the role of Zbtb7 in cell cycle regulation. Mol Med Rep. 2012;5(6):1475–80. Bisteau X, Caldez MJ, Kaldis P. The complex relationship between liver cancer and the cell cycle: a story of multiple regulations. Cancers. 2014;6(1):79–111. Stiewe T, Tuve S, Peter M, Tannapfel A, Elmaagacli AH, Pützer BM. Quantitative TP73 transcript analysis in hepatocellular carcinomas. Clin Cancer Res. 2004;10(2):626–33. Hsu LS, Lee HC, Chau GY, Yin PH, Chi CW, Lui WY. Aberrant methylation of EDNRB and p16 genes in hepatocellular carcinoma (HCC) in Taiwan. Oncol Rep. 2006;15(2):507–11. Vaquerizas JM, Kummerfeld SK, Teichmann SA, Luscombe NM. A census of human transcription factors: function, expression and evolution. Nat Rev Genet. 2009;10(4):252–63. Pickrell JK, Gaffney DJ, Gilad Y, Pritchard JK. False positive peaks in ChIP-seq and other sequencing-based functional assays caused by unannotated high copy number regions. Bioinformatics. 2011;27(15):2144–6. Rajewsky N. microRNA target predictions in animals. Nat Genet. 2006;38(Suppl):S8–13. Luscombe NM, Babu MM, Yu H, Snyder M, Teichmann SA, Gerstein M. Genomic analysis of regulatory network dynamics reveals large topological changes. Nature. 2004;431:308–12. Marbach D, Costello JC, Küffner R, Vega NM, Prill RJ, Camacho DM, Allison KR, Consortium TD, Kellis M, Collins JJ, Stolovitzky G. Wisdom of crowds for robust gene network inference. Nat Methods. 2012;9:796–804. 
Davidsen PK, Turan N, Egginton S, Falciani F. Multi-level functional genomics data integration as a tool for understanding physiology: a network perspective. J Appl Physiol. 2016;120(3):297–309.

We thank Dr. Holger Fröhlich, the author of biRte, for his help in the usage of biRte with multiple omics data sets as input, and Christopher Schiefer for his contribution to the re-implementation of the method proposed by Schacht et al. We acknowledge the advice of Prof. Dr. Erik van Nimwegen concerning ISMARA. The results in this work are in part based upon data generated by the TCGA Research Network. We would like to acknowledge the funding provided to S.T. and J.M. from the Berlin School of Integrative Oncology (BSIO, Graduate School 1091), which is supported by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) in the framework of the Excellence Initiative of the German federal and state governments. The experimental datasets analyzed during the current study are available in the TCGA repository, https://gdc-portal.nci.nih.gov, under the project names TCGA-COAD, TCGA-LIHC and TCGA-PAAD. TF – gene interactions were obtained from the TRANSFAC database (release 7.0, http://www.gene-regulation.com/pub/databases.html) and complemented with interactions from text mining based on the paper by Thomas et al. [28] (available via the FastForward DNA database under http://fastforward.sys-bio.net). The TF – gene network is provided in Additional file 1. ST performed literature research and quantitative comparisons and drafted the manuscript with the help of JM and UL. All authors read and approved the final manuscript.

Authors' information

Knowledge Management in Bioinformatics, Computer Science Department, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099, Berlin, Germany: Saskia Trescher, Jannes Münchmeyer & Ulf Leser. Correspondence to Saskia Trescher.
Lists information about the samples and files from TCGA included in our quantitative evaluation for all three cancer types (COAD, LIHC and PAAD). (XLS 697 kb)
Includes an adjacency list of the connected nodes of the TF – gene network. The list includes three columns ("TF", "gene", "edge") where each row indicates an association with the value of "edge" between a TF and a gene. Complexes of TFs are indicated with a separating "." between their components. (TXT 39 kb)

Trescher, S., Münchmeyer, J. & Leser, U. Estimating genome-wide regulatory activity from multi-omics data sets using mathematical optimization. BMC Syst Biol 11, 41 (2017). https://doi.org/10.1186/s12918-017-0419-z
Protein ligand-specific binding residue predictions by an ensemble classifier

Xiuzhen Hu, Kai Wang & Qiwen Dong

Prediction of ligand binding sites is important to elucidate protein functions and is helpful for drug design. Although much progress has been made, many challenges still need to be addressed. Prediction methods need to be carefully developed to account for chemical and structural differences between ligands. In this study, we present ligand-specific methods to predict the binding sites of protein-ligand interactions. First, a sequence-based method is proposed that extracts features from protein sequence information only, including evolutionary conservation scores and predicted structure properties. An improved AdaBoost algorithm is applied to address the serious imbalance problem between the binding and non-binding residues. Then, a combined method is proposed that combines the current template-free method and four other well-established template-based methods. The above two methods predict the ligand binding sites along the sequences using a ligand-specific strategy that covers metal ions, acid radical ions, nucleotides and ferroheme. Testing on a well-established dataset showed that the proposed sequence-based method outperformed the profile-based method by 4–19% in terms of the Matthews correlation coefficient on different ligands. The combined method outperformed each of the individual methods, with an improvement in the average Matthews correlation coefficient of 5.55% over all ligands. The results also show that the ligand-specific methods significantly outperform the general-purpose methods, which confirms the necessity of developing elaborate ligand-specific methods for ligand binding site prediction. Two efficient ligand-specific binding site predictors are presented.
The standalone package is freely available for academic usage at http://dase.ecnu.edu.cn/qwdong/TargetCom/TargetCom_standalone.tar.gz or upon request to the corresponding author.

The purpose of protein research is to identify and annotate protein functions. Many proteins perform their functions by interacting with other ligands, although only a small portion of the residues are in contact with the ligands. The recognition of binding residues is important for the elucidation of protein functions and for drug design applications [1]. Experimental methods to detect the binding residues are often expensive and time-consuming. With the large and increasing number of sequences deposited in various databases, it is valuable to predict the ligand binding sites using computational methods. During the last decade, much effort has been made towards accurately predicting ligand binding sites [2, 3]. Roughly speaking, these methods can be grouped into the following categories based on the source of the information used [4]: sequence-based methods, structure-based methods and hybrid methods that combine sequence with structure information [5]. The sequence-based methods [6] extract diverse features from the protein sequence directly or indirectly and input the features into a classifier to predict the possibility of binding residues. The most widely used feature is the position-specific scoring matrix (PSSM) generated by PSI-BLAST [7]. Other predicted features have also been used, including the predicted secondary structure, predicted solvent accessibility and predicted dihedral angles. Fang et al. [8] demonstrate that the PSSM contains most of the information needed for ligand function site prediction. Evolutionary conservation is an important indicator of function-related residues. The Rate4Site method [9] calculates a conservation score based on phylogenetic trees and uses the score to detect functionally important regions in proteins with known three-dimensional structures.
Capra et al. [10] presented a simple but efficient method that used Jensen-Shannon divergence to estimate sequence conservation. The structure-based methods basically dominate this field [11]. These methods generally use known templates with similar topology structures to find the "pocket" or "cavity" on the structure surface. The template-based methods search homologous structures with global topology; then, the putative binding sites can be transformed after superposition [12, 13]. The homology-derived model is still useful even if the structure of the target protein is not available [14]. The global comparison methods can find templates with similar topology, but the alignment in the binding pocket may not be accurate. The local comparison is sensitive to the binding pocket but has a high false positive rate [15]. The combination of global and local comparisons can obtain robust results, as shown by COFACTOR [16]. The other type of structure-based method searches the surface of the structure to find either a geometry-complementary [17] or energy-favourable [18, 19] region as the possible binding site. The hybrid methods use both sequence and structure information to obtain better predictions. For example, ConCavity [20] integrates the residue conservation scores and the output of other structure-based methods to identify protein surface cavities, and FREPS [21] predicts functional regions by detecting spatial clusters of conserved residues on the protein structure. Although much progress has been made in computational binding site predictions, many issues with the current methods require further investigation. First, many approaches use three-dimensional protein structures to identify the binding sites. In reality, only a very small proportion of proteins have experimentally solved structures deposited in Protein Data Bank (PDB) [22]. Obtaining structures for many proteins is difficult due to purification and crystallization issues. 
In contrast, available sequences [23] are exponentially increasing due to the advance of high-throughput sequencing techniques. Although structure models can be obtained using template-based [24] or ab initio structure prediction [25], the quality of the model has an important influence on the confidence of the binding site prediction, especially for hard target proteins [26] that do not have homologous templates in the current PDB library. Thus, it is necessary to develop powerful methods for binding site prediction from protein sequence information alone. This study will demonstrate that the sequence-based method is an effective complement when template-based methods fail to obtain a good predicted structure model. Second, most methods try to obtain all binding sites without carefully checking the differences between different ligands. However, ligands are chemically and structurally different. The assessment of binding site residue predictions in CASP9 [27] suggests that the assessment should be made according to the chemo-type categories of the ligand. The ProBiS-ligands server [28] predicts the types of ligands that can be bound to a given structure. Recently, researchers have paid attention to the differences between ligands, and many ligand-specific methods have been developed to obtain more accurate predictions. For example, Bharat et al. developed VitaPred [29] to predict vitamin-interacting residues. Moreover, nucleotide-binding residues were predicted using SITEpred [30], and ATP binding residue predictions were extensively investigated using many methods [31, 32]. Other ligands have also been explored, such as HEME [33], FAD [34], calcium [35], GTP [36], NAD [37], and zinc [38]. Third, the principle of protein-ligand binding is complicated, and each method can only explore specific binding site information. Thus, the combination of multiple methods can result in better predictions.
For example, MetaPocket 2.0 [39] combines eight methods to generate a consensus output for function site predictions. COACH [40] also achieves better predictions by integrating five methods. In view of the above, this study presents a robust ligand-specific binding residue predictor. Nine ligands were initially investigated to validate the proposed method. However, the proposed framework can easily integrate other ligand-specific predictors without much revision. First, a sequence-based method called TargetSeq was developed; this method only uses features from the protein sequence. The extracted features include the position-specific scoring matrix, the residue conservation scores, and the predicted secondary structure. These features are input into an ensemble classifier that is based on a modified AdaBoost algorithm to tackle the serious imbalance problem between the positive samples (binding residues) and negative samples (non-binding residues). Second, a combined method called TargetCom was developed that integrates the outputs of four well-established methods (COACH [40], COFACTOR [16], TM-SITE [40] and S-SITE [40]). Extensive experimental results show that the combined method outperforms each of the individual methods.

Benchmark dataset and ligands

Most ligand binding site prediction methods use three-dimensional structures from the PDB database [22]. A non-redundant subset for specific or general ligands is obtained as a benchmark dataset after filtering the whole database. However, not all the ligands in PDB are natively bound to the structures. Many ligands are included as additives to help solve the structures. Thus, much effort has been made to filter out the biologically relevant ligands from the PDB structures, and many well-established databases have been developed, such as FireDB [41], LigASite [42], PDBbind [43] and BioLip [44].
Because BioLip is a newly developed and semi-manually curated database, this study uses BioLip as the data source. First, PDB chains with specific ligands are extracted from the BioLip database. If one chain has multiple sites for the same type of ligand, all sites are considered effective. Then, these structures are filtered by keeping only structures with a resolution better than 3.0 Å and a sequence length larger than 50 residues. Redundant structures are removed using the CD-HIT program [45] with a sequence identity threshold of 0.4. Although CD-HIT is extremely fast and widely used, it estimates similarities by common word counting instead of a sequence alignment, so the identity of some sequence pairs may remain slightly above the specified threshold. To obtain strictly non-redundant benchmark data, the dataset is further filtered using the global dynamic programming algorithm of the Needleman-Wunsch alignment. Nine types of ligands are used here to evaluate the proposed ligand-specific method: six small ligands and three large ligands. The small ligands comprise four metal ions (BioLip IDs: CU, FE, FE2 and ZN) and two acid radical ions (BioLip IDs: SO4 and PO4). The large ligands comprise two nucleotides (BioLip IDs: ATP and FMN) and one HEME. The ligand HEME corresponds to the HEM and HEC ligands in the BioLip database because they are two subtypes of the HEME molecule. The detailed composition of the dataset is given in Table 1.

Table 1 Composition of the dataset for the 9 types of ligands

For each ligand, five-fold cross-validation is used to evaluate the performance of the proposed method. The dataset is randomly divided into five parts. One part is used to obtain the test results, and the other four parts are used to train the model. This process is repeated five times so that each part is tested once. The average performance over the five parts is reported as the final cross-validation result.
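The five-fold cross-validation protocol above can be sketched as follows; the function name and the use of indices as stand-ins for protein chains are illustrative, not part of the original pipeline.

```python
import random

def five_fold_splits(samples, seed=0):
    """Randomly partition `samples` into five parts; yield (train, test)
    pairs so that each part is tested exactly once while the remaining
    four parts form the training set."""
    shuffled = list(samples)
    random.Random(seed).shuffle(shuffled)
    folds = [shuffled[i::5] for i in range(5)]
    for k in range(5):
        test = folds[k]
        train = [s for i, fold in enumerate(folds) if i != k for s in fold]
        yield train, test

# Example: 100 hypothetical protein chains, identified by index.
splits = list(five_fold_splits(range(100)))
```

The average of the five per-part results then gives the reported cross-validation performance.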
Sequence-based method pipeline

First, we present a sequence-based method named TargetSeq, which only uses information from protein sequences or their variants obtained through a multiple sequence alignment (Fig. 1a). For a target residue in a protein sequence, a sliding window of length L is used to extract the sequence features, including the position-specific scoring matrix, the predicted structure properties and the conservation scores. Each target residue is thereby represented as a feature vector. These vectors are then input into a support vector machine to obtain the classifier. Note that to handle the class-imbalance problem, a modified AdaBoost algorithm is used to build the ensemble classifier. For a testing residue, the same procedure is used to obtain the feature vector, and the ensemble classifier outputs the probability that it is a binding site. The binding sites are predicted in a ligand-specific manner: for each type of ligand, a corresponding ensemble classifier is constructed. The overall flowchart is illustrated in Fig. 1a. The feature encoding and training algorithm are detailed below.

The flowchart of the proposed TargetSeq (a) and TargetCom (b) methods for protein-ligand binding site prediction

Position-specific scoring matrix

The position-specific scoring matrix (PSSM) contains protein evolutionary information and has been widely used for many prediction problems in bioinformatics. In this study, the PSSM is generated by running PSI-BLAST [7] against the non-redundant protein dataset (nr) from NCBI with an e-value threshold of 0.001 and three iterations. The original PSSM scores are transformed by the following logistic function before they are extracted as features: $$ y=\frac{1}{1+2^{-x}} $$ where x is the original PSSM value and y is the normalized value. A sliding window of length L centred at the target residue is used to extract the PSSM values.
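A minimal sketch of the PSSM normalization and windowing just described; the zero-padding of window positions that fall outside the sequence is an assumption, since the padding scheme is not specified in the text.

```python
def normalize_pssm(pssm):
    """Map each raw PSSM score x into (0, 1) via the logistic
    transform y = 1 / (1 + 2^(-x)) given above."""
    return [[1.0 / (1.0 + 2.0 ** (-x)) for x in row] for row in pssm]

def pssm_window_features(norm_pssm, center, L):
    """Concatenate the 20 normalized scores of the L residues centred
    at `center` into an L*20-dimensional feature vector; positions
    outside the sequence are zero-padded (an assumption)."""
    half = L // 2
    features = []
    for i in range(center - half, center + half + 1):
        if 0 <= i < len(norm_pssm):
            features.extend(norm_pssm[i])
        else:
            features.extend([0.0] * 20)
    return features
```

With an odd window length L, this yields the L*20-dimensional PSSM feature block used by the classifier.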
The window length is a parameter of the method and is optimized during cross-validation. Owing to the differences between ligands, each ligand has its own optimal window length, as shown in the Results section. Therefore, the number of dimensions of the PSSM features is L*20.

Predicted structure properties

Previous studies showed that predicted structure properties are helpful for function site identification. Here, we use the predicted secondary structure, relative solvent accessibility and backbone torsion angles as additional features. The predicted secondary structures are obtained using PSIPRED [46], and a three-dimensional Boolean vector indicates the type of secondary structure (alpha-helix, beta-strand or coil). The relative solvent accessibilities are predicted by ANGLOR [47], which uses a neural network as the classifier, and a single Boolean value indicates whether the residue is buried (<25%) or exposed (>25%). The backbone torsion angles are also predicted by ANGLOR [47], and a two-dimensional real-valued vector gives the φ and ψ dihedral angles. Taking the local window of length L into consideration, the number of dimensions of the predicted structure properties is L*6.

Residue conservation is a crucial indicator of functionally important residues that has been extensively investigated and is well suited to ligand binding site prediction. First, the position-specific conservation is calculated with the software implemented by Capra and Singh [10], with two information-theoretic scores [the relative entropy score (RE) and the Jensen-Shannon divergence score (JSD)] used as features. The JSD score has been reported to perform similarly to the Rate4Site algorithm [48] for the identification of functionally important residues, but it is several orders of magnitude faster. The number of dimensions of the position-specific conservation is L*2.
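The two information-theoretic scores can be sketched as follows, following the standard definitions of relative entropy and Jensen-Shannon divergence; the four-symbol distributions and uniform background in the test are illustrative only, and this is not the actual Capra-Singh implementation (which also applies gap and window weighting).

```python
import math

def _kl(p, q):
    """Kullback-Leibler divergence (base 2) between distributions p and q."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def relative_entropy(column, background):
    """RE score: divergence of the alignment-column amino acid
    distribution from a background distribution."""
    return _kl(column, background)

def jensen_shannon(column, background):
    """JSD score: symmetrized, smoothed divergence between the column
    distribution and the background; higher means more conserved."""
    m = [(p + q) / 2.0 for p, q in zip(column, background)]
    return 0.5 * _kl(column, m) + 0.5 * _kl(background, m)
```

A perfectly conserved column scores higher than a near-background column, which is what makes these scores useful features for binding residues.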
In addition to the above position-specific conservation, we also consider the conservation of the sequence segment within the entire local window. A position weight matrix, similar to the PSSM, is constructed from all sequence segments. The occurrence frequency of each residue at a specific position within the local window is calculated as follows: $$ p_{i,j}=\frac{n_{i,j}+\sqrt{N_i}/21}{N_i+\sqrt{N_i}} $$ where i denotes the position index within the window, j denotes one of the twenty residues plus an additional symbol for an unknown residue or a position outside the sequence, n_{i,j} is the number of occurrences of residue j at position i, N_i is the number of occurrences of all residues at position i, and p_{i,j} is the frequency of residue j at position i, which is further normalized by the background frequency: $$ m_{i,j}=\log\left(\frac{p_{i,j}}{p_j}\right) $$ where p_j is the background frequency of residue j and m_{i,j} is the corresponding element of the position weight matrix. A conservation score for a specific sequence segment can then be calculated from the position weight matrix and the sequence of the segment: $$ S=\frac{\sum_{i=1}^L\left(m_{i,s_i}-m_{i,\min}\right)}{\sum_{i=1}^L\left(m_{i,\max}-m_{i,\min}\right)} $$ where m_{i,\min} and m_{i,\max} are the minimum and maximum values, respectively, for position i in the matrix, and s_i is the residue type at position i of the target sequence segment. This score is calculated with respect to the positive and the negative samples, so that a two-dimensional feature vector is obtained for each sequence segment.

In this study, the support vector machine (SVM) is used as the base classifier. SVM is a class of supervised machine learning algorithms first presented by Vapnik [49]. SVM has shown excellent performance in practice and has a strong theoretical foundation in statistical learning.
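The segment conservation score S can be sketched directly from its definition; representing each matrix row as a dict from residue symbol to m_{i,j} is an implementation choice for illustration.

```python
def segment_conservation(segment, pwm):
    """Normalized conservation score S of a sequence segment, where
    pwm[i] maps each residue symbol to its position-weight-matrix
    value m_{i,j}; S ranges from 0 (least conserved positions chosen)
    to 1 (most conserved positions chosen)."""
    numerator = denominator = 0.0
    for i, residue in enumerate(segment):
        row = pwm[i]
        lo, hi = min(row.values()), max(row.values())
        numerator += row[residue] - lo
        denominator += hi - lo
    return numerator / denominator
```

Computing S against the positive-sample matrix and against the negative-sample matrix gives the two-dimensional feature described above.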
Here, the LibSVM package [50] is used as the implementation of the SVM, and the radial basis function is selected as the kernel. The parameter of the kernel function and the regularization parameter C are selected by cross-validation. There is a serious class-imbalance problem in ligand binding site prediction: the number of binding residues is far lower than the number of non-binding residues. Traditional machine learning algorithms do not perform well on such datasets because they are developed under the assumption that the classes are balanced. Recently, ensemble classifiers have emerged as one way to address the imbalance problem. The basic idea of an ensemble classifier is to train multiple base classifiers and combine them to obtain a single class label. The AdaBoost algorithm [51] is one of the most representative methods. AdaBoost trains a series of base classifiers by randomly selecting samples from the training dataset. In each round, the misclassified samples are assigned larger weights so that they may be re-trained in the subsequent round. Additionally, each base classifier is assigned a weight associated with its overall accuracy. The output for a testing sample is the weighted vote of the base classifiers. In this study, a modified version of AdaBoost is used. First, random sample selection is performed only on the negative samples (non-binding residues); all positive samples are used in each round, because the number of negative samples is several orders of magnitude larger than the number of positive samples, especially for small ligands. Second, to prevent over-fitting and make full use of the negative samples, the weight of the misclassified negative samples is increased only on a small scale. The overall modified AdaBoost procedure is shown in Algorithm 1.
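The two modifications just described can be sketched as follows. This is a hedged reconstruction, not Algorithm 1 itself: the round count, the weight-increase factor `neg_step`, the balanced undersampling ratio, and the use of a simple threshold stump in place of the LibSVM base classifier are all illustrative assumptions.

```python
import math
import random

def modified_adaboost(pos, neg, train_base, rounds=10, neg_step=1.05, seed=0):
    """Sketch of the modified AdaBoost described in the text: every
    round keeps all positive samples, undersamples the negatives in
    proportion to their weights, and up-weights misclassified
    negatives only by the small factor `neg_step`.  `train_base` fits
    a base classifier on (samples, labels) and returns a predict
    function."""
    rng = random.Random(seed)
    w = [1.0] * len(neg)                      # weights on negatives only
    ensemble = []                             # (alpha, predict) pairs
    for _ in range(rounds):
        idx = rng.choices(range(len(neg)), weights=w, k=len(pos))
        chosen = [neg[i] for i in idx]        # balanced round sample
        predict = train_base(pos + chosen, [1] * len(pos) + [0] * len(chosen))
        wrong = (sum(predict(x) != 1 for x in pos)
                 + sum(predict(x) != 0 for x in chosen))
        err = min(max(wrong / (len(pos) + len(chosen)), 1e-9), 1 - 1e-9)
        alpha = 0.5 * math.log((1 - err) / err)   # classifier weight
        ensemble.append((alpha, predict))
        for i, x in enumerate(neg):           # gentle re-weighting
            if predict(x) != 0:
                w[i] *= neg_step
    def vote(x):
        score = sum(a * (1 if p(x) == 1 else -1) for a, p in ensemble)
        return 1 if score > 0 else 0
    return vote
```

The weighted vote at the end mirrors standard AdaBoost; the departure is that positive samples are never dropped and negative weights grow slowly.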
Combination of the template-free and template-based methods

The proposed TargetCom method combines the template-free method (TargetSeq) with the template-based methods (COFACTOR, TM-SITE, S-SITE and COACH) to obtain improved performance (Fig. 1b). The process is similar to that of the proposed sequence-based method. A sliding window centred at the target residue is used to collect the output of each individual method. The target residue is then converted into a feature vector by concatenating the outputs for all residues in the window. The modified AdaBoost algorithm is then used to build the ensemble classifier, which produces the probability output for a testing residue. The overall flowchart is depicted in Fig. 1b.

Template-based methods use proteins with known ligand binding sites to infer the binding residues of the target sequence. The basic assumption behind these methods is that homologous proteins often have similar functions. Template-based methods have attracted a great deal of attention and have shown powerful performance in CASP [11]. However, the similarity between the target sequence and the template affects the accuracy of the template-based methods; if no homologous templates are available for a "hard" target protein, the template-based methods fail. In contrast, template-free methods are robust because they use only sequence information, although their performance is worse than that of template-based methods when homologous templates can be identified. Based on this observation, we present a combined method named TargetCom that combines the sequence-based, template-free method TargetSeq with four template-based methods (COFACTOR [16], TM-SITE, S-SITE and COACH [40]). COFACTOR is a structure-based method that first uses a global structural alignment to identify possible templates with the same fold and then adopts a local 3D motif alignment to obtain the binding residues.
TM-SITE uses a similar architecture but adds an additional clustering step to derive the binding sites. S-SITE uses a binding site-specific sequence profile-profile comparison to detect the templates and ligand binding sites. COACH is a consensus method that combines the output of the above three methods and two other methods and achieves excellent performance in the Continuous Automated Model EvaluatiOn (CAMEO). To provide an unbiased comparison with the sequence-based method, all of the structure-based methods use a predicted model and are run in "benchmark" mode, in which all homologous templates with sequence identities larger than 30% are removed. The probability output of the TargetSeq method is collected as one of the features of the TargetCom method. The C-score and cluster density of the other four methods are selected as input features. The C-score is the confidence score of the prediction and is calculated based on the similarity between the query target and the templates. The cluster density is the percentage of templates in a specific binding site. Because the proposed combination method is ligand-specific, the binding site predictions for a specific ligand need to be extracted from the four general-purpose methods. The possible ligands of a predicted binding site are collected from the identified templates; if one of them matches the specific ligand, the binding site is selected as a candidate. This strategy is better than using only the most probable ligand (data not shown). These features are also input into a support vector machine to obtain the model, and the trained model is then used to classify new testing samples.

Evaluation metrics

The following metrics are used to evaluate the proposed methods: accuracy, sensitivity, specificity and the Matthews correlation coefficient (MCC).
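The four evaluation metrics can be computed from the confusion-matrix counts in one small helper; the function name is illustrative.

```python
import math

def evaluation_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, specificity and MCC from the counts of
    true/false positive and negative predictions."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return accuracy, sensitivity, specificity, mcc
```

For heavily imbalanced data such as binding residues, MCC is the most informative of the four, since accuracy and specificity are dominated by the abundant negative class.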
$$ Accuracy=\frac{TP+TN}{TP+FP+TN+FN} $$
$$ Sensitivity=\frac{TP}{TP+FN} $$
$$ Specificity=\frac{TN}{TN+FP} $$
$$ MCC=\frac{TP\times TN-FP\times FN}{\sqrt{\left(TP+FP\right)\left(TP+FN\right)\left(TN+FP\right)\left(TN+FN\right)}} $$
where TP is the number of binding residues correctly predicted as binding residues, TN is the number of non-binding residues correctly predicted as non-binding residues, FP is the number of non-binding residues wrongly predicted as binding residues, and FN is the number of binding residues wrongly predicted as non-binding residues.

Sequence-based method results

The proposed method (TargetSeq) was evaluated using five-fold cross-validation and compared with the S-SITE method. Although S-SITE is a template-based method, it does not use three-dimensional structure information; the comparison is therefore between two sequence-based methods (one template-free and one template-based). As shown in Table 2, the optimal window length differs between ligands, with the small ligands usually having short window lengths and the large ligands long ones. The size of the binding pocket is generally proportional to the volume of the binding ligand; thus, the amount of local neighbour information needed to predict the binding residues might also change with the size of the ligand. The proposed method (TargetSeq) makes predictions for most ligands with an accuracy varying from 96.62 to 99.02%, specificity from 95.26 to 99.81% and MCC from 0.19 to 0.66. The performance on the SO4 ligand appeared to be especially low. As shown in Additional file 1, none of the methods obtained good performance on this ligand, indicating that SO4 is a hard ligand to predict. Overall, the proposed method outperformed the S-SITE method on most of the ligands, with the exceptions of ATP and HEME, possibly because the large window lengths for these ligands introduced extra noise.
Table 2 Performance of the proposed sequence-based methods on the 9 types of ligands over five-fold cross-validation and comparison with S-SITE

Combined method results

The proposed combination method (TargetCom) combines the output of the proposed template-free method and four other template-based methods. COACH is also a consensus method and outperforms the other methods, as shown in reference [40]. Therefore, we only list the comparison of TargetCom and COACH in Table 3. The detailed results of all methods are provided in Additional file 1.

Table 3 Performance of the proposed combined methods on the 9 types of ligands over five-fold cross-validation and comparison with COACH

The proposed TargetCom outperformed COACH on all ligands with an average MCC increase of 0.0533, on average 10% higher than the COACH MCC value. The improvement made by TargetCom is mainly a result of the complementary properties of the individual component predictors, as demonstrated by a previous study [40]. The template-free method is a complement of the template-based methods, as discussed in the subsequent section. The head-to-head comparison of TargetCom with the other individual methods is shown in Fig. 2, together with the Pearson correlation coefficients. The maximum correlation is observed between TargetCom and COACH, indicating that COACH makes the greatest contribution to TargetCom, followed by S-SITE, TM-SITE, TargetSeq and COFACTOR. The P-values of Student's t-test between any two methods on the proteins of all ligands are calculated and shown in Table 4. The P-values between TargetCom and the other methods are all very small, demonstrating that the improvement from the consensus is significant.

Head-to-head comparisons between TargetCom and the individual component methods on the proteins of all ligands.
CC is the Pearson correlation coefficient between the MCCs of the two compared methods

Table 4 The p-values in Student's t-test for the differences in the MCC scores between each pair of predictors on the proteins of all ligands

Data difference between BioLip and LPC

The first step towards the automatic prediction of ligand binding sites is defining the binding residues between the protein and the ligand. Another important issue is that biologically irrelevant ligands need to be filtered out before the ligand binding residues are identified. BioLip [44] is a newly developed, semi-manually curated database of biologically relevant ligand-protein interactions. Its definition of a binding site is the same as the official CASP definition: a binding site consists of all protein residues in the target structure having at least one (non-hydrogen) atom within a certain distance of biologically relevant ligand atoms: $$ d_{ij}\le r_i+r_j+c $$ where d_{ij} is the distance between a residue atom i and a ligand atom j, r_i and r_j are the van der Waals radii of the involved atoms, and c is a tolerance distance of 0.5 Å. Many previous studies used the Ligand Protein Contact (LPC) software [52] to define the binding residues; this software is based upon surface complementarity analysis [53]. In this study, the difference between the binding sites defined by LPC and by BioLip was investigated. The ATP168 dataset [32], collected by Chauhan et al. for ATP binding site prediction, is a representative dataset defined by LPC. The same proteins were extracted from the BioLip database, and the corresponding binding sites of the ATP ligand were gathered. The binding sites of these proteins as defined by LPC and by BioLip were then compared, and the difference was significant: 1968 binding residues were defined by both methods, 1117 binding residues were defined solely by LPC, and 208 binding residues were defined solely by BioLip.
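The distance criterion above translates directly into code. The van der Waals radii in the example are illustrative values, not the exact table BioLip uses, and the atom representation is an assumption for the sketch.

```python
import math

def is_binding_residue(residue_atoms, ligand_atoms, vdw_radii, c=0.5):
    """BioLip/CASP criterion: a residue is a binding residue if any of
    its (non-hydrogen) atoms lies within r_i + r_j + c angstroms of a
    ligand atom.  Atoms are (element, (x, y, z)) tuples; `vdw_radii`
    maps element symbols to van der Waals radii."""
    for elem_i, pos_i in residue_atoms:
        for elem_j, pos_j in ligand_atoms:
            cutoff = vdw_radii[elem_i] + vdw_radii[elem_j] + c
            if math.dist(pos_i, pos_j) <= cutoff:
                return True
    return False
```

Because LPC instead derives contacts from surface complementarity, running both definitions over the same structures is exactly how the residue-count discrepancy above can be reproduced.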
The number of binding residues defined by LPC was thus more than 40% higher than the number defined by BioLip. To quantitatively assess the influence of the binding site definition on the performance of a predictor, a baseline method (SVM-PSSM) that uses only the PSSM as input to a support vector machine was implemented and tested on the ATP168 dataset with the two ligand binding site definitions. As shown in Table 5, the SVM-PSSM method with binding sites defined by LPC performed significantly better than the method using the BioLip definition. Because the method and the data are the same, this large difference is necessarily caused by the different binding site definitions. Because LPC defines more binding sites, the performance of a predictor trained on an LPC-derived dataset will be over-estimated.

Table 5 Performance comparison of SVM-PSSM on the ATP168 dataset with different definitions of ligand binding sites

The sequence-based method is a complement of the structure-based method

The structure-based methods use three-dimensional structures to identify binding sites and can usually obtain better predictions than other methods. However, the structure-based methods fail when no structures or homologous templates are available. In this case, sequence-based methods may be helpful, which is quantitatively assessed here. The "hard" target proteins, which do not have any homologous templates, are identified by the meta-threading program LOMETS [26]. For each threading program, the target-template alignment is measured by the Z-score, defined as the difference between the raw alignment score and its mean in units of standard deviation. A target protein is classified as "hard" if none of the threading programs identifies a template with a Z-score larger than the specified threshold. The performance of all methods used in this study on the "hard" target proteins is listed in Table 6.
As expected, none of the methods generated satisfactory predictions on these hard target proteins. Among the non-combined methods, the sequence-based methods (S-SITE and TargetSeq) significantly outperformed the structure-based methods (COFACTOR and TM-SITE). In most cases, the structure-based methods could not identify any binding sites. S-SITE usually performs better than the other methods on small ligands (CU, FE and ZN), while TargetSeq performs better on the ATP and PO4 ligands. These results demonstrate that the sequence-based methods are effective complements of the structure-based methods when no homologous templates are available.

Table 6 Performance of all methods on the "hard" target proteins over each type of ligand

Ligand-specific method helps improve the prediction performance

The ligand-specific method trains a model for each type of ligand, whereas a general-purpose method uses one model for all types of ligands. We experimentally compare these two strategies. The datasets for all 9 ligands were merged into a single dataset, in which the positive samples are the binding residues regardless of the type of ligand to which they bind and the negative samples are the non-binding residues. The general-purpose method was evaluated on this dataset via five-fold cross-validation. To give an unbiased comparison, the proposed TargetSeq method was re-run on the merged dataset by cross-validation. During the evaluation phase, the performance was calculated for each type of ligand and compared with the ligand-specific mode of TargetSeq. As shown in Table 7, the ligand-specific mode of TargetSeq consistently outperforms the general-purpose mode in terms of accuracy, specificity and MCC. The performance of the general-purpose mode decreases dramatically on small ligands.
The sensitivities of the general-purpose mode are higher than those of the ligand-specific mode, indicating that the general-purpose mode of TargetSeq predicts too many binding residues; accordingly, its average precision (the percentage of correct predictions over all predictions) is only 13.39%.

Table 7 Performance comparison of the general purpose and ligand-specific models of the TargetSeq method on the dataset of the 9 ligands by five-fold cross-validation

Comparison with other methods

There are many outstanding studies on ligand binding site prediction for proteins. The performance of the proposed methods is compared here with that reported in other studies. ATP is one of the most extensively studied ligands for binding site prediction. The proposed TargetCom method achieves an overall accuracy of 97.17% and an MCC of 0.58, and the proposed TargetSeq method achieves an overall accuracy of 97.14% and an MCC of 0.48 on the ATP ligand. The ATPsite method [31] reported an overall accuracy of 96.2% and an MCC of 0.43, which is lower than the proposed methods. The nSITEpred method [30] predicts the binding sites of several nucleotides; it reported an overall accuracy of 96% and an MCC of 0.46 for the ATP ligand, which is also lower than the proposed methods. The newly developed ATPBR method [54] reported an overall accuracy of 87.53% and an MCC of 0.55; the accuracy is lower than that of the proposed methods, and the MCC is larger than that of TargetSeq but lower than that of TargetCom. Lu et al. [55] predicted the binding sites of metal ions using the fragment transformation method. Three of their metal ions (CU, FE2 and ZN) overlap with the current study. They used accuracy, true positive rate and false positive rate as evaluation metrics, so we use accuracy for the comparison. Lu et al.
reported accuracies of 94.9%, 94.9% and 94.8% for the ligands CU, FE2 and ZN, respectively, while the proposed TargetSeq method achieves accuracies of 99.02%, 99.20% and 99.01% and the proposed TargetCom method achieves accuracies of 99.21%, 99.27% and 98.99% for CU, FE2 and ZN, respectively. This clearly shows that the proposed methods outperform the method of Lu et al. The HemeBIND method [33] predicts the binding sites of the HEME ligand and reported an overall accuracy of 97.17% and an MCC of 0.58. The proposed TargetCom method achieves an overall accuracy of 94.96% and an MCC of 0.66, and the proposed TargetSeq method achieves an overall accuracy of 92.62% and an MCC of 0.53 on the HEME ligand. The above comparisons show that the proposed methods provide state-of-the-art performance for binding site prediction of proteins.

This study presented two effective ligand-specific methods for ligand binding site prediction. The sequence-based method uses only sequence information and adopts an improved AdaBoost method for binding site prediction. The combined method integrates the template-free and template-based methods. Both methods were tested on a dataset extracted from the recently developed, semi-manually curated ligand binding site database BioLip. The experimental results demonstrate the efficacy of the proposed methods. The sequence-based method is an effective complement to the structure-based methods when no structures are available or no homologous templates can be identified. The ligand-specific methods help improve the prediction performance. We also found that the binding site definition in BioLip is stricter than the definition in LPC. A future direction is to use a feature selection or extraction algorithm to remove possible noise in the high-dimensional feature space. Another open issue for ligand-specific binding site prediction is how to select the negative samples (non-binding residues), because proteins may have multiple ligands.
The non-binding residues for one ligand may be binding residues for another ligand; thus, these residues have potential binding ability. Ligand-specific predictors need to be explored intensively to develop an excellent method for ligand binding site prediction.

References

1. Dong Q, Wang S, Wang K, Liu X, Liu B. Identification of DNA-binding proteins by auto-cross covariance transformation. In: 2015 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). 2015. p. 470–5.
2. Dukka BK. Structure-based methods for computational protein functional site prediction. Comput Struct Biotechnol J. 2013;8:e201308005.
3. Xie ZR, Hwang MJ. Methods for predicting protein-ligand binding sites. Methods Mol Biol. 2015;1215:383–98.
4. Leis S, Schneider S, Zacharias M. In silico prediction of binding sites on proteins. Curr Med Chem. 2010;17(15):1550–62.
5. Wong GY, Leung FH, Ling SH. Predicting protein-ligand binding site using support vector machine with protein properties. IEEE/ACM Trans Comput Biol Bioinform. 2013;10(6):1517–29.
6. Chen P, Huang JZ, Gao X. LigandRFs: random forest ensemble to identify ligand-binding residues from sequence information alone. BMC Bioinformatics. 2014;15 Suppl 15:S4.
7. Altschul SF, Madden TL, Schäffer AA, Zhang J, Zhang Z, Miller W, Lipman DJ. Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucleic Acids Res. 1997;25(17):3389–402.
8. Fang C, Noguchi T, Yamana H. Simplified sequence-based method for ATP-binding prediction using contextual local evolutionary conservation. Algorithms Mol Biol. 2014;9(1):7.
9. Pupko T, Bell RE, Mayrose I, Glaser F, Ben-Tal N. Rate4Site: an algorithmic tool for the identification of functional regions in proteins by surface mapping of evolutionary determinants within their homologues. Bioinformatics. 2002;18 suppl 1:S71–7.
10. Capra JA, Singh M. Predicting functionally important residues from sequence conservation. Bioinformatics. 2007;23(15):1875–82.
11. Gallo Cassarino T, Bordoli L, Schwede T. Assessment of ligand binding site predictions in CASP10. Proteins. 2014;82(S2):154–63.
12. Wass MN, Kelley LA, Sternberg MJ. 3DLigandSite: predicting ligand-binding sites using similar structures. Nucleic Acids Res. 2010;38(Web Server issue):W469–73.
13. Roy A, Zhang Y. Recognizing protein-ligand binding sites by global structural alignment and local geometry refinement. Structure. 2012;20(6):987–97.
14. Brylinski M, Skolnick J. FINDSITE: a threading-based approach to ligand homology modeling. PLoS Comput Biol. 2009;5(6):e1000405.
15. Konc J, Janežič D. ProBiS algorithm for detection of structurally similar protein binding sites by local structural alignment. Bioinformatics. 2010;26(9):1160–8.
16. Roy A, Yang J, Zhang Y. COFACTOR: an accurate comparative algorithm for structure-based protein function annotation. Nucleic Acids Res. 2012;40(Web Server issue):W471–7.
17. Huang B, Schroeder M. LIGSITEcsc: predicting ligand binding sites using the Connolly surface and degree of conservation. BMC Struct Biol. 2006;6(1):19.
18. Laurie AT, Jackson RM. Q-SiteFinder: an energy-based method for the prediction of protein–ligand binding sites. Bioinformatics. 2005;21(9):1908–16.
19. Ngan C-H, Hall DR, Zerbe B, Grove LE, Kozakov D, Vajda S. FTSite: high accuracy detection of ligand binding sites on unbound protein structures. Bioinformatics. 2012;28(2):286–7.
20. Capra JA, Laskowski RA, Thornton JM, Singh M, Funkhouser TA. Predicting protein ligand binding sites by combining evolutionary sequence conservation and 3D structure. PLoS Comput Biol. 2009;5(12):e1000585.
21. Nemoto W, Toh H. Functional region prediction with a set of appropriate homologous sequences-an index for sequence selection by integrating structure and sequence information with spatial statistics. BMC Struct Biol. 2012;12(1):11.
22. Rose PW, Prlić A, Bi C, Bluhm WF, Christie CH, Dutta S, Green RK, Goodsell DS, Westbrook JD, Woo J.
The RCSB Protein Data Bank: views of structural biology for basic and applied research and education. Nucleic Acids Res. 2015;43(D1):D345–56.
23. UniProt Consortium. UniProt: a hub for protein information. Nucleic Acids Res. 2015;43(Database issue):D204.
24. Yang J, Yan R, Roy A, Xu D, Poisson J, Zhang Y. The I-TASSER Suite: protein structure and function prediction. Nat Methods. 2015;12(1):7–8.
25. Xu D, Zhang Y. Ab initio protein structure assembly using continuous structure fragments and optimized knowledge-based force field. Proteins. 2012;80(7):1715–35.
26. Wu S, Zhang Y. LOMETS: a local meta-threading-server for protein structure prediction. Nucleic Acids Res. 2007;35(10):3375–82.
27. Schmidt T, Haas J, Cassarino TG, Schwede T. Assessment of ligand binding residue predictions in CASP9. Proteins. 2009;77 Suppl 9:138.
28. Konc J, Janežič D. ProBiS-ligands: a web server for prediction of ligands by examination of protein binding sites. Nucleic Acids Res. 2014;42(Web Server issue):W215–20.
29. Panwar B, Gupta S, Raghava GP. Prediction of vitamin interacting residues in a vitamin binding protein using evolutionary information. BMC Bioinformatics. 2013;14:44.
30. Chen K, Mizianty MJ, Kurgan L. Prediction and analysis of nucleotide-binding residues using sequence and sequence-derived structural descriptors. Bioinformatics. 2012;28(3):331–41.
31. Chen K, Mizianty MJ, Kurgan L. ATPsite: sequence-based prediction of ATP-binding residues. Proteome Sci. 2011;9 Suppl 1:S4.
32. Chauhan JS, Mishra NK, Raghava GP. Identification of ATP binding residues of a protein from its primary sequence. BMC Bioinformatics. 2009;10:434.
33. Liu R, Hu J. HemeBIND: a novel method for heme binding residue prediction by combining structural and sequence information. BMC Bioinformatics. 2011;12:207.
34. Mishra NK, Raghava GP. Prediction of FAD interacting residues in a protein from its primary sequence using evolutionary information. BMC Bioinformatics. 2010;11 Suppl 1:S48.
Horst JA, Samudrala R. A protein sequence meta-functional signature for calcium binding residue prediction. Pattern Recogn Lett. 2010;31(14):2103–12. Chauhan JS, Mishra NK, Raghava GP. Prediction of GTP interacting residues, dipeptides and tripeptides in a protein from its evolutionary information. BMC Bioinformatics. 2010;11:301. Ansari HR, Raghava GP. Identification of NAD interacting residues in proteins. BMC Bioinformatics. 2010;11:160. Shu N, Zhou T, Hovmöller S. Prediction of zinc-binding sites in proteins from sequence. Bioinformatics. 2008;24(6):775–82. Zhang Z, Li Y, Lin B, Schroeder M, Huang B. Identification of cavities on protein surface using multiple computational approaches for drug binding site prediction. Bioinformatics. 2011;27(15):2083–8. Yang J, Roy A, Zhang Y. Protein-ligand binding site recognition using complementary binding-specific substructure comparison and sequence profile alignment. Bioinformatics. 2013;29(20):2588–95. Maietta P, Lopez G, Carro A, Pingilley BJ, Leon LG, Valencia A, Tress ML. FireDB: a compendium of biological and pharmacologically relevant ligands. Nucleic Acids Res. 2014;42(Database issue):D267–72. Dessailly BH, Lensink MF, Orengo CA, Wodak SJ. LigASite—a database of biologically relevant binding sites in proteins with known apo-structures. Nucleic Acids Res. 2008;36 suppl 1:D667–73. Wang R, Fang X, Lu Y, Yang C-Y, Wang S. The PDBbind database: methodologies and updates. J Med Chem. 2005;48(12):4111–9. Yang J, Roy A, Zhang Y. BioLiP: a semi-manually curated database for biologically relevant ligand-protein interactions. Nucleic Acids Res. 2013;41(Database issue):D1096–1103. Fu L, Niu B, Zhu Z, Wu S, Li W. CD-HIT: accelerated for clustering the next-generation sequencing data. Bioinformatics. 2012;28(23):3150–2. Buchan DW, Minneci F, Nugent TC, Bryson K, Jones DT. Scalable web services for the PSIPRED Protein Analysis Workbench. Nucleic Acids Res. 2013;41(W1):W349–57. Wu S, Zhang Y. 
ANGLOR: a composite machine-learning algorithm for protein backbone torsion angle prediction. PLoS One. 2008;3(10):e3400. Mayrose I, Graur D, Ben-Tal N, Pupko T. Comparison of site-specific rate-inference methods for protein sequences: empirical Bayesian methods are superior. Mol Biol Evol. 2004;21(9):1781–91. Vapnik VN, Vapnik V. Statistical learning theory, vol. 1. New York: Wiley; 1998. Chang C-C, Lin C-J. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST). 2011;2(3):27. Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. J Comput Syst Sci. 1997;55(1):119–39. Sobolev V, Sorokine A, Prilusky J, Abola EE, Edelman M. Automated analysis of interatomic contacts in proteins. Bioinformatics. 1999;15(4):327–32. Sobolev V, Wade RC, Vriend G, Edelman M. Molecular docking using surface complementarity. Proteins: Structure, Function, Bioinformatics. 1996;25(1):120–9. Ma X, Sun X. Sequence-based predictor of ATP-binding residues using random forest and mRMR-IFS feature selection. J Theor Biol. 2014;360:59–66. Lu CH, Lin YF, Lin JJ, Yu CS. Prediction of metal ion-binding sites in proteins using the fragment transformation method. PLoS One. 2012;7(6):e39252. We would like to acknowledgement Dr. Jianyi Yang for helpful suggestions which improve the performance of the proposed methods. Financial support was provided by the National Key Research and Development Program of China (Grant No. 2016YFB1000905) and National Natural Science Foundation of China (Grant No. U1401256, 61402177, 31260203 and 61672234) which covers the cost for data preparation, program coding and paper publish, and The "CHUN HUI" Plan of Ministry of Education and Science Foundation of Inner Mongolia at China (Grant No. 2016MS0378) which covers the cost for the design of the study and analysis of the results. 
The datasets analyzed during the current study are freely available at http://dase.ecnu.edu.cn/qwdong/download/databmcbio201610.zip or request upon the corresponding author. HX designed the experiments. DQ performed the experiments. WK analysed the data. HX and DQ wrote the paper. All authors read and approved the paper. Competing interest The authors declare that there is no conflict of interest regarding the publication of this article. College of Sciences, Inner Mongolia University of Technology, Hohhot, 010051, People's Republic of China Xiuzhen Hu College of Animal Science and Technology, Jilin Agricultural University, Changchun, 130118, People's Republic of China Kai Wang Institute for Data Science and Engineering, East China Normal University, Shanghai, 200062, People's Republic of China Qiwen Dong Key Laboratory of Network Oriented Intelligent Computation, Harbin Institute of Technology Shenzhen Graduate School, Shenzhen, Guangdong, 518055, People's Republic of China Present Address: School of Computer Science and Software Engineering, East China Normal University, #3663, North Zhongshan RD, Shanghai, 200062, China Correspondence to Qiwen Dong. Additional file 1: Table S1. Performance of all methods used in the paper on 9 types of ligands. (DOCX 20 kb) Hu, X., Wang, K. & Dong, Q. Protein ligand-specific binding residue predictions by an ensemble classifier. BMC Bioinformatics 17, 470 (2016). https://doi.org/10.1186/s12859-016-1348-3 Binding residue prediction Ensemble classifier Protein function Sequence analysis (methods)
\begin{document} \begin{frontmatter} \title{Variable selection in \\seemingly unrelated regressions\\ with random predictors} \runtitle{Variable selection in SUR models} \begin{aug} \author{\fnms{David} \snm{Puelz}\ead[label=e1]{[email protected]}}, \author{\fnms{P. Richard} \snm{Hahn}\ead[label=e2]{[email protected]}} \and \author{\fnms{Carlos M.} \snm{Carvalho} \ead[label=e3]{[email protected]}} \runauthor{Puelz, Hahn and Carvalho} \affiliation{The University of Texas and The University of Chicago} \address{David Puelz\\ \printead{e1}\\} \address{P. Richard Hahn\\ \printead{e2}\\} \address{Carlos M. Carvalho\\ \printead{e3}\\} \end{aug} \begin{abstract} This paper considers linear model selection when the response is vector-valued and the predictors are randomly observed. We propose a new approach that decouples statistical inference from the selection step in a ``post-inference model summarization'' strategy. We study the impact of predictor uncertainty on the model selection procedure. The method is demonstrated through an application to asset pricing. \end{abstract} \end{frontmatter} \section{Introduction and overview} This paper develops a method for parsimoniously summarizing the shared dependence of many individual response variables upon a common set of predictor variables drawn at random. The focus is on multivariate Gaussian linear models where an analyst wants to find, among $p$ available predictors $X$, a subset that works well for predicting $q > 1$ response variables $Y$. The multivariate normal linear model assumes that a set of responses $\{ Y_{j} \}_{j=1}^{q}$ are linearly related to a shared set of covariates $\{ X_{i} \}_{i=1}^{p}$ via \begin{equation}\label{modelfirst} \begin{split} Y_{j} &= \beta_{j1}X_{1} + \cdots + \beta_{jp}X_{p} + \epsilon_{j}, \;\;\;\;\; \boldsymbol{\epsilon} \sim \mbox{N}(0, \Psi), \end{split} \end{equation} where $\Psi$ is a non-diagonal covariance matrix. 
Bayesian variable selection in (single-response) linear models is the subject of a vast literature, from prior specification on parameters \citep{Berger12} and models \citep{ScottBerger06} to efficient search strategies over the model space \citep{GeorgeandMcCulloch, hans2007shotgun}. For a more complete set of references we refer the reader to the reviews of \cite{Clyde04} and \cite{HahnCarvalho}. By comparison, variable selection has not been widely studied in concurrent regression models, perhaps because it is natural simply to apply existing variable selection methods to each univariate regression individually. Indeed, such joint regression models go by the name ``seemingly unrelated regressions'' (SUR) in the Bayesian econometrics literature, reflecting the fact that the regression coefficients from each of the separate regressions can be obtained in isolation from one another (i.e., conducting estimation as if $\Psi$ were diagonal). However, allowing non-diagonal $\Psi$ can lead to more efficient estimation \citep{zellner1962efficient} and can similarly impact variable selection \citep{brown1998multivariate, wangSUR}. This paper differs from \cite{brown1998multivariate} and \cite{wangSUR} in that we focus on the case where the predictor variables (the regressors, or covariates) are treated as random as opposed to fixed. Our goal will be to summarize codependence among multiple responses in {\em subsequent} periods, making the uncertainty in future realizations highly central to our selection objective. This approach is natural in many contexts (e.g., macroeconomic models) where the purpose of selection is inherently forward-looking. To our knowledge, no existing variable selection methods are suitable in this context. The new approach is based on the sparse summary perspective outlined in \cite{HahnCarvalho}, which applies Bayesian decision theory to summarize complex posterior distributions. 
By using a utility function that explicitly rewards sparse summaries, a high dimensional posterior distribution is collapsed into a more interpretable sequence of sparse point summaries. A related approach to variable selection in multivariate Gaussian models is the Gaussian graphical model framework \citep{jones2005experiments,dobra2004sparse,wang2009bayesian}. In that approach, the full conditional distributions are represented in terms of a sparse $(p+q)$-by-$(p+q)$ precision matrix. By contrast, we partition the model into response and predictor variable blocks, leading to a distinct selection criterion that narrowly considers the $p$-by-$q$ covariance between $Y$ and $X$. \subsection{Methods overview}\label{overview} Posterior summary variable selection consists of three phases: {\em model specification and fitting}, {\em utility specification}, and {\em graphical summary}. Each of these steps is outlined below. Additional details of the implementation are described in Section \ref{DSS} and the Appendix. \subsubsection*{Step 1: Model specification and fitting} The statistical model may be described compositionally as $p(Y,X) = p(Y \vert X)p(X)$. For $(Y,X) \sim \mbox{N}(\mu,\Sigma)$, the regression model (\ref{modelfirst}) implies $\Sigma$ has the following block structure: \begin{align}\label{model2} \Sigma = \left[ \begin{array}{c|c} {\boldsymbol \beta}^{T}\Sigma_{x}{\boldsymbol\beta} + \Psi & (\Sigma_{x}{\boldsymbol\beta})^{T} \\ \hline \Sigma_{x}{\boldsymbol\beta} & \Sigma_{x} \\ \end{array} \right]. \end{align} We denote the unknown parameters for the full joint model as $\Theta = \{\mu_{x},\mu_{y},\Sigma_{x},\boldsymbol{\beta},\Psi\}$ where $\mu = (\mu_y^T, \mu_x^T)^T$ and $\Sigma_x = \mbox{cov}(X)$. For a given prior choice $p(\Theta)$, posterior samples of all model parameters are computed by routine Monte Carlo methods, primarily Gibbs sampling. 
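As a sanity check on this block structure, the following numpy sketch (not part of the paper; all names and dimensions are illustrative) builds $\Sigma$ from $\boldsymbol{\beta}$, $\Sigma_x$ and $\Psi$ and confirms that the conditional regression of $Y$ on $X$ implied by the joint Gaussian recovers $\boldsymbol{\beta}^T$ as coefficient matrix and $\Psi$ as residual covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 3, 2  # illustrative dimensions

# Arbitrary positive-definite Sigma_x and Psi, and coefficients beta (p x q),
# matching the block layout of Sigma in the text.
G = rng.standard_normal((p, p)); Sigma_x = G @ G.T + p * np.eye(p)
H = rng.standard_normal((q, q)); Psi = H @ H.T + q * np.eye(q)
beta = rng.standard_normal((p, q))

Sigma_yy = beta.T @ Sigma_x @ beta + Psi   # upper-left block: cov(Y)
Sigma_yx = (Sigma_x @ beta).T              # upper-right block: cov(Y, X)
Sigma = np.block([[Sigma_yy, Sigma_yx],
                  [Sigma_yx.T, Sigma_x]])

# Conditional (regression) coefficients and residual covariance of Y | X:
B_cond = Sigma_yx @ np.linalg.inv(Sigma_x)   # recovers beta^T
resid_cov = Sigma_yy - B_cond @ Sigma_yx.T   # recovers Psi

assert np.allclose(B_cond, beta.T)
assert np.allclose(resid_cov, Psi)
```

The check mirrors the standard Gaussian conditioning formula; it is a verification of the algebra in (\ref{model2}), not of any fitted model.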
Details of the specific modeling choices and associated posterior sampling strategies are described in the Appendix. A notable feature of our approach is that {\it steps 2} (and {\it 3}) will be unaffected by modeling choices made in {\it step 1} except insofar as they lead to different posterior distributions $p(\Theta \vert \mathbf{Y}, \mathbf{X})$. In short, {\it step 1} is ``obtain a posterior distribution''; posterior samples then become inputs to {\it step 2}. \subsubsection*{Step 2: Utility specification} For our utility function we use the log-density of the regression $p(Y \vert X)$ above. It is convenient to work in terms of negative utility, or loss: \begin{equation} \begin{split} \mathcal{L}(\tilde{Y},\tilde{X},\Theta,\boldsymbol{\gamma}) = \frac{1}{2}( \tilde{Y} - \boldsymbol{\gamma}\tilde{X} )^{T} \Omega ( \tilde{Y} - \boldsymbol{\gamma}\tilde{X} ), \end{split} \end{equation} where $\Omega = \Psi^{-1}$. Note that this log-density is being used in a descriptive capacity, not an inferential one; that is, all posterior inferences are based on the posterior distribution from {\it step 1}. The ``action'' $\boldsymbol{\gamma}$ is regarded as a point estimate of the regression parameters $\boldsymbol{\beta}$, which would be a good fit to {\em future} data $(\tilde{Y}, \tilde{X})$ drawn from the same model as the observed data. 
Taking expectations over the posterior distribution of all unknowns \begin{equation} \begin{split} p(\tilde{Y},\tilde{X}, \Theta \vert \textbf{Y}, \textbf{X}) = p(\tilde{Y} \vert \tilde{X}, \Theta) p(\tilde{X} \vert \Theta) p(\Theta \vert \textbf{Y}, \textbf{X}), \end{split} \end{equation} yields expected loss \begin{equation} \mathcal{L}(\boldsymbol{\gamma}) \equiv \mathbb{E}[ \mathcal{L}(\tilde{Y},\tilde{X}, \Theta, \boldsymbol{\gamma}) ] =\text{tr}[ M \boldsymbol{\gamma} S \boldsymbol{\gamma}^{T} ] - 2\text{tr}[A\boldsymbol{\gamma}^{T}] + \mbox{constant}, \end{equation}where $A=\mathbb{E}[\Omega\tilde{Y}\tilde{X}^{T}]$, $S=\mathbb{E}[\tilde{X}\tilde{X}^{T}] = \overline{\Sigma_{x}}$, and $M=\overline{\Omega}$, the overlines denote posterior means, and the final term is a constant with respect to $\boldsymbol{\gamma}$. Finally, we add an explicit penalty, reflecting our preference for sparse summaries: \begin{equation}\label{ex_loss} \mathcal{L}_{\lambda}(\boldsymbol{\gamma}) \equiv \text{tr}[ M \boldsymbol{\gamma} S \boldsymbol{\gamma}^{T} ] - 2\text{tr}[A\boldsymbol{\gamma}^{T}] + \lambda \norm{\text{{\bf vec}}(\boldsymbol{\gamma})}_{0}, \end{equation} where $\norm{\text{{\bf vec}}(\boldsymbol{\gamma})}_{0}$ counts the number of non-zero elements in $\boldsymbol{\gamma}$. In practice, we will use an approximation to this utility based on the $\ell_1$ penalty; optimal actions under this approximation will still be sparse. \subsubsection*{Step 3: Graphical summary} Traditional applications of Bayesian decision theory derive {\em point-estimates} by minimizing expected loss for certain loss functions. The present goal is not an {\em estimator} per se, but a parsimonious summary of information contained in a complicated, high dimensional posterior distribution. This distinction is worth emphasizing because we have not one, but rather a continuum of loss functions, indexed by the penalty parameter $\lambda$. 
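The coefficients $A$, $S$ and $M$ of the expected loss above are plain posterior averages, so they can be estimated directly from the Monte Carlo output of {\it step 1}. The sketch below is illustrative and not from the paper: it assumes centered $Y$ and $X$ (so that $\mathbb{E}[\tilde{Y}\tilde{X}^{T} \mid \Theta] = \boldsymbol{\beta}^{T}\Sigma_{x}$) and uses fabricated stand-in draws in place of a real Gibbs sampler. It also checks that the unpenalized summary solves the normal equations $M\boldsymbol{\gamma}S = A$:

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, D = 3, 2, 500  # D posterior draws (illustrative)

# Fake "posterior draws" standing in for the Gibbs output of step 1.
def draw():
    G = rng.standard_normal((p, p)); Sigma_x = G @ G.T + p * np.eye(p)
    H = rng.standard_normal((q, q)); Omega = H @ H.T + q * np.eye(q)
    beta = rng.standard_normal((p, q))
    return beta, Sigma_x, Omega

draws = [draw() for _ in range(D)]

# Moments of the expected loss (centered case):
A = np.mean([Om @ b.T @ Sx for b, Sx, Om in draws], axis=0)  # E[Omega Ytil Xtil^T]
S = np.mean([Sx for _, Sx, _ in draws], axis=0)              # posterior mean of Sigma_x
M = np.mean([Om for _, _, Om in draws], axis=0)              # posterior mean of Omega

def expected_loss(gamma):
    # tr[M gamma S gamma^T] - 2 tr[A gamma^T], dropping the additive constant
    return np.trace(M @ gamma @ S @ gamma.T) - 2 * np.trace(A @ gamma.T)

# Unpenalized minimizer: gradient 2 M gamma S - 2 A = 0  =>  gamma = M^{-1} A S^{-1}.
gamma_star = np.linalg.inv(M) @ A @ np.linalg.inv(S)
assert expected_loss(gamma_star) <= expected_loss(np.zeros((q, p)))
```

Sparsity then comes from adding the penalty term in (\ref{ex_loss}); the closed form above is only the $\lambda = 0$ endpoint of the solution path.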
This class of loss functions can be used to organize the posterior distribution as follows. Using available convex optimization techniques, expression (\ref{ex_loss}) can be optimized efficiently for a range of $\lambda$ values simultaneously. Posterior graphical summaries consist of two components. First, graphs depicting which response variables have non-zero $\boldsymbol{\gamma}_{\lambda}^*$ coefficients on which predictor variables can be produced for any given $\lambda$. Second, posterior distributions of the quantity \begin{equation} \Delta_{\lambda} = \mathcal{L}(\tilde{Y},\tilde{X},\Theta,\boldsymbol{\gamma}_{\lambda}^*) - \mathcal{L}(\tilde{Y},\tilde{X},\Theta,\boldsymbol{\gamma}^*) \end{equation} can be used to gauge the impact $\lambda$ has on the descriptive capacity of $\boldsymbol{\gamma}_{\lambda}^*$. Here, $\boldsymbol{\gamma}^* = \boldsymbol{\gamma}_{\lambda=0}^*$ is the unpenalized optimal solution to the minimization of loss (\ref{ex_loss}). \section{Posterior summary variable selection}\label{DSS} The statistical model is given in equations (\ref{modelfirst}) and (\ref{model2}); prior specification and model fitting details can be found in the Appendix. Alternatively, the models described in \cite{brown1998multivariate} or \cite{wangSUR} could be used. In this section, we flesh out the details of {\it steps 2} and {\it 3}, which represent the main contributions of this paper. \subsection{Deriving the sparsifying expected utility function} Define the optimal posterior summary as the $\boldsymbol{\gamma}^*$ minimizing some expected loss $\mathcal{L}_{\lambda}(\boldsymbol{\gamma}) = \mathbb{E}[\mathcal{L}_{\lambda}(\tilde{Y},\tilde{X},\Theta,\boldsymbol{\gamma})]$. Here, the expectation is taken over the joint posterior predictive and posterior distribution: $p(\tilde{Y},\tilde{X}, \Theta \mid \textbf{Y}, \textbf{X})$. 
As described in the previous section, our loss takes the form of a penalized log conditional distribution: \begin{equation} \begin{split} \mathcal{L}_{\lambda}(\tilde{Y},\tilde{X},\Theta,\boldsymbol{\gamma}) \equiv \frac{1}{2}( \tilde{Y} - \boldsymbol{\gamma}\tilde{X} )^{T} \Omega ( \tilde{Y} - \boldsymbol{\gamma}\tilde{X} ) + \lambda \norm{\text{{\bf vec}}(\boldsymbol{\gamma})}_{0}, \label{newlossstoch} \end{split} \end{equation}where $\Omega = \Psi^{-1}$, $\norm{\text{{\bf vec}}(\boldsymbol{\gamma})}_{0} = \sum_{j}\mathds{1}\left(\text{{\bf vec}}(\boldsymbol{\gamma})_{j} \neq 0\right)$, and $\text{{\bf vec}}(\boldsymbol{\gamma})$ is the vectorization of the action matrix $\boldsymbol{\gamma}.$ The first term of this loss measures the distance (weighted by the precision $\Omega$) between the linear predictor $\boldsymbol{\gamma}\tilde{X}$ and a future response $\tilde{Y}$. The second term promotes a sparse optimal summary, $\boldsymbol{\gamma}$. The penalty parameter $\lambda$ determines the relative importance of these two components. Expanding the quadratic form gives: \small \begin{equation}\label{modnew1} \begin{split} \mathcal{L}_{\lambda}(\tilde{Y},\tilde{X}, \Theta, \boldsymbol{\gamma}) &= \frac{1}{2}\left(\tilde{Y}^{T} \Omega \tilde{Y} - 2\tilde{X}^{T}\boldsymbol{\gamma}^{T} \Omega \tilde{Y} + \tilde{X}^{T}\boldsymbol{\gamma}^{T} \Omega \boldsymbol{\gamma} \tilde{X}\right) + \lambda \norm{\text{{\bf vec}}(\boldsymbol{\gamma})}_{0} \\ &= \left( \tilde{X}^{T}\boldsymbol{\gamma}^{T} \Omega \boldsymbol{\gamma} \tilde{X} -2\tilde{X}^{T}\boldsymbol{\gamma}^{T} \Omega \tilde{Y}\right) + \lambda \norm{\text{{\bf vec}}(\boldsymbol{\gamma})}_{0} + \mbox{constant}. 
\end{split} \end{equation} \normalsize Integrating over $(\tilde{Y},\tilde{X},\Theta \mid \textbf{Y}, \textbf{X})$ (and dropping the constant) gives: \begin{equation}\label{almostlassoform} \begin{split} \mathcal{L}_{\lambda}(\boldsymbol{\gamma}) &= \mathbb{E}[ \mathcal{L}_{\lambda}(\tilde{Y},\tilde{X}, \Theta, \boldsymbol{\gamma}) ]\\ &= \mathbb{E}\left[ \text{tr}[ \boldsymbol{\gamma}^{T} \Omega \boldsymbol{\gamma} \tilde{X} \tilde{X}^{T}] \right] - 2\mathbb{E}\left[ \text{tr}[\boldsymbol{\gamma}^{T} \Omega \tilde{Y}\tilde{X}^{T}] \right] + \lambda \norm{\text{{\bf vec}}(\boldsymbol{\gamma})}_{0}, \\ &= \mathbb{E}\left[ \text{tr}[ \boldsymbol{\gamma}^{T} \Omega \boldsymbol{\gamma} S] \right] - 2\text{tr}[A\boldsymbol{\gamma}^{T}] + \lambda \norm{\text{{\bf vec}}(\boldsymbol{\gamma})}_{0},\\ &= \text{tr}[ M \boldsymbol{\gamma} S \boldsymbol{\gamma}^{T} ] - 2\text{tr}[A\boldsymbol{\gamma}^{T}] + \lambda \norm{\text{{\bf vec}}(\boldsymbol{\gamma})}_{0}, \end{split} \end{equation}where \begin{equation}\label{moments} \begin{split} A &\equiv\mathbb{E}[\Omega\tilde{Y}\tilde{X}^{T}],\\ S&\equiv\mathbb{E}[\tilde{X}\tilde{X}^{T}] = \overline{\Sigma}_{x},\\ M&\equiv\overline{\Omega}, \end{split} \end{equation} and the overlines denote posterior means. Define the Cholesky decompositions $M = LL^{T}$ and $S = QQ^{T}$. To make the optimization problem tractable we replace the $\ell_0$ norm with the $\ell_1$ norm, leading to an expression that can be formulated in the form of a standard penalized regression problem: \begin{equation}\label{lasso_form} \mathcal{L}_{\lambda}(\boldsymbol{\gamma}) = \norm{ \left[Q^{T} \otimes L^{T}\right]\text{\bf vec}(\boldsymbol{\gamma}) - \text{\bf vec}(L^{-1}AQ^{-T}) }_{2}^{2} + \lambda\norm{ \text{\bf vec}(\boldsymbol{\gamma})}_{1}, \end{equation} with covariates $Q^{T} \otimes L^{T}$, ``data" $L^{-1}AQ^{-T}$, and regression coefficients $\boldsymbol{\gamma}$ (see the Appendix for details). 
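The reduction to a penalized least-squares problem can be verified numerically. The sketch below (illustrative, not the authors' code) checks that the quadratic part of (\ref{almostlassoform}) equals the least-squares objective of (\ref{lasso_form}) up to an additive constant, using the Cholesky factors $M = LL^{T}$, $S = QQ^{T}$ and the identity $\text{{\bf vec}}(L^{T}\boldsymbol{\gamma}Q) = (Q^{T} \otimes L^{T})\text{{\bf vec}}(\boldsymbol{\gamma})$:

```python
import numpy as np

rng = np.random.default_rng(2)
p, q = 3, 2  # illustrative dimensions

# Positive-definite M (q x q) and S (p x p), arbitrary A (q x p).
Gm = rng.standard_normal((q, q)); M = Gm @ Gm.T + q * np.eye(q)
Gs = rng.standard_normal((p, p)); S = Gs @ Gs.T + p * np.eye(p)
A = rng.standard_normal((q, p))

L = np.linalg.cholesky(M)   # M = L L^T
Q = np.linalg.cholesky(S)   # S = Q Q^T

def vec(B):
    # column-stacking vectorization, matching vec(ABC) = (C^T kron A) vec(B)
    return B.flatten(order='F')

gamma = rng.standard_normal((q, p))

# Quadratic part of the expected loss ...
lhs = np.trace(M @ gamma @ S @ gamma.T) - 2 * np.trace(A @ gamma.T)

# ... equals the least-squares objective, up to the constant ||L^{-1} A Q^{-T}||^2:
X_design = np.kron(Q.T, L.T)                    # covariates
D = np.linalg.solve(L, A) @ np.linalg.inv(Q.T)  # "data" L^{-1} A Q^{-T}
rhs = np.sum((X_design @ vec(gamma) - vec(D))**2) - np.sum(D**2)

assert np.isclose(lhs, rhs)
```

With this correspondence in hand, any lasso solver can be applied to the transformed ``data'' to trace out the solution path in $\lambda$.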
Accordingly, (\ref{lasso_form}) can be optimized using existing software such as the {\tt lars} R package of \cite{Efron} and still yield sparse solutions. \subsection{Sparsity-utility trade-off plots} Rather than attempting to determine an ``optimal'' value of $\lambda$, we advocate displaying plots that reflect the utility attenuation due to $\lambda$-induced sparsification. We define the ``loss gap'' between a $\lambda$-sparse solution, $\mathcal{L}(\tilde{Y},\tilde{X},\Theta,\boldsymbol{\gamma}_{\lambda}^*)$, and the optimal unpenalized (non-sparse, $\lambda = 0$) summary, $\mathcal{L}(\tilde{Y},\tilde{X},\Theta,\boldsymbol{\gamma}^*)$ as \begin{equation} \Delta_{\lambda} = \mathcal{L}(\tilde{Y},\tilde{X},\Theta,\boldsymbol{\gamma}_{\lambda}^*) - \mathcal{L}(\tilde{Y},\tilde{X},\Theta,\boldsymbol{\gamma}^*). \end{equation} As a function of $(\tilde{Y},\tilde{X}, \Theta)$, $\Delta_{\lambda}$ is itself a random variable which we can sample by obtaining posterior draws from $p(\tilde{Y},\tilde{X}, \Theta \mid \textbf{Y}, \textbf{X})$. The posterior distribution(s) of $\Delta_{\lambda}$ (for various $\lambda$) therefore reflects the deterioration in utility attributable to ``sparsification''. Plotting these distributions as a function of $\lambda$ allows one to visualize this trade-off. Specifically, $\pi_{\lambda} \equiv \mbox{Pr}(\Delta_{\lambda} < 0 \mid \textbf{Y}, \textbf{X})$ is the (posterior) probability that the $\lambda$-sparse summary is no worse than the non-sparse summary. Using this framework, a useful heuristic for obtaining a single sparse summary is to report the sparsest model (associated with the highest $\lambda$) such that $\pi_{\lambda}$ is higher than some pre-determined threshold, $\kappa$; we adopt this approach in our application section. We propose summarizing the posterior distribution of $\Delta_{\lambda}$ via two types of plots. 
First, one can examine posterior means and credible intervals of $\Delta_{\lambda}$ for a sequence of models indexed by $\lambda$. Similarly, one can plot $\pi_{\lambda}$ across the same sequence of models. Also, for a fixed value of $\lambda$, one can produce graphs where nodes represent predictor variables and response variables and an edge is drawn between nodes whenever the corresponding element of $\boldsymbol{\gamma}^*_{\lambda}$ is non-zero. All three types of plots are exhibited in Section \ref{apps}. \subsection{Relation to previous methods} Loss function (\ref{lasso_form}) is similar in form to the univariate \textit{DSS} (decoupled shrinkage and selection) strategy developed by \cite{HahnCarvalho}. Our approach generalizes \cite{HahnCarvalho} by optimizing over the matrix $\boldsymbol{\gamma} \in \mathbb{R}^{q \times p}$ rather than a single vector of regression coefficients, extending the sparse summary utility approach to seemingly unrelated regression models \citep{brown1998multivariate, wangSUR}. Additionally, the present method considers random predictors, $\tilde{X}$, whereas \cite{HahnCarvalho} considered only a matrix of fixed design points. The impact of accounting for random predictors on the posterior summary variable selection procedure is examined in more detail in the application section. 
However, any approach based on the posterior mean of $\alpha$ necessarily ignores information about the codependence between its elements, which can be substantial in cases of collinear predictors. Our method focuses instead on the expected log-density of future predictions, which synthesizes information from all parameters simultaneously when gauging their predictive importance. \section{Applications}\label{apps} In this section, the sparse posterior summary method is applied to a data set from the finance (asset pricing) literature. A key component of our analysis will be a comparison between the posterior summaries obtained when the predictors are drawn at random versus when they are assumed fixed. The response variables are returns on 25 tradable portfolios and our predictor variables are returns on 10 other portfolios thought to be of theoretical importance. In the asset pricing literature \cite{Ross}, the response portfolios represent assets to be priced (so-called {\em test assets}) and the predictor portfolios represent distinct sources of variation (so-called {\em risk factors}). More specifically, the test assets $Y$ represent trading strategies based on company size (total value of stock shares) and book-to-market (the ratio of the company's accounting valuation to its size); see \cite{FF3} and \cite{FF5} for details. Roughly, these assets serve as a lower-dimensional proxy for the stock market. The risk factors are also portfolios, but ones which are thought to represent {\em distinct} sources of risk. What constitutes a distinct source of risk is widely debated, and many such factors have been proposed in the literature \citep{cochrane2011presidential}. We use monthly data from July 1963 through February 2015, obtained from Ken French's website: \begin{center} {\tt http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/}. 
\end{center} Our analysis investigates which subset of risk factors is most relevant (as defined by our utility function). As our initial candidates, we consider factors known in previous literature as: market, size, value, direct profitability, investment, momentum, short term reversal, long term reversal, betting against beta, and quality minus junk. Each factor is constructed by cross-sectionally sorting stocks by various characteristics of a company and forming linear combinations based on these sorts. For example, the value factor is constructed using the book-to-market ratio of a company. A high ratio indicates the company's stock is a ``value stock" while a low ratio leads to a ``growth stock" assessment. Essentially, the value factor is a portfolio built by going long stocks with high book-to-market ratio and shorting stocks with low book-to-market ratio. For detailed definitions of the first five factors, see \cite{FF5}. Recent related work includes \cite{ericsson2004choosing} and \cite{harvey2015lucky}. \cite{ericsson2004choosing} follow a Bayesian model selection approach based on inclusion probabilities, representing the preliminary inference step of our methodology. \cite{harvey2015lucky} take a different approach that utilizes multiple hypothesis testing and bootstrapping. \subsection{Results} As described in Section \ref{overview}, the first step of our analysis consists of fitting a Bayesian model. We fit model (\ref{modelfirst}) using a variation of the well-known stochastic search variable selection algorithm of \cite{GeorgeandMcCulloch} and similar to \cite{brown1998multivariate} and \cite{wangSUR}. Details are given in the Appendix. In the subsections to follow, we will show the following two figures. 
First, we plot the expectation of $\Delta_{\lambda}$ (and associated posterior credible interval) across a range of $\lambda$ penalties. Recall, $\Delta_{\lambda}$ is the ``loss gap'' between a sparse summary and the best non-sparse (saturated) summary, meaning that smaller values are ``better''. Additionally, we plot the probability that a given model is no worse than the saturated model $\pi_{\lambda}$ on this same figure, where ``no worse'' means $\Delta_{\lambda} < 0$. Note that even for very weak penalties (small $\lambda$), the distribution of $\Delta_{\lambda}$ will have non-zero variance and therefore even if it is centered about zero, some mass can be expected to fall above zero; practically, this means that $\pi_{\lambda} > 0.5$ is a very high score. Second, we display a summary graph of the selected variables for the $\kappa=12.5\%$ threshold. Recall that this is the highest penalty (sparsest graph) that is no worse than the saturated model with $12.5\%$ posterior probability. For these graphs, the response and predictor variables are colored gray and white, respectively. A test asset label of, for example, ``Size2 BM3," denotes the portfolio that buys stocks in the second quintile of size and the third quintile of book-to-market ratio. The predictors without connections to the responses under the optimal graph are not displayed. These two figures are shown in four scenarios: \begin{enumerate} \item Random predictors. \item Fixed predictors. \item Random predictors under alternative prior. \item Fixed predictors under alternative prior. \end{enumerate}The ``alternative prior" scenario serves to show the impact of the statistical modeling comprising {\it step 1}. Specifically, we use the same Monte Carlo model fitting procedure as before (described in the Appendix) but fix $\alpha$ to the identity vector. That is, we omit the point-mass component of the priors for the elements of $\boldsymbol{\beta}$. 
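The $\kappa$-threshold rule used throughout this section (report the sparsest summary whose $\pi_{\lambda}$ exceeds $\kappa$) can be sketched in a few lines. The $\Delta_{\lambda}$ draws below are synthetic stand-ins for posterior output, chosen only so that smaller penalties have loss gaps concentrating near zero:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in posterior draws of the loss gap Delta_lambda for a penalty grid
# (rows: lambda values from sparse to dense; columns: posterior draws).
lambdas = np.array([4.0, 2.0, 1.0, 0.5, 0.0])
delta_draws = np.stack([rng.normal(loc=lam, scale=1.0, size=2000)
                        for lam in lambdas])

# pi_lambda = Pr(Delta_lambda < 0 | Y, X): posterior probability that the
# lambda-sparse summary is no worse than the unpenalized summary.
pi_lambda = (delta_draws < 0).mean(axis=1)

# Selection heuristic: the sparsest model (largest lambda) with pi_lambda > kappa.
kappa = 0.125
admissible = np.where(pi_lambda > kappa)[0]
selected = lambdas[admissible].max()
```

With these fabricated draws the rule lands on an intermediate penalty: heavy penalties have $\pi_{\lambda}$ far below $\kappa$, while $\lambda = 0$ has $\pi_{\lambda}$ near $1/2$, matching the remark above that $\pi_{\lambda} > 0.5$ is a very high score.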
\subsubsection{Random predictors} This section introduces our baseline example where the risk factors (predictors) are random. We evaluate the set of potential models by analyzing plots such as figure \ref{Lossgraph25}. This shows $\Delta_{\lambda}$ and $\pi_{\lambda}$ evaluated across a range of $\lambda$ values. Additionally, we display the posterior uncertainty in the $\Delta_{\lambda}$ metric with gray vertical uncertainty bands: these are the centered $P\%$ posterior credible intervals where $\kappa = (1-P)/2$. As the accuracy of the sparsified solution increases, the posterior of $\Delta_{\lambda}$ concentrates around zero by construction, and the probability of the model being no worse than the saturated model, $\pi_{\lambda}$, increases. We choose the sparsest model such that its corresponding $\pi_{\lambda} > \kappa = 12.5\%$. This model is displayed in figure \ref{graph25} and is identified by the black dot in figure \ref{Lossgraph25}. The selected set of factors in graph \ref{graph25} are the market (Mkt.RF), value (HML), and size (SMB). This three factor model is no worse than the saturated model with $12.5\%$ posterior probability where all test assets are connected to all risk factors. Note also that in our selected model almost every test asset is distinctly tied to one of either value or size and the market factor. These are the three factors of Ken French and Eugene Fama's pricing model developed in \cite{FF3}. They are known throughout the finance community as being ``fundamental dimensions" of the financial market, and our procedure is consistent with this widely held belief at a small $\kappa$ level. The characteristics of the test assets in graph \ref{graph25} are also important to highlight. The test portfolios that invest in small companies (``Size1" and ``Size2") are primarily connected to the SMB factor which is designed as a proxy for the risk of small companies. 
Similarly, the test portfolios that invest in high book-to-market companies (``BM4" and ``BM5") have connections to the HML factor which is built on the idea that companies whose book value exceeds the market's perceived value should generate a distinct source of risk. As previously noted, all of the test portfolios are connected to the market factor suggesting that it is a relevant predictor even for the sparse $\kappa=12.5\%$ selection criterion. In figure \ref{graphseq25}, we examine how different choices of the $\kappa$ threshold change the selected set of risk factors. In this analysis, there is a tradeoff between the posterior probability of being ``close" to the saturated model and the utility's preference for sparsity. When the threshold is low ($\kappa=2,4,$ and $12.5$\%) the summarization procedure selects relatively sparse graphs with up to three factors (Mkt.RF, HML, and SMB). The market (Mkt.RF) and size (SMB) factors appear first, connected to a small number of the test assets ($\kappa=2$\%). As the threshold is increased, the point summary becomes denser and correspondingly more predictively accurate (as measured by the utility function). The value factor (HML) enters at $\kappa=12.5$\% and quality minus junk (QMJ), investment (CMA), and profitability (RMW) factors enter at $\kappa=32.5$\%. The graph for $\kappa=32.5$\% excluding QMJ is essentially the new five factor model proposed by \cite{FF5}. The five Fama-French factors (plus QMJ with three connections) persist up to the $\kappa=47.5\%$ threshold. This indicates that, up to a high posterior probability, the five factor model of \cite{FF5} does no worse than an asset pricing model with all ten factors connected to all test assets. Notice also that our summarization procedure displays the specific relationship between the factors and test assets through the connections. 
Using this approach, the analyst is able to identify which predictors drive variation in which responses and at what thresholds they may be relevant. This feature is significant for summarization problems where the individual characteristics of the test portfolios and their joint dependence on the risk factors may be a priori unclear. As $\kappa$ approaches the $50\%$ threshold ($\kappa=49.75\%$ in figure \ref{graphseq25}), the model summary includes all ten factors. Requesting a summary with this level of certainty results in little sparsification. However, compared to the nearby $\kappa=47.5\%$ model with only six factors, we also now know that the remaining four contribute little to our utility. These factors are betting against beta (BAB), momentum (Mom), long term reversal (LTR), and short term reversal (STR). Sparse posterior summarization applied in this context allows an analyst to study the impact of risk factors on pricing while taking uncertainty into account. Coming to a similar conclusion via common alternative techniques (e.g., component-wise ordinary least squares combined with thresholding by $t$-statistics) is comparatively ad hoc; our method is simply a perspicuous summary of a posterior distribution. Likewise, applying sparse regression techniques based on $\ell_1$ penalized likelihood methods would not take into account the residual correlation $\Psi$, nor would that approach naturally accommodate random predictors. \begin{figure} \caption{Evaluation of $\Delta_{\lambda}$ and $\pi_{\lambda}$ along the solution path for the 25 size/value portfolios modeled by the 10 factors. An analyst may use this plot to select a particular model. Uncertainty bands are 75\% posterior intervals on the $\Delta_{\lambda}$ metric. The large black dot represents the model selected in Figure \ref{graph25}.} \caption{The selected model for 25 size/value portfolios modeled by the 10 factors. The responses and predictors are colored in gray and white, respectively.
Edges represent nonzero components of the optimal action, $\boldsymbol{\gamma}$.} \label{Lossgraph25} \label{graph25} \end{figure} \begin{figure} \caption{Sequence of selected models for varying threshold level $\kappa$ under the assumption of \textbf{random predictors}.} \label{graphseq25} \end{figure} \subsubsection{Fixed predictors} In this section, we consider posterior summarization with the loss function derived under the assumption of \textit{fixed predictors}. The analogous loss function when the predictor matrix is fixed is: \begin{equation}\label{Lossfix} \begin{split} \mathcal{L}_{\lambda}(\boldsymbol{\gamma}) &= \norm{ \left[Q_{f}^{T} \otimes L^{T}\right]\text{\bf vec}(\boldsymbol{\gamma}) - \text{\bf vec}(L^{-1}A_{f}Q_{f}^{-T}) }_{2}^{2} + \lambda\norm{ \text{\bf vec}(\boldsymbol{\gamma})}_{1},\\ \end{split} \end{equation} with $Q_{f}Q_{f}^{T} = \textbf{X}^{T}\textbf{X}$, $A_{f}=\mathbb{E}[\Omega\tilde{\textbf{Y}}^{T}\textbf{X}]$, and $M=\overline{\Omega}=LL^{T}$; compare to (\ref{moments}) and (\ref{lasso_form}). The derivation of (\ref{Lossfix}) is similar to the presentation in Section \ref{DSS} and may be found in the Appendix. The corresponding version of the loss gap is \begin{equation} \Delta_{\lambda} = \mathcal{L}(\tilde{\textbf{Y}},\textbf{X},\Theta,\boldsymbol{\gamma}_{\lambda}^*) - \mathcal{L}(\tilde{\textbf{Y}},\textbf{X},\Theta,\boldsymbol{\gamma}^*), \end{equation} which has a distribution induced by the posterior over $(\tilde{\textbf{Y}}, \Theta)$ rather than $(\tilde{Y}, \tilde{X}, \Theta)$ as before. By fixing $\textbf{X}$, the posterior of $\Delta_{\lambda}$ has smaller dispersion, which results in denser summaries for the same level of $\kappa$. For example, compare how dense Figure \ref{graph25XFIX} is relative to Figure \ref{graph25}.
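The algebra behind (\ref{Lossfix}) can be checked numerically: the penalized least-squares form differs from the trace form of the expected loss only by an additive constant, so loss differences between any two actions agree exactly. A small sketch with synthetic stand-ins for $\textbf{X}$, $M=\overline{\Omega}$, and $A_{f}$ (dimensions and values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N, p, q = 50, 4, 3

# Synthetic stand-ins (hypothetical) for the quantities appearing in (Lossfix).
X = rng.normal(size=(N, p))
S_f = X.T @ X                                    # X^T X
B = rng.normal(size=(q, q))
M = B @ B.T + q * np.eye(q)                      # posterior mean of Omega (PD)
A_f = rng.normal(size=(q, p))                    # stand-in for E[Omega Y~^T X]

L = np.linalg.cholesky(M)                        # M = L L^T
Q_f = np.linalg.cholesky(S_f)                    # S_f = Q_f Q_f^T

def vec(mat):
    # Column-stacking vectorization, matching vec(ABC) = (C^T kron A) vec(B).
    return mat.flatten(order="F")

def loss_trace(G):
    # tr[M G S_f G^T] - 2 tr[A_f G^T]; the sparsity penalty is omitted since
    # it is identical in both formulations.
    return np.trace(M @ G @ S_f @ G.T) - 2 * np.trace(A_f @ G.T)

design = np.kron(Q_f.T, L.T)                     # [Q_f^T kron L^T]
target = vec(np.linalg.solve(L, A_f) @ np.linalg.inv(Q_f).T)  # vec(L^-1 A_f Q_f^-T)

def loss_lsq(G):
    r = design @ vec(G) - target
    return r @ r

# The two forms agree up to an additive constant, so loss differences match.
G1, G2 = rng.normal(size=(q, p)), rng.normal(size=(q, p))
d_trace = loss_trace(G1) - loss_trace(G2)
d_lsq = loss_lsq(G1) - loss_lsq(G2)
print(abs(d_trace - d_lsq))
```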
The denser graph in Figure \ref{graph25XFIX} contains nine out of ten potential risk factors compared to just three in Figure \ref{graph25}, which correspond to the Fama-French factors described in \cite{FF3}. Recall that both graphs represent the sparsest model such that the probability of being no worse than the saturated model is greater than $\kappa = 12.5\%$ --- the difference is that one of the graphs defines ``worse-than'' in terms of a fixed set of risk factor returns while the other acknowledges that those returns are themselves uncertain in future periods. Figure \ref{graphseq25XFIX} demonstrates this pattern for several choices of the uncertainty level. Regardless of the uncertainty level chosen, the selected models contain most of the ten factors and many edges. In fact, it is difficult to distinguish even the $\kappa=2\%$ and $\kappa=49.75\%$ models. \begin{figure} \caption{Evaluation of $\Delta_{\lambda}$ and $\pi_{\lambda}$ along the solution path for the 25 size/value portfolios modeled by the 10 factors assuming \textbf{fixed predictors (factors)}. An analyst may use this plot to select a particular model. Uncertainty bands are 75\% posterior intervals on the $\Delta_{\lambda}$ metric. The large black dot represents the model selected in Figure \ref{graph25XFIX}.} \caption{The selected model for 25 size/value portfolios modeled by the 10 factors when \textbf{uncertainty in future factor returns is not taken into account}. The responses and predictors are colored in gray and white, respectively.
Edges represent nonzero components of the optimal action, $\boldsymbol{\gamma}$.} \label{Lossgraph25XFIX} \label{graph25XFIX} \end{figure} \begin{figure} \caption{Sequence of selected models for varying threshold level $\kappa$ under the assumption of \textbf{fixed predictors}.} \label{graphseq25XFIX} \end{figure} \subsubsection{Alternative prior analysis} Here, we consider how our posterior summaries change as a function of using a different posterior, based on a different choice of prior. Specifically, in this section we do not employ model selection point-mass priors on the elements of $\boldsymbol{\beta}$ as we did in the above analysis. These results are displayed in figures \ref{Lossgraph25MF} and \ref{graph25MF}. Broadly, the same risk factors are flagged as important --- the market factor followed by the size (SMB) and value (HML) factors. One notable difference is that the quality minus junk (QMJ), investment (CMA), and profitability (RMW) factors appear at smaller levels of $\kappa$. This result is intuitive in the sense that point-mass priors demand stronger evidence for a variable to impact the posterior means defining the loss function. Without the strong shrinkage imposed by the point-mass priors, these risk factors show up more strongly in the posterior and hence in the posterior summary. In each case, the three Fama and French factors from \cite{FF3} predictably appear and seem to be the only relevant factors for pricing these 25 portfolios. Similarly, the weaker shrinkage model in the fixed predictor version (figures \ref{Lossgraph25MFXFIX} and \ref{graph25MFXFIX}) yields yet denser summaries (for a given level of $\kappa$). \begin{figure} \caption{Evaluation of $\Delta_{\lambda}$ and $\pi_{\lambda}$ along the solution path for the 25 size/value portfolios modeled by the 10 factors with alternative prior. An analyst may use this plot to select a particular model.
Uncertainty bands are 75\% posterior intervals on the $\Delta_{\lambda}$ metric. The large black dot represents the model selected in Figure \ref{graph25MF}.} \caption{The selected model for 25 size/value portfolios modeled by the 10 factors with alternative prior. The responses and predictors are colored in gray and white, respectively. Edges represent nonzero components of the optimal action, $\boldsymbol{\gamma}$.} \label{Lossgraph25MF} \label{graph25MF} \end{figure} \begin{figure} \caption{Evaluation of $\Delta_{\lambda}$ and $\pi_{\lambda}$ along the solution path for the 25 size/value portfolios modeled by the 10 factors assuming \textbf{fixed predictors (factors)} and with alternative prior. An analyst may use this plot to select a particular model. Uncertainty bands are 75\% posterior intervals on the $\Delta_{\lambda}$ metric. The large black dot represents the model selected in Figure \ref{graph25MFXFIX}.} \caption{The selected model for 25 size/value portfolios modeled by the 10 factors when \textbf{uncertainty in future factor returns is not taken into account} and with alternative prior. The responses and predictors are colored in gray and white, respectively. Edges represent nonzero components of the optimal action, $\boldsymbol{\gamma}$.} \label{Lossgraph25MFXFIX} \label{graph25MFXFIX} \end{figure} \subsubsection{Comparison of four scenarios at fixed $\kappa$} The selected summary graphs for the four scenarios are displayed together for comparison in figure \ref{summarygraph}. Observe that graphs (c) and (d) selected under the alternative prior are marginally denser than their counterparts (a) and (b) under the point-mass model selection prior. However, the assumption of random predictors results in notably sparser summaries --- graphs (a) and (c) are much sparser than (b) and (d).
These comparisons emphasize the impact that incorporating random predictors may have on a variable selection procedure, especially the present approach where we extract point summaries from a posterior by utilizing uncertainty in all unknowns $(\tilde{Y},\tilde{X},\Theta)$. \begin{figure} \caption{Comparison of selected models under four scenarios.} \label{summarygraph} \end{figure} \section{Conclusion} In this paper, we propose a general model selection procedure for multivariate linear models when future realizations of the predictors are unknown. Such models are widely used in many areas of science and economics, including genetics and asset pricing. Our utility-based sparse posterior summary procedure is a multivariate extension of the ``decoupling shrinkage and selection'' methodology of \cite{HahnCarvalho}. The approach we develop has three steps: (\textit{i}) fit a Bayesian model, (\textit{ii}) specify a utility function with a sparsity-inducing penalty term and optimize its expectation, and (\textit{iii}) graphically summarize the posterior impact (in terms of utility) of the sparsity penalty. Our utility function is based on the kernel of the conditional distribution of the responses given the predictors and can be formulated as a tractable convex program. We demonstrate how our procedure may be used in asset pricing under a variety of modeling choices. The remainder of this discussion takes a step back from the specifics of the seemingly unrelated regressions model and considers a broader role for utility-based posterior summaries. A paradox of applied Bayesian analysis is that posterior distributions based on relatively intuitive models like the SUR model are often just as complicated as the data itself. For Bayesian analysis to become a routine tool for practical inquiry, methods for summarizing posterior distributions must be developed apace with the models themselves.
A natural starting point for developing such methods is decision theory, which suggests developing loss functions specifically geared towards practical posterior summary. As a matter of practical data analysis, articulating an apt loss function has been sorely neglected relative to the effort typically lavished on the model specification stage, specifically prior specification. Ironically (but not surprisingly) our application demonstrates that one's utility function has a dominant effect on the posterior summaries obtained relative to which prior distribution is used. This paper makes two contributions to this area of ``utility design''. First, we propose that the likelihood function has a role to play in posterior summary apart from its role in inference. That is, one of the great practical virtues of likelihood-based statistics is that the likelihood serves to summarize the data by way of the corresponding point estimates. By using the log-density as our utility function applied to {\em future} data, we revive the fundamental summarizing role of the likelihood. Additionally, note that this approach allows three distinct roles for parameters. First, all parameters of the model appear in defining the posterior predictive distribution. Second, some parameters appear in {\em defining} the loss function; $\Psi$ plays this role in our analysis. Third, some parameters define the action space. In this framework there are no ``nuisance'' parameters that vanish from the estimator as soon as a marginal posterior is obtained. Once the likelihood-based utility is specified, it is a natural next step to consider augmenting the utility to enforce particular features of the desired point summary. For example, our analysis above was based on a utility that explicitly rewards sparsity of the resulting summary. 
A traditional instance of this idea is the definition of high posterior density regions, which are defined as the {\em shortest, contiguous} intervals that contain a prescribed fraction of the posterior mass. Our second contribution is to consider not just one, but a range, of utility functions and to examine the posterior distributions of the corresponding posterior loss. Specifically, we compare the utility of a sparsified summary to the utility of the optimal non-sparse summary. Interestingly, these utilities are random variables themselves (defined by the posterior distribution) and examining their distributions provides a fundamentally Bayesian way to measure the extent to which the sparsity preference is driving one's conclusions. The idea of comparing a hypothetical continuum of decision-makers based on the posterior distribution of their respective utilities represents a principled Bayesian approach to exploratory data analysis. This is an area of ongoing research. \appendix \section{Matrix-variate Stochastic Search} \subsection{Model fitting: The marginal and conditional distributions} The future values of the response and covariates are unknown. Acknowledging this uncertainty is important in the overall decision of which covariates to select and is a necessary ingredient of the selection procedure. As our examples consider financial asset return data, we choose to model the marginal distribution of the covariates via a latent factor model detailed in \cite{murray2013bayesian}. The responses are modeled conditionally on the covariates via a matrix-variate stochastic search which is a multivariate extension of stochastic search variable selection (SSVS) from \cite{GeorgeandMcCulloch}.
Recalling the block structure for the covariance of the full joint distribution of $(X,Y)$: \begin{align} \Sigma = \left[ \begin{array}{c|c} \boldsymbol{\beta}^{T}\Sigma_{x}\boldsymbol{\beta} + \Psi & (\Sigma_{x}\boldsymbol{\beta})^{T} \\ \hline \Sigma_{x}\boldsymbol{\beta} & \Sigma_{x} \\ \end{array} \right], \end{align}we obtain posterior samples of $\Sigma$ by sampling the conditional model parameters using a matrix-variate stochastic search algorithm (described below) and sampling the covariance of $X$ from a latent factor model where it is marginally normally distributed. To reiterate, our procedure is: \begin{itemize} \item $\Sigma_{x}$ is sampled from an independent latent factor model, \item $\boldsymbol{\beta}$ is sampled from matrix-variate MCMC, \item $\Psi$ is sampled from matrix-variate MCMC. \end{itemize} \subsubsection{Modeling a full residual covariance matrix} In order to sample a full residual covariance matrix, we augment the predictor matrix with a latent factor $f$ by substituting $\epsilon_{j} = b_{j}f + \tilde{\epsilon}_{j}$: \begin{equation}\label{modelfirstA} \begin{split} Y_{j} &= \beta_{j1}X_{1} + \cdots + \beta_{jp}X_{p} + b_{j}f + \tilde{\epsilon}_{j}, \;\;\;\;\; \tilde{\boldsymbol{\epsilon}} \sim \mbox{N}(0, \tilde{\Psi}), \end{split} \end{equation}where $\tilde{\Psi}$ is now diagonal. Assuming that $f \sim N(0,1)$ is shared among all response variables $j$ and $\textbf{b} \in \mathbb{R}^{q \times 1}$ is a vector of all coefficients $b_{j}$, the total residual variance may be expressed as: \begin{equation} \begin{split} \Psi = \textbf{b}\textbf{b}^{T} + \tilde{\Psi}. \end{split} \end{equation}We incorporate this latent factor model into the matrix-variate MCMC via a simple Gibbs step to draw posterior samples of $f$. This augmentation allows us to draw samples of $\Psi$ that are not constrained to be diagonal.
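The augmentation above can be illustrated directly: for arbitrary loadings $\textbf{b}$ and diagonal $\tilde{\Psi}$ (synthetic values in the sketch below), the implied total residual covariance is symmetric, positive definite, and no longer diagonal:

```python
import numpy as np

# Sketch of the factor augmentation: a single shared latent factor f with
# loadings b turns a diagonal residual covariance Psi_tilde into a full one.
rng = np.random.default_rng(2)
q = 5
b = rng.normal(size=(q, 1))                          # factor loadings b_j
Psi_tilde = np.diag(rng.uniform(0.5, 1.5, size=q))   # diagonal idiosyncratic var

Psi = b @ b.T + Psi_tilde                            # total residual covariance

# Psi is symmetric positive definite but no longer diagonal.
off_diag = Psi - np.diag(np.diag(Psi))
print(np.abs(off_diag).max() > 0, np.all(np.linalg.eigvalsh(Psi) > 0))
```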
\subsubsection{Modeling the marginal distribution: A latent factor model} We model covariates via a latent factor model of the form: \begin{equation} \begin{split} \textbf{X}_{t} &= \mu_{x} + \textbf{B}\textbf{f}_{t} + \textbf{v}_{t} \\ \textbf{v}_{t} \sim \text{N}(0,\mathbf{\Lambda}), \hspace{4mm} &\textbf{f}_{t} \sim \text{N}(0,\mathbb{I}_{k}), \hspace{4mm} \mu_{x} \sim \text{N}(0,\Phi) \end{split} \end{equation}where $\Lambda$ is assumed diagonal and the set of $k$ latent factors $f_{t}$ are independent. The covariance of the covariates is constrained by the factor decomposition and takes the form: \begin{equation} \begin{split} \Sigma_{x} = \textbf{B}\textbf{B}^{T} + \Lambda. \end{split} \end{equation} Recall that this is only one potential choice for $p(X)$; it is chosen here because, in the applied context, financial asset returns tend to co-move through common factors. Our variable selection procedure would proceed unchanged if any other choice were made at this point. To estimate this model, a convenient, efficient choice is the R package {\tt bfa} \citep{bfa}. The software allows us to sample the marginal covariance as well as the marginal mean via a simple Gibbs step assuming a normal prior on $\mu_{x}$. \subsubsection{Modeling the conditional distribution: A matrix-variate stochastic search} We model the conditional distribution, $Y \vert X$, by developing a multivariate extension of stochastic search variable selection of \cite{GeorgeandMcCulloch}. Recall that the conditional model is: $\textbf{Y} - \textbf{X} \boldsymbol \beta \sim \textrm{Matrix Normal}_{N,q}\left(0, \hspace{1mm} \mathbb{I}_{N \times N}, \hspace{1mm} \Psi_{q \times q} \right)$. In order to sample different subsets of covariates (different models) during the posterior simulations, we introduce an additional parameter $\alpha \in \mathbb{R}^{p}$ that is a binary vector identifying a particular model.
In other words, all entries $i$ for which $\alpha_{i}=1$ denote covariate $i$ as included in model $M_{\alpha}$. Specifically, we write the model identified by $\alpha$ as $M_{\alpha}: \textbf{Y} - \textbf{X}_{\alpha} \boldsymbol{\beta}_{\alpha} \sim \textrm{Matrix Normal}_{N,q}\left(0, \hspace{1mm} \mathbb{I}_{N \times N}, \hspace{1mm} \Psi_{q \times q} \right)$. As in \cite{GeorgeandMcCulloch}, we aim to explore the posterior on the model space, $\textbf{P}\left(M_{\alpha} \hspace{1mm} \vert \hspace{1mm} \textbf{Y} \right)$. Our algorithm explores this model space by calculating a Bayes factor for a particular model $M_{\alpha}$. Given that the response $\textbf{Y}$ is a matrix instead of a vector, we derive the Bayes factor as a product of vector response Bayes factors. This is done by separating the marginal likelihood of the response matrix as a product of marginal likelihoods across the separate vector responses. This derivation requires our priors to be independent across the responses and is shown below. It is important to note that we do not run a standard SSVS on each univariate response regression separately. Instead, we generalize \cite{GeorgeandMcCulloch} and require all covariates to be included or excluded from a model for each of the responses \textit{simultaneously}. The marginal likelihood requires priors for the $\boldsymbol{\beta}$ and $\sigma$ parameters in our model. We choose the standard g-prior for linear models because it permits an analytical solution for the marginal likelihood integral \citep{Z1,Z3,Liangetal08}. Our Gibbs sampling algorithm directly follows the stochastic search variable selection procedure described in \cite{GeorgeandMcCulloch} using these calculated Bayes factors, now adapted to a multivariate setting. The aim is to scan through all possible covariates and determine which ones to include in the model identified through the binary vector $\alpha$.
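To make the product structure concrete, the sketch below evaluates the closed-form $g$-prior Bayes factor derived later in this appendix column by column on synthetic data, using the empirical Bayes choice $g=\max\{F-1,0\}$ also described there. For brevity the sketch omits the intercept centering in the $g$-prior, so it is an illustration under simplifying assumptions rather than our exact implementation:

```python
import numpy as np

# Closed-form univariate g-prior Bayes factor (stated in this appendix):
#   B_i = (1 + g)^{(N-k-1)/2} / (1 + g * SSE_alpha / SSE_0)^{(N+1)/2},
# multiplied across the q response columns, with an empirical Bayes g per
# column, g = max(F - 1, 0). All data below are synthetic.
rng = np.random.default_rng(3)
N, p, q = 200, 6, 4
X = rng.normal(size=(N, p))
beta = np.zeros((p, q))
beta[:3, :] = rng.normal(size=(3, q))           # only first 3 covariates matter
Y = X @ beta + rng.normal(size=(N, q))

def sse(y, Xa):
    # Sum of squared errors of y regressed on the columns of Xa
    # (intercept-free least squares, for simplicity in this sketch).
    if Xa.shape[1] == 0:
        return float(y @ y)                     # null model: no covariates
    resid = y - Xa @ np.linalg.lstsq(Xa, y, rcond=None)[0]
    return float(resid @ resid)

def log_bayes_factor(alpha):
    # log B_{alpha 0}: sum of per-response log Bayes factors.
    Xa = X[:, alpha.astype(bool)]
    k = Xa.shape[1]
    total = 0.0
    for i in range(q):
        sse_a = sse(Y[:, i], Xa)
        sse_0 = sse(Y[:, i], X[:, :0])
        R2 = 1 - sse_a / sse_0
        F = (R2 / k) / ((1 - R2) / (N - 1 - k))  # empirical Bayes F-statistic
        g = max(F - 1, 0.0)
        total += 0.5 * (N - k - 1) * np.log1p(g) \
                 - 0.5 * (N + 1) * np.log1p(g * sse_a / sse_0)
    return total

true_model = np.array([1, 1, 1, 0, 0, 0])
wrong_model = np.array([0, 0, 0, 1, 1, 1])
print(log_bayes_factor(true_model) > log_bayes_factor(wrong_model))
```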
At each substep of the MCMC, we consider an individual covariate $i$ within a specific model and compute its inclusion probability as a function of the model's prior probability and the Bayes factors: \begin{equation*} \begin{split} p_{i} = \frac{B_{a0} \textbf{P}\left(M_{\alpha_{a}}\right)}{B_{a0} \textbf{P}\left(M_{\alpha_{a}}\right) + B_{b0} \textbf{P}\left(M_{\alpha_{b}}\right)}. \end{split} \end{equation*}The Bayes factor $B_{a0}$ is a ratio of marginal likelihoods for the model with covariate $i$ included and the null model, and $B_{b0}$ is the analogous Bayes factor for the model without covariate $i$. The prior on the model space, $\textbf{P}\left(M_{\alpha}\right)$, can either be chosen to adjust for multiplicity or to be uniform --- our results appear robust to both specifications. In this setting, adjusting for multiplicity amounts to putting equal prior mass on different sizes of models. In contrast, the uniform prior for models involving $p$ covariates puts higher probability mass on moderately sized models, reaching a maximum for models with $p/2$ covariates included. The details of the priors on the model space and parameters, including an empirical Bayes choice of the g-prior hyperparameter, are discussed below. \subsection{Details} Assume we have observed $N$ realizations of data $(\mathbf{Y},\mathbf{X})$. For model comparison, we calculate the Bayes factor with respect to the null model without any covariates. First, we calculate a marginal likelihood. This likelihood is obtained by integrating the full model over $\boldsymbol \beta_{\alpha}$ and $\sigma$ multiplied by a prior, $\pi_{\alpha}\left(\boldsymbol \beta_{\alpha}, \sigma\right) $, for these parameters.
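A single substep of the resulting Gibbs scan is then simple arithmetic on the (log) Bayes factors; in the sketch below the two log Bayes factors are made-up values, and the ratio is computed on the log scale for numerical stability:

```python
import numpy as np

rng = np.random.default_rng(4)

def inclusion_probability(log_B_a0, log_B_b0, log_prior_a=0.0, log_prior_b=0.0):
    # p_i = B_a0 P(M_a) / (B_a0 P(M_a) + B_b0 P(M_b)), computed on the
    # log scale to avoid overflow for large Bayes factors.
    la = log_B_a0 + log_prior_a
    lb = log_B_b0 + log_prior_b
    m = max(la, lb)
    return np.exp(la - m) / (np.exp(la - m) + np.exp(lb - m))

# Hypothetical log Bayes factors for the models with / without covariate i.
p_i = inclusion_probability(log_B_a0=12.3, log_B_b0=4.1)
alpha_i = rng.binomial(1, p_i)          # Bernoulli(p_i) draw for alpha_i
print(round(float(p_i), 4), alpha_i)
```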
The Bayes factor of a given model $\alpha$ versus the null model is $B_{\alpha 0} = \frac{m_{\alpha}\left(\mathbf{Y}\right)}{m_{0}\left(\mathbf{Y}\right)}$, with: \begin{align} \label{marginal} m_{\alpha}\left(\mathbf{Y}\right) = \int \textrm{Matrix Normal}_{N,q}\left( \mathbf{Y} \hspace{1mm} \vert \hspace{1mm} \textbf{X}_{\alpha} \boldsymbol \beta_{\alpha}, \hspace{1mm} \mathbb{I}_{N \times N}, \hspace{1mm} \tilde{\Psi}_{q \times q} \right) \pi_{\alpha}\left(\boldsymbol \beta_{\alpha}, \sigma_{i}\right) d\boldsymbol \beta_{\alpha} d\sigma_{i}. \end{align}We assume independence of the priors across columns of $\mathbf{Y}$ so we can write the integrand in (\ref{marginal}) as a product across each individual response vector: \begin{align*} m_{\alpha}\left(\mathbf{Y}\right) &= \int \Pi_{i=1}^{q} \hspace{1mm} N_{N}\left( \mathbf{Y}^{i} \hspace{1mm} \vert \hspace{1mm} \textbf{X}_{\alpha} \boldsymbol \beta_{\alpha}^{i}, \hspace{1mm} \sigma_{i}^{2}\mathbb{I}_{N \times N}\right) \pi_{\alpha}^{i} \left(\boldsymbol \beta_{\alpha}^{i}, \sigma_{i}\right) d\boldsymbol \beta_{\alpha}^{i} d\sigma_{i} \\ &\iff \\ m_{\alpha}\left(\mathbf{Y}\right) &= \int \hspace{1mm} N_{N}\left( \mathbf{Y}^{1} \hspace{1mm} \vert \hspace{1mm} \textbf{X}_{\alpha} \boldsymbol \beta_{\alpha}^{1}, \hspace{1mm} \sigma_{1}^{2}\mathbb{I}_{N \times N}\right) \pi_{\alpha}^{1} \left(\boldsymbol \beta_{\alpha}^{1}, \sigma_{1}\right) d\boldsymbol \beta_{\alpha}^{1} d\sigma_{1} \\ & \times \cdots \times \int N_{N}\left( \mathbf{Y}^{q} \hspace{1mm} \vert \hspace{1mm} \textbf{X}_{\alpha} \boldsymbol \beta_{\alpha}^{q}, \hspace{1mm} \sigma_{q}^{2}\mathbb{I}_{N \times N}\right) \pi_{\alpha}^{q} \left(\boldsymbol \beta_{\alpha}^{q}, \sigma_{q}\right) d\boldsymbol \beta_{\alpha}^{q} d\sigma_{q} \\ &= m_{\alpha}\left(\mathbf{Y}^{1}\right) \times \cdots \times m_{\alpha}\left(\mathbf{Y}^{q}\right) \\ &= \Pi_{i=1}^{q} m_{\alpha}\left(\mathbf{Y}^{i}\right), \end{align*} with: \begin{align} \label{A5} \mathbf{Y}^{i} \sim
N_{N}\left(\textbf{X}_{\alpha} \boldsymbol \beta_{\alpha}^{i}, \hspace{1mm} \sigma_{i}^{2}\mathbb{I}_{N \times N}\right). \end{align}Therefore, the Bayes factor for this matrix-variate model is just a product of Bayes factors for the individual multivariate normal models. \begin{align} \label{A6} B_{\alpha0} = \widetilde{B}_{\alpha0}^{1} \times \cdots \times \widetilde{B}_{\alpha0}^{q} \end{align} with: \begin{align} \widetilde{B}_{\alpha0}^{i} = \frac{m_{\alpha}\left(\mathbf{Y}^{i}\right)}{m_{0}\left(\mathbf{Y}^{i}\right)}. \end{align} The simplification of the marginal likelihood calculation is crucial for analytical simplicity and for the resulting SSVS algorithm to rely on techniques already developed for univariate response models. In order to calculate the integral for each Bayes factor, we need priors on the parameters $\boldsymbol{\beta}_{\alpha}$ and $\sigma$. Since the priors are independent across the columns of $\mathbf{Y}$, we aim to define $\pi_{\alpha}^{i} \left(\boldsymbol \beta_{\alpha}^{i}, \sigma_{i}\right)$ $\forall i \in \{1,\dots,q\}$, which we express as the product: $\pi_{\alpha}^{i} \left(\sigma_{i}\right) \pi_{\alpha}^{i} \left(\boldsymbol \beta_{\alpha}^{i} \hspace{1mm} \vert \hspace{1mm} \sigma_{i}\right)$. Motivated by the work on regression problems of Zellner, Jeffreys, and Siow, we choose a non-informative prior for $\sigma_{i}$ and the popular g-prior for the conditional prior on $\boldsymbol \beta_{\alpha}^{i}$ \citep{Z1,Z2,Z3,J1}: \begin{align} \label{A7} \pi_{\alpha}^{i} \left(\boldsymbol \beta_{\alpha}^{i}, \sigma_{i} \hspace{1mm} \vert \hspace{1mm} g \right) = \sigma_{i}^{-1} \textrm{N}_{k_{\alpha}}\left(\boldsymbol \beta_{\alpha}^{i} \hspace{1mm} \vert \hspace{1mm} \textbf{0}, g_{\alpha}^{i} \sigma_{i}^2 (\textbf{X}_{\alpha}^{T}(\mathbb{I} - N^{-1}\textbf{1}\textbf{1}^{T})\textbf{X}_{\alpha})^{-1}\right).
\end{align}Under this prior, we have an analytical form for the Bayes factor: \begin{align} \label{A8} B_{\alpha0} &= \widetilde{B}_{\alpha0}^{1} \times \cdots \times \widetilde{B}_{\alpha0}^{q} \\ &= \Pi_{i=1}^{q} \frac{\left(1 + g_{\alpha}^{i}\right)^{(N-k_{\alpha}-1)/2}}{\left(1 + g_{\alpha}^{i}\frac{SSE_{\alpha}^{i}}{SSE_{0}^{i}}\right)^{(N+1)/2}}, \end{align}where $SSE_{\alpha}^{i}$ and $SSE_{0}^{i}$ are the sums of squared errors from the linear regression of column $\mathbf{Y}^{i}$ on covariates $\textbf{X}_{\alpha}$ and from the null model, respectively, and $k_{\alpha}$ is the number of covariates in model $M_{\alpha}$. We allow the hyperparameter $g$ to vary across columns of $\mathbf{Y}$ and depend on the model, which we denote by writing $g_{\alpha}^{i}$. \\ \\ We aim to explore the posterior of the model space, given our data: \begin{align} \label{A9} \textbf{P}\left(M_{\alpha} \hspace{1mm} \vert \hspace{1mm} \mathbf{Y} \right) = \frac{B_{\alpha0} \textbf{P}\left(M_{\alpha}\right)}{\Sigma_{\alpha} B_{\alpha0} \textbf{P}\left(M_{\alpha}\right)}, \end{align}where the denominator is a normalization factor. In the spirit of traditional stochastic search variable selection \cite{OnSam}, we propose the following Gibbs sampler to sample this posterior. \subsection{Gibbs Sampling Algorithm} Once the parameters $\boldsymbol \beta_{\alpha}$ and $\sigma$ are integrated out, we know the form of the full conditional distributions for $\alpha_{i} \hspace{1mm} \vert \hspace{1mm} \alpha_{1}, \cdots, \alpha_{i-1}, \alpha_{i+1}, \cdots, \alpha_{p}$. We sample from these distributions as follows: \begin{enumerate} \item Choose column $\mathbf{Y}^{i}$ and consider two models $\alpha_{a}$ and $\alpha_{b}$ such that: \begin{align*} \alpha_{a} = (\alpha_{1}, \cdots, \alpha_{i-1}, 1, \alpha_{i+1}, \cdots, \alpha_{p}) \\ \alpha_{b} = (\alpha_{1}, \cdots, \alpha_{i-1}, 0, \alpha_{i+1}, \cdots, \alpha_{p}) \end{align*} \item For each model, calculate $B_{a0}$ and $B_{b0}$ as defined by (\ref{A8}).
\item Sample \begin{align*} \alpha_{i} \hspace{1mm} \vert \hspace{1mm} \alpha_{1}, \cdots, \alpha_{i-1}, \alpha_{i+1}, \cdots, \alpha_{p} \sim Ber(p_{i}) \end{align*} where \begin{align*} p_{i} = \frac{B_{a0} \textbf{P}\left(M_{\alpha_{a}}\right)}{B_{a0} \textbf{P}\left(M_{\alpha_{a}}\right) + B_{b0} \textbf{P}\left(M_{\alpha_{b}}\right)}. \end{align*} \end{enumerate} Using this algorithm, we visit the most likely models given our set of responses. Under the model and prior specification, there are closed-form expressions for the posteriors of the model parameters $\beta_{\alpha}$ and $\sigma$. \subsection{Hyper Parameter for the $g$-prior} We use a local empirical Bayes approach to choose the hyperparameter for the $g$-prior in (\ref{A7}). Since we allow $g$ to be a function of the columns of $\mathbf{Y}$ as well as the model defined by $\alpha$, we calculate a separate $g$ for each univariate Bayes factor in (\ref{A8}) above. An empirical Bayes estimate of $g$ maximizes the marginal likelihood and is constrained to be non-negative. From \cite{Liang}, we have: \begin{align} \hat{g}_{\alpha}^{EB(i)} &= \max\{F_{\alpha}^{i}-1,0\} \\ F_{\alpha}^{i} &= \frac{R_{\alpha}^{2i} / k_{\alpha}}{(1-R_{\alpha}^{2i}) / (N - 1 - k_{\alpha})}. \end{align} For univariate stochastic search, the literature recommends choosing a fixed $g$ equal to the number of data points \cite{OnSam}. However, the multivariate nature of our model induced by the vector-valued response makes this approach unreliable. Since each response has distinct statistical characteristics and correlations with the covariates, it is necessary to vary $g$ among different sampled models and responses. We find that this approach provides sufficiently stable estimation of the inclusion probabilities for the covariates. \section{Derivation of lasso form} In this section of the Appendix, we derive the penalized objective (lasso) forms of the utility functions.
After integration over $p(\tilde{Y},\tilde{X}, \Theta \vert \textbf{Y}, \textbf{X})$, the utility takes the form (from equation (\ref{almostlassoform})): \begin{equation} \begin{split} \mathcal{L}(\boldsymbol{\gamma}) &= \text{tr}[ M \boldsymbol{\gamma} S \boldsymbol{\gamma}^{T} ] - 2\text{tr}[A\boldsymbol{\gamma}^{T}] + \lambda \norm{\text{{\bf vec}}(\boldsymbol{\gamma})}_{0}, \end{split} \end{equation}where $A=\mathbb{E}[\Omega\tilde{Y}\tilde{X}^{T}]$, $S=\mathbb{E}[\tilde{X}\tilde{X}^{T}] = \overline{\Sigma_{x}}$, and $M=\overline{\Omega}$, and the overlines denote posterior means. Defining the Cholesky decompositions: $M = LL^{T}$ and $S = QQ^{T}$, combining the matrix traces, completing the square with respect to $\boldsymbol{\gamma}$, and converting the trace to the vectorization operator, we obtain: \begin{equation} \begin{split} \mathcal{L}(\boldsymbol{\gamma}) &= \text{tr}[M(\boldsymbol{\gamma} S \boldsymbol{\gamma}^{T} - 2M^{-1}A\boldsymbol{\gamma}^{T})] + \lambda \norm{\text{ {\bf vec}}(\boldsymbol{\gamma})}_{0} \\ &\propto \text{tr}\left[M(\boldsymbol{\gamma} - M^{-1}AS^{-1})S(\boldsymbol{\gamma} - M^{-1}AS^{-1})^{T}\right] + \lambda \norm{\text{ {\bf vec}}(\boldsymbol{\gamma})}_{0} \\ &= \text{tr}\left[LL^{T}(\boldsymbol{\gamma} - L^{-T}L^{-1}AS^{-1})S(\boldsymbol{\gamma} - L^{-T}L^{-1}AS^{-1})^{T}\right] + \lambda \norm{\text{ {\bf vec}}(\boldsymbol{\gamma})}_{0} \\ &= \text{tr}\left[L^{T}(\boldsymbol{\gamma} - L^{-T}L^{-1}AS^{-1})S(\boldsymbol{\gamma} - L^{-T}L^{-1}AS^{-1})^{T}L\right] + \lambda \norm{\text{ {\bf vec}}(\boldsymbol{\gamma})}_{0} \\ &=\text{tr}\left[(L^{T}\boldsymbol{\gamma} - L^{-1}AQ^{-T}Q^{-1})QQ^{T}(L^{T}\boldsymbol{\gamma} - L^{-1}AQ^{-T}Q^{-1})^{T}\right] + \lambda \norm{\text{ {\bf vec}}(\boldsymbol{\gamma})}_{0} \\ &=\text{tr}\left[(L^{T}\boldsymbol{\gamma}Q - L^{-1}AQ^{-T})(L^{T}\boldsymbol{\gamma}Q - L^{-1}AQ^{-T})^{T}\right] + \lambda \norm{\text{ {\bf vec}}(\boldsymbol{\gamma})}_{0} \\ &= \text{ {\bf
vec}}(L^{T}\boldsymbol{\gamma}Q - L^{-1}AQ^{-T})^{T} \text{{\bf vec}}(L^{T}\boldsymbol{\gamma}Q - L^{-1}AQ^{-T}) + \lambda \norm{\text{{\bf vec}}(\boldsymbol{\gamma})}_{0}. \end{split} \end{equation} The proportionality in line 2 is up to an additive constant with respect to the action variable, $\boldsymbol{\gamma}$. We arrive at the final utility by distributing the vectorization and rewriting the inner product as a squared {$\ell_{2}$} norm. \begin{equation} \mathcal{L}(\boldsymbol{\gamma}) = \norm{ \left[Q^{T} \otimes L^{T}\right]\text{\bf vec}(\boldsymbol{\gamma}) - \text{\bf vec}(L^{-1}AQ^{-T}) }_{2}^{2} + \lambda\norm{ \text{\bf vec}(\boldsymbol{\gamma})}_{0}. \end{equation}The $\ell_{0}$ norm penalty yields a difficult combinatorial optimization problem even for relatively small dimensions ($pq \approx 30$). Thus, one may use an $\ell_{1}$ norm as the most straightforward approximation to the $\ell_{0}$ norm, yielding the loss function: \begin{equation} \mathcal{L}(\boldsymbol{\gamma}) = \norm{ \left[Q^{T} \otimes L^{T}\right]\text{\bf vec}(\boldsymbol{\gamma}) - \text{\bf vec}(L^{-1}AQ^{-T}) }_{2}^{2} + \lambda\norm{ \text{\bf vec}(\boldsymbol{\gamma})}_{1}. \end{equation} \section{Derivation of the loss function under fixed predictors} We devote this section to deriving an analogous loss function for multivariate regression when the predictors are assumed fixed. Notice that this is essentially an extension of \cite{HahnCarvalho} to the multiple response case and adds to the works of \cite{brown1998multivariate} and \cite{wangSUR} by providing a posterior summary strategy that relies on more than just marginal quantities like posterior inclusion probabilities. Suppose we observe $N$ realizations of the predictor vector defining the design matrix $\textbf{X} \in \mathbb{R}^{N \times p}$.
Future realizations $\tilde{\textbf{Y}} \in \mathbb{R}^{N \times q}$ at this fixed set of predictors are generated from a matrix normal distribution: \begin{equation} \mathbf{\tilde{Y}} \sim \textrm{Matrix Normal}_{N,q} \left(\textbf{X} \boldsymbol{\gamma}^{T}, \hspace{1mm} \mathbb{I}_{N \times N}, \hspace{1mm} \Psi_{q \times q} \right).\label{matrixdist} \end{equation}In this case, the optimal posterior summary $\boldsymbol{\gamma}^*$ minimizes the expected loss $\mathcal{L}_{\lambda}(\boldsymbol{\gamma}) = \mathbb{E}[\mathcal{L}_{\lambda}(\tilde{\bf Y},\Theta,\boldsymbol{\gamma})]$. Here, the expectation is taken over the joint space of the predictive and posterior distributions: $p(\tilde{\bf Y}, \Theta \vert \textbf{Y}, \textbf{X})$ {\it where $\tilde{X}$ is now absent since we are restricted to predicting at the observed covariate matrix} $\textbf{X}$. We define the utility function using the negative kernel of distribution (\ref{matrixdist}) where, as before, $\boldsymbol{\gamma}$ is the summary defining the sparsified linear predictor and $\Omega = \Psi^{-1}$: \begin{equation} \begin{split} \mathcal{L}_{\lambda}(\tilde{\textbf{Y}},\Theta,\boldsymbol{\gamma}) &= \frac{1}{2}\text{tr}\left[\Omega(\tilde{\textbf{Y}} - \textbf{X}\boldsymbol{\gamma}^{T})^{T} (\tilde{\textbf{Y}} - \textbf{X}\boldsymbol{\gamma}^{T}) \right] + \lambda\norm{\text{{\bf vec}}(\boldsymbol{\gamma})}_{0}. \end{split} \end{equation} Expanding the inner product and dropping terms that do not involve $\boldsymbol{\gamma}$, we define the loss up to proportionality: \begin{equation} \begin{split} \mathcal{L}_{\lambda}(\tilde{\textbf{Y}},\Theta,\boldsymbol{\gamma}) &\propto \text{tr}\left[\Omega( \boldsymbol{\gamma}\textbf{X}^{T}\textbf{X}\boldsymbol{\gamma}^{T} - 2\tilde{\textbf{Y}}^{T}\textbf{X}\boldsymbol{\gamma}^{T} ) \right] + \lambda\norm{\text{{\bf vec}}(\boldsymbol{\gamma})}_{0}. 
\end{split} \end{equation}Analogous to the stochastic predictors derivation, we integrate over $(\tilde{Y},\Theta)$ to obtain our expected loss: \begin{equation} \begin{split} \mathcal{L}_{\lambda}(\boldsymbol{\gamma}) &= \mathbb{E}[\mathcal{L}_{\lambda}(\tilde{\bf Y},\Theta,\boldsymbol{\gamma})] \\ &= \text{tr}[M\boldsymbol{\gamma}S_{f}\boldsymbol{\gamma}^{T}] - 2\text{tr}[A_{f}\boldsymbol{\gamma}^{T} ] + \lambda\norm{\text{{\bf vec}}(\boldsymbol{\gamma})}_{0}, \end{split} \end{equation} where, similar to the random predictor case, $A_{f}=\mathbb{E}[\Omega\tilde{\textbf{Y}}^{T}\textbf{X}]$, $S_{f}=\textbf{X}^{T}\textbf{X}$, $M=\overline{\Omega}$, and the overlines denote posterior means. The subscript $f$ is used to denote quantities calculated at {\it fixed} design points $\textbf{X}$. Defining the Cholesky decompositions: $M = LL^{T}$ and $S_{f} = Q_{f}Q_{f}^{T}$ and replacing the $\ell_0$ norm with the $\ell_1$ norm, this expression can be formulated as a standard penalized regression problem: \begin{equation} \mathcal{L}_{\lambda}(\boldsymbol{\gamma}) = \norm{ \left[Q_{f}^{T} \otimes L^{T}\right]\text{\bf vec}(\boldsymbol{\gamma}) - \text{\bf vec}(L^{-1}A_{f}Q_{f}^{-T}) }_{2}^{2} + \lambda\norm{ \text{\bf vec}(\boldsymbol{\gamma})}_{1}\label{lassoformfixed} \end{equation}with covariates $Q_{f}^{T} \otimes L^{T}$, ``data" $L^{-1}A_{f}Q_{f}^{-T}$, and regression coefficients $\boldsymbol{\gamma}$. Accordingly, (\ref{lassoformfixed}) can be optimized using existing software such as the {\tt lars} R package of \cite{Efron}. We use loss function (\ref{lassoformfixed}) as a point of comparison to demonstrate how incorporating covariate uncertainty may impact the summarization procedure in our applications. \end{document}
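Both penalized forms above are ordinary lasso problems in disguise. The sketch below is an illustration of this structure, not code from the paper: the matrix sizes and random "posterior mean" inputs are arbitrary, and a simple proximal-gradient (ISTA) loop stands in for {\tt lars}. It builds the covariates $Q^{T} \otimes L^{T}$ and the "data" $\text{vec}(L^{-1}AQ^{-T})$, checks that the unpenalized minimizer is the unsparsified summary $\boldsymbol{\gamma} = M^{-1}AS^{-1}$, and confirms that a large enough penalty sparsifies the summary completely.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=5000):
    # Minimize ||X w - y||_2^2 + lam * ||w||_1 by iterative soft-thresholding.
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # inverse Lipschitz constant (spectral norm)
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        w = soft_threshold(w - step * X.T @ (X @ w - y), 0.5 * step * lam)
    return w

rng = np.random.default_rng(0)
q, p = 3, 4  # illustrative response/predictor dimensions (arbitrary choices)
B = rng.standard_normal((q, q)); M = B @ B.T + q * np.eye(q)  # stand-in for mean of Omega
Cm = rng.standard_normal((p, p)); S = Cm @ Cm.T + p * np.eye(p)  # stand-in for mean of Sigma_x
A = rng.standard_normal((q, p))

L = np.linalg.cholesky(M)  # M = L L^T
Q = np.linalg.cholesky(S)  # S = Q Q^T
X = np.kron(Q.T, L.T)      # covariates  Q^T (Kronecker) L^T, using column-stacking vec
y = (np.linalg.inv(L) @ A @ np.linalg.inv(Q).T).flatten(order="F")  # vec(L^{-1} A Q^{-T})

# lam = 0: the minimizer must be gamma = M^{-1} A S^{-1} (completing the square).
gamma0 = lasso_ista(X, y, lam=0.0).reshape((q, p), order="F")
assert np.allclose(gamma0, np.linalg.solve(M, A) @ np.linalg.inv(S), atol=1e-6)

# lam >= 2 * ||X^T y||_inf: the l1 solution is exactly zero (fully sparsified summary).
lam_max = 2.0 * np.max(np.abs(X.T @ y))
assert np.all(lasso_ista(X, y, lam=1.01 * lam_max) == 0.0)
```

The column-major (`order="F"`) flattening matches the identity $\text{vec}(L^{T}\boldsymbol{\gamma}Q) = (Q^{T} \otimes L^{T})\text{vec}(\boldsymbol{\gamma})$ used in the derivation.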
Vector notation In mathematics and physics, vector notation is a commonly used notation for representing vectors,[1][2] which may be Euclidean vectors, or more generally, members of a vector space. For representing a vector, the common[3] typographic convention is lower case, upright boldface type, as in v. The International Organization for Standardization (ISO) recommends either bold italic serif, as in v, or non-bold italic serif accented by a right arrow, as in ${\vec {v}}$.[4] In advanced mathematics, vectors are often represented in a simple italic type, like any variable. History In 1835 Giusto Bellavitis introduced the idea of equipollent directed line segments $AB\bumpeq CD$ which resulted in the concept of a vector as an equivalence class of such segments. The term vector was coined by W. R. Hamilton around 1843, as he revealed quaternions, a system which uses vectors and scalars to span a four-dimensional space. For a quaternion q = a + bi + cj + dk, Hamilton used two projections: S q = a, for the scalar part of q, and V q = bi + cj + dk, the vector part. Using the modern terms cross product (×) and dot product (.), the quaternion product of two vectors p and q can be written pq = –p.q + p×q. In 1878, W. K. Clifford severed the two products to make the quaternion operation useful for students in his textbook Elements of Dynamic. Lecturing at Yale University, Josiah Willard Gibbs supplied notation for the scalar product and vector products, which was introduced in Vector Analysis.[5] In 1891, Oliver Heaviside argued for Clarendon to distinguish vectors from scalars. 
He criticized the use of Greek letters by Tait and Gothic letters by Maxwell.[6] In 1912, J.B. Shaw contributed his "Comparative Notation for Vector Expressions" to the Bulletin of the Quaternion Society.[7] Subsequently, Alexander Macfarlane described 15 criteria for clear expression with vectors in the same publication.[8] Vector ideas were advanced by Hermann Grassmann in 1841, and again in 1862 in the German language. But German mathematicians were not taken with quaternions as much as were English-speaking mathematicians. When Felix Klein was organizing the German mathematical encyclopedia, he assigned Arnold Sommerfeld to standardize vector notation.[9] In 1950, when Academic Press published G. Kuerti’s translation of the second edition of volume 2 of Lectures on Theoretical Physics by Sommerfeld, vector notation was the subject of a footnote: "In the original German text, vectors and their components are printed in the same Gothic types. The more usual way of making a typographical distinction between the two has been adopted for this translation."[10] Rectangular coordinates See also: Real coordinate space Given a Cartesian coordinate system, a vector may be specified by its Cartesian coordinates, which are a tuple of numbers. Ordered set notation A vector in $\mathbb {R} ^{n}$ can be specified using an ordered set of components, enclosed in either parentheses or angle brackets. In a general sense, an n-dimensional vector v can be specified in either of the following forms: • $\mathbf {v} =(v_{1},v_{2},\dots ,v_{n-1},v_{n})$ • $\mathbf {v} =\langle v_{1},v_{2},\dots ,v_{n-1},v_{n}\rangle $ [11] Where v1, v2, …, vn − 1, vn are the components of v.[12] Matrix notation A vector in $\mathbb {R} ^{n}$ can also be specified as a row or column matrix containing the ordered set of components. A vector specified as a row matrix is known as a row vector; one specified as a column matrix is known as a column vector. 
Again, an n-dimensional vector $\mathbf {v} $ can be specified in either of the following forms using matrices: • $\mathbf {v} ={\begin{bmatrix}v_{1}&v_{2}&\cdots &v_{n-1}&v_{n}\end{bmatrix}}={\begin{pmatrix}v_{1}&v_{2}&\cdots &v_{n-1}&v_{n}\end{pmatrix}}$ • $\mathbf {v} ={\begin{bmatrix}v_{1}\\v_{2}\\\vdots \\v_{n-1}\\v_{n}\end{bmatrix}}={\begin{pmatrix}v_{1}\\v_{2}\\\vdots \\v_{n-1}\\v_{n}\end{pmatrix}}$ where v1, v2, …, vn − 1, vn are the components of v. In some advanced contexts, a row and a column vector have different meaning; see covariance and contravariance of vectors for more. Unit vector notation A vector in $\mathbb {R} ^{3}$ (or fewer dimensions, such as $\mathbb {R} ^{2}$ where vz below is zero) can be specified as the sum of the scalar multiples of the components of the vector with the members of the standard basis in $\mathbb {R} ^{3}$. The basis is represented with the unit vectors ${\boldsymbol {\hat {\imath }}}=(1,0,0)$, ${\boldsymbol {\hat {\jmath }}}=(0,1,0)$, and ${\boldsymbol {\hat {k}}}=(0,0,1)$. A three-dimensional vector ${\boldsymbol {v}}$ can be specified in the following form, using unit vector notation: $\mathbf {v} =v_{x}{\boldsymbol {\hat {\imath }}}+v_{y}{\boldsymbol {\hat {\jmath }}}+v_{z}{\boldsymbol {\hat {k}}}$ where vx, vy, and vz are the scalar components of v. Scalar components may be positive or negative; the absolute value of a scalar component is its magnitude. Polar coordinates The two polar coordinates of a point in a plane may be considered as a two dimensional vector. Such a vector consists of a magnitude (or length) and a direction (or angle). The magnitude, typically represented as r, is the distance from a starting point, the origin, to the point which is represented. The angle, typically represented as θ (the Greek letter theta), is the angle, usually measured counterclockwise, between a fixed direction, typically that of the positive x-axis, and the direction from the origin to the point. 
The angle is typically reduced to lie within the range $0\leq \theta <2\pi $ radians or $0\leq \theta <360^{\circ }$. Ordered set and matrix notations Vectors can be specified using either ordered pair notation (a subset of ordered set notation using only two components), or matrix notation, as with rectangular coordinates. In these forms, the first component of the vector is r (instead of v1), and the second component is θ (instead of v2). To differentiate polar coordinates from rectangular coordinates, the angle may be prefixed with the angle symbol, $\angle $. Two-dimensional polar coordinates for v can be represented as any of the following, using either ordered pair or matrix notation: • $\mathbf {v} =(r,\angle \theta )$ • $\mathbf {v} =\langle r,\angle \theta \rangle $ • $\mathbf {v} ={\begin{bmatrix}r&\angle \theta \end{bmatrix}}$ • $\mathbf {v} ={\begin{bmatrix}r\\\angle \theta \end{bmatrix}}$ where r is the magnitude, θ is the angle, and the angle symbol ($\angle $) is optional. Direct notation Vectors can also be specified using simplified autonomous equations that define r and θ explicitly. This can be unwieldy, but is useful for avoiding the confusion with two-dimensional rectangular vectors that arises from using ordered pair or matrix notation. A two-dimensional vector whose magnitude is 5 units, and whose direction is π/9 radians (20°), can be specified using either of the following forms: • $r=5,\ \theta ={\pi \over 9}$ • $r=5,\ \theta =20^{\circ }$ Cylindrical vectors A cylindrical vector is an extension of the concept of polar coordinates into three dimensions. It is akin to an arrow in the cylindrical coordinate system. A cylindrical vector is specified by a distance in the xy-plane, an angle, and a distance from the xy-plane (a height). The first distance, usually represented as r or ρ (the Greek letter rho), is the magnitude of the projection of the vector onto the xy-plane. 
The angle, usually represented as θ or φ (the Greek letter phi), is measured as the offset from the line collinear with the x-axis in the positive direction; the angle is typically reduced to lie within the range $0\leq \theta <2\pi $. The second distance, usually represented as h or z, is the distance from the xy-plane to the endpoint of the vector. Ordered set and matrix notations Cylindrical vectors use polar coordinates, where the second distance component is concatenated as a third component to form ordered triplets (again, a subset of ordered set notation) and matrices. The angle may be prefixed with the angle symbol ($\angle $); the distance-angle-distance combination distinguishes cylindrical vectors in this notation from spherical vectors in similar notation. A three-dimensional cylindrical vector v can be represented as any of the following, using either ordered triplet or matrix notation: • $\mathbf {v} =(r,\angle \theta ,h)$ • $\mathbf {v} =\langle r,\angle \theta ,h\rangle $ • $\mathbf {v} ={\begin{bmatrix}r&\angle \theta &h\end{bmatrix}}$ • $\mathbf {v} ={\begin{bmatrix}r\\\angle \theta \\h\end{bmatrix}}$ Where r is the magnitude of the projection of v onto the xy-plane, θ is the angle between the positive x-axis and v, and h is the height from the xy-plane to the endpoint of v. Again, the angle symbol ($\angle $) is optional. Direct notation A cylindrical vector can also be specified directly, using simplified autonomous equations that define r (or ρ), θ (or φ), and h (or z). Consistency should be used when choosing the names to use for the variables; ρ should not be mixed with θ and so on. 
A three-dimensional vector, the magnitude of whose projection onto the xy-plane is 5 units, whose angle from the positive x-axis is π/9 radians (20°), and whose height from the xy-plane is 3 units can be specified in any of the following forms: • $r=5,\ \theta ={\pi \over 9},\ h=3$ • $r=5,\ \theta =20^{\circ },\ h=3$ • $\rho =5,\ \phi ={\pi \over 9},\ z=3$ • $\rho =5,\ \phi =20^{\circ },\ z=3$ Spherical vectors A spherical vector is another method for extending the concept of polar vectors into three dimensions. It is akin to an arrow in the spherical coordinate system. A spherical vector is specified by a magnitude, an azimuth angle, and a zenith angle. The magnitude is usually represented as ρ. The azimuth angle, usually represented as θ, is the (counterclockwise) offset from the positive x-axis. The zenith angle, usually represented as φ, is the offset from the positive z-axis. Both angles are typically reduced to lie within the range from zero (inclusive) to 2π (exclusive). Ordered set and matrix notations Spherical vectors are specified like polar vectors, where the zenith angle is concatenated as a third component to form ordered triplets and matrices. The azimuth and zenith angles may be both prefixed with the angle symbol ($\angle $); the prefix should be used consistently to produce the distance-angle-angle combination that distinguishes spherical vectors from cylindrical ones. A three-dimensional spherical vector v can be represented as any of the following, using either ordered triplet or matrix notation: • $\mathbf {v} =(\rho ,\angle \theta ,\angle \phi )$ • $\mathbf {v} =\langle \rho ,\angle \theta ,\angle \phi \rangle $ • $\mathbf {v} ={\begin{bmatrix}\rho &\angle \theta &\angle \phi \end{bmatrix}}$ • $\mathbf {v} ={\begin{bmatrix}\rho \\\angle \theta \\\angle \phi \end{bmatrix}}$ Where ρ is the magnitude, θ is the azimuth angle, and φ is the zenith angle. 
Direct notation Like polar and cylindrical vectors, spherical vectors can be specified using simplified autonomous equations, in this case for ρ, θ, and φ. A three-dimensional vector whose magnitude is 5 units, whose azimuth angle is π/9 radians (20°), and whose zenith angle is π/4 radians (45°) can be specified as: • $\rho =5,\ \theta ={\pi \over 9},\ \phi ={\pi \over 4}$ • $\rho =5,\ \theta =20^{\circ },\ \phi =45^{\circ }$ Operations In any given vector space, the operations of vector addition and scalar multiplication are defined. Normed vector spaces also define an operation known as the norm (or determination of magnitude). Inner product spaces also define an operation known as the inner product. In $\mathbb {R} ^{n}$, the inner product is known as the dot product. In $\mathbb {R} ^{3}$ and $\mathbb {R} ^{7}$, an additional operation known as the cross product is also defined. Vector addition Vector addition is represented with the plus sign used as an operator between two vectors. The sum of two vectors u and v would be represented as: $\mathbf {u} +\mathbf {v} $ Scalar multiplication Scalar multiplication is represented in the same manners as algebraic multiplication. A scalar beside a vector (either or both of which may be in parentheses) implies scalar multiplication. The two common operators, a dot and a rotated cross, are also acceptable (although the rotated cross is almost never used), but they risk confusion with dot products and cross products, which operate on two vectors. The product of a scalar k with a vector v can be represented in any of the following fashions: • $k\mathbf {v} $ • $k\cdot \mathbf {v} $ Vector subtraction and scalar division Using the algebraic properties of subtraction and division, along with scalar multiplication, it is also possible to “subtract” two vectors and “divide” a vector by a scalar. Vector subtraction is performed by adding the scalar multiple of −1 with the second vector operand to the first vector operand. 
This can be represented by the use of the minus sign as an operator. The difference between two vectors u and v can be represented in either of the following fashions: • $\mathbf {u} +-\mathbf {v} $ • $\mathbf {u} -\mathbf {v} $ Scalar division is performed by multiplying the vector operand with the numeric inverse of the scalar operand. This can be represented by the use of the fraction bar or division signs as operators. The quotient of a vector v and a scalar c can be represented in any of the following forms: • ${1 \over c}\mathbf {v} $ • ${\mathbf {v} \over c}$ • ${\mathbf {v} \div c}$ Norm The norm of a vector is represented with double bars on both sides of the vector. The norm of a vector v can be represented as: $\|\mathbf {v} \|$ The norm is also sometimes represented with single bars, like $|\mathbf {v} |$, but this can be confused with absolute value (which is a type of norm). Inner product The inner product of two vectors (also known as the scalar product, not to be confused with scalar multiplication) is represented as an ordered pair enclosed in angle brackets. The inner product of two vectors u and v would be represented as: $\langle \mathbf {u} ,\mathbf {v} \rangle $ Dot product In $\mathbb {R} ^{n}$, the inner product is also known as the dot product. In addition to the standard inner product notation, the dot product notation (using the dot as an operator) can also be used (and is more common). The dot product of two vectors u and v can be represented as: $\mathbf {u} \cdot \mathbf {v} $ In some older literature, the dot product is implied between two vectors written side-by-side. This notation can be confused with the dyadic product between two vectors. Cross product The cross product of two vectors (in $\mathbb {R} ^{3}$) is represented using the rotated cross as an operator. The cross product of two vectors u and v would be represented as: $\mathbf {u} \times \mathbf {v} $ By some conventions (e.g. 
in France and in some areas of higher mathematics), this is also denoted by a wedge,[13] which avoids confusion with the wedge product since the two are functionally equivalent in three dimensions: $\mathbf {u} \wedge \mathbf {v} $ In some older literature, the following notation is used for the cross product between u and v: $[\mathbf {u} ,\mathbf {v} ]$ Nabla Main articles: Del and Nabla symbol Vector notation is used with calculus through the Nabla operator: $\mathbf {i} {\frac {\partial }{\partial x}}+\mathbf {j} {\frac {\partial }{\partial y}}+\mathbf {k} {\frac {\partial }{\partial z}}$ With a scalar function f, the gradient is written as $\nabla f\,,$ with a vector field, F the divergence is written as $\nabla \cdot F,$ and with a vector field, F the curl is written as $\nabla \times F.$ See also • Euclidean vector • ISO 31-11 § Vectors and tensors • Phasor References 1. Principles and Applications of Mathematics for Communications-electronics. 1992. p. 123. 2. Coffin, Joseph George (1911). Vector Analysis. J. Wiley & sons. 3. "Vector Introduction | MIT - KeepNotes". keepnotes.com. Retrieved 2023-07-18. 4. "ISO 80000-2:2019 Quantities and units — Part 2: Mathematics". International Organization for Standardization. August 2019. 5. Edwin Bidwell Wilson (1901) Vector Analysis, based on the Lectures of J. W. Gibbs at Internet Archive 6. Oliver Heaviside, The Electrical Journal, Volume 28. James Gray, 1891. 109 (alt) 7. J.B. Shaw (1912) Comparative Notation for Vector Expressions, Bulletin of the Quaternion Society via Hathi Trust. 8. Alexander Macfarlane (1912) A System of Notation for Vector-Analysis; with a Discussion of the Underlying Principles from Bulletin of the Quaternion Society 9. Karin Reich (1995) Die Rolle Arnold Sommerfeld bei der Diskussion um die Vektorrechnung 10. Mechanics of Deformable Bodies, p. 10, at Google Books 11. Wright, Richard. "Precalculus 6-03 Vectors". www.andrews.edu. Retrieved 2023-07-25. 12. Weisstein, Eric W. "Vector". 
mathworld.wolfram.com. Retrieved 2020-08-19. 13. Cajori, Florian (2011). A History of Mathematical Notations. Dover Publications. p. 134 (Vol. 2). ISBN 9780486161167.
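The operations surveyed above (addition, scalar multiplication, norm, dot product, cross product) map directly onto numerical linear-algebra libraries. A small NumPy illustration, with two arbitrary example vectors in $\mathbb {R} ^{3}$:

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([3.0, 0.0, 4.0])

w_sum = u + v               # vector addition u + v
w_sub = u - v               # vector subtraction u - v = u + (-1)v
w_scal = 2 * v              # scalar multiplication k v
norm_u = np.linalg.norm(u)  # ||u||, the Euclidean norm
dot_uv = np.dot(u, v)       # dot product u . v
cross_uv = np.cross(u, v)   # cross product u x v, defined in R^3

assert norm_u == 3.0    # sqrt(1 + 4 + 4)
assert dot_uv == 11.0   # 1*3 + 2*0 + 2*4
# The cross product is orthogonal to both of its operands.
assert np.allclose(np.dot(cross_uv, u), 0.0)
assert np.allclose(np.dot(cross_uv, v), 0.0)
```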
\begin{document} \begin{center} {\large\bf A 2-edge partial inverse problem for the Sturm-Liouville operators with singular potentials on a star-shaped graph } \\[0.2cm] {\bf Natalia P. Bondarenko} \\[0.2cm] \end{center} {\bf Abstract.} Boundary value problems for Sturm-Liouville operators with potentials from the class $W_2^{-1}$ on a star-shaped graph are considered. We assume that the potentials are known on all the edges of the graph except two, and show that the potentials on the remaining edges can be constructed by fractional parts of two spectra. A uniqueness theorem is proved, and an algorithm for the constructive solution of the partial inverse problem is provided. The main ingredient of the proofs is the Riesz-basis property of specially constructed systems of functions. {\bf Keywords:} partial inverse problem, quantum graph, Sturm-Liouville operator, singular potential, Weyl function, Riesz basis. {\bf AMS Mathematics Subject Classification (2010):} 34A55 34B09 34B24 34B45 47E05 {\large \bf 1. Introduction} The paper concerns the theory of inverse spectral problems for differential operators on geometrical graphs. Differential operators on graphs (or so-called quantum graphs) have been actively studied by mathematicians in recent years and have applications in different branches of science and engineering (see \cite{Kuch02, PPP04} and the bibliography therein). Inverse spectral problems consist in recovering differential operators from their spectral characteristics. Nowadays inverse problems for quantum graphs attract much attention of mathematicians. The reader can find an extensive bibliography on this subject in the survey \cite{Yur16}. In this paper, we consider a star-shaped graph $G$ with edges $e_j$, $j = \overline{1, m}$, of equal length $\pi$. For each edge $e_j$, introduce a parameter $x_j \in [0, \pi]$. The value $x_j = 0$ corresponds to the boundary vertex, associated with $e_j$, and $x_j = \pi$ corresponds to the internal vertex. 
Let $y = [y_j(x_j)]_{j = 1}^m$ be a vector function on the graph $G$, and let $q_j$, $j = \overline{1, m}$, be real-valued functions from $W_2^{-1}(0, \pi)$, i.e. $q_j = \sigma_j'$, $\sigma_j \in L_2(0, \pi)$, where the derivative is considered in the sense of distributions. The functions $\sigma_j$ are called the {\it potentials}. The Sturm-Liouville operator $$ \ell_j y_j := -y_j'' + q_j(x_j) y_j $$ on the edge $e_j$ can be understood in the following sense: $$ \ell_j y_j = -(y_j^{[1]})' - \sigma_j(x_j) y_j^{[1]} - \sigma_j^2(x_j) y_j, $$ where $y_j^{[1]} = y_j' - \sigma_j y_j$ is a {\it quasi-derivative}, and $$ \mbox{Dom}(\ell_j) = \{ y_j \in W_2^1[0, \pi] \colon y_j^{[1]} \in W_1^1[0, \pi], \: \ell_j y_j \in L_2(0, \pi) \}. $$ Properties of Sturm-Liouville operators with singular potentials in the described form were established in \cite{SS99}. Inverse spectral problems on a {\it finite interval}, consisting in recovering singular potentials from different types of spectral characteristics, were extensively studied by R.O.~Hryniv and Ya.V.~Mykytyuk \cite{HM03, HM04-2spectra, HM04-half, HM04-transform}. However, as far as we know, there is only one paper, \cite{FIY08}, concerning an inverse problem for Sturm-Liouville operators with the potentials $q_j$ from $W_2^{-1}$ on graphs. In the present paper, we study the system of the Sturm-Liouville equations on the graph~$G$: \begin{equation} \label{eqv} (\ell_j y_j)(x_j) = \lambda y_j (x_j), \quad x_j \in (0, \pi), \: y_j \in \mbox{Dom}(\ell_j), \: j = \overline{1, m}. 
\end{equation} Let $L$ and $L_0$ be the boundary value problems for the system \eqref{eqv} with the standard matching conditions in the internal vertex \begin{equation*} y_1(\pi) = y_j(\pi), \quad j = \overline{2, m}, \quad \sum_{j = 1}^m y_j^{[1]}(\pi) = 0, \end{equation*} and the mixed boundary conditions \begin{align*} L \colon \quad & y_j^{[1]}(0) = 0, \: j = \overline{1, p}, \quad y_j(0) = 0, \: j = \overline{p+1, m},\\ L_0 \colon \quad & y_j^{[1]}(0) = 0, \: j = \overline{1, p+1}, \quad y_j(0) = 0, \: j = \overline{p+2, m}, \end{align*} where $2 \le p \le m -2$. The asymptotic behavior of the spectrum of the problem $L$ is described by the following theorem, which can also be applied to the problem $L_0$. Everywhere below the same symbol $\{ \varkappa_n \}$ is used for different sequences from $l_2$. \begin{thm} \label{thm:asympt} The boundary value problem $L$ has a countable set of eigenvalues, which are real and can be numbered as $\{ \lambda_{nk} \}_{n \in \mathbb N, \, k = \overline{1, m} }$ (counting with their multiplicities) to satisfy the following asymptotic formulas \begin{equation} \label{asymptrho} \arraycolsep=1.4pt\def\arraystretch{2.2} \left. \begin{array}{ll} \rho_{n1} & = n - 1 + \dfrac{\alpha}{\pi} + \varkappa_n, \\ \rho_{n2} & = n - \dfrac{\alpha}{\pi} + \varkappa_n, \\ \rho_{nk} & = n -\dfrac{1}{2} + \varkappa_n, \quad k \in \mathcal I_3, \\ \rho_{nk} & = n + \varkappa_n, \quad k \in \mathcal I_4, \end{array} \right\} \end{equation} where $\rho_{nk} = \sqrt{\lambda_{nk}}$, $\alpha = \arccos \sqrt{\frac{p}{m}}$, $\mathcal I_3$ and $\mathcal I_4$ are some fixed sets of indices, such that $\mathcal I_3 \cup \mathcal I_4 = \overline{3, m}$, $\mathcal I_3 \cap \mathcal I_4 = \varnothing$, $|\mathcal I_3| = p-1$, $|\mathcal I_4| = m-p-1$. For definiteness, we assume that $3 \in \mathcal I_3$ and $4 \in \mathcal I_4$. \end{thm} Theorem~\ref{thm:asympt} can be proved similarly to \cite[Theorem~1]{Bond17-mixed}. 
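For the unperturbed problem ($\sigma_j \equiv 0$) the leading terms of \eqref{asymptrho} are exact, which gives a quick numerical sanity check (our own illustration, not part of the paper). With zero potentials, $C_j(\pi, \lambda) = \cos\rho\pi$ and $S_j(\pi, \lambda) = \sin\rho\pi/\rho$, and one checks directly that the characteristic function factors as $\cos^{p-1}\rho\pi \, (\sin\rho\pi/\rho)^{m-p-1} \, [(m-p)\cos^2\rho\pi - p\sin^2\rho\pi]$; the bracket vanishes precisely when $\cos^2\rho\pi = p/m$, i.e. at the branches $\rho = n - 1 + \alpha/\pi$ and $\rho = n - \alpha/\pi$:

```python
import numpy as np

# Example graph parameters (arbitrary, subject to 2 <= p <= m - 2).
m, p = 5, 2
alpha = np.arccos(np.sqrt(p / m))  # alpha = arccos(sqrt(p/m)) as in the theorem

def bracket(rho):
    # The nontrivial factor of the zero-potential characteristic function.
    return (m - p) * np.cos(rho * np.pi) ** 2 - p * np.sin(rho * np.pi) ** 2

for n in range(1, 6):
    assert abs(bracket(n - 1 + alpha / np.pi)) < 1e-12  # branch rho_{n1}
    assert abs(bracket(n - alpha / np.pi)) < 1e-12      # branch rho_{n2}
```

The remaining factors $\cos^{p-1}\rho\pi$ and $(\sin\rho\pi/\rho)^{m-p-1}$ account for the branches $\rho_{nk} = n - \tfrac12$, $k \in \mathcal I_3$, and $\rho_{nk} = n$, $k \in \mathcal I_4$, with the multiplicities $p-1$ and $m-p-1$ stated in the theorem.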
In the papers \cite{Bond17-mixed, Bond17}, we started to investigate the so-called {\it partial inverse problems} on graphs. Our research was motivated by the paper \cite{Yang10} by C.-F. Yang, who has shown that the (regular) potential of the Sturm-Liouville operator on one edge of the star-shaped graph is uniquely specified by a fractional part of the spectrum, if the potentials on all the other edges are given. In the papers \cite{Bond17-mixed, Bond17}, we have developed a constructive method for the solution of such 1-edge partial inverse problems. The method is based on the Riesz-basis property of some systems of vector functions, and allows one to establish the local solvability of the partial inverse problems and the stability of their solutions. Note that the partial inverse problems on graphs generalize the Hochstadt-Lieberman problem on a finite interval \cite{HM04-half, HL78}. In this paper, we demonstrate that the approach of \cite{Bond17-mixed, Bond17} can be applied to operators with singular potentials. Moreover, in contrast to the previous papers, we study a 2-edge inverse problem, in which the potentials on two edges are unknown. In this case, one spectrum is not sufficient for recovering both potentials, so we use a part of the spectrum of the boundary value problem $L$ and a part of the spectrum of $L_0$. We prove the uniqueness theorem and provide a constructive algorithm for the solution of the 2-edge inverse problem. The most challenging part of the research is the analysis of the Riesz-basis property for special systems of functions (see Appendix A). Let us proceed to the problem formulation. Denote by $C_j(x_j, \lambda)$, $j = \overline{1,p + 1}$, and $S_j(x_j, \lambda)$, $j = \overline{p+1, m}$ the solutions of equations \eqref{eqv} under the initial conditions \begin{equation} \label{init} C_j(0, \lambda) = 1, \: C_j^{[1]}(0, \lambda) = 0, \quad S_j(0, \lambda) = 0, \: S_j^{[1]}(0, \lambda) = 1. 
\end{equation} Consider a sequence $\{ \lambda_{nk} \}_{n \in \mathbb N, \, k = \overline{1, 4}}$ of eigenvalues of the problem $L$, satisfying \eqref{asymptrho}, and a sequence $\{ \mu_{nk} \}_{n \in \mathbb N, \, k = 1, 2}$ of eigenvalues of the problem $L_0$, satisfying the following asymptotic relations \begin{equation} \label{asymptmu} \arraycolsep=1.4pt\def\arraystretch{2.2} \left. \begin{array}{ll} \sqrt{\mu_{n1}} & = n - 1 + \dfrac{\alpha_1}{\pi} + \varkappa_n, \\ \sqrt{\mu_{n2}} & = n - \dfrac{\alpha_1}{\pi} + \varkappa_n, \\ \end{array} \right\} \end{equation} where $\alpha_1 = \arccos \sqrt{\frac{p+1}{m}}$. Further, we suppose that the following {\bf assumptions} hold. ($A_1$) $C_j(\pi, \lambda_{nk}) \ne 0$, $j = \overline{1, p}$, and $S_j(\pi, \lambda_{nk}) \ne 0$, $j = \overline{p+1, m}$, for all $n \in \mathbb N$, $k = \overline{1, 4}$. ($A_2$) $C_j(\pi, \mu_{nk}) \ne 0$, $j = \overline{1, p + 1}$, and $S_j(\pi, \mu_{nk}) \ne 0$, $j = \overline{p+1, m}$, for all $n \in \mathbb N$, $k = \overline{1, 2}$. The paper is devoted to the following 2-edge partial inverse problem. {\bf IP.} {\it Given the potentials $\{ \sigma_j \}_{j = \overline{1, m} \backslash \{ 1, p+1 \}}$ and the eigenvalues $\{ \lambda_{nk} \}_{n \in \mathbb N, \, k = \overline{1, 4}}$, $\{ \mu_{nk} \}_{n \in \mathbb N, \, k = 1, 2}$, find the potentials $\sigma_1$ and $\sigma_{p+1}$.} The paper is organized as follows. {\it Section~2} contains some preliminaries. In {\it Section~3}, we prove the uniqueness theorem for IP. In {\it Section~4}, the constructive procedure for the solution of IP is developed. {\it Appendix A} is devoted to the main technical part of the paper, where we investigate the Riesz-basis property for special systems of functions. In {\it Appendix B}, we provide auxiliary results concerning entire functions constructed from their zeros. Throughout the paper, we use the following notation. \begin{itemize} \item $\rho = \sqrt \lambda$, $\mbox{Re}\, \rho \ge 0$. 
\item $B_{2, a}$ is the Paley-Wiener class of entire functions of exponential type not greater than $a$, belonging to $L_2(\mathbb R)$. \item $\mathbb N_0 = \mathbb N \cup \{ 0 \}$. \item The symbol $C$ stands for different constants, independent of $x$, $\lambda$, etc. \end{itemize} {\large \bf 2. Preliminaries} The eigenvalues of the problem $L$ coincide with the zeros of {\it the characteristic function} \begin{equation} \label{defDelta} \Delta(\lambda) = \sum_{j = 1}^p C^{[1]}_j(\pi, \lambda) \prod_{\substack{i = 1 \\ i \ne j} }^p C_i(\pi, \lambda) \prod_{k = p+1}^m S_k(\pi, \lambda) + \sum_{j = p+1}^m S_j^{[1]}(\pi, \lambda) \prod_{i = 1}^p C_i(\pi, \lambda) \prod_{\substack{k = p+1 \\ k \ne j}}^m S_k(\pi, \lambda). \end{equation} Let $L_j$ be the boundary value problem for the Sturm-Liouville equation \eqref{eqv} for each fixed $j = \overline{1, m}$ with the boundary conditions $y_j^{[1]}(0) = 0$, $y_j(\pi) = 0$ for $j = \overline{1, p}$, and $y_j(0) = y_j(\pi) = 0$ for $j = \overline{p+1, m}$. Denote by $M_j(\lambda)$ the {\it Weyl functions} of the problems $L_j$: \begin{equation} \label{defM} M_j(\lambda) := -\frac{C_j^{[1]}(\pi, \lambda)}{C_j(\pi, \lambda)}, \: j = \overline{1, p}, \quad M_j(\lambda) := -\frac{S_j^{[1]}(\pi, \lambda)}{S_j(\pi, \lambda)}, \: j = \overline{p+1, m}. \end{equation} Weyl functions and their generalizations are natural spectral characteristics for different classes of differential operators (see \cite{FIY08, Mar77, FY01}). For each fixed $j = \overline{1, m}$, the potential $\sigma_j$ can be uniquely recovered from its Weyl function $M_j(\lambda)$ (see \cite{FIY08}). Using \eqref{defDelta} and \eqref{defM}, one can easily derive the relation \begin{equation} \label{sumM} \sum_{j = 1}^m M_j(\lambda) = -\frac{\Delta(\lambda)}{\prod\limits_{j = 1}^p C_j(\pi, \lambda) \prod\limits_{j = p+1}^m S_j(\pi, \lambda)}. 
\end{equation} Taking the assumption ($A_1$) into account, we obtain from \eqref{sumM}: \begin{equation} \label{defg} M_1(\lambda_{nk}) + M_{p+1}(\lambda_{nk}) = - \sum_{\substack{j = 2 \\ j \ne p+1}}^m M_j(\lambda_{nk}) =: g_{nk}, \quad n \in \mathbb N, \: k = \overline{1, 4}. \end{equation} It follows from \eqref{defM} that \begin{equation} \label{sumMfrac} M_1(\lambda) + M_{p+1}(\lambda) = \frac{D_1(\lambda)}{D_2(\lambda)}, \end{equation} where \begin{equation} \label{defD} D_1(\lambda) = - (C_1^{[1]}(\pi, \lambda) S_{p+1}(\pi, \lambda) + C_1(\pi, \lambda) S^{[1]}_{p+1}(\pi, \lambda)), \quad D_2(\lambda) = C_1(\pi, \lambda) S_{p+1}(\pi, \lambda). \end{equation} \begin{lem} \label{lem:asymptD} The following relations hold \begin{equation} \label{asymptD} D_1(\lambda) = -\left( \cos 2 \rho \pi + \int_0^{2 \pi} N(t) \cos \rho t \, dt \right), \quad D_2(\lambda) = \frac{\sin 2 \rho \pi}{2 \rho} + \frac{1}{\rho}\int_0^{2 \pi} K(t) \sin \rho t \, dt, \end{equation} where $N$ and $K$ are real-valued functions from $L_2(0, 2 \pi)$. \end{lem} \begin{proof} Using the transformation operators \cite{HM04-transform}, one can obtain the following relations (see \cite{HM03, HM04-2spectra}): \begin{equation} \label{intCS} \arraycolsep=1.4pt\def\arraystretch{2.2} \left. \begin{array}{ll} C_1(\pi, \lambda) & = \cos \rho \pi + \displaystyle\int_0^{\pi} K_1(t) \cos \rho t \, dt, \\ C_1^{[1]}(\pi, \lambda) & = - \rho \sin \rho \pi + \rho \displaystyle\int_0^{\pi} N_1(t) \sin \rho t \, dt + C_1^{[1]}(\pi, 0), \\ S_{p+1}(\pi, \lambda) & = \dfrac{\sin \rho \pi}{\rho} + \dfrac{1}{\rho} \displaystyle\int_0^{\pi} K_{p+1}(t) \sin \rho t \, dt, \\ S_{p+1}^{[1]}(\pi, \lambda) & = \cos \rho \pi + \displaystyle\int_0^{\pi} N_{p+1}(t) \cos \rho t \, dt, \end{array} \right\} \end{equation} where $K_j, N_j \in L_2(0, \pi)$, $j \in \{1, p+1\}$.
Substituting these relations into \eqref{defD}, we get $$ D_1(\lambda) = \sin^2 \rho \pi - \cos^2 \rho \pi + F_1(\rho), \quad D_2(\lambda) = \frac{\cos \rho \pi \sin \rho \pi}{\rho} + \frac{1}{\rho} F_2(\rho), $$ where $F_1, F_2 \in B_{2, 2 \pi}$, $F_1$ is even and $F_2$ is odd. Therefore, they can be represented in the form $$ F_1(\rho) = \int_0^{2 \pi} N(t) \cos \rho t \, dt, \quad F_2(\rho) = \int_0^{2 \pi} K(t) \sin \rho t \, dt, \quad N, K \in L_2(0, 2 \pi). $$ Thus, we arrive at \eqref{asymptD}. \end{proof} Now let us study the boundary value problem $L_0$ and its eigenvalues $\{ \mu_{nk} \}_{n \in \mathbb N, \, k = 1, 2}$. Introduce the Weyl function $M^N_{p+1}(\lambda) = -\dfrac{C_{p+1}^{[1]}(\pi, \lambda)}{C_{p+1}(\pi, \lambda)}$. Similarly to \eqref{defg}, we obtain the following relation under the assumption ($A_2$): \begin{equation} \label{defhN} M_1(\mu_{nk}) + M_{p+1}^N (\mu_{nk}) = - \sum_{\substack{j = 2 \\ j \ne p+1}}^m M_j(\mu_{nk}) =: h_{nk}^N, \quad n \in \mathbb N, \: k = 1, 2. \end{equation} Denote \begin{equation} \label{defh} M_1(\mu_{nk}) + M_{p+1}(\mu_{nk}) =: h_{nk}, \quad n \in \mathbb N, \, k = 1, 2. \end{equation} Then $$ h_{nk}^N - h_{nk} = M_{p+1}^N (\mu_{nk}) - M_{p+1}(\mu_{nk}) = \frac{S_{p+1}^{[1]}(\pi, \mu_{nk}) C_{p+1}(\pi, \mu_{nk}) - C^{[1]}_{p+1}(\pi, \mu_{nk}) S_{p+1}(\pi, \mu_{nk})}{C_{p+1}(\pi, \mu_{nk}) S_{p+1}(\pi, \mu_{nk})}. $$ Using \eqref{eqv} and \eqref{init}, one can easily show that $S_{p+1}^{[1]}(x, \lambda) C_{p+1}(x, \lambda) - C^{[1]}_{p+1}(x, \lambda) S_{p+1}(x, \lambda) \equiv 1$ for all $x \in (0, \pi)$, $\lambda \in \mathbb C$: the left-hand side is the Wronskian of the solutions $C_{p+1}$ and $S_{p+1}$, which is constant in $x$ by \eqref{eqv} and equals $1$ at $x = 0$ in view of the initial conditions \eqref{init}. Thus, we get \begin{equation} \label{CS} C_{p+1}(\pi, \mu_{nk}) S_{p+1}(\pi, \mu_{nk}) = \frac{1}{h_{nk}^N - h_{nk}}, \quad n \in \mathbb N, \, k = 1, 2. \end{equation} {\large \bf 3. Uniqueness theorem} Together with $L$ and $L_0$, consider other boundary value problems $\tilde L$ and $\tilde L_0$ of the same form, but with different potentials $\{ \tilde \sigma_j \}_{j = 1}^m$.
The values of $m$ and $p$ remain the same. We agree that if a certain symbol $\gamma$ denotes an object related to $L$ or $L_0$, then the corresponding symbol $\tilde \gamma$ with tilde denotes the analogous object related to $\tilde L$ or $\tilde L_0$. \begin{lem} \label{lem:uniqM} Let the problems $L$ and $\tilde L$ satisfy the assumption ($A_1$), and let $\sigma_j = \tilde \sigma_j$, $j = \overline{1,m}\backslash \{ 1, p+1 \}$, $\lambda_{nk} = \tilde \lambda_{nk}$, $n \in \mathbb N$, $k = \overline{1, 4}$. Then $M_1(\lambda) + M_{p + 1}(\lambda) \equiv \tilde M_1(\lambda) + \tilde M_{p+1}(\lambda)$. \end{lem} \begin{proof} The relation \eqref{defg} implies $$ M_1(\lambda_{nk}) + M_{p+1}(\lambda_{nk}) = \tilde M_1(\lambda_{nk}) + \tilde M_{p+1}(\lambda_{nk}), \quad n \in \mathbb N, \quad k = \overline{1, 4}. $$ Taking the relation \eqref{sumMfrac} into account, we get $$ D_1(\lambda_{nk}) \tilde D_2(\lambda_{nk}) - \tilde D_1(\lambda_{nk}) D_2(\lambda_{nk}) = 0. $$ Thus, the function $$ H(\lambda) = D_1(\lambda) \tilde D_2(\lambda) - \tilde D_1(\lambda) D_2(\lambda) $$ has zeros at the points $\{ \lambda_{nk} \}_{n \in \mathbb N, \, k = \overline{1, 4}}$. Construct the entire function $$ P(\lambda) := \prod_{k = 1}^4 \prod_{n = 1}^{\infty} \left(1 - \frac{\lambda}{\lambda_{nk}} \right). $$ (The case $\lambda_{nk} = 0$ requires minor modifications). Obviously, $\dfrac{H(\lambda)}{P(\lambda)}$ is an entire function of order not greater than $\frac{1}{2}$. According to Lemma~\ref{lem:asymptD} and Corollary~\ref{cor:prodla} from Appendix~B, the estimate $\left|\dfrac{H(\lambda)}{P(\lambda)} \right| \le C$ holds for $\lambda = \rho^2$, $\varepsilon < \arg \rho < \pi - \varepsilon$. Applying Phragmen-Lindel\"of's theorem \cite{BFY} and Liouville's theorem, we conclude that $H(\lambda) \equiv C P(\lambda)$.
By virtue of Lemma~\ref{lem:asymptD}, the function $\rho H(\rho^2)$ belongs to the Paley-Wiener class $B_{2, 4 \pi}$ as a function of $\rho$, but $\rho P(\rho^2) \not \in B_{2, 4 \pi}$ (see \eqref{reprP}). Consequently, $C = 0$ and $H(\lambda) \equiv 0$. Thus, $\dfrac{D_1(\lambda)}{D_2(\lambda)} \equiv \dfrac{\tilde D_1(\lambda)}{\tilde D_2(\lambda)}$, and the lemma is proved. \end{proof} \begin{remark} If, together with the potentials $\{ \sigma_j \}_{j = \overline{1, m} \backslash \{ 1, p + 1\}}$, more eigenvalues of $L$ are given than the collection $\{ \lambda_{nk} \}_{n \in \mathbb N, \, k = \overline{1, 4}}$ contains, we still cannot obtain more information than the sum of the Weyl functions $M_1(\lambda) + M_{p+1}(\lambda)$. Some additional data are needed in order to ``separate'' the potentials $\sigma_1$ and $\sigma_{p+1}$. \end{remark} \begin{thm} \label{thm:uniq} Let $\sigma_j = \tilde \sigma_j$, $j = \overline{1,m}\backslash \{ 1, p+1 \}$, $\lambda_{nk} = \tilde \lambda_{nk}$ for $n \in \mathbb N$, $k = \overline{1, 4}$, and $\mu_{nk} = \tilde \mu_{nk}$ for $n \in \mathbb N$, $k = 1, 2$. Assume that ($A_1$), ($A_2$) hold for the problems $L$, $L_0$, $\tilde L$, $\tilde L_0$. Then $\sigma_1 = \tilde \sigma_1$ and $\sigma_{p+1} = \tilde \sigma_{p+1}$ in $L_2(0, \pi)$. Thus, the solution of IP is unique. \end{thm} \begin{proof} By virtue of Lemma~\ref{lem:uniqM}, \begin{equation} \label{sumMeq} M_1(\lambda) + M_{p+1}(\lambda) = \tilde M_1(\lambda) + \tilde M_{p+1}(\lambda). \end{equation} The relations \eqref{defhN}, \eqref{defh} and similar relations for $\tilde L$ and $\tilde L_0$ imply $h_{nk} = \tilde h_{nk}$, $h_{nk}^N = \tilde h_{nk}^N$ for $n \in \mathbb N$, $k = 1, 2$. Taking \eqref{CS} into account, we conclude that the entire function $$ H(\lambda) := C_{p + 1}(\pi, \lambda) S_{p+1}(\pi, \lambda) - \tilde C_{p+1}(\pi, \lambda) \tilde S_{p+1}(\pi, \lambda) $$ has zeros $\{ \mu_{nk} \}_{n \in \mathbb N, \, k = 1, 2}$.
Similarly to \eqref{intCS}, one can derive the relations \begin{equation} \label{intCS2} \arraycolsep=1.4pt\def2.2{2.2} \left. \begin{array}{ll} C_{p + 1}(\pi, \lambda) & = \cos \rho \pi + \displaystyle\int_0^{\pi} T_{p+1}(t) \cos \rho t \, dt, \\ S_{p + 1}(\pi, \lambda) & = \dfrac{\sin \rho \pi}{\rho} + \dfrac{1}{\rho} \displaystyle\int_0^{\pi} K_{p+1}(t) \sin \rho t \, dt, \end{array} \right\} \end{equation} where $T_{p+1}, K_{p+1} \in L_2(0, \pi)$. Hence $|H(\lambda)| \le C |\rho|^{-1} \exp(2 |\mbox{Im}\, \rho| \pi)$ for $|\rho| \ge \rho^* > 0$. Construct the function $$ P(\lambda) := \prod_{k = 1}^2 \prod_{n = 1}^{\infty} \left( 1 - \frac{\lambda}{\mu_{nk}}\right). $$ (The case $\mu_{nk} = 0$ requires minor changes). In view of the asymptotics \eqref{asymptmu}, one can apply Corollary~\ref{cor:shift} from Appendix~B to $P(\lambda)$. Consequently, the entire function $\dfrac{H(\lambda)}{P(\lambda)}$ admits the estimate $\left| \dfrac{H(\lambda)}{P(\lambda)}\right| \le \dfrac{C}{|\rho|}$ for $\lambda = \rho^2$, $\varepsilon < \arg \rho < \pi - \varepsilon$, $|\rho| > \rho^*$ for some $\varepsilon > 0$ and $\rho^* > 0$. By Phragmen-Lindel\"of's and Liouville's theorems, we get $H(\lambda) \equiv 0$. Hence $C_{p+1}(\pi, \lambda) S_{p+1}(\pi, \lambda) \equiv \tilde C_{p+1}(\pi, \lambda) \tilde S_{p+1} (\pi, \lambda)$. The functions $C_{p + 1}(\pi, \lambda)$ and $S_{p + 1}(\pi, \lambda)$ have real zeros $\{ \nu_n \}_{n \in \mathbb N_0}$ and $\{ \theta_n \}_{n \in \mathbb N}$, which interlace~\cite{HM04-2spectra}: \begin{equation} \label{interlace} \nu_0 < \theta_1 < \nu_1 < \theta_2 < \nu_2 < \dots \end{equation} The same assertion is valid for $\tilde C_{p + 1}(\pi, \lambda)$ and $\tilde S_{p + 1}(\pi, \lambda)$. Consequently, $\nu_n = \tilde \nu_n$, for all $n \in \mathbb N_0$ and $\theta_n = \tilde \theta_n$ for $n \in \mathbb N$. 
It has been proved in \cite{HM04-2spectra} that the two spectra $\{ \nu_n \}_{n \in \mathbb N_0}$ and $\{ \theta_n \}_{n \in \mathbb N}$ determine the potential $\sigma_{p + 1}$ uniquely. Hence $M_{p + 1}(\lambda) \equiv \tilde M_{p + 1}(\lambda)$. Together with \eqref{sumMeq}, this yields $M_1(\lambda) \equiv \tilde M_1(\lambda)$. The Weyl function $M_1(\lambda)$ determines the potential $\sigma_1$ uniquely (see \cite{FIY08}). Thus, $\sigma_1 = \tilde \sigma_1$ and $\sigma_{p + 1} = \tilde \sigma_{p + 1}$ in $L_2(0, \pi)$. \end{proof} {\large \bf 4. Solution of IP} In this section, we develop a constructive algorithm for the solution of IP. First, we show that, using the eigenvalues $\{ \lambda_{nk} \}_{n \in \mathbb N, \, k = \overline{1, 4}}$, one can obtain the coefficients of some vector function $f(t)$ with respect to a specially constructed Riesz basis. Recovering $f(t)$ from its coefficients, we can find the sum $M_1(\lambda) + M_{p + 1}(\lambda)$. Then we add the part of the second spectrum $\{ \mu_{nk} \}_{n \in \mathbb N, \, k = 1, 2}$ and find the potentials $\sigma_1$ and $\sigma_{p+1}$. Substituting \eqref{sumMfrac} and \eqref{asymptD} into \eqref{defg}, we obtain \begin{equation} \label{NK} \frac{\rho_{nk}}{g_{nk}} \int_0^{2 \pi} N(t) \cos \rho_{nk} t \, dt + \int_0^{2 \pi} K(t) \sin \rho_{nk} t \, dt = f_{nk}, \quad n \in \mathbb N, \: k = \overline{1, 4}, \end{equation} \begin{equation} \label{deff} f_{nk} := -\frac{\rho_{nk}}{g_{nk}}\cos 2 \rho_{nk} \pi - \frac{1}{2} \sin 2 \rho_{nk} \pi. \end{equation} For simplicity, we assume that ($A_3$) the numbers $\{ \lambda_{nk} \}_{n \in \mathbb N, \, k = \overline{1, 4}}$ are distinct and positive; ($A_4$) $g_{nk} \ne 0$, $n \in \mathbb N$, $k = \overline{1, 4}$. These assumptions are not restrictive. The case of multiple eigenvalues was discussed in \cite{Bond17}, while the other conditions can be easily achieved by a shift $q_j \to q_j + C$, $j = \overline{1, m}$.
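For completeness, we recall the standard observation behind this shift (written here, for illustration, for the equation in the form $-y_j'' + q_j y_j = \lambda y_j$): adding a constant to every potential translates all the spectra simultaneously, since
\[
-y_j'' + (q_j(x) + C) y_j = \lambda y_j
\quad \Longleftrightarrow \quad
-y_j'' + q_j(x) y_j = (\lambda - C) y_j.
\]
Hence the shift $q_j \to q_j + C$, $j = \overline{1, m}$, replaces the eigenvalues by $\lambda_{nk} + C$ and $\mu_{nk} + C$, and for $C$ sufficiently large all of them become positive, while the solution of IP for the shifted problem immediately yields the solution of the original one.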
Denote \begin{equation} \label{defv} f(t) = \begin{bmatrix} N(t) \\ K(t) \end{bmatrix}, \quad v_{nk}(t) = \begin{bmatrix} \frac{\rho_{nk}}{g_{nk}}\cos \rho_{nk} t \\ \sin \rho_{nk} t \end{bmatrix}, \quad n \in \mathbb N, \: k = \overline{1, 4}. \end{equation} Consider the real Hilbert space $\mathcal{H} := L_2(0, 2\pi) \oplus L_2(0, 2\pi)$. The scalar product and the norm in $\mathcal H$ are defined as follows $$ (g, h)_{\mathcal H} = \int_0^{2 \pi} ( g_1(t) h_1(t) + g_2(t) h_2(t)) \, dt, \quad \| g \|_{\mathcal H} = \sqrt{\int_0^{2 \pi} (g_1^2(t) + g_2^2(t)) \, dt}, $$ $$ g = \begin{bmatrix} g_1 \\ g_2 \end{bmatrix}, \quad h = \begin{bmatrix} h_1 \\ h_2 \end{bmatrix}, \quad g, h \in \mathcal H. $$ One can rewrite the relation \eqref{NK} in the form \begin{equation} \label{scal} (f, v_{nk})_{\mathcal H} = f_{nk}, \quad n \in \mathbb N, \quad k = \overline{1, 4}. \end{equation} In Appendix A, we will prove the following theorem. \begin{thm} \label{thm:Riesz} Under the assumptions ($A_1$), ($A_3$), ($A_4$), the system $\{ v_{nk} \}_{n \in \mathbb N, \, k = \overline{1, 4}}$ is a Riesz basis in $\mathcal H$. \end{thm} In view of Theorem~\ref{thm:Riesz} and the relation \eqref{scal}, the numbers $f_{nk}$ are the coordinates of the vector function $f$ with respect to the Riesz basis, biorthonormal to $\{ v_{nk} \}_{n \in \mathbb N, \, k = \overline{1, 4}}$. Given $\{ v_{nk} \}_{n \in \mathbb N, \, k = \overline{1, 4}}$ and $\{ f_{nk} \}_{n \in \mathbb N, \, k = \overline{1, 4}}$, we can recover $f$ uniquely. Consequently, we know $N(t)$ and $K(t)$, and can find the sum $M_1(\lambda) + M_{p + 1}(\lambda)$ via \eqref{asymptD} and \eqref{sumMfrac}. Now given $\{ \mu_{nk} \}_{n \in \mathbb N, \, k = 1, 2}$, one can find $h_{nk}^N$ and $h_{nk}$ via \eqref{defhN} and \eqref{defh}, respectively. Consider the relation \eqref{CS}.
It follows from \eqref{intCS2} that \begin{equation} \label{prodCS} C_{p + 1}(\pi, \lambda) S_{p + 1}(\pi, \lambda) = \frac{\sin 2 \rho \pi}{2 \rho} + \frac{1}{\rho} \int_0^{2 \pi} T(t) \sin \rho t \, dt, \end{equation} where $T \in L_2(0, 2 \pi)$. Substituting \eqref{prodCS} into \eqref{CS}, we obtain $$ C_{p + 1}(\pi, \mu_{nk}) S_{p + 1}(\pi, \mu_{nk}) = \frac{\sin 2 \sqrt{\mu_{nk}}\pi}{2\sqrt{\mu_{nk}}} + \frac{1}{\sqrt{\mu_{nk}}} \int_0^{2\pi} T(t) \sin \sqrt{\mu_{nk}}t \, dt = \frac{1}{h_{nk}^N - h_{nk}} $$ for $n \in \mathbb N$, $k = 1, 2$. Then we derive the following system of equations \begin{equation} \label{sysT} \int_0^{2 \pi} T(t) \sin \sqrt{\mu_{nk}} t \, dt = \frac{\sqrt{\mu_{nk}}}{h_{nk}^N - h_{nk}} - \frac{1}{2} \sin 2 \sqrt{\mu_{nk}} \pi, \quad n \in \mathbb N, \: k = 1, 2. \end{equation} Impose an additional assumption: ($A_5$) the numbers $\{ \mu_{nk} \}_{n \in \mathbb N,\, k = 1, 2}$ are distinct and positive. The following theorem will be proved in Appendix A. \begin{thm} \label{thm:Riesz2} Under the assumptions ($A_2$), ($A_5$), the system $\{ \sin \sqrt{\mu_{nk}} t \}_{n \in \mathbb N, \, k = 1, 2}$ is a Riesz basis in $L_2(0, 2 \pi)$. \end{thm} Thus, one can solve the system \eqref{sysT} uniquely, recovering the function $T$ from its coefficients with respect to the Riesz basis. Then, using \eqref{prodCS}, one can find the product $C_{p+1}(\pi, \lambda) S_{p+1}(\pi, \lambda)$ and the interlacing zeros $\{ \nu_n \}_{n \in \mathbb N_0}$ and $\{ \theta_n \}_{n \in \mathbb N}$ of the entire functions $C_{p+1}(\pi, \lambda)$ and $S_{p+1}(\pi, \lambda)$, respectively. These data can be used for reconstruction of the potential $\sigma_{p+1}$. Summarizing the results of this section, we arrive at the following algorithm for the solution of IP.
{\bf Algorithm.} Let the potentials $\{ \sigma_j \}_{j = \overline{1, m} \backslash \{ 1, p + 1 \}}$ and the eigenvalues $\{ \lambda_{nk} \}_{n \in \mathbb N, \, k = \overline{1, 4}}$, $\{ \mu_{nk} \}_{n \in \mathbb N,\, k = 1, 2}$ be given, and let the assumptions ($A_1$)-($A_5$) be satisfied. \begin{enumerate} \item Using the potentials $\{ \sigma_j \}_{j = \overline{1, m} \backslash \{ 1, p + 1 \}}$, construct the Weyl functions $M_j(\lambda)$ for $j = \overline{1, m} \backslash \{ 1, p + 1 \}$ via \eqref{defM}. \item Construct the numbers $g_{nk}$, $f_{nk}$ and the vector functions $v_{nk}$ for $n \in \mathbb N$, $k = \overline{1, 4}$ via \eqref{defg}, \eqref{deff} and \eqref{defv}. \item According to \eqref{scal}, recover the vector function $f(t) = \begin{bmatrix} N(t) \\ K(t) \end{bmatrix}$ from its coordinates $f_{nk}$ with respect to the Riesz basis. \item Using $N(t)$ and $K(t)$, recover the sum $M_1(\lambda) + M_{p+1}(\lambda)$ via \eqref{sumMfrac}, \eqref{asymptD}. \item Find the numbers $h_{nk}^N$ and $h_{nk}$ for $n \in \mathbb N$, $k = 1, 2$, using \eqref{defhN} and \eqref{defh}. \item Construct the system \eqref{sysT} and find the function $T(t)$ from this system, recovering it from its coordinates with respect to the Riesz basis. \item Using $T(t)$, construct the product $C_{p+1}(\pi, \lambda) S_{p+1}(\pi, \lambda)$ via \eqref{prodCS}. \item Find the zeros of the product $C_{p+1}(\pi, \lambda) S_{p+1}(\pi, \lambda)$ and divide them into two sequences $\{ \nu_n \}_{n \in \mathbb N_0}$ and $\{ \theta_n \}_{n \in \mathbb N}$, interlacing according to \eqref{interlace}. \item Construct the potential $\sigma_{p+1}$ from the two spectra $\{ \nu_n \}_{n \in \mathbb N_0}$ and $\{ \theta_n \}_{n \in \mathbb N}$ (see \cite{HM04-2spectra}). \item Find $M_{p+1}(\lambda)$ from $\sigma_{p+1}$, and then $M_1(\lambda)$ from the sum $M_1(\lambda) + M_{p+1}(\lambda)$. \item Construct the potential $\sigma_1$ from the Weyl function $M_1(\lambda)$ (see \cite{FIY08, FY01}).
\end{enumerate} {\large \bf Appendix A. Riesz bases} The goal of this section is to prove the important Theorems~\ref{thm:Riesz} and~\ref{thm:Riesz2}. We start with the analysis of the auxiliary systems $\mathcal S := \{ \sin (n + \beta) t \}_{n \in \mathbb Z}$ and $\mathcal C := \{ \cos (n + \beta) t \}_{n \in \mathbb Z}$ in $L_2(0, 2 \pi)$. Here $\beta$ is an arbitrary number from $(0, \frac{1}{2})$. \begin{lem} The systems $\mathcal S$ and $\mathcal C$ are complete in $L_2(0, 2\pi)$. \end{lem} \begin{proof} Let us prove the assertion of the lemma for the system $\mathcal S$. The proof for $\mathcal C$ is similar. Suppose that, on the contrary, the system $\mathcal S$ is not complete. Then there exists a nonzero function $h$ from $L_2(0, 2 \pi)$, such that $$ \int_0^{2 \pi} h(t) \sin (n + \beta) t \, dt = 0, \quad n \in \mathbb Z. $$ Hence the odd entire function $$ H(\rho) := \int_0^{2 \pi} h(t) \sin \rho t \, dt $$ has zeros $\{ \pm (n + \beta) \}_{n \in \mathbb Z} \cup \{ 0 \}$, which coincide with the zeros of the function $D(\rho) := \rho (\cos^2 \rho \pi - \cos^2 \beta \pi)$. Clearly, the function $\dfrac{H(\rho)}{D(\rho)}$ is entire and $\dfrac{H(\rho)}{D(\rho)} = O(\rho^{-1})$ as $|\rho| \to \infty$. By virtue of Liouville's theorem, $H(\rho) \equiv 0$ and $h = 0$. The contradiction proves the lemma. \end{proof} \begin{lem} For an arbitrary sequence $\{ c_n \}_{n \in \mathbb Z}$ from $l_2$, the following estimates hold \begin{equation} \label{RBestS} \pi (1 - \cos 2 \beta \pi) \sum_{n = -\infty}^{\infty} c_n^2 \le \left\| \sum_{n = -\infty}^{\infty} c_n \sin (n + \beta) t \right\|_2^2 \le \pi (1 + \cos 2 \beta \pi) \sum_{n = -\infty}^{\infty} c_n^2, \end{equation} \begin{equation} \label{RBestC} \pi (1 - \cos 2 \beta \pi) \sum_{n = -\infty}^{\infty} c_n^2 \le \left\| \sum_{n = -\infty}^{\infty} c_n \cos (n + \beta) t \right\|_2^2 \le \pi (1 + \cos 2 \beta \pi) \sum_{n = -\infty}^{\infty} c_n^2.
\end{equation} Thus, the systems $\mathcal S$ and $\mathcal C$ are Riesz bases in $L_2(0, 2 \pi)$. \end{lem} \begin{proof} Without loss of generality, consider real sequences $\{ c_n \}_{n \in \mathbb Z}$. Similarly to the proof of \cite[Theorem 3.1]{Sedl03}, we derive \begin{multline*} \left\| \sum_{n = -\infty}^{\infty} c_n \sin (n + \beta) t \right\|_2^2 = \int_0^{2 \pi} \left( \sum_{n = -\infty}^{\infty} \sum_{k = -\infty}^{\infty} c_n c_k \sin (n + \beta) t \sin (k + \beta) t \right) \, dt \\ = \frac{1}{2} \int_0^{2 \pi} \left( \sum_{n = -\infty}^{\infty} \sum_{k = -\infty}^{\infty} c_n c_k (\cos ( n - k) t - \cos(n + k + 2 \beta) t) \right) \, dt \\ = \pi \sum_{n = -\infty}^{\infty} c_n^2 - \frac{1}{2} \sin 4 \beta \pi \sum_{n = -\infty}^{\infty} \sum_{k = -\infty}^{\infty} \frac{c_n c_k}{n + k + 2 \beta}. \end{multline*} Consider the bilinear form $$ A = \sum_{n = -\infty}^{\infty} \sum_{k = -\infty}^{\infty} a_{nk} c_n c_k, \quad a_{nk} = \frac{1}{n + k + 2 \beta}. $$ Let us calculate its norm (see \cite{HLP}): $$ B = \sum_{i = -\infty}^{\infty} \sum_{j = -\infty}^{\infty} b_{ij} x_i x_j, $$ \vspace*{-8mm} \begin{multline*} b_{ij} = \sum_{k = -\infty}^{\infty} a_{ik} a_{jk} = \sum_{k = -\infty}^{\infty} \frac{1}{(i + k + 2 \beta) (j + k + 2 \beta)} \\ = \frac{1}{j - i} \sum_{k = -\infty}^{\infty} \left( \frac{1}{i + k + 2 \beta} - \frac{1}{j + k + 2 \beta}\right) = 0, \quad i \ne j, \end{multline*} $$ b_{ii} = \sum_{k = -\infty}^{\infty} a_{ik}^2 = \sum_{n = -\infty}^{\infty} \frac{1}{(n + 2 \beta)^2} = \frac{\pi^2}{\sin^2 2 \beta \pi}. $$ Consequently, $$ B = \frac{\pi^2}{\sin^2 2 \beta \pi} \sum_{i = -\infty}^{\infty} x_i^2, \qquad |A| \le \frac{\pi}{\sin 2 \beta \pi} \sum_{n = -\infty}^{\infty} c_n^2.
$$ Finally, we obtain $$ \pi (1 - \cos 2 \beta \pi) \sum_{n = -\infty}^{\infty} c_n^2 \le \pi \sum_{n = -\infty}^{\infty} c_n^2 - \frac{1}{2} \sin 4 \pi \beta \cdot A \le \pi (1 + \cos 2 \beta \pi) \sum_{n = -\infty}^{\infty} c_n^2, $$ so we arrive at \eqref{RBestS}. The estimate \eqref{RBestC} can be proved similarly. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:Riesz2}] Since the eigenvalues $\{ \mu_{nk} \}_{n \in \mathbb N, \, k = 1, 2}$ satisfy the asymptotic formulas \eqref{asymptmu}, we have $$ \sin \sqrt{\mu_{n1}} t = \sin \left(n - 1 + \frac{\alpha_1}{\pi}\right) t + \varkappa_n, \quad \sin \sqrt{\mu_{n2}} t = \sin \left(n - \frac{\alpha_1}{\pi}\right) t + \varkappa_n. $$ Thus, the considered system $\{ \sin \sqrt{\mu_{nk}} t \}_{n \in \mathbb N, \, k = 1, 2}$ is $l_2$-close to the Riesz basis $\mathcal S$ for $\beta = \frac{\alpha_1}{\pi} \in \left(0, \frac{1}{2}\right)$. In order to prove the Riesz basis property of $\{ \sin \sqrt{\mu_{nk}} t \}_{n \in \mathbb N, \, k = 1, 2}$, it remains to show that this system is complete in $L_2(0, 2 \pi)$. Suppose that the contrary holds, i.e. there exists a function $h \ne 0$ from $L_2(0, 2 \pi)$, such that $$ \int_0^{2 \pi} h(t) \sin \sqrt{\mu_{nk}} t \, dt = 0, \quad n \in \mathbb N, \: k = 1, 2. $$ Consequently, the entire function $$ H(\lambda) := \frac{1}{\rho} \int_0^{2 \pi} h(t) \sin \rho t \, dt $$ has zeros $\{ \mu_{nk} \}_{n \in \mathbb N, \, k = 1, 2}$. Obviously, the estimate $|H(\lambda)| \le C |\rho|^{-1} \exp(2 |\mbox{Im}\, \rho| \pi)$ holds for $|\rho| \ge \rho^* > 0$. Further, one can repeat the arguments from the proof of Theorem~\ref{thm:uniq} and show that $H(\lambda) \equiv 0$. Thus, $h = 0$, and we arrive at a contradiction. Hence the system $\{ \sin \sqrt{\mu_{nk}} t \}_{n \in \mathbb N, \, k = 1, 2}$ is complete in $L_2(0, 2 \pi)$. \end{proof} Now we proceed to the system $\{ v_{nk} \}_{n \in \mathbb N, \, k = \overline{1, 4}}$.
Denote $$ v_{n1}^0(t) = \begin{bmatrix} -\frac{1}{2} \tan 2 \alpha \cos (n - 1 + \frac{\alpha}{\pi}) t \\ \sin (n - 1 + \frac{\alpha}{\pi} ) t \end{bmatrix}, \quad v_{n2}^0(t) = \begin{bmatrix} \frac{1}{2} \tan 2 \alpha \cos (n - \frac{\alpha}{\pi}) t \\ \sin (n - \frac{\alpha}{\pi}) t \end{bmatrix}, $$ $$ v_{n3}^0(t) = \begin{bmatrix} 0 \\ \sin (n - \frac{1}{2} ) t \end{bmatrix}, \quad v_{n4}^0(t) = \begin{bmatrix} 0 \\ \sin n t \end{bmatrix}, \quad n \in \mathbb N. $$ \begin{lem} \label{lem:estv} The sequence $\{ v_{nk} \}$ is $l_2$-close to the sequence $\{ v_{nk}^0 \}$ in $\mathcal H$, i.e. $$ \{ \| v_{nk} - v_{nk}^0 \| \}_{n \in \mathbb N, \, k = \overline{1, 4}} \in l_2. $$ \end{lem} \begin{proof} Using the relations \eqref{asymptrho}, \eqref{defg}, \eqref{sumMfrac} and \eqref{asymptD}, we obtain \begin{gather*} g_{n1}^{-1} = -\frac{1}{2 n} \tan 2 \alpha + \frac{\varkappa_n}{n}, \quad g_{n2}^{-1} = \frac{1}{2 n} \tan 2 \alpha + \frac{\varkappa_n}{n}, \\ g_{n3}^{-1} = \frac{\varkappa_n}{n}, \quad g_{n4}^{-1} = \frac{\varkappa_n}{n}, \quad n \in \mathbb N. \end{gather*} Substituting these estimates together with \eqref{asymptrho} into \eqref{defv}, we arrive at the assertion of the lemma. \end{proof} \begin{lem} \label{lem:Rieszv0} The system $\{ v_{nk}^0 \}_{n \in \mathbb N, \, k = \overline{1, 4}}$ is a Riesz basis in $\mathcal H$. \end{lem} \begin{proof} Let us construct a linear bounded operator $A \colon \mathcal H \to \mathcal H$ with a bounded inverse, such that the system $\{ A v_{nk}^0 \}_{n \in \mathbb N, \, k = \overline{1, 4}}$ is a Riesz basis. 
Put $$ A v = A \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} v_1 \\ v_2 + g \end{bmatrix}, \quad A^{-1} v = \begin{bmatrix} v_1 \\ v_2 - g \end{bmatrix}, $$ where $$ g(t) := 2 \cot 2 \alpha \sum_{n = -\infty}^{\infty} c_n \sin (n + \beta) t, \quad \beta := \frac{\alpha}{\pi}, $$ and $\{ c_n \}_{n \in \mathbb Z}$ are the coordinates of $v_1$ with respect to the Riesz basis $\mathcal C$: $$ v_1(t) = \sum_{n = -\infty}^{\infty} c_n \cos (n + \beta) t. $$ Using the estimates \eqref{RBestS} and \eqref{RBestC}, one can easily show that the operators $A$ and $A^{-1}$ are bounded in $\mathcal H$. Furthermore, we have \begin{gather*} (A v_{n1}^0)(t) = -\frac{1}{2} \tan 2 \alpha \begin{bmatrix} \cos (n - 1 + \beta) t \\ 0\end{bmatrix}, \quad (A v_{n2}^0)(t) = \frac{1}{2} \tan 2 \alpha \begin{bmatrix} \cos (- n + \beta) t \\ 0 \end{bmatrix}, \\ (A v_{n3}^0)(t) = \begin{bmatrix} 0 \\ \sin (n - \frac{1}{2}) t \end{bmatrix}, \quad (A v_{n4}^0)(t) = \begin{bmatrix} 0 \\ \sin n t \end{bmatrix}. \end{gather*} Since the systems $\mathcal C$ and $\{ \sin n t \}_{n \in \mathbb N} \cup \{ \sin (n - \frac{1}{2}) t \}_{n \in \mathbb N}$ are Riesz bases in $L_2(0, 2 \pi)$, the system $\{ A v_{nk}^0 \}_{n \in \mathbb N, \, k = \overline{1, 4}}$ is a Riesz basis in $\mathcal H$. \end{proof} \begin{lem} \label{lem:complete} Under the assumptions ($A_1$), ($A_3$) and ($A_4$), the system $\{ v_{nk} \}_{n \in \mathbb N, \, k = \overline{1, 4}}$ is complete in $\mathcal H$. \end{lem} \begin{proof} Suppose that functions $w_1, w_2 \in L_2(0, 2 \pi)$ are such that \begin{equation} \label{smeqw} \int_0^{2 \pi} \left(w_1(t) \frac{\rho_{nk}}{g_{nk}} \cos \rho_{nk} t + w_2(t) \sin \rho_{nk} t \right) \, dt = 0, \quad n \in \mathbb N, \quad k = \overline{1, 4}. \end{equation} Recall that $g_{nk} = M_1(\lambda_{nk}) + M_{p + 1}(\lambda_{nk})$ and $M_1(\lambda) + M_{p + 1}(\lambda) = \dfrac{D_1(\lambda)}{D_2(\lambda)}$.
In view of the assumptions ($A_3$) and ($A_4$), $\rho_{nk} \ne 0$ and $g_{nk} \ne 0$ for $n \in \mathbb N$, $k = \overline{1, 4}$. The assumption ($A_1$) together with \eqref{defD} implies $D_2(\lambda_{nk}) \ne 0$. Consequently, the entire function \begin{equation} \label{defW} W(\lambda) := \int_0^{2 \pi} \left( w_1(t) D_2(\lambda) \cos \rho t + w_2(t) D_1(\lambda) \frac{\sin \rho t}{\rho} \right) \, dt \end{equation} has zeros at the points $\{ \lambda_{nk} \}_{n \in \mathbb N, \, k = \overline{1, 4}}$. It follows from \eqref{asymptD} and \eqref{defW} that \begin{equation} \label{estW} W(\lambda) = O(|\rho|^{-1}\exp(4 |\mbox{Im}\,\rho|\pi)), \quad |\rho| \ge \rho^* > 0. \end{equation} Construct the function \begin{equation*} P(\lambda) := \prod_{k = 1}^4 \prod_{n = 1}^{\infty} \left( 1 - \frac{\lambda}{\lambda_{nk}} \right). \end{equation*} Note that $\lambda_{nk} \ne 0$ due to ($A_3$). Clearly, the function $\dfrac{W(\lambda)}{P(\lambda)}$ is entire. The estimate \eqref{estW} and Corollary~\ref{cor:prodla} from Appendix~B yield $\dfrac{W(\lambda)}{P(\lambda)} = O(1)$ for $\lambda = \rho^2$, $\varepsilon < \arg \rho < \pi - \varepsilon$. Applying Phragmen-Lindel\"of's and Liouville's theorems \cite{BFY}, we conclude that $W(\lambda) \equiv C P(\lambda)$. Using \eqref{defW}, one can easily show that $\rho W(\rho^2) \in B_{2, 4\pi}$ (as a function of $\rho$). However, the expression \eqref{reprP} implies $\rho P(\rho^2) \not \in B_{2, 4 \pi}$. Hence $C = 0$ and $W(\lambda) \equiv 0$. Let $\{ \tau_n \}_{n \in \mathbb N}$ be the sequence of zeros of the function $D_2(\lambda)$. Using \eqref{defD}, one can easily check that $D_1(\tau_n) \ne 0$. Consequently, since $W(\lambda) \equiv 0$, the function \begin{equation} \label{defH} H(\lambda) := \int_0^{2 \pi} w_2(t) \frac{\sin \rho t}{\rho} \, dt \end{equation} has zeros at the points $\{ \tau_n \}_{n \in \mathbb N}$ (if $\tau_n$ is a multiple zero of $D_2(\lambda)$, then it is also a multiple zero of $H(\lambda)$ with the same multiplicity).
Thus, the function $\dfrac{H(\lambda)}{D_2(\lambda)}$ is entire. It follows from \eqref{defH} and \eqref{asymptD} that $\dfrac{H(\lambda)}{D_2(\lambda)} = O(1)$ as $|\lambda| \to \infty$. By Liouville's theorem, $H(\lambda) \equiv C D_2(\lambda)$. However, $\rho H(\rho^2) \in B_{2, 2\pi}$ and $\rho D_2(\rho^2) \not \in B_{2, 2 \pi}$. Therefore $H(\lambda) \equiv 0$ and, consequently, $w_2 = 0$ in $L_2(0, 2 \pi)$. Using \eqref{defW} and the relation $W(\lambda) \equiv 0$, we conclude that $w_1 = 0$. In view of \eqref{smeqw}, the system $\{ v_{nk} \}_{n \in \mathbb N, \, k = \overline{1, 4}}$ is complete in $\mathcal H$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:Riesz}] The assertion of the theorem immediately follows from Lemmas~\ref{lem:estv},~\ref{lem:Rieszv0} and~\ref{lem:complete}. \end{proof} {\large \bf Appendix B. Entire functions} Here we discuss entire functions, constructed as infinite products from their zeros with a certain asymptotic behavior. We derive some relations, which can be used for the estimation of these functions. Our analysis is based on the following result. \begin{lem}[\cite{BB16}] \label{lem:prod} Let $$ \rho_n = n + \varkappa_n, \quad n \in \mathbb Z, $$ be arbitrary complex numbers, and $$ P(\rho) := \pi (\rho - \rho_0) \prod_{\substack{n = -\infty \\n \ne 0}}^{\infty} \frac{\rho_n - \rho}{n} \exp\left(\frac{\rho}{n}\right). $$ Then $P(\rho)$ can be represented in the form $$ P(\rho) = \sin \rho \pi + \int_0^{\pi} ( w_1(t) \sin \rho t + w_2(t) \cos \rho t) \, dt, $$ where $w_1, w_2 \in L_2(0, \pi)$. \end{lem} Lemma~\ref{lem:prod} has the following corollaries. We prove only Corollary~\ref{cor:shift}, since the proofs of Corollaries~\ref{cor:sin} and~\ref{cor:cos} exploit similar ideas.
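As a simple consistency check of Lemma~\ref{lem:prod}, consider the unperturbed case $\rho_n = n$, $n \in \mathbb Z$ (i.e. $\varkappa_n \equiv 0$, $\rho_0 = 0$). The exponential factors for the indices $n$ and $-n$ cancel each other, and the classical Euler product formula gives
\[
P(\rho) = \pi \rho \prod_{\substack{n = -\infty \\ n \ne 0}}^{\infty} \frac{n - \rho}{n} \exp\left( \frac{\rho}{n} \right) = \pi \rho \prod_{n = 1}^{\infty} \left( 1 - \frac{\rho^2}{n^2} \right) = \sin \rho \pi,
\]
so the representation of Lemma~\ref{lem:prod} holds with $w_1 = w_2 = 0$.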
\begin{cor} \label{cor:sin} The function $$ P(\lambda) := \prod_{n = 1}^{\infty} \left( 1 - \frac{\lambda}{\lambda_n}\right), $$ where $$ \lambda_n = \rho_n^2 \ne 0, \quad \rho_n = n + \varkappa_n, \quad n \in \mathbb N, $$ admits the representation $$ P(\lambda) = \frac{C \sin \rho \pi}{\rho} + \frac{1}{\rho} \int_0^{\pi} w(t) \sin \rho t \, dt, \quad w \in L_2(0, \pi). $$ \end{cor} \begin{cor} \label{cor:cos} The function $$ P(\lambda) := \prod_{n = 1}^{\infty} \left( 1 - \frac{\lambda}{\lambda_n}\right), $$ where $$ \lambda_n = \rho_n^2 \ne 0, \quad \rho_n = n - \frac{1}{2} + \varkappa_n, \quad n \in \mathbb N, $$ admits the representation $$ P(\lambda) = C \cos \rho \pi + \int_0^{\pi} w(t) \cos \rho t \, dt, \quad w \in L_2(0, \pi). $$ \end{cor} \begin{cor} \label{cor:shift} The function \begin{equation} \label{prodP} P(\lambda) := \prod_{n = 0}^{\infty} \left(1 - \frac{\lambda}{\lambda_n^+} \right) \prod_{n = 1}^{\infty} \left(1 - \frac{\lambda}{\lambda_n^-} \right), \end{equation} where \begin{align*} \lambda_n^+ & = (\rho_n^+)^2 \ne 0, \quad \rho_n^+ = n + a + \varkappa_n, \quad n \in \mathbb N_0, \\ \lambda_n^- & = (\rho_n^-)^2 \ne 0, \quad \rho_n^- = n - a + \varkappa_n, \quad n \in \mathbb N, \end{align*} admits the representation $$ P(\lambda) = C (\cos 2 \rho \pi - \cos 2 a \pi) + \int_0^{2 \pi} w(t) \cos \rho t\, dt, \quad w \in L_2(0, 2 \pi). $$ \end{cor} \begin{proof} Denote $\rho_{-n}^+ = -\rho_n^-$ for $n \in \mathbb N$ and $\rho_{-n}^- = - \rho_n^+$ for $n \in \mathbb N_0$.
Then \begin{equation} \label{smasympt} \rho_n^{\pm} = n \pm a + \varkappa_n, \quad n \in \mathbb Z, \end{equation} and the product \eqref{prodP} can be rewritten in the form $$ P(\lambda) = d^+(\rho) d^-(\rho), \quad d^{\pm}(\rho) := \left(1 - \frac{\rho}{\rho_0^{\pm}} \right)\prod_{\substack{n = -\infty \\ n \ne 0 }}^{\infty} \left( 1 - \frac{\rho}{\rho_n^{\pm}} \right) \exp\left( \frac{\rho}{\rho_n^{\pm}}\right) $$ Clearly, $d^+(\rho)$ and $d^-(\rho)$ are entire functions with the zeros $\{ \rho_n^+ \}_{n \in \mathbb Z}$ and $\{ \rho_n^- \}_{n \in \mathbb Z}$, respectively. Introduce the functions $$ \tilde d^{\pm}(\rho) := \pi (\rho_0^{\pm} - \rho) \prod_{\substack{n = -\infty \\ n \ne 0}}^{\infty} \frac{\rho_n^{\pm} - \rho}{n} \exp\left( \frac{\rho \mp a}{\rho_n^{\pm} \mp a}\right), $$ having the same zeros. For simplicity, we assume that $\rho_n^{\pm} \ne \pm a$. One can easily calculate \begin{equation} \label{fracd} \frac{d^{\pm}(\rho)}{\tilde d^{\pm}(\rho)} = \frac{1}{\pi \rho_0^{\pm}} \exp\left( \rho \sum_{\substack{n = -\infty \\ n \ne 0}}^{\infty} \frac{\mp a}{\rho_n^{\pm} (\rho_n^{\pm} \mp a)}\right) \prod_{\substack{n = -\infty \\ n \ne 0}}^{\infty} \frac{n}{\rho_n^{\pm}} \exp\left( \frac{\pm a}{\rho_n^{\pm} \mp a}\right). \end{equation} By virtue of the asymptotic formula \eqref{smasympt}, the sum and the product in \eqref{fracd} converge absolutely. The relation \eqref{fracd} yields $$ \frac{d^+(\rho) d^-(\rho)}{\tilde d^+(\rho) \tilde d^-(\rho)} = C. $$ By Lemma~\ref{lem:prod} $$ \tilde d^{\pm}(\rho \pm a) = \sin \rho \pi + \int_0^{\pi} w_1^{\pm}(t) \sin \rho t \, dt + \int_0^{\pi} w_2^{\pm}(t) \cos \rho t \, dt. 
$$
Consequently,
\begin{multline*}
P(\lambda) = d^+(\rho) d^-(\rho) = C \tilde d^+(\rho) \tilde d^-(\rho) = C \biggl( \sin (\rho - a) \pi + \int_0^{\pi} w_1^+(t) \sin (\rho - a) t \, dt \\
+ \int_0^{\pi} w_2^+(t) \cos(\rho - a) t\, dt \biggr) \biggl( \sin (\rho + a) \pi + \int_0^{\pi} w_1^-(t) \sin (\rho + a) t \, dt + \int_0^{\pi} w_2^-(t) \cos(\rho + a) t\, dt \biggr) \\
= 2 C (\cos 2 a \pi - \cos 2 \rho \pi) + F(\rho).
\end{multline*}
Clearly, $F \in B_{2, 2 \pi}$ and $F(\rho) = F(-\rho)$. Hence $F(\rho) = \displaystyle\int_0^{2 \pi} w(t) \cos \rho t \, dt$, where $w \in L_2(0, 2 \pi)$.
\end{proof}

Recall that $\{ \lambda_{nk} \}_{n \in \mathbb N, \, k = \overline{1, 4}}$ are the eigenvalues of the boundary value problem $L$, satisfying the asymptotic relations \eqref{asymptrho}. For simplicity, assume that $\lambda_{nk} \ne 0$. Summarizing the results of the previous corollaries, we obtain the following one.

\begin{cor} \label{cor:prodla}
The function
$$
P(\lambda) := \prod_{k = 1}^4 \prod_{n = 1}^{\infty} \left( 1 - \frac{\lambda}{\lambda_{nk}} \right)
$$
admits the representation
\begin{equation} \label{reprP}
P(\lambda) = \frac{C}{\rho} \sin \rho \pi \cos \rho \pi (\cos 2 \rho \pi - \cos 2 \alpha) + \frac{1}{\rho} \int_0^{4 \pi} w(t) \sin \rho t \, dt, \quad w \in L_2(0, 4 \pi).
\end{equation}
Moreover, the following estimate from below holds:
$$
|P(\rho^2)| \ge C |\rho|^{-1} \exp(4 |\mbox{Im}\, \rho| \pi), \quad \varepsilon < \arg \rho < \pi - \varepsilon, \quad |\rho| \ge \rho^*,
$$
for some positive $\varepsilon$ and $\rho^*$.
\end{cor}

{\bf Acknowledgment.} This work was supported in part by the Russian Federation President Grant MK-686.2017.1, by Grant 1.1660.2017/PCh of the Russian Ministry of Education and Science and by Grants 15-01-04864, 16-01-00015, 17-51-53180 of the Russian Foundation for Basic Research.

\noindent Natalia Pavlovna Bondarenko \\
1.
Department of Applied Mathematics, Samara National Research University, \\
34, Moskovskoye Shosse, Samara 443086, Russia, \\
2. Department of Mechanics and Mathematics, Saratov State University, \\
Astrakhanskaya 83, Saratov 410012, Russia, \\
e-mail: {\it [email protected]}

\end{document}
arXiv
Event Date and Location Summary Tanguy Grall (Cambridge) Tue. February 9th, 2021 Host: Kurt Hinterbichler Continue reading… Tanguy Grall (Cambridge) Shubham Maheshwari (Groningen) Tue. February 16th, 2021 Continue reading… Shubham Maheshwari (Groningen) Erik Shirokoff (UChicago) Tue. February 23rd, 2021 Host: John Ruhl Continue reading… Erik Shirokoff (UChicago) Ozenc Gungor (CWRU) Tue. March 2nd, 2021 Host: Glenn Starkman Continue reading… Ozenc Gungor (CWRU) Tim Tait (UC Irvine) Tue. March 9th, 2021 Host: Pavel / Alexis Continue reading… Tim Tait (UC Irvine) No Seminar Tue. March 16th, 2021 No classes or seminars Continue reading… No Seminar Hazel Mak (Brown University) Tue. March 23rd, 2021 Host: Klaountia Continue reading… Hazel Mak (Brown University) Delilah Gates (Harvard) Tue. March 30th, 2021 Continue reading… Delilah Gates (Harvard) Benjamin Grinstein (UCSD) Tue. April 6th, 2021 Host: Pavel/Alexis Continue reading… Benjamin Grinstein (UCSD) Klaountia Pasmatsiou (CWRU) Tue. April 13th, 2021 Continue reading… Klaountia Pasmatsiou (CWRU) Clara Murgui (Caltech) Tue. April 20th, 2021 Continue reading… Clara Murgui (Caltech) Don Scipione (ACMEX) Tue. April 27th, 2021 Host: Idit Zehavi Continue reading… Don Scipione (ACMEX) Event Date Summary Joachim Brod (University of Cincinnati) Tue. November 17th, 2020 Precision Standard-Model Prediction of epsilon_K The parameter epsilon_K describes CP violation in the neutral kaon system and is one of the most sensitive probes of new physics. The large uncertainties related to the charm-quark contribution to epsilon_K have so far prevented a reliable standard-model prediction. In this talk, I will review mixing in the neutral kaon system, and then show that CKM unitarity suggests a unique form of the weak effective Hamiltonian in which the short-distance theory uncertainty of the imaginary part is dramatically reduced. The uncertainty related to the charm-quark contribution is now at the percent level. 
Continue reading… Joachim Brod (University of Cincinnati) Kara Farnsworth (CWRU) Tue. November 10th, 2020 The Newman-Penrose Map and the Classical Double Copy Abstract: Double copy relations between gauge and gravitational theories, originally found in the context of string theory and scattering amplitudes, have recently been realized in a classical setting as maps between exact solutions of gauge theories and gravity. I will present a new map between a certain class of real, exact solutions of Einstein's equations and self-dual solutions of the flat-space vacuum Maxwell equations. This map, which we call the Newman-Penrose map, is well-defined even for non-vacuum, non-stationary spacetimes, providing a systematic framework for exploring gravity solutions in the context of the double copy that have not been previously studied in this setting. Continue reading… Kara Farnsworth (CWRU) Ravi Sheth (University of Pennsylvania) Tue. November 3rd, 2020 Energy as a guiding principle in nonlinear structure formation Abstract: One goal of studies of large scale structure formation is to understand why the dense, virialized clumps which host galaxies form where they do. In cold dark matter cosmologies, the late time field retains some memory of the initial conditions, which models of dark matter halo formation try to exploit. The simplest models are motivated by a spherical collapse calculation which dates back to the early 1970s. In the late 1980s, this approximation for the physics of collapse was coupled with the heuristic assumption that collapse occurs around regions that are maxima of the initial matter density fluctuation field. Continue reading… Ravi Sheth (University of Pennsylvania) Zach Weiner (University of Illinois) Tue. October 27th, 2020 Seeing the dark: gravitational relics of dark photon production Axion-like particles are a recurrent feature of models of early Universe phenomena, spanning inflation, dark matter, and solutions to the Hubble tension. 
The nonperturbative decay of axions into beyond the Standard Model photons is a generic feature of these models. I will present the complex nonperturbative and nonlinear dynamics of axion–gauge-field couplings, studied via numerical simulation. These scenarios result in a significant stochastic background of gravitational waves, which provides various means to rule out and constrain models. In the two examples I will present, the (over)production of GHz gravitational waves at preheating imposes the tightest constraints on the inflaton's axial coupling to gauge fields, Continue reading… Zach Weiner (University of Illinois) David Weinberg (Ohio State University and Institute for Advanced Study) Tue. October 20th, 2020 Decoding Chemical Evolution and Nucleosynthesis I will discuss insights from analytic and numerical models of galactic chemical evolution and observations of Milky Way elemental abundances from the Sloan Digital Sky Survey's APOGEE project. Under generic model assumptions, abundances and abundance ratios approach an equilibrium in which element production from nucleosynthesis is balanced by element depletion from star formation and outflows. The efficiency of outflows required to reproduce observed abundances is strongly degenerate with the uncertain overall scale of supernova yields. APOGEE observations show that the distributions of stars in (magnesium,iron,age)-space change steadily across the Milky Way disk, Continue reading… David Weinberg (Ohio State University and Institute for Advanced Study) Benjamin Elder (University of Hawaii) Tue. October 13th, 2020 Chameleon dark energy in the lab The accelerated expansion of the universe hints at the existence of a new light degree of freedom in the gravitational sector. Such a degree of freedom, generally taken to be a scalar, mediates a fifth force between matter particles. This property is in tension with existing tests of gravity, unless the fifth force is screened, i.e. 
it dynamically weakens in certain environments. A new generation of gravitational experiments, being performed in the laboratory, are designed to be sensitive to screened forces, and have made great headway towards detecting or ruling out screened forces over the past several years. Continue reading… Benjamin Elder (University of Hawaii) Chunshan Lin (Warsaw) Tue. October 6th, 2020 Is GR unique? Not sure. I will present an iterative Hamiltonian approach, to build up a gravity theory with all constraints being first class and thus possesses only 2 local degrees of freedom. The results are conjectural, rather than conclusive. If it is true, however, it implies GR may not be unique in the 4-dimensional space-time. If time permits, I will also briefly discuss the recently proposed 4D Einstein-Gauss-Bonnet gravity, which was another attempt of mine, yet probably unsuccessful one, along the line. Zoom meeting ID: 999 3023 4812 For the password to access the meeting please contact one of us: Kurt Hinterbichler: kjh92 Alexis Plascencia: adp110 Ellen Rabe: exr223 Idit Zehavi: ixz6 at case.edu Continue reading… Chunshan Lin (Warsaw) Anson Hook (Maryland) Tue. September 29th, 2020 A CMB Millikan Experiment with Cosmic Axiverse Strings We study axion strings of hyperlight axions coupled to photons. These axions strings produce a distinct quantized polarization rotation of CMB photons which is O(1%). As the CMB light passes many strings, this polarization rotation converts E-modes to B-modes and adds up like a random walk. Using numerical simulations we show that the expected size of the final result is well within the reach of current and future CMB experiments through the measurement of correlations of CMB B-modes with E- and T-modes. The quantized polarization rotation angle is topological in nature and its value depends only on the anomaly coefficient, Continue reading… Anson Hook (Maryland) Xiaoju Xu (CWRU) Tue. 
September 22nd, 2020 Halo and galaxy assembly bias Measuring galaxy clustering is an effective way to gain knowledge of galaxy formation and constraining cosmology. Cosmology determines dark matter halo population and clustering, and halo clustering and halo occupation determine the galaxy clustering. It is important to understand halo clustering and galaxy-halo connection to build halo occupation models. In N-body simulations, halo clustering is shown to depend not only on halo mass but also on secondary halo properties, which is called the halo assembly bias. However, traditional halo occupation models only consider halo mass dependence and ignore effects caused by secondary halo properties. Continue reading… Xiaoju Xu (CWRU) Gordan Krnjaic (Fermilab) Tue. September 15th, 2020 A Dark Matter Interpretation of Excesses in Multiple Direct Detection Experiments We present a novel unifying interpretation of excess event rates observed in several dark matter direct-detection experiments that utilize single-electron threshold semiconductor detectors. Despite their different locations, exposures, readout techniques, detector composition, and operating depths, these experiments all observe statistically significant excess event rates of ~10 Hz/kg. However, none of these persistent excesses has yet been reported as a dark matter signal because their common spectral shapes are inconsistent with dark matter particles scattering elastically off detector nuclei or electrons. We show that these results can be reconciled if the semiconductor detectors are seeing a collective inelastic process known as a plasmon. Continue reading… Gordan Krnjaic (Fermilab) Hooman Davoudiasl (Brookhaven) Tue. September 8th, 2020 Ultralight Fermionic Dark Matter Tremaine and Gunn argued long ago that fermionic dark matter lighter than a few hundred eV is not feasible, based on the Pauli exclusion principle. 
We highlight a simple way of evading this conclusion which can lead to various interesting consequences. In this scenario, a large number of fermionic species with quasi-degenerate masses and no couplings, other than gravitational, to the standard model are assumed. Nonetheless, we find that gravitational interactions can lead to constraints on the relevant parameter space, based on high energy data from the LHC and cosmic ray experiments, Continue reading… Hooman Davoudiasl (Brookhaven) Saurabh Kumar (CWRU) Tue. September 1st, 2020 Radiating Macroscopic Dark Matter Dark matter is believed to constitute about 5/6th of the matter in the universe, but its nature and interactions remain one of the great puzzles of fundamental physics. Despite extensive experimental efforts, there have been no widely believed detections of WIMPS, axions or any other physics Beyond the Standard Model (BSM) (except for neutrino oscillations, which are BSM principally by historical accident). The question then arises: could the Standard Model, the most accurate and extremely well-tested theory of all observed particles in nature, explain dark matter as well? Many models of exotic quark matter have been proposed, Continue reading… Saurabh Kumar (CWRU) Jagjit Singh Sidhu (CWRU) Tue. March 3rd, 2020 Charge Constraints of Macroscopic Dark Matter Macroscopic dark matter (macros) refers to a broad class of alternative candidates to particle dark matter with still unprobed regions of parameter space. Prior work on macros has considered elastic scattering to be the dominant energy transfer mechanism in deriving constraints on the abundance of macros for some range of masses and (geometric) cross-sections. However, macros with a significant amount of electric charge would, through Coulomb interactions, interact strongly enough to have produced observable signals on terrestrial, galactic and cosmological scales. 
We determine the expected phenomenological signals and constrain the corresponding regions of parameter space, Continue reading… Jagjit Singh Sidhu (CWRU) Shruti Paranjape (University of Michigan) Tue. February 25th, 2020 Born-Infeld Theory Beyond the Leading Order The modern approach to scattering amplitudes exploits the symmetries of effective field theories. In this talk, I will focus on Born-Infeld, a theory of non-linear electrodynamics that has a myriad of interesting properties: It can be obtained as the "double copy" of Yang-Mills and chiral perturbation theory and it is the supersymmetric truncation of low-energy brane dynamics. Born-Infeld theory also has a classical electromagnetic duality symmetry. I will discuss how one can use these nice properties to uniquely fix all tree-level amplitudes in the theory. At subleading order, I will address one-loop amplitudes and admissible higher derivative corrections to the Born-Infeld effective field theory. Continue reading… Shruti Paranjape (University of Michigan) Charlotte Sleight (IAS Princeton) Tue. February 11th, 2020 A Mellin Space Approach to Scattering in de Sitter Space Boundary correlators in (anti)-de Sitter space-times are notoriously difficult beasts to tame. In AdS, where such observables are equivalent to CFT correlation functions, recent years have seen significant progress in our understanding of their structure owing to the development of numerous systematic techniques, many of which have drawn inspiration from the successes and the strengths of the scattering amplitudes programme in flat space. In dS however, the problem is more complicated owing to the time-dependence of the background and it is unclear how consistent time evolution is encoded in spatial correlations on the boundary. Continue reading… Charlotte Sleight (IAS Princeton) Craig Hogan (University of Chicago) Tue. 
February 4th, 2020 Holographic Inflation: Symmetries in the relic pattern of primordial perturbations from a coherent quantum inflationary horizon A reconciliation of quantum mechanics with gravity might be achieved in a holographic theory of quantum gravity, based on coherent states of covariant causal structures. This talk will review the properties of quantum-gravitational perturbations generated during cosmic holographic inflation, in which the inflationary horizon is a coherent quantum object, like the horizon of a black hole. A new analysis of cosmic anisotropy will be described, which shows evidence for some of the new symmetries. Continue reading… Craig Hogan (University of Chicago) Matthew Digman (Ohio State University) Tue. January 28th, 2020 Not as big as a barn: Upper bounds on dark matter-nucleus cross sections Critical probes of dark matter come from tests of its elastic scattering with nuclei. The results are typically assumed to be model independent, meaning that the form of the potential need not be specified and that the cross sections on different nuclear targets can be simply related to the cross section on nucleons. For pointlike spin-independent scattering, the assumed scaling relation is σχA∝A2μ2AσχN∝A4σχN, where the A2 comes from coherence and the μ2A≃A2m2N from kinematics for mχ≫mA. Here we calculate where model independence ends, Continue reading… Matthew Digman (Ohio State University) Adi Nusser (Technion) Tue. January 14th, 2020 New and old probes of the structure of the evolved Universe The observed large scale distribution of galaxies and their peculiar motions (on top of the pure Hubble flow) are very well described in the framework of the standard Lambda Cold Dark Matter model. The model is founded on general relativity (GR) which in itself has recently gained substantial support by the detection of gravitational waves. Despite this success, observational data on large scales allow for deviations from the GR and the standard model. 
Any tiny deviation may have profound implications on fundamental physical theory of the Universe. Continue reading… Adi Nusser (Technion) Bira van Kolck (Institut de Physique Nucleaire d'Orsay and University of Arizona) Tue. December 10th, 2019 A New Leading Mechanism for Neutrinoless Double-Beta Decay … or how to attract the ire of the community. The neutrinoless double-beta decay of nuclei is essentially the only way to test lepton-number violation coming from the possible Majorana character of neutrinos. Tremendous effort is dedicated to its measurement and to reducing the theoretical uncertainty in the calculation of the nuclear matrix elements needed for its interpretation. Well, we increase the uncertainty. Continue reading… Bira van Kolck (Institut de Physique Nucleaire d'Orsay and University of Arizona) Roman Scoccimarro (NYU) Tue. November 26th, 2019 Bispectrum Bias Loops and Power Spectrum Covariance I will discuss recent progress in two topics in large-scale structure: 1) understanding galaxy bias beyond leading order in perturbation theory and its application to the bispectrum, and 2) how to model the covariance of the galaxy power spectrum multipoles analytically instead of using numerical simulations. Continue reading… Roman Scoccimarro (NYU) Garrett Goon (CMU) Tue. November 19th, 2019 Linking Corrections to Entropy and Extremality I will prove that the leading perturbative corrections to the entropy and extremality bounds of black holes are directly proportional to each other, generically. This fact is intimately related to the Weak Gravity Conjecture, as I will discuss. The proof is purely thermodynamic and applies to systems beyond the gravitational realm. Continue reading… Garrett Goon (CMU) Jesse Thaler (MIT) Fri. November 15th, 2019 Quantum Algorithms for Collider Physics As particle physics experiments continue to stretch the limits of classical computation, it is natural to ask about the potential future role of quantum computers. 
In this talk, I discuss the potential relevance of quantum algorithms for collider physics. I present a proof-of-concept study for "thrust", a well-known collider observable that has O(N^3) runtime for a collision involving N final-state particles. Thrust is a particularly interesting observable in this context, since it has two dual formulations, one which naturally maps to quantum annealing and one which naturally maps to Grover search. Continue reading… Jesse Thaler (MIT) Clara Murgui (IFIC, Valencia) Tue. November 5th, 2019 The QCD Axion and Unification The QCD axion is one of the most appealing candidates for the dark matter in the Universe. In this article, we discuss the possibility to predict the axion mass in the context of a simple renormalizable grand unified theory where the Peccei-Quinn scale is determined by the unification scale. In this framework, the axion mass is predicted to be in the range ma ≃ (3 − 13) × 10−9 eV. We study the axion phenomenology and find that the ABRACADABRA and CASPEr-Electric experiments will be able to fully probe this mass window. Continue reading… Clara Murgui (IFIC, Valencia) Juri Smirnov (Ohio State University) Tue. October 29th, 2019 Dark Matter Research with Bound Systems My discussion will rest on three pillars. The first is an overview of bound states in dark sectors, and their implications for dark matter phenomenology, mass predictions and dark matter model building. The second is an exploration of new experimental techniques, which are needed to search for dark matter, which resides in a sector containing bound states. Finally, I will discuss some experimental observations based on bound states of ordinary matter, which can be used to constrain some of the introduced dark matter scenarios. Continue reading… Juri Smirnov (Ohio State University) Chi Tian (CWRU) Tue. 
October 15th, 2019 Black-Hole Lattices as Cosmological Models Challenges for modern cosmology include determining the influence the small-scale structure has in the universe on its large-scale dynamics and observations. With numerical relativity tools, finding and exploring cosmological models which are exact solutions to the Einstein equations will resolve all the non-linearities so that give us hints on quantifying the influence. In this talk, I will introduce Black-Hole Lattice models, which are subsets of relativistic discrete cosmological models. In particular, I will start from constructing those spacetimes and show what we can learn from exploring their properties. Continue reading… Chi Tian (CWRU) Cedric Weiland (University of Pittsburgh) Tue. October 8th, 2019 Electroweak measurements at electron-positron colliders as indirect searches for heavy neutrinos Heavy neutrinos are part of many extensions of the Standard Model, in particular seesaw models that can explain the light neutrino masses and mixing. Future electron-positron colliders would greatly increase the precision of the measurements of electroweak processes. I will discuss how this improved precision offers new opportunities to search for the effects of heavy neutrinos. In particular, I will focus on indirect search strategies based on the modifications of the production cross-sections of W or Higgs bosons at linear collider. These searches are complementary to other observables and would allow to probe the multi-TeV mass regime at future colliders. Continue reading… Cedric Weiland (University of Pittsburgh) Gilles Gerbier (Queen's U) Tue. October 1st, 2019 Searching for low mass dark matter particles at SNOLAB 90 years after its first evidence by F Zwicky, the nature of the dark matter of the Universe is still unknown. There is a consensus it should be made of elementary particles but their search has been going on for several decades without success. 
Huge progress in sensitivity has been done, though, thanks to new innovative detection techniques. Indeed some new techniques allow to enlarge the exploration of parameter space. I will describe status of two projects I have developed, within international collaborations, thanks to a CERC grant in Canada, Continue reading… Gilles Gerbier (Queen's U) Laura Johnson (CWRU) Tue. September 24th, 2019 Massive Gravitons in Curved Spacetimes This talk will cover various interesting topics that occur in massive spin-2 on various spacetimes including de Sitter, anti-de Sitter, and flat space. In de Sitter, we examine what happens to massive gravity as its mass approaches the partially massless value. In this limit, if the interactions are chosen to be precisely those of the 'candidate' non-linear partially massless theory, the strong coupling scale is raised, giving the theory a wider range of applicability. In anti-de Sitter and flat spacetime, we show how shift symmetries acting on the vector modes emerge from massive spin-2 theories fixing the non-linear structure and discuss whether these theories have amplitudes that can be constructed via soft substracted recursion. Continue reading… Laura Johnson (CWRU) Goran Senjanovic (ICTP, Trieste) Fri. September 20th, 2019 The fall and rise of parity and the origin of (neutrino) mass Continue reading… Goran Senjanovic (ICTP, Trieste) Goran Senjanovic (ICTP, Trieste) Wed. September 18th, 2019 Strong CP violation: fancy and fact Callum Jones (University of Michigan) Tue. September 10th, 2019 Born-Infeld Electrodynamics at One-Loop The Born-Infeld model is an effective field theory of central importance describing the low-energy dynamics of massless gauge bosons on the world-volume of D-branes. Though it is in many ways exceptional in the universality class of models of nonlinear electrodynamics, several aspects of the physics of the Born-Infeld model remain mysterious. 
In this talk I will explain how aspects of the model, obscured in the traditional formulation of Lagrangian field theory, are clarified by directly studying the on-shell S-matrix. In particular in 3+1-dimensions, classical Born-Infeld has an electromagnetic duality symmetry which manifests in tree-level scattering amplitudes as the conservation of a chiral charge. Continue reading… Callum Jones (University of Michigan) Erin Blauvelt (Lehigh University) Mon. September 9th, 2019 Striped and Superconducting Phases in Holography There is a duality out of the framework of string theory that tells us, in certain cases, gravity can be thought of as emerging from the quantum mechanical degrees of freedom of a system. Remarkably, this relationship has not only given us a long sought after microscopic description of black holes and insights into the fabric of spacetime, but has also proven itself useful as a novel analytic toolset to investigate non-perturbative systems. Known as holography, this weak/strong coupling duality allows us to examine strongly coupled quantum systems by mapping them to perturbative, Continue reading… Erin Blauvelt (Lehigh University) Bharat Ratra (Kansas State University) Fri. September 6th, 2019 Cosmological Seed Magnetic Field from Inflation A cosmological magnetic field of nG strength on Mpc length scales could be the seed magnetic field needed to explain observed few microG large-scale galactic magnetic fields. I first briefly review the observational and theoretical motivations for such a seed field, two galactic magnetic field amplification models, and some non-inflationary seed field generation scenarios. I then discuss an inflation magnetic field generation model. I conclude by mentioning possible extensions of this model as well as potentially observable consequences. Continue reading… Bharat Ratra (Kansas State University) Jacob Seiler (Swinburne University of Technology, Melbourne) Tue. 
May 7th, 2019 Coupling Galaxy Evolution and the Epoch of Reionization The Epoch of Reionization is a pivotal period in our cosmic history, representing the transition from a neutral post-recombination Universe into the fully ionized one we observe today. The procession of reionization is dictated by the fraction of ionizing photons, fesc, that escapes from galaxies to ionize the inter-galactic medium, with the exact value and functional form still an open question. I explore this question using the Semi-Analytic Galaxy Evolution (SAGE) model to generate galaxy properties, such as the number of ionizing photons emitted, and follow different possible Epoch of Reionization scenarios with a semi-numerical scheme. Continue reading… Jacob Seiler (Swinburne University of Technology, Melbourne) Yue Zhang (Fermilab) Tue. April 16th, 2019 Electroweak Baryogenesis, ACME II, and Dark Sector CP Violation The origin of the matter-anti-matter asymmetry in the universe is a big puzzle for particle physics and cosmology. Baryogenesis mechanisms at the electroweak scale are attractive for their testability at high-energy colliders and low-energy experiments. The recent measurement of electron electric dipole moment by ACME II sets stringent limit on weak scale CP violations and challenges the viable parameter space for successful electroweak baryogenesis in traditional models, such as two-Higgs doublet models and supersymmetry. In this talk, I will present our recent proposal of triggering electroweak baryogenesis with dark sector CP violation, Continue reading… Yue Zhang (Fermilab) James Wells (University of Michigan-Ann Arbor) Tue. April 9th, 2019 Unification and Precision Measurements Abstract: The Standard Model of particle physics may yet be unified into a deeper organizing principle. The gauge groups may unify into higher rank gauge group, and the Yukawa couplings might unify into a simplifying symmetry group. 
The key to assessing these unification prospects is precision measurements and related precision theory. The current status of unification, in its various guises, is discussed from the perspective of precision analysis. In addition, the prospects for further stress-testing the ideas at future experiment and through future theory work are also presented. Continue reading… James Wells (University of Michigan-Ann Arbor) Maura McLaughlin (West Virginia University) Tue. April 2nd, 2019 The NANOGrav 11-year Data Set: New Insights into Galaxy Growth and Evolution Benjamin Monreal (CWRU) Tue. March 19th, 2019 Giant telescopes, exoplanets, and astronomy in the 2020s Bhupal Dev (Washington University) Tue. March 5th, 2019 New Physics at Neutrino Telescopes Abstract: The recent observation of high-energy neutrinos at the IceCube neutrino telescope has opened a new era in neutrino astrophysics. Understanding all aspects of these events is very important for both Astrophysics and Particle Physics ramifications. In this talk, I will discuss a few possible new physics scenarios, such as dark matter, leptoquarks and supersymmetry, that could be probed using the IceCube data. I will also relate this to the puzzling observation of two upgoing EeV events recently made by the ANITA experiment, which were not seen by IceCube. Continue reading… Bhupal Dev (Washington University) Brian Batell (University of Pittsburgh ) Tue. February 26th, 2019 Breaking Mirror Hypercharge in Twin Higgs Models The Twin Higgs is a novel framework to understand the stability of the Higgs mass in the face of increasingly stringent LHC bounds on colored top partners. Two principal structural questions in this framework concern the nature of the twin hypercharge gauge symmetry and the origin of the Z2 symmetry breaking needed to achieve the correct vacuum alignment. 
After an introduction to this framework, a simple extension of the Mirror Twin Higgs model with an exact Z2 symmetry is presented in which a new scalar field in the twin sector spontaneously breaks both twin hypercharge and Z2. Continue reading… Brian Batell (University of Pittsburgh ) Aaron Pierce (University of Michigan-Ann Arbor) Tue. February 19th, 2019 Supersymmetry, Hidden Sectors, and Baryogenesis Abstract: Supersymmetry has been a primary target for the experiments at the Large Hadron Collider. We review what the absence of supersymmetric signals thus far implies for supersymmetric extensions to the Standard Model. We discuss ways in which supersymmetry might still have important consequences for our Universe — even if it does not completely explain the hierarchy between strength of gravity and the other forces. As an example, we discuss how a supersymmetric extension might be responsible for generating the observed symmetry between matter and anti-mattter. Continue reading… Aaron Pierce (University of Michigan-Ann Arbor) Riccardo Penco (Carnegie Mellon University) Tue. February 12th, 2019 Constraining the gravitational sector with black hole perturbations Joshua Berger (University of Pittsburgh) Tue. February 5th, 2019 11:30 am-12:30 am Searching for the dark sector in neutrino detectors Abstract: Dark matter has thus far eluded attempts to determine its non-gravitational interactions, putting strong constraints on a minimal dark sector. I present models of non-minimal dark sectors that could elude current searches, but be seen in current or near future neutrino experiments. I begin by presenting a comprehensive, ongoing phenomenological study of models in which dark matter can annihilate into other forms of dark matter, leading to a flux of energetic (boosted) dark matter (BDM). Such dark matter could deposit enough energy to be detected in large neutrino detectors such as Super-Kamiokande and DUNE. 
James Bonifacio (CWRU)
Tue. January 22nd, 2019
Shift Symmetries in (Anti) de Sitter Space

Alexis D. Plascencia (CWRU)
Tue. January 15th, 2019
Tau-philic dark matter coannihilation at the LHC and CLIC
Abstract: We will discuss a set of simplified models of dark matter with three-point interactions between dark matter, its coannihilation partner and the Standard Model particle, which we take to be the tau lepton. The contribution from dark matter coannihilation is highly relevant for a determination of the correct relic abundance. Although these models are hard to detect using direct and indirect detection, we will show that particle colliders can probe large regions in the parameter space. Some of the models discussed are manifestly gauge invariant and renormalizable, …

Stephane Coutu (Penn State)
Tue. December 4th, 2018
Host: Covault

Mark B. Wise (Caltech)
Tue. November 27th, 2018
Loop-induced inflationary non-Gaussianities that give rise to an enhanced galaxy power spectrum at small wave-vectors
Abstract: I outline the calculation of non-Gaussian mass density fluctuations that arise from one-loop Feynman diagrams in a de Sitter background. Their impact on the distribution of galaxies on very large length scales (i.e. l > 200/h Mpc) is discussed. The role that symmetries of the de Sitter metric play in determining the form of the power spectrum, bi-spectrum and tri-spectrum of primordial curvature perturbations is emphasized.
Host: Fileviez Perez

Jure Zupan (University of Cincinnati) Tue.
November 20th, 2018
Effective field theories for dark matter direct detection
I will discuss the nonperturbative matching of the effective field theory describing dark matter interactions with quarks and gluons to the effective theory of nonrelativistic dark matter interacting with nonrelativistic nucleons. In general, a single partonic operator already matches onto several nonrelativistic operators at leading order in chiral counting. Thus, keeping only one operator at a time in the nonrelativistic effective theory does not properly describe the scattering in direct detection. Moreover, the matching of the axial–axial partonic level operator, as well as the matching of the operators coupling DM to the QCD anomaly term, …

Jonathan Ouellet (MIT)
Tue. November 13th, 2018
First Results from the ABRACADABRA-10cm Prototype
The evidence for the existence of Dark Matter is well supported by many cosmological observations. Separately, long-standing problems within the Standard Model point to new weakly interacting particles to help explain away unnatural fine-tunings. The axion was originally proposed to explain the Strong-CP problem, but was subsequently shown to be a strong candidate for explaining the Dark Matter abundance of the Universe. ABRACADABRA is a proposed experiment to search for ultralight axion Dark Matter, with a focus on the mass range 10^{-14} ~< …

Francesc Ferrer (Washington University)
Tue. October 30th, 2018
Primordial black holes in the wake of LIGO
The detection of gravitational waves from the merger of black holes of ~30 solar masses has reignited the interest in primordial black holes (PBHs) as the source of the dark matter in the universe. We will review the existing constraints on the abundance of PBHs and the implications for several fundamental physics scenarios.
A small relic abundance of heavy PBHs may play an important role in the generation of cosmological structures, and we will discuss how such a PBH population can be generated by the collapse of axionic topological defects.

Xiaoju Xu (University of Utah)
Tue. October 16th, 2018
Multivariate Dependent Halo and Galaxy Assembly Bias
Galaxies form in dark matter halos, and their properties and distributions are connected to the host halos. With a prescription of the galaxy-halo relation and the theoretically known halo clustering (e.g., from N-body simulations), galaxy clustering data from large galaxy surveys can be modeled to learn about galaxy formation and cosmology. In the above halo-based model, it is usually assumed that the statistical distribution of galaxies inside halos only depends on halo mass. However, it is found that, in addition to mass, halo clustering also depends on the formation history and environment of halos, …

Brad Benson (University of Chicago)
Tue. October 9th, 2018
New Results from the South Pole Telescope
I will give an overview of the South Pole Telescope (SPT), a 10-meter diameter telescope at the South Pole designed to measure the cosmic microwave background (CMB). The SPT recently completed 10 years of observations, over which time it has been equipped with three different cameras: SPT-SZ, SPTpol, and SPT-3G. I will discuss recent results from the SPT-SZ and SPTpol surveys, including: an update on the SPT Sunyaev-Zel'dovich (SZ) cluster survey, and joint analyses with the optical Dark Energy Survey (DES); a comparison of CMB measurements between SPT-SZ and the Planck satellite; …

Tim Linden (Ohio State University) Tue.
October 2nd, 2018
2018 Michelson Postdoctoral Prize Lecture 2
The Rise of the Leptons: Emission from Pulsars will Dominate the next Decade of TeV Gamma-Ray Astronomy
HAWC observations have detected extended TeV emission coincident with the Geminga and Monogem pulsars. In this talk, I will show that these detections have significant implications for our understanding of pulsar emission. First, the spectrum and intensity of these "TeV Halos" indicate that a large fraction of the pulsar spindown energy is efficiently converted into electron-positron pairs. This provides observational evidence necessitating pulsar interpretations of the rising positron fraction observed by PAMELA and AMS-02.

Mahmoud Parvizi (Vanderbilt University)
Tue. September 25th, 2018
Cosmological Observables via Non-equilibrium Quantum Dynamics in Non-stationary Spacetimes
In nearly all cases, cosmological observables associated with quantum matter fields are computed in a general approximation, via the standard irreducible representations found in the operator formalism of particle physics, where intricacies related to a renormalized stress-energy tensor in a non-stationary spacetime are ignored. Models of the early universe also include a hot, dense environment of quantum fields where far-from-equilibrium interactions manifest expressions for observables with leading terms at higher orders in the coupling. A more rigorous treatment of these cosmological observables may be carried out within the alternative framework of algebraic quantum field theory in curved spacetime, …

Miguel Zumalacarregui (UC Berkeley & IPhT Saclay)
Tue. September 18th, 2018
The Dark Universe in the Gravitational Wave Era
Evidence shows that we live in a universe where 95% of the matter and energy is of unknown nature.
Right from the onset, Gravitational Wave (GW) astronomy is shaping our understanding of the dark universe in several ways: GW signals of black hole mergers have resurrected the idea of Dark Matter being made of primordial black holes, while multi-messenger GW astronomy has generated novel ways to test Dark Energy and the fundamental properties of gravity. I will discuss the impact of gravitational waves on the landscape of gravitational theories, …

Andre De Gouvea (Northwestern Univ.)
Fri. September 7th, 2018
Chiral Dark Sectors, Neutrino Masses, and Dark Matter
I discuss the hypothesis that there are new chiral fermion particles that transform under a new gauge group. Along the way, I present one mechanism for constructing nontrivial chiral gauge theories and explore the phenomenology, mostly related to nonzero neutrino masses and the existence of dark matter, associated with a couple of concrete examples.

Anastasia Fialkov (Harvard Univ.)
Tue. August 7th, 2018
Shining Light into Cosmic Dark Ages
The first billion years is the least-explored epoch in cosmic history. The first claimed detection of the 21 cm line of neutral hydrogen by EDGES (announced at the end of February this year), if confirmed, would be the first time ever that we witness star formation at cosmic dawn. Join Dr. Fialkov as she discusses theoretical modeling of the 21 cm signal, summarizes the status of the field after the EDGES detection, and shares thoughts on prospects for future detections of this line.
Host: Starkman

Amy Connolly (The Ohio State University)
Tue. May 8th, 2018
High Energy Neutrino Astronomy through Radio Detection
Multimessenger astronomy has entered an exciting new era with the recent discovery of both gravitational waves and cosmic neutrinos.
I will focus on neutrinos as particles that can uniquely probe cosmic distances at the highest energies. While optical Cherenkov radiation has been used for decades in neutrino experiments, the radio Cherenkov technique has emerged in the last 15 years as the most promising for a long-term program to push the neutrino frontier by over a factor of 1000 in energy. I will give an overview of the current status and future of the radio neutrino program, …

Stuart Raby (Ohio State University)
Tue. May 1st, 2018
Fitting a_mu and B physics anomalies with a Z' and a Vector-like 4th family in the Standard Model
The Standard Model is very successful. Nevertheless, there are some, perhaps significant, discrepancies with data. A particularly interesting set of discrepancies hints at new physics related to muons. I will review the data and recent new-physics models trying to fit the data. Then I will discuss a very simple model which is motivated by heterotic string constructions.

Tyce DeYoung (Michigan State University)
Tue. April 24th, 2018
First light at the IceCube Neutrino Observatory
The IceCube Neutrino Observatory, the world's largest neutrino detector, monitors a cubic kilometer of glacial ice below the South Pole Station to search for very high energy neutrinos from the astrophysical accelerators of cosmic rays. Since its commissioning in 2011, IceCube has discovered a flux of TeV-PeV scale astrophysical neutrinos, at a level with significant implications for our understanding of the dynamics of the non-thermal universe. The sources of this flux have remained elusive, however. In the last six months, hints to the identity of at least some of the sources may have begun to emerge, …

Camille Avestruz (Kavli Institute for Cosmological Physics, University of Chicago) Tue.
April 17th, 2018
Computationally Probing Large Structures
We can constrain cosmological parameters by measuring patterns in the large scale structure of our universe, which are governed by the competition between gravitational collapse and the accelerated expansion of our universe. The most massive collapsed structures are clusters of galaxies, comprised of hundreds to thousands of galaxies. For galaxy clusters, the telltale cosmological pattern is simply their number count as a function of mass and time. In this talk, I will discuss the challenges in using galaxy clusters as a probe for cosmology. We address these challenges through computational methods that explore galaxy formation processes such as energy feedback from active galactic nuclei, …

Hayden Lee (Harvard University)
Tue. April 3rd, 2018
Collider Physics for Inflation
Cosmological correlation functions encode the spectrum of particles during inflation, in analogy to scattering amplitudes in colliders. Particles with masses comparable to the Hubble scale lead to distinctive signatures on non-Gaussianities that reflect their masses and spins. In addition, there exists a special class of partially massless particles that have no flat space analog, but could have existed during inflation. I will describe their key spectroscopic features in the soft limits of correlation functions, and discuss scenarios in which they lead to observable non-Gaussianity.

Segev BenZvi (University of Rochester)
Tue. March 27th, 2018
The Latest Results from the HAWC Very High-Energy Gamma-ray Survey
The High Altitude Water Cherenkov (HAWC) observatory, located in central Mexico, is conducting a wide-angle survey of TeV gamma rays and cosmic rays from two-thirds of the sky.
TeV gamma rays are the highest energy photons ever observed and provide a unique window into the non-thermal universe. These very high energy photons allow HAWC to conduct a broad science program, ranging from studies of particle acceleration in the Milky Way to searches for new physics beyond the Standard Model. …

Cliff Cheung (Caltech)
Tue. March 20th, 2018
Unification from Scattering Amplitudes
The modern S-matrix program offers an elegant approach to bootstrapping quantum field theories without the aid of an action. While most progress has centered on gravity and gauge theory, similar ideas apply to effective field theories (EFTs). Sans reference to symmetry or symmetry breaking, we show how certain EFTs can be derived directly from the properties of the tree-level S-matrix, carving out a theory space of consistent EFTs from first principles. Furthermore, we argue that the S-matrix encodes a hidden unification of gravity, gauge theory, and EFTs. In particular, starting from the tree-level S-matrix of the mother of all theories, …

John Beacom (The Ohio State University)
Tue. March 6th, 2018
A New Era for Solar Neutrinos
Abstract: Studies of solar neutrinos have been tremendously important, revealing the nature of the Sun's power source and that its neutrino flux is strongly affected by flavor mixing. Nowadays, one gets the impression that this field is over. However, this is not due to a lack of interesting questions; it is due to a lack of experimental progress. I show how this can be solved, opening opportunities for discoveries in particle physics and astrophysics simultaneously.

Lindley Winslow (MIT)
Wed. February 28th, 2018
First Results from CUORE: Majorana Neutrinos and the Search for Neutrinoless Double-Beta Decay
The neutrino is unique among the Standard Model particles.
It is the only fundamental fermion that could be its own antiparticle, a Majorana particle. A Majorana neutrino would acquire mass in a fundamentally different way than the other particles, and this would have profound consequences for particle physics and cosmology. The only feasible experiments to determine the Majorana nature of the neutrino are searches for the rare nuclear process neutrinoless double-beta decay. CUORE uses tellurium dioxide crystals cooled to 10 mK to search for this rare …

Richard Ruiz (IPPP-Durham, UK)
Tue. February 20th, 2018
Left-Right Symmetry: At the Edges of Phase Space and Beyond
The Left-Right Symmetric Model (LRSM) remains one of the best-motivated completions of the Standard Model of particle physics. Thus far, however, data from the CERN Large Hadron Collider (LHC) tell us that new particles, if they are still accessible, must be very heavy and/or very weakly coupled. Interestingly, these regions of parameter space correspond to collider signatures that are qualitatively and quantitatively different from those developed in pre-LHC times. We present several new LRSM collider signatures for these parameter spaces and show a greatly expanded discovery potential at the 13 TeV LHC and a hypothetical future 100 TeV very large hadron collider.

Andrew J. Long (Kavli Institute for Cosmological Physics, University of Chicago)
Tue. February 13th, 2018
Testing baryons from bubbles with colliders and cosmology
"Why is there more matter than antimatter?" This simple question is arguably the most longstanding and challenging problem in modern cosmology, but with input from the next generation of particle physics experiments we may finally have an answer!
In the talk I will discuss how precision measurements of the Higgs boson at the LHC and future high energy collider experiments will be used to test the idea that the matter-antimatter asymmetry arose during the electroweak phase transition in the first fractions of a second after the big bang. Other cosmological phase transitions can also provide the right environment for generating the matter excess.

Ayres Freitas (University of Pittsburgh)
Tue. February 6th, 2018
Radiative Corrections in Universal Extra Dimensions
Universal extra dimensions is an interesting extension of the Standard Model that is naturally protected from electroweak precision constraints and provides a natural dark matter candidate. Its phenomenology at the LHC is strongly affected by radiative corrections. On one hand, QCD corrections are important for understanding the production of heavy gluons and quarks, which are the particles with the largest production rates at the LHC. On the other hand, radiative corrections crucially modify the mass spectrum and interactions of the heavy resonances. This talk will describe recent progress on both of these fronts.

David McKeen (University of Pittsburgh)
Tue. January 30th, 2018
Neutrino Portal Dark Matter
Dark matter that interacts with the standard model (SM) through the "neutrino portal" is a possibility that is relatively less well studied than other scenarios. In such a setup, the dark matter communicates with the SM primarily through its interactions with neutrinos. In this talk, I will motivate neutrino portal dark matter and discuss some new tests of this possibility.

Anders Johan Andreassen (Harvard University) Tue.
January 23rd, 2018
Tunneling in Quantum Field Theory and the Ultimate Fate of our Universe
One of the most concrete implications of the discovery of the Higgs boson is that, in the absence of physics beyond the standard model, the long-term fate of our universe can now be established through precision calculations. Are we in a metastable minimum of the Higgs potential or the true minimum? If we are in a metastable vacuum, what is its lifetime? To answer these questions, we need to understand tunneling in quantum field theory. This talk will give an overview of the interesting history of tunneling rate calculations and all of the complications in calculating functional determinants of fluctuations around the bounce solutions.

Dragan Huterer (U. Michigan)
Fri. December 1st, 2017
Title and abstract TBA

Arthur Kosowsky (Pittsburgh)
Tue. November 28th, 2017

Simone Aiola (Princeton)
Tue. November 14th, 2017
Cosmology with ACTPol and AdvACT
The bolometric polarimeter at the focal plane of the Atacama Cosmology Telescope allows us to map the Cosmic Microwave Background (CMB) with high signal-to-noise both in temperature and polarization. In this talk, I will present the data-reduction pipeline, highlighting the importance of making maximum-likelihood unbiased CMB maps. I will show the two-season ACTPol cosmological results presented in Louis et al. (2017), Sherwin et al. (2017), and Hilton et al. (2017) and describe the current effort to finalize the analysis of the ACTPol dataset. I will conclude with preliminary results from the ongoing AdvACT survey, …

James Bonifacio (Oxford and CWRU) Tue.
October 31st, 2017
Amplitudes for massive spinning particles
Abstract: I will review a method for constructing scattering amplitudes for spinning particles and then discuss how these amplitudes can be used to constrain massive gravity and theories containing higher-spin particles.

Lloyd Knox (UC Davis)
Tue. October 17th, 2017
The Standard Cosmological Model: A Status Report
Overall, the standard cosmological model has enjoyed enormous empirical success. But there are a number of indicators that we might be missing something. These include the large-scale cosmic microwave background (CMB) "anomalies", and two-to-three sigma discrepancies between cosmological parameters derived from larger angular scales of the CMB vs. smaller angular scales, CMB lensing potential reconstruction vs. CMB power spectra, data from the Planck satellite vs. data from the South Pole Telescope, and CMB-calibrated predictions for the current rate of expansion vs. more direct measurements. I will introduce the standard cosmological model, …

Rachel Bezanson (Pittsburgh)
Tue. October 10th, 2017
The Surprisingly Complex Lives of Massive Galaxies
Abstract: Massive galaxies reside in the densest and most evolved regions of the Universe, yet we are only beginning to understand their formation history. Once thought to be relics of a much earlier epoch, the most massive local galaxies are red and dead ellipticals, with little ongoing star formation or organized rotation. In the last decade, observations of their assumed progenitors have demonstrated that the evolutionary histories of massive galaxies have been far from static. Instead, billions of years ago, massive galaxies were morphologically different: compact, possibly with more disk-like structures, …

Tiziana Di Matteo (Carnegie Mellon) Tue.
September 26th, 2017
The next massive galaxy and quasar frontier at the Cosmic Dawn
Many of the advances in our understanding of cosmic structure have come from direct computer modeling. In cosmology, we need to develop computer simulations that cover this vast dynamic range of spatial and time scales. I will discuss recent progress in cosmological hydrodynamic simulations of galaxy formation at unprecedented volumes and resolution. I will focus on predictions for the first quasars and their host galaxies in the BlueTides simulation. BlueTides is a uniquely large volume and high resolution simulation of the high redshift universe: with 0.7 trillion particles in a volume half a gigaparsec on a side.

Laura Gladstone (CWRU)
Tue. September 19th, 2017
Neutrinos: cool, cold, coldest
In all of particle physics, neutrinos are some of the most ghostly particles we've detected. While the story of their discovery was pretty cool in itself, some modern experiments are even cooler. The IceCube experiment, located at the geographic South Pole, was originally designed to collect astro-particle data, especially by looking for neutrino point sources as potential sources of the highest energy cosmic rays. But because of its immense fiducial volume, IceCube can collect high-statistics neutrino data, and thus measure oscillation parameters with precision that rivals dedicated oscillation experiments. The CUORE experiment examines the Majorana nature of neutrinos by looking for neutrinoless double beta decay in the coldest cubic meter in the Universe, …

Liang Wu (University of California, Berkeley), MPPL2 Tue.
September 12th, 2017, 11:30 pm-12:30 pm
Giant nonlinear optical responses in Weyl semimetals
Recently, Weyl quasi-particles have been observed in transition metal monopnictides (TMMPs) such as TaAs, a class of noncentrosymmetric materials that heretofore received only limited attention. The question that arises now is whether these materials will exhibit novel, enhanced, or technologically applicable properties. The TMMPs are polar metals, a rare subset of inversion-breaking crystals that would allow spontaneous polarization, were it not screened by conduction electrons. Despite the absence of spontaneous polarization, polar metals can exhibit other signatures, most notably second-order nonlinear optical polarizability, leading to phenomena such as second-harmonic generation (SHG).

Gabriela Marques (National Observatory of Rio de Janeiro and CWRU)
Tue. September 5th, 2017

Sarah Shandera (Penn State)
Tue. May 9th, 2017
Cosmological open quantum systems
Our current understanding of the universe relies on an inherently quantum origin for the rich, inhomogeneous structure we see today. Inflation (or any of the alternative proposals for the primordial era) easily generates a universe exponentially larger than what we can observe. In other words, the modes that are observationally accessible make up an open quantum system. I will discuss what we might learn by thinking about the universe in this way, even though the quantum structure is probably not observable.

Ema Dimastrogiovanni (CWRU) Tue.
April 25th, 2017
Primordial gravitational waves: Imprints and search
I will discuss some interesting scenarios for the generation of gravitational waves from inflation and the characteristic imprints we can search for with upcoming cosmological observations.

Matthew Johnson (Perimeter Institute)
Tue. April 18th, 2017
Mapping Ultra Large Scale Structure
Anomalies in the CMB on large angular scales could find an explanation in terms of pre-inflationary physics or intrinsic statistical anisotropies. However, due to cosmic variance it is difficult to conclusively test many of these ideas using the primary cosmic microwave background (CMB) alone. In this talk, I will outline a program to place stringent observational constraints on theories that predict ultra-large scale structure or statistical anisotropies using the secondary CMB (the Sunyaev Zel'dovich effect, polarization from the post-reionization era, lensing, etc.) and tracers of large-scale structure. These methods will become accessible with next-generation CMB experiments and planned galaxy surveys.

David Chuss (Villanova)
Tue. April 11th, 2017
The Cosmology Large Angular Scale Surveyor (CLASS)
Precise observations of the cosmic microwave background have played a leading role in the development of the LCDM model of cosmology, which has been successful in describing the universe's energy content and evolution using a mere six parameters. With this progress have come hints that the universe underwent an inflationary epoch during its infancy. Cosmic inflation is predicted to produce a background of gravitational waves that would imprint a distinct polarized pattern on the cosmic microwave background (CMB). Measurement of this polarized signal would provide the first direct evidence for inflation and would provide a means to study physics at energy scales around the predicted GUT scale.
Donghui Jeong (Penn State)
Tue. April 4th, 2017
Non-linearities in large-scale structure: Induced gravitational waves, non-linear galaxy bias
I will present my recent work on non-linearities in large-scale structures of the Universe. For the first part, I will discuss the gauge dependence of the scalar-induced tensor perturbations and its implication for searching for the primordial gravitational wave signature from the large-scale structure. For the second part of the talk, I will give a brief overview of the recent review on large-scale galaxy bias (Desjacques, Jeong & Schmidt, 1611.09787) that contains a complete expression for the perturbative bias expansion that must hold on large scales.

Ben Monreal (CWRU)
Tue. March 28th, 2017
Nuclei, neutrinos, and microwaves: searching for the neutrino mass in tritium decay
When Enrico Fermi published his theory of beta decay in 1934 (what we now call the weak interaction), he suggested how experiments could measure the neutrino mass: by looking at the shape of the energy distribution of beta decay electrons. We're still doing exactly that! I will talk about the state of the art of tritium beta decay electron measurements: the KATRIN experiment, which starts science runs soon with a molecular tritium source towards sub-0.3 eV sensitivity; and the Project 8 experiment, which aims to develop a future atomic tritium experiment sensitive to neutrino masses below 0.05 eV.

Mauricio Bustamante (CCAPP, OSU)
Tue. March 21st, 2017
Prospecting for new physics with high-energy astrophysical neutrinos
High-energy astrophysical neutrinos, recently discovered by IceCube, are fertile ground to look for new physics. Due to the high neutrino energies (tens of TeV to a few PeV), we can look for new physics at unexplored energies.
Due to their cosmological-scale baselines (Mpc to Gpc), tiny new-physics effects, otherwise unobservable, could accumulate and become detectable. Possibilities include neutrino decay, violation of fundamental symmetries, and novel neutrino-neutrino interactions. I will show that the spectral features, angular distribution, and flavor composition of neutrinos could reveal the presence of new physics and …

Robert Caldwell (Dartmouth)
Tue. March 7th, 2017
Cosmology with Flavor-Space Locked Fields
We present new models of cosmic acceleration built from a cosmological SU(2) field in a flavor-space locked configuration. We show that such fields are gravitationally birefringent, and absorb and re-emit gravitational waves through the phenomenon of gravitational wave–gauge field oscillations. As a result, a cosmological SU(2) field leaves a unique imprint on both long-wavelength gravitational waves of primordial origin as well as high frequency waves produced by astrophysical sources. We show that these effects may be detected in the future using the cosmic microwave background and gravitational wave observatories.

Matthew Baumgart (Perimeter Institute)
Tue. February 14th, 2017
De Sitter Wavefunctionals and the Resummation of Time
The holographic RG of anti-de Sitter gives a powerful clue about the underlying AdS/CFT correspondence. The question is whether similar hints can be found for the heretofore elusive holographic dual of de Sitter. The framework of stochastic inflation uses nonperturbative insight to tame bad behavior in the perturbation series of a massless scalar in dS at late times. Remarkably, this fully quantum system loses phase information and exhibits semiclassical dynamics in the leading approximation.
Recasting this as a "resummation of time," we wish to understand whether the distributions that result can be thought of as an attractive UV fixed point of a theory living on a spacelike slice of dS.

Andrew Zentner (Pittsburgh)
Tue. February 7th, 2017
The Power-Law Galaxy Correlation Function
For nearly 40 years, the galaxy-galaxy correlation function has been used to characterize the distribution of galaxies on the sky. In addition, the galaxy correlation function has been recognized as very nearly power-law like despite the fact that it is measured over a wide range of scales. In particular, the galaxy correlation function has been measured on very large scales (~30 Mpc), on which density fluctuations are mild and perturbative approaches are appropriate, as well as very small scales (~0.1 Mpc), on which the evolution of the density field of the universe is quite nonlinear.

Kurt Hinterbichler (CWRU)
Tue. January 31st, 2017
Partially Massless Higher-Spin Gauge Theory
The higher-spin theories of Vasiliev are gauge theories that contain towers of massless particles of all spins, and are thought to be UV-complete quantum theories that include gravity, describing physics at energies much higher than the Planck scale. We discuss Vasiliev-like theories that include towers of massless and partially massless fields. These massive towers can be thought of as partially Higgsed versions of Vasiliev theory. The theory is a fully non-linear theory which contains partially massless modes, is expected to be UV complete, includes gravity, and can live on dS as well as AdS.

Lucile Savary (MIT), Michelson Postdoctoral Prize Lecturer
Tue. January 24th, 2017
Quantum Spin Ice
Recent work has highlighted remarkable effects of classical thermal fluctuations in the dipolar spin ice compounds, such as "artificial magnetostatics."
In this talk, I will address the effects of terms which induce quantum dynamics in a range of models close to the classical spin ice point. Specifically, I will focus on Coulombic quantum spin liquid states, in which a highly entangled massive superposition of spin ice states is formed, allowing for dramatic quantum effects: emergent quantum electrodynamics and its associated emergent electric and magnetic monopoles. I will also discuss how random disorder alone may give rise to both a quantum spin liquid and a Griffiths Coulombic liquid, a Bose glass-like phase. Continue reading…

Lucile Savary (MIT) — Michelson Postdoctoral Prize Lecturer
Mon. January 23rd, 2017
A New Type of Quantum Criticality in the Pyrochlore Iridates

The search for truly quantum phases of matter is one of the centerpieces of modern research in condensed matter physics. Quantum spin liquids are exemplars of such phases. They may be considered "quantum disordered" ground states of spin systems, in which zero-point fluctuations are so strong that they prevent conventional magnetic long-range order. More interestingly, quantum spin liquids are prototypical examples of ground states with massive many-body entanglement, of a degree sufficient to render these states distinct phases of matter. Their highly entangled nature imbues quantum spin liquids with unique physical aspects, Continue reading…

Claire Zukowski (Columbia U.)
Tue. January 17th, 2017
Emergent de Sitter Spaces from Entanglement Entropy

A theory of gravity can be holographically "emergent" from a field theory in one lower dimension. In most known cases, the gravitational theory lives in an asymptotically anti-de Sitter spacetime with very different properties from our own de Sitter universe. I will introduce a second emergent "auxiliary" spacetime constructed from the entanglement entropy of subregions in the field theory.
In 2d, this auxiliary space is either a de Sitter spacetime or its various identifications. The modular Hamiltonian, which encodes information about the entanglement properties of a state in the field theory, Continue reading…

Beatrice Bonga (Penn State)
Tue. December 6th, 2016
The closed universe and the CMB

Cosmic microwave background (CMB) observations put strong constraints on the spatial curvature via estimation of the parameter $\Omega_k$. This is done assuming a nearly scale-invariant primordial power spectrum. However, we found that the inflationary dynamics is modified due to the presence of spatial curvature, leading to corrections to the primordial power spectrum. When evolved to the surface of last scattering, the resulting temperature anisotropy spectrum shows a deficit of power at low multipoles ($\ell<20$). This may partially explain the observed $3 \sigma$ anomaly of power suppression for $\ell <30$. Since the curvature effects are limited to low multipoles, Continue reading…

Yi-Zen Chu (University of Minnesota, Duluth)
Tue. November 29th, 2016
Causal Structure Of Gravitational Waves In Cosmology

Despite being associated with particles of zero rest mass, electromagnetic and gravitational waves do not travel solely on the null cone in generic curved spacetimes. (That is, light does not always propagate on the light cone.) This inside-the-null-cone propagation of waves is known as the tail effect, and may have consequences for the quantitative prediction of gravitational waves from both in-spiraling binary compact stars/black holes and "Extreme-Mass-Ratio" systems. The latter consist of compact objects orbiting, and subsequently plunging into, the horizons of the super-massive black holes astronomers now believe reside at the centers of many (if not all) galaxies — Continue reading…

Daniel Winklehner (MIT)
Tue.
November 22nd, 2016
On the development and applications of high-intensity cyclotrons in neutrino physics and energy research

The cyclotron is one of, if not the, most versatile particle accelerators ever conceived. Based on the (then revolutionary) principle of cyclic acceleration, using an RF-frequency alternating voltage on a so-called dee while particles are forced into circular orbits by a strong vertical magnetic field, many varieties have been developed in the 84 years since their invention by Lawrence in 1932. The fact that they are still around, oftentimes in a form proposed many years ago, is a testimony to their robustness and versatility. Continue reading…

Austin Joyce (Kavli Institute for Cosmological Physics, Chicago)
Tue. November 15th, 2016
Soft limits, asymptotic symmetries, and inflation in Flatland

There has been much recent interest in soft limits, both of flat-space S-matrix elements and of cosmological correlation functions. I will discuss the physics probed by soft limits in cosmology and explore the connection between cosmological soft theorems and asymptotic symmetries. These ideas will be illustrated by a simple example: inflation in 2+1 dimensions. Continue reading…

Rachel Rosen (Columbia University)
Tue. November 8th, 2016
Non-Singular Black Holes in Massive Gravity

When starting with a static, spherically symmetric ansatz, there are currently two types of black hole solutions in massive gravity: (i) exact Schwarzschild solutions, which exhibit no Yukawa suppression at large distances, and (ii) solutions which contain coordinate-invariant singularities at the horizon. In this talk, I will present new black hole solutions which have a nonsingular horizon and can potentially be matched to Yukawa asymptotics at large distances. These solutions recover Schwarzschild black holes in the massless limit and are thus observationally viable.
Continue reading…

Tao Han (University of Pittsburgh)
Fri. November 4th, 2016
Splitting and showering in the electroweak sector

We derive the splitting functions for the Standard Model electroweak sector at high energies, including the fermions, massive gauge bosons, and the Higgs boson. We study the class of functions with the "ultra-collinear" behavior that is a consequence of the electroweak symmetry breaking. We stress the leading-order corrections to the "Goldstone-boson Equivalence Theorem". We propose a novel gauge, dubbed the "Goldstone Equivalence Gauge", that practically as well as conceptually disentangles the effects from the Goldstone bosons and the gauge fields. We also demonstrate a practical scheme for multiple electroweak boson production via showering at high energies. Continue reading…

Sean Bryan (Arizona State University)
Tue. October 18th, 2016
Cosmology with Millimeter Wave LEKIDs: CMB, Spectroscopy, and Imaging with TolTEC

Millimeter-wave cameras offer a unique window on the history and dynamics of the universe. Observations of CMB polarization are setting new constraints on cosmic inflation and gravitational lensing. Imaging and spectroscopy in millimeter waves measure individual galaxies through their bolometric flux as well as C+/CO line strengths. In this talk, I will discuss aluminum LEKID detectors that can be used for all of these applications. The feed structures are directly machined in metal, and the detectors are made with a single-layer process. Lab measurements show that the 150 GHz dual-polarization detectors have photon-noise-limited sensitivity, Continue reading…

Stacy McGaugh (CWRU Astronomy) [note time]
Tue. October 11th, 2016
*Note that the seminar may be pushed back to 11:30-12:30.
The Radial Acceleration Relation in Rotationally Supported Galaxies

We report a correlation between the radial acceleration traced by rotation curves and that predicted by the observed distribution of baryons. The same relation is followed by 2693 points in 153 galaxies with very different morphologies, masses, sizes, and gas fractions. The correlation persists even when dark matter dominates. Consequently, the dark matter contribution is fully specified by that of the baryons. The observed scatter is small and largely dominated by observational uncertainties. This radial acceleration relation is tantamount to a natural law for rotating galaxies. Continue reading…

Henriette Elvang (University of Michigan)
Tue. September 20th, 2016
Scattering amplitudes and soft theorems

I will give a pedagogical introduction to the spinor-helicity formalism, which provides a very efficient tool for studies of on-shell scattering amplitudes in 4 dimensions. The power of this formalism will be demonstrated in a new analysis of soft photon and soft graviton theorems. Continue reading…

Bob Brown (CWRU)
Tue. September 13th, 2016
Understanding Color-Kinematics Duality with a New Symmetry: From Radiation Zeros to BCJ

I discuss a new set of symmetries obeyed by tree-level gauge-theory amplitudes involving at least one gluon. The symmetry acts as a momentum-dependent shift on the color factors of the amplitude. Using our previous development of radiation vertex expansions, we prove the invariance under this color-factor shift of the n-gluon amplitude, and in fact of any amplitude involving at least one massless gauge boson and any number of massless or massive particles in arbitrary representations of the gauge group with spin zero, Continue reading…

Bryan Lynn (CWRU and University College London)
Tue.
September 6th, 2016
Raymond Stora's last work

Continue reading…

Excursion Sets, Peaks and Other Creatures: Improved Analytical Models of LSS – Marcello Musso
Tue. May 3rd, 2016

I will present recent developments in analytical methods to predict the abundance, clustering, velocities, and bias of Dark Matter halos. In the standard analytical approach, halos are identified either with sufficiently high peaks of the initial matter density field, or with the largest spheres enclosing a sufficiently high density. I will revise the physical assumptions leading to this standard picture, and show how a careful statistical implementation of the model of collapse (even in the simple spherically symmetric case) leads to a surprisingly rich structure. This allows us to make simple – yet remarkably accurate – analytical predictions for halo statistics, a necessary ingredient on the road to precision cosmology. Continue reading…

Do We Understand the Universe? – Raul Jimenez
Tue. April 26th, 2016

Observations of the cosmos provide a valuable tool to study the fundamental laws of nature. The future generation of astronomical surveys will provide data for a sizeable fraction of the observable sky. This rich data set should provide the means to answer fundamental questions: what are the laws of physics at high energies in the Early Universe? What is the nature of neutrinos? What is dark matter? What is dark energy? Why are there baryons at all? In this talk I will review the current status, provide a roadmap for future prospects, and discuss in detail how we might approach the task of extracting information from the sky to answer the above questions. Continue reading…

New Directions in Bouncing Cosmologies – Anna M. Ijjas
Tue.
April 19th, 2016

In this talk, I will discuss novel ideas to smooth and flatten the universe and generate nearly scale-invariant perturbations during a contracting phase that precedes a cosmological bounce. I will also present some recent work on the possibility of having well-behaved non-singular bounces. Continue reading…

Beyond Precision Cosmology – Licia Verde
Tue. April 5th, 2016

The avalanche of data over the past 10-20 years has propelled cosmology into the "precision era". The next challenge cosmology has to meet is to enter the era of accuracy. Because of the intrinsic nature of studying the Cosmos and the sheer amount of data available and coming, the only way to meet these challenges is by developing suitable and specific statistical techniques. The road from precision Cosmology to accurate Cosmology goes through statistical Cosmology. I will outline some open challenges and discuss some specific examples. Continue reading…

New Approaches to Dark Matter – Justin Khoury
Tue. March 29th, 2016

In this talk I will discuss a novel theory of superfluid dark matter. The scenario matches the predictions of the LambdaCDM model on cosmological scales while simultaneously reproducing the MOdified Newtonian Dynamics (MOND) empirical success on galactic scales. The dark matter and MOND components have a common origin, as different phases of a single underlying substance. This is achieved through the rich and well-studied physics of superfluidity.
The framework naturally distinguishes between galaxies (where MOND is successful) and galaxy clusters (where MOND is not): due to the higher velocity dispersion in clusters, and correspondingly higher temperature, the DM in clusters is either in a mixture of superfluid and normal phases, Continue reading…

Calibration of the Advanced Laser Interferometer Gravitational-wave Observatory (LIGO) Detectors – Madeline Wade
Tue. March 22nd, 2016

Calibration is the critical link between the LIGO detectors and searches for gravitational-wave signals in LIGO data. The LIGO calibration effort involves constructing the external strain incident on each LIGO detector from the digitized readout of the LIGO photodetectors. The essential steps in calibration are the development of accurate models of the LIGO detectors, the digitization of these models, and the application of the calibration models to construct the external strain. The Advanced LIGO era has brought new complexities in accurately modeling the LIGO detectors, as well as the challenge of producing calibrated external strain data in low latency. This talk will give an overview of the Advanced LIGO calibration procedure, Continue reading…

New Probes of Large-scale CMB Anomalies – Simone Aiola
Tue. March 15th, 2016

Inflation prescribes a homogeneous and isotropic universe on large scales, and it generates density fluctuations which are expected to be spatially correlated over the whole Hubble volume. Such fundamental predictions have been tested with current Cosmic Microwave Background (CMB) data and found to be in tension with our — remarkably simple — ΛCDM model. Is it just a random fluke or a fundamental issue with the present model?
In this talk, I will present new possibilities of using CMB polarization as a probe of the measured suppression of the large-scale temperature correlation function. I will also discuss the viability of using this new technique with present and upcoming data. Continue reading…

Joining Forces Against the Dark Side of the Universe: The Cosmic Microwave Background and the Large Scale Structure – Shirley Ho
Fri. March 4th, 2016

Despite tremendous recent progress, gaps remain in our understanding of the Universe. For example, we have not yet pinned down the properties of dark energy, nor have we confirmed Einstein's theory of gravity at the largest scales. Current and upcoming large sky surveys of the cosmic microwave background and of large-scale structure in galaxies, quasars, the Lyman-alpha forest, and 21cm present us with the best opportunity to understand various mysterious properties of the Universe and its underlying principles. I will review recent results from the Baryon Oscillation Spectroscopic Survey (BOSS). These results have demonstrated the feasibility of high-precision Baryon Acoustic Oscillation (BAO) measurements, Continue reading…

Testing Early Universe Physics with Upcoming Observations – Emanuela Dimastrogiovanni
Wed. February 10th, 2016

Cosmology has seen tremendous progress thanks to precision measurements and is bound to greatly benefit from upcoming Large Scale Structure and Cosmic Microwave Background data. I will point out a number of interesting directions. In particular, I discuss how the microphysics of inflation may be tested in galaxy surveys through "fossil" signatures originating from squeezed primordial correlations.
I further elaborate on the constraining power of CMB spectral distortions on small-scale cosmological fluctuations and on particle decays in the very early Universe in relation to reheating. I also describe some of the possible constraints on inflation and reheating from future B-mode observations. Continue reading…

New Paradigm for Physics Beyond the Standard Model – Pavel Fileviez Perez
Tue. February 9th, 2016

The great desert hypothesis in particle physics defines the relation between the electroweak scale and the high scale where a unified theory could describe physics. In this talk we review the desert hypothesis and discuss the main experimental constraints from rare decays. We present a new class of theories for the TeV scale where the desert hypothesis is not needed. In this context one predicts the existence of new particles with baryon and lepton numbers, called lepto-baryons. The implications for cosmology, collider experiments, and the unification of forces are discussed. Continue reading…

Cosmology from the Megaparsec to the Micron – Amol Upadhye
Fri. February 5th, 2016

Two major challenges for cosmology over the next decade are to characterize the dark energy responsible for the cosmic acceleration and to weigh the neutrinos, the only Standard Model particles whose masses are not yet known. Part I of the presentation describes my ongoing work to understand the effects of massive neutrinos and evolving dark energy on the formation of large-scale structure. I include both effects in a redshift-space generalization of Time-RG perturbation theory, and establish its validity through comparison to N-body simulations. In Part II I discuss my previous work using stars and laboratory experiments to search for couplings between dark energy and Standard Model particles.
Continue reading…

Massive and Partially Massless Gravity and Higher spins – Kurt Hinterbichler
Tue. February 2nd, 2016

On de Sitter space, there exists a special value for the mass of a graviton for which the linear theory propagates 4 rather than 5 degrees of freedom, known as a partially massless graviton. If a satisfactory non-linear version of the theory can be found and coupled to known matter, it would have interesting properties and could solve the cosmological constant problem. I will review attempts at constructing such a theory and some no-go's, and will describe a Vasiliev-like theory containing a tower of partially massless higher spins. Continue reading…

Testing Eternal Inflation – Matthew Johnson
Tue. December 8th, 2015

The theory of eternal inflation in an inflaton potential with multiple vacua predicts that our universe is one of many bubble universes nucleating and growing inside an ever-expanding false vacuum. The collision of our bubble with another could provide an important observational signature to test this scenario. In this talk I will summarize recent work providing a quantitative connection between the scalar-field Lagrangian underlying eternal inflation and the observational signature of bubble collisions. I will also summarize existing constraints and forecasts for future searches using the CMB and LSS, as well as discuss the general relevance of this work for assessing fine-tuning problems in inflationary cosmology. Continue reading…

Bigravity: Dead or Alive? – Adam Solomon
Tue. December 1st, 2015

Spurred in large part by the discovery of the accelerating universe, recent years have seen tremendous advances in our understanding of alternatives to general relativity, particularly in the large-distance and low-curvature régimes.
Looming large in this field is the recent development of a ghost-free, nonlinear theory of massive gravity and multimetric gravity (or, equivalently, theories of interacting gravitons), which had proven elusive for the better part of seven decades. Nevertheless, both massive gravity and its generalization to a bimetric theory have run into potentially deadly problems in the search for viable, self-accelerated cosmologies. I will summarize some of these issues, and then discuss possible ways out. Continue reading…

Bi-gravity from DGP Two-brane Model – Yasuho Yamashita
Wed. October 28th, 2015

We discuss whether or not bigravity theory can be embedded into the braneworld setup. As a candidate, we consider the Dvali-Gabadadze-Porrati (DGP) two-brane model. We will show that we can construct a ghost-free model whose low-energy spectrum is composed of a massless graviton and a massive graviton with a small mass, fixing the brane separation with the Goldberger-Wise radion stabilization. We also show that there are two branches: the normal branch is stable while the self-accelerating branch is inevitably unstable, and we discuss the condition for the normal branch. Next, we consider the DGP two-brane model without radion stabilization to discuss how ghost-free bigravity coupled with a single scalar field can be derived from a braneworld setup. Continue reading…

The Instability of de Sitter Space and Dynamical Dark Energy: Massless Degrees of Freedom from the Conformal Anomaly in Cosmology – Emil Mottola
Tue. October 27th, 2015

Global de Sitter space is unstable to particle creation, even for a massive free field theory with no self-interactions. The Bunch-Davies state is a definite phase-coherent superposition of particle and anti-particle solutions in both the asymptotic past and future, and therefore is not a true vacuum state.
In the closely related case of particle creation by a constant, uniform electric field, a time-symmetric state analogous to the de Sitter-invariant one is constructed, which is also not a stable vacuum state. The conformal anomaly plays a decisive role in the growth of perturbations and de Sitter symmetry breaking. Continue reading…

Perspectives on WIMP Dark Matter – Pearl Sandick
Tue. October 13th, 2015

The question of the identity of dark matter remains one of the most important outstanding puzzles in modern physics. Weakly Interacting Massive Particles (WIMPs) have long been the frontrunner dark matter candidate, with the supersymmetric neutralino serving as the canonical WIMP. In this talk, I'll discuss recent results relevant to the search for dark matter, supersymmetric and otherwise, and highlight the spectrum of theoretical and phenomenological approaches to its study. From fundamental constructions to simplified models and effective theories, each approach plays a specific role in furthering our understanding and allowing us to evaluate the prospects for discovery of dark matter. Continue reading…

The Standard Model of Particle Physics via Non-Commutative Geometry – Latham Boyle
Fri. October 9th, 2015

I will introduce Connes' notion of non-commutative geometry and explain how it offers a novel geometric perspective on certain otherwise unexplained features of the standard model of particle physics, and a more restrictive framework than effective field theory for exploring physics beyond the standard model. I will also explain the main ideas behind a new reformulation of NCG which has certain key mathematical and physical advantages over Connes' traditional "spectral triple" formulation.
In this reformulation, the traditional NCG axioms are considerably simplified and unified; a number of problematic issues in the traditional NCG construction of the standard model are fixed; Continue reading…

An Anisotropic Universe Due to Dimension-changing False Vacuum Decay – James Scargill
Tue. September 29th, 2015

In this talk I will consider the observational consequences of models of inflation after false vacuum decay in which the parent vacuum has a smaller number of large dimensions than our current vacuum. After introducing and briefly discussing in general the topic of inflation after false vacuum decay, I will then explain how such decay events can occur which change the number of large dimensions and lead to an anisotropic universe. The effects on the CMB of anisotropy at late times might be expected to render irrelevant the effects of primordial anisotropy; however, after showing how to properly deal with the latter, I will demonstrate how for the tensor perturbation modes the primordial effects are much larger than expected and can in fact be dominant. Continue reading…

Prospects for Measuring the Neutron-star Equation of State with Advanced Gravitational-wave Detectors – Leslie Wade
Tue. September 22nd, 2015

It is widely anticipated that the first direct detections of gravitational waves will be made by advanced gravitational-wave detectors, such as the two Laser Interferometer Gravitational-wave Observatories (LIGO) and the Virgo interferometer. Arguably the most important sources for ground-based interferometers are coalescing binary neutron stars. Following the detection of such a system, a more detailed follow-up analysis will seek to measure certain properties of the component neutron stars, such as their masses and/or spin configurations.
In particular, it has been shown that the gravitational waves emitted by binary neutron stars carry information about the neutron-star equation of state. In this talk, Continue reading…

Gravitational Signals from Noise in the Hubble Diagram – Edward Macaulay
Tue. May 5th, 2015

Understanding the nature of the dark universe requires precise measurements of the background expansion history, and also of the growth rate of density fluctuations. In this talk, I'll consider both regimes with supernova lensing for the OzDES spectroscopic survey, which is measuring the redshifts of hundreds of supernovae and thousands of galaxies identified by the Dark Energy Survey. I'll start by reviewing the more established method of growth-rate measurements with Redshift Space Distortions (RSDs), and discuss possible tension between RSDs and expectations from Planck CMB measurements. I'll then consider how OzDES can place novel constraints on the growth rate and amplitude of density fluctuations by correlating noise in the supernova Hubble diagram with the gravitational effects of lensing and peculiar velocities expected from the observed density field. Continue reading…

The Race for the Highest Energy Neutrinos in the Universe – Patrick Allison
Tue. April 7th, 2015

In 1969, Berezinsky and Zatsepin predicted a flux of ultra-high energy (greater than 1 EeV) neutrinos due to cosmic ray interactions with the cosmic microwave background. These 'cosmogenic' BZ neutrinos are virtually "guaranteed": barring extreme changes in either fundamental physics or our understanding of the sources of cosmic rays, these neutrinos must exist. Detecting these neutrinos is extremely challenging due to their incredibly low flux; however, recent experiments are approaching the sensitivity needed to finally make a detection.
Here, I will talk about several of these existing and upcoming experiments, including the ANITA and EVA balloon-borne detectors and the ARA experiment, Continue reading…

Macro Dark Matter – David Jacobs
Tue. March 31st, 2015

Dark matter is a vital component of the current best model of our universe, Lambda-CDM. There are leading candidates for what the dark matter could be (e.g. weakly-interacting massive particles, or axions), but no compelling observational or experimental evidence exists to support these particular candidates, nor any beyond-the-Standard-Model physics that might produce such candidates. This suggests that other dark matter candidates, including ones that might arise in the Standard Model, should receive increased attention. I will discuss the general class of dark matter candidates whose characteristic masses and interaction cross-sections are characterized in units of grams and square centimeters, respectively — Continue reading…

Wave Turbulence in Preheating – Henrique de Oliveira
Tue. March 24th, 2015

We have studied the nonlinear preheating dynamics of several inflationary models, including nonminimally coupled scalar fields and two-field models. It is well established that after a linear stage of preheating characterized by the parametric resonance, the nonlinear dynamics becomes relevant, driving the system towards turbulence. Wave turbulence is the appropriate description of this phase, since the matter contents are fields instead of the usual fluids. Turbulence develops due to the nonlinear interactions of waves, here represented by the small inhomogeneities of the scalar fields. We present relevant aspects of wave turbulence and the effective equation of state in the thermalized phase.
Continue reading…

Mapping New Physics with the Cosmic Microwave Background – Jeff McMahon
Mon. February 23rd, 2015

The Cosmic Microwave Background (CMB) is the afterglow of the big bang and the oldest light in the universe that can be observed. Faint signals in the pattern of the CMB provide information about the physics that governs the very early universe and the growth of large-scale structure. Thus, precision measurements of the CMB provide unique views on ultra-high-energy physics (inflation); pressing mysteries including dark energy and dark matter; and traditional particle physics questions such as the sum of the neutrino masses. In this talk I present the state of the CMB field and highlight the Atacama Cosmology Telescope Polarimeter (ACTPol) and its successor, Advanced ACTPol (AdvACT). Continue reading…

Optical Frequency Combs and Precision Spectroscopy – Jason Stalnaker
Tue. February 17th, 2015

Atomic spectroscopy has a long history of providing tests of fundamental physics. This tradition continues as the precision and accuracy of spectroscopic techniques improve. I will discuss the impact that the development of stabilized optical frequency combs has had on precision spectroscopy and describe an ongoing effort to study the atomic spectra of lithium at Oberlin College. Continue reading…

Numerical Relativity in Spherical Polar Coordinates – Thomas W. Baumgarte
Thu. February 12th, 2015

Numerical relativity simulations have made dramatic advances in recent years. Most of these simulations adopt Cartesian coordinates, which have some very useful properties for many types of applications. Spherical polar coordinates, on the other hand, have significant advantages for others.
Until recently, the coordinate singularities in spherical polar coordinates have hampered the development of numerical relativity codes adopting such coordinates, at least in the absence of symmetry assumptions. With a combination of different techniques – a reference-metric formulation of the relevant equations, a proper rescaling of all tensorial quantities, and a partially-implicit Runge-Kutta method – we have been able to solve these problems.

Is Clustering Dark Energy Non-linear? The AP Resummation Approach – Stefano Anselmi
Tue. February 3rd, 2015

In order to gain insights on the mysterious component driving the acceleration of the Universe, future surveys will measure with unprecedented precision the density power spectrum in the non-linear range of scales and redshifts. On the theoretical side, those non-linearities require a comparable computational effort, one that sees numerical (N-body), semi-analytical and analytical investigations deployed. In this context I will present a powerful analytical resummation scheme first developed for LCDM and very recently extended to the Clustering Quintessence scenario, i.e. quintessence models with vanishing speed of sound. The approach I will expose allows predictions at the few percent level beyond the Baryon Acoustic Oscillations range of scales, …

Sterile Plus Active Neutrinos and Neutrino Oscillations – Leonard Kisslinger
Mon. January 26th, 2015

The talk will be based on recent neutrino oscillation experiments that have determined that there is almost certainly a sterile neutrino, with an estimate of the mixing angle.

New Accelerators for Neutrino Physics – Matt Toups
Tue. January 20th, 2015

DAEδALUS is a proposed phased neutrino experiment, whose ultimate aim is to search for evidence of CP violation in the neutrino sector. The experiment will consist of several accelerator-based modules that produce decay-at-rest neutrino beams located at three different distances from a single, large underground neutrino detector. Each of these modules will make use of a pair of low-cost, high-power cyclotrons to accelerate an H2+ beam initially up to 60 MeV with a compact injector cyclotron and then ultimately up to 800 MeV with a separated-sector superconducting cyclotron. These new low-cost, high-power cyclotrons are motivated by industry needs and also open up new possibilities for searches for physics beyond the standard model with neutrinos.

The Universe as a Cosmic String – Florian Niedermann
Tue. November 25th, 2014

We are investigating modifications of general relativity that are operative at the largest observable scales. In this context, we study the model of brane induced gravity in 6D, a higher dimensional generalization of the DGP model. As opposed to different claims in the literature, we have proven the quantum stability of the theory in a weak coupling regime on a Minkowski background. In particular, we have shown that the Hamiltonian of the linear theory is bounded from below. This result opened a new window of opportunity for consistent modified Friedmann cosmologies. In our recent work it is shown that a brane with FRW symmetries necessarily acts as a source of cylindrically symmetric gravitational waves, …

Imprints of the Standard Model in the Sky? – Daniel G. Figueroa
Tue. November 18th, 2014

The existence of the Standard Model (SM) Higgs implies that a gravitational wave (GW) background is generated by the decay products of the Higgs, soon after the end of inflation.
Theoretically, all Yukawa and SU(2)_L gauge couplings of the SM are imprinted as features in the GW spectrum. However, in practice, the signal from the most strongly coupled species dominates, rendering inaccessible the information on the other species. This background could be used for inferring properties of particle physics, including beyond the SM, at energies way above the reach of the LHC. To measure this background, however, new high-frequency GW detection technology is required.

New Ideas for Dark Energy and Also for Dust Discrimination in B-mode Maps – Marc Kamionkowski
Fri. November 14th, 2014

Intergalactic Magnetic Fields – Tanmay Vachaspati
Tue. November 11th, 2014

I will describe theoretical motivation for the existence of parity violating (helical) intergalactic magnetic fields and recent and growing observational evidence for such fields.

Peaks and Troughs in Large Scale Structure – Ravi K. Sheth
Tue. November 4th, 2014

I will review recent and substantial progress in modeling the cosmic web. This progress, which results from merging two different, decades-old literature streams, leads to a number of new and interesting insights about how the biased tracers we will observe in the next generation of large scale structure datasets can better constrain cosmological models.

High Precision Cosmology with BAO Surveys: BOSS and Future 21cm BAO Surveys – Hee-Jong Seo
Fri. October 24th, 2014

The large scale structure of matter and galaxies contains important information on the evolution of the Universe.
Baryon acoustic oscillations (BAO), one of the most promising large scale features, can provide an excellent standard ruler that enables us to measure cosmological distance scales, and therefore dark energy properties. I would like to first discuss the ongoing joint analysis of the BOSS galaxy and Lyman-alpha BAO results and, second, future 21cm BAO surveys, focused on the effect of foregrounds.

The Shape of the Electron, and Why It Matters – Amar Vutha
Tue. October 14th, 2014

The universe, or at least the 5% of it that we understand, is described rather well by the Standard Model of particle physics. Yet even this non-dark sector of the universe conceals a great mystery: where has all the anti-matter gone? In this lecture, I will describe the problem and the best solution that we have for it. One of the crucial ingredients of that solution is the prediction of new sources of time-reversal violation. The most sensitive probe of such time-reversal violation is, oddly enough, to be found in small asymmetries in the shape of the electron's charge distribution.

Precision Cosmology with Galaxy Surveys: Understanding Intrinsic Alignments and Redshift-space Distortions – Jonathan A. Blazek
Fri. October 10th, 2014

Galaxy imaging and redshift surveys, designed to measure gravitational lensing and galaxy clustering, remain the most powerful probes of large-scale structure. Such surveys constitute a significant fraction of current and next-generation projects in the cosmology community (e.g. DES, HSC, LSST, eBOSS, DESI, EUCLID, WFIRST). The statistical power of these experiments requires significantly improved understanding of astrophysical and observational effects.
In this talk, I will focus on two important astrophysical processes which contribute systematic uncertainty but also contain a potential wealth of information. First, correlations in the intrinsic shapes and orientations of galaxies, termed "intrinsic alignments" (IA), are an important systematic in weak lensing. …

Healthy Theories Beyond Horndeski – Jerome Gleyzes
Wed. September 3rd, 2014

In the search for a candidate that could explain the current acceleration of the Universe, a lot of attention has been given recently to Galileon theories or, in their generalized form, Horndeski theories. They are interesting as they represent the most general scalar-tensor theories that do not lead to equations of motion containing more than two derivatives. This restriction is generally thought to be of great importance, as generically, higher order derivatives lead to ghost instabilities. I will present a new class of scalar-tensor theories that are broader than Horndeski and, as such, do bring higher order derivatives. However, …

Interacting Spin-2 Fields – Johannes Noller
Tue. September 2nd, 2014

In this talk I will discuss some recent progress in our understanding of the spin-2 sector, focussing on theories with two or more such dynamical fields. In particular I will highlight the existence of several dualities in such models (generalisations of 'Galileon dualities'), their decoupling limit phenomenology, as well as the form of their interactions with other matter fields.

Recent Progress in Large-Scale Structure – Roman Scoccimarro
Fri. May 9th, 2014

I will discuss recent progress in the understanding of how to model galaxy clustering.
While recent analyses have focussed on the baryon acoustic oscillations as a probe of cosmology, galaxy redshift surveys contain a lot more information than the acoustic scale. In extracting this additional information three main issues need to be well understood: nonlinear evolution of matter fluctuations, galaxy bias and redshift-space distortions. I will present recent progress in modeling these three effects that paves the way to constraining cosmology and galaxy formation with increased precision.

Atom Interferometry Fundamentals and its Applications in Space Science – Babak Saif
Tue. May 6th, 2014

Shape of the Universe – Daniel Müller
Tue. April 29th, 2014

The most recent observations indicate that the Universe is isotropic, with a small spatial curvature, which can be either positive, negative or zero. As is well known, Einstein's theory of gravitation restricts the spatially isotropic sections of space-time to be locally S^3, H^3 or E^3, respectively. Thus, the topology of the Universe is only partly determined. On the other hand, there are a few effects which occur only for non-trivial topology. In this talk, we will give a brief discussion of some of these, in particular of the Casimir effect, which should have been important in the primordial stages of the Universe.

Testing Gravity via Lunar Laser Ranging – Tom Murphy
Tue. April 22nd, 2014

Forty years ago, Apollo astronauts placed the first of several retroreflector arrays on the moon. Laser range measurements between the earth and the moon have provided some of our best tests to date of general relativity and gravitational phenomenology, including the equivalence principle, the time-rate-of-change of the gravitational constant, the inverse square law, and gravitomagnetism.
A new effort called APOLLO (the Apache Point Observatory Lunar Laser-ranging Operation) is now collecting measurements at the unprecedented precision of one millimeter, which will produce order-of-magnitude improvements in a variety of gravitational tests, as well as reveal more detail about the interior structure of the moon.

WIMP physics with direct detection – Annika H. G. Peter
Tue. April 8th, 2014

One of the best-motivated classes of dark-matter candidate is the Weakly-Interacting Massive Particle (WIMP). In this talk, I will discuss WIMPs in the context of direct-detection experiments. First, I will discuss a new signal for WIMP dark matter: gravitational focusing in direct-detection experiments. This effect leads to an energy-dependent phase-shift in the peak direct-detection event rate throughout the year. I will discuss this in light of current putative annual-modulation claims. Second, I will discuss what we can learn about WIMPs in the "early-discovery" days once WIMPs are conclusively found in direct-detection experiments. I will show that what we can learn about WIMPs depends sensitively on the ensemble of experiments that are running at the time of discovery.

Probing Dark Energy Using Growth of Structure: The Role of Simulations – Hao-Yi Wu
Tue. April 1st, 2014

The growth of cosmic structure provides a unique approach for measuring the dynamic evolution of dark energy and distinguishing different models of gravity. In this talk, I will focus on two of the most important methods for measuring the growth of structure: galaxy cluster counts and the redshift-space distortions of galaxy clustering. I will discuss the systematic uncertainties involved in both methods, and how I use numerical simulations to help reduce these systematics and improve our theoretical predictions.
Science with CMB Spectral Distortions: a New Window to Early-Universe Physics – Jens Chluba
Tue. March 18th, 2014

Since COBE/FIRAS we know that the CMB spectrum is extremely close to a perfect blackbody. There are, however, a number of processes in the early Universe that should create spectral distortions at a level that is within reach of present-day technology. I will give an overview of recent theoretical and experimental developments, explaining why future measurements of the CMB spectrum will open up an unexplored window to early-universe and particle physics, with possible non-standard surprises but also guaranteed signals awaiting us.

The Marvelous Success of the Standard Model of Cosmology – Lloyd Knox
Wed. February 26th, 2014

The standard model of cosmology has been remarkably successful in its predictions for current data given earlier data. One can react with sadness at the lack of evidence for new physics, chase marginal anomalies, or marvel at the success and soldier on toward better measurements, knowing new physics may be just around the corner. In this talk I will reveal some of the inner workings of this success in order to communicate why I find it marvelous. For example, for the predictions to agree with cosmic microwave background (CMB) data we need, at very high statistical significance, a cosmic neutrino background, …

21cm Cosmology – Ue-Li Pen
Tue. February 18th, 2014

I present recent developments in a new window to map the large scale structure of the universe through intensity mapping, using the collective unresolved emission of cosmic hydrogen 21cm emission.
Initial maps have been made with various existing telescopes, and an ambitious survey, the Canadian Hydrogen Intensity Mapping Experiment (CHIME), is under construction. Future potential science targets include precision measurements of dark energy, neutrino masses, and possibly gravitational waves.

Cosmology and Systematics of Multi-wavelength Galaxy Cluster Observables – Tomasz Biesiadzinski
Tue. February 11th, 2014

The current concordance ΛCDM cosmological model describes a universe where cold dark matter seeds structure formation and a cosmological constant drives its accelerated expansion. Precise measurements of various astronomical observables allow us to test this model, and any deviations, if found, may lead to an improved cosmological theory. Ongoing and planned large scale surveys of the skies have the power to study the ΛCDM model. However, the data sets they generate will be dominated by complex systematic uncertainties. One probe of cosmological parameters, the evolution of clusters of galaxies, has the power to differentiate simple models of dark energy, like the cosmological constant, …

Quantum-Limited Superconducting Detectors and Amplifiers for Cosmology – Philip Mauskopf
Fri. February 7th, 2014

21-cm Intensity Mapping – Jeffrey Peterson
Tue. January 28th, 2014

Supersymmetry, Non-thermal Dark Matter and Precision Cosmology
Tue. December 3rd, 2013

Within the Minimal Supersymmetric Standard Model (MSSM), LHC bounds suggest that scalar superpartner masses are far above the electroweak scale. Given a high superpartner mass, nonthermal dark matter is a viable alternative to WIMP dark matter generated via freezeout.
In the presence of moduli fields, nonthermal dark matter production is associated with a long matter-dominated phase, modifying the spectral index and primordial tensor amplitude relative to those in a thermalized primordial universe. Nonthermal dark matter can have a higher self-interaction cross-section than its thermal counterpart, enhancing astrophysical bounds on its annihilation signals. I will review recent progress in this program, …

Cosmic Bandits: Exploration vs. Exploitation in Cosmological Surveys – Ely Kovetz
Tue. November 26th, 2013

Various cosmological observations consist of prolonged integrations over small patches of sky. These include searches for B-modes in the CMB, the power spectrum of 21-cm fluctuations during the epoch of reionization, and deep-field imaging by telescopes such as HST/JWST, among others. However, since these measurements are hindered by spatially-varying foreground noise, the observational sensitivity can be improved considerably by finding the region of sky cleanest of foregrounds. The best strategy thus involves a tradeoff between exploration (to find lower-foreground patches) and exploitation (through prolonged integration). But how to balance this tradeoff efficiently? This problem is akin to the multi-armed bandit (MAB) problem in probability theory, …

Turning trajectories in multi-field inflation – Krzysztof Turzyński
Tue. November 19th, 2013

The latest results from the PLANCK collaboration, consistent with the simplest single-field models of slow-roll inflation and with no trace of non-Gaussianity, have extinguished many hopes of seeing specific aspects of New Physics directly in the sky. One may then wonder whether the landscape of allowed inflationary models has been practically reduced to single-field effective theories.
I shall argue that the answer is negative and present several inflationary models in which the turn-induced interactions between two scalar fields affect the normalization/running of the power spectrum of curvature perturbations, or smooth out its features (e.g. via particle production), actually driving the power spectrum towards phenomenologically acceptable characteristics.

Lorentz violation in gravity: why, how and where – Diego Blas
Mon. November 18th, 2013

Recent approaches to quantum gravity question the role of Lorentz invariance as a fundamental symmetry of Nature. This has implications for most of the observables in gravitational physics, also at low energies. In this talk I will describe recent bounds on deviations from Lorentz invariance in gravity coming from binary pulsar observations and cosmological data.

Non-local quantum effects in cosmology – John Donoghue
Tue. November 12th, 2013

In general relativity, there are non-local quantum effects that come from the propagation of light particles, including gravitons. I will review the effective field theory treatment which allows one to identify the reliable parts of the quantum loops. In cosmology, there are then non-local corrections to the FLRW equations. I will present some of the formalism for this and give some exploration of results.

Cosmology from conformal symmetry – Austin Joyce
Tue. October 29th, 2013

We will explore the role that conformal symmetries may play in cosmology. First, we will discuss the symmetries underlying the statistics of the primordial perturbations which seeded the temperature anisotropies of the Cosmic Microwave Background.
I will show how symmetry considerations lead us to three broad classes of theories to explain these perturbations: single-field inflation, multi-field inflation, and the conformal mechanism. We will discuss the symmetries in each case and derive their model-independent consequences. Finally, we will examine the possibility of violating the null energy condition with a well-behaved quantum field theory.

Goldstone bosons with spontaneously broken Lorentz symmetry – Riccardo Penco
Tue. October 15th, 2013

In this talk, I will discuss some general properties of effective theories of Goldstone bosons in which Lorentz symmetry is spontaneously broken. I will first introduce an extension of the Goldstone theorem to systems with a finite density of charge. This very general setting is potentially applicable to contexts as diverse as early universe cosmology and QCD at finite density. Additionally, I will show how certain effective theories of Goldstones with broken Lorentz symmetry admit UV completions that do not restore any broken symmetry.

Slavnov-Taylor Identities for Primordial Perturbations – Lasha Berezhiani
Tue. October 8th, 2013

I will show that all consistency relations for the primordial perturbations derive from a single, master identity, which follows from the Slavnov-Taylor identity for spatial diffeomorphisms. This master identity is valid at any value of momenta and therefore goes beyond the soft limit. This approach underscores the role of spatial diffeomorphism invariance at the root of cosmological consistency relations. It also offers new insights on the necessary conditions for their validity: a physical contribution to the vertex functional must satisfy certain analyticity properties in the soft limit in order for the consistency relations to hold.
For standard inflationary models, this is equivalent to requiring that mode functions have constant growing-mode solutions.

Symmetry Breaking and Galileons – Garrett Goon
Wed. October 2nd, 2013

Galileons, and related theories, have deep connections to spontaneous symmetry breaking. After reviewing the origins of Galileon theories, I motivate their interpretation as Goldstone bosons and illustrate some of their special technical properties before proceeding to discuss applications and future directions.

CMB Lensing: reconstruction from polarisation & implications for cosmology from cross correlation with galaxies – Ruth Pearson
Tue. September 24th, 2013

CMB lensing is a probe of the matter distribution between the surface of last scattering and today, which has been measured using CMB temperature data. The signal to noise for lensing reconstruction from CMB polarisation data is expected to be much better, since B modes on small scales should vanish in the absence of lensing. An effect of having data from an incomplete sky is leakage of E-mode power into B-mode power. Upcoming data analysis from ground-based CMB polarisation instruments must account for this effect. In the first part of my talk I will show results for CMB polarisation lensing reconstruction from small patches of sky, …

Making the connection between galaxy voids, dark matter underdensities and theory – Paul Sutter
Tue. September 10th, 2013

The Universe in a New Light: the First Cosmological Results from the Planck Mission – Bill Jones
Tue. April 30th, 2013

The precision and accuracy of the recently released Planck data are without precedent; the data from a single experiment provide all-sky images at wavelengths never before explored, covering more than three decades in angular scale with a signal dynamic range exceeding a factor of a million. These data open new avenues of research in fields ranging from Galactic astrophysics to cosmology. Our present Universe has shown herself to be both simple and elegant, and although her origins remain enshrouded in mystery, it appears that her past may have been more complex. While the Planck data have begun to inform us about the nature of cosmo-genesis, …

Detecting Modified Gravity in the Stars – Jeremy Sakstein
Mon. April 29th, 2013

Screened scalar-tensor gravity theories such as chameleon and symmetron theories allow order one deviations from General Relativity on large scales whilst satisfying all local solar-system constraints. A lot of recent work has therefore focused on searching for observational signatures of these models and constraining them. If these models are to be viable then our own solar system is necessarily screened; however, this may not be the case for stars in dwarf galaxies, which can exhibit novel and unique phenomena. These new effects can be exploited to produce constraints that are far more competitive than laboratory and cosmological tests, and in this talk, …

Senior Project Symposium
Sat. April 20th, 2013

In search for hints of resonance in the CMB power spectrum – Daan Meerburg
Tue. April 16th, 2013

We investigate possible resonance effects in the primordial power spectrum using the latest CMB data.
These effects are predicted by a wide variety of models and come in two flavors: one where the oscillations are log-spaced and one where the oscillations are linearly spaced. We treat the oscillations as perturbations on top of the scale-invariant power spectrum. This allows us to significantly improve the search for resonance because it allows us to precompute the transfer functions. We show that the largest error from this simplification comes from the variance in the measurement of the distance to last scattering.

Black Hole Space-Times from S Matrices – Ira Rothstein
Tue. April 9th, 2013

In this talk I will show how to generate classical space-times directly from S matrices. The method makes no use of Einstein's equations nor, for that matter, any space-time action at all. This approach also allows us to make direct contact between the classical solutions of Yang-Mills theory and those of gravity through the squaring relation between the Yang-Mills and gravitational tree-level scattering amplitudes. In this way one may construct classical space-times directly from Yang-Mills theory.

Testing gravity with pulsars, black holes and the microwave background – Lam Hui
Tue. April 2nd, 2013

We will discuss three topics: 1. a way to detect gravitational waves using binaries; 2. a way to test general relativity using black holes; 3. a way to connect superhorizon fluctuations with the observed statistical asymmetry of the universe.

Neutrinoless double beta decay results from EXO-200 – Carter Hall
Tue. March 26th, 2013

Neutrinoless double beta decay has never been definitively observed, although for the last ten years one group has claimed to see a 6-sigma positive effect in 76Ge.
Recently the EXO-200 experiment produced the first independent check on this claim using 136Xe. This talk will report on the double beta decay results from EXO-200 and other experiments, along with prospects for future progress in this field.

CMB Non-Gaussianity from Recombination and Fingerprints of Dark Matter – Cora Dvorkin
Tue. February 26th, 2013

In this talk, I show that dark matter annihilation around the time of recombination can lead to growing ionization perturbations that track the linear collapse of matter overdensities. This amplifies small scale cosmological perturbations to the free electron density by a significant amount compared to the usual acoustic oscillations. Electron density perturbations distort the CMB, inducing secondary non-Gaussianity and offering a means of detection by Planck and other experiments. I will present a novel analytic calculation of CMB non-Gaussianity from recombination, providing a clear identification of the relevant physical processes. I will show that, even though electron perturbations can be markedly boosted compared with the standard model prediction, …

Odd tensor modes from particle production during inflation – Lorenzo Sorbo
Tue. December 4th, 2012

Several mechanisms can lead to the production of particles during primordial inflation. I will review how such a phenomenon occurs and discuss how it can lead to the generation of tensor modes with unusual properties that might be detected in the not-so-far future. The gravitational waves produced this way can have a larger amplitude than in the standard scenarios, can violate parity, and their spectrum can display a feature that can be directly detected within the decade by second-generation gravitational interferometers such as advanced LIGO.
Continue reading… Odd tensor modes from particle production during inflation – Lorenzo Sorbo Advances in Solving the Two-Body Problem in General Relativity: Implications for the Search of Gravitational Waves – Alessandra Buonanno Tue. November 20th, 2012 Compact binary systems composed of black holes and neutron stars are among the most promising sources for ground-based gravitational-wave detectors, such as the Laser Interferometer Gravitational Wave Observatory (LIGO) and its international partners. A detailed and accurate understanding of the shape of the gravitational waves is crucial not only for the initial detection of such sources, but also for maximizing the information that can be obtained from the gravitational-wave signals once they are observed. In this talk I will review progresses at the interface between analytical and numerical relativity. These advances have deepened our understanding of the two-body problem in general relativity, Continue reading… Advances in Solving the Two-Body Problem in General Relativity: Implications for the Search of Gravitational Waves – Alessandra Buonanno Effective Field Theory for Fluids – Rachel Rosen Tue. November 13th, 2012 In this talk I will present the low-energy effective field theory that describes the infrared dynamics of non-dissipative fluids. In particular, I will use the techniques of non-linear realizations developed by Callan, Coleman, Wess and Zumino, and Volkov to construct the effective theory based on the symmetry-breaking pattern of the fluid. I will discuss how this formalism can be used to incorporate quantum anomalies into the effective field theory. Continue reading… Effective Field Theory for Fluids – Rachel Rosen Recent Results from CDMS II and The SuperCDMS Dark-matter Program – Raymond Bunker Tue. 
November 6th, 2012

The Cryogenic Dark Matter Search experiment (CDMS II) was designed to directly detect dark matter by simultaneously measuring phonon and ionization signals caused by particle interactions in semiconductor targets, allowing event-by-event discrimination of signal from background via the relative sizes of the two signals. I'll briefly review the CDMS II experiment and then focus on recent results related to the current low-mass WIMP controversy, including data from the CoGeNT, CRESST II, and DAMA/LIBRA experiments that hint at a low-mass WIMP signal and the (similarly sensitive) low-threshold and annual-modulation analyses performed by the CDMS II collaboration. I'll also comment on the Collar and Fields likelihood analysis of the CDMS II low-energy data.

Kicking Chameleons: Early Universe Challenges for Chameleon Gravity – Adrienne Erickcek
Tue. October 9th, 2012

Chameleon gravity is a scalar-tensor theory that mimics general relativity in the Solar System. The scalar degree of freedom is hidden in high-density environments because the effective mass of the chameleon scalar depends on the trace of the stress-energy tensor. In the early Universe, when the trace of the stress-energy tensor is nearly zero, the chameleon is very light and Hubble friction prevents it from reaching its potential minimum. Whenever a particle species becomes non-relativistic, however, the trace of the stress-energy tensor is temporarily nonzero, and the chameleon begins to roll. I will show that these "kicks" to the chameleon field have catastrophic consequences for chameleon gravity.

A new window on primordial non-Gaussianity – Enrico Pajer
Tue. October 2nd, 2012

We know very little about primordial curvature perturbations on scales smaller than about a Mpc.
I review how mu-type distortion of the Cosmic Microwave Background spectrum provides the unique opportunity to probe these scales over the unexplored range from 50 to 10^4 Mpc^-1. This is a very clean probe, in that it relies only on well-understood linear evolution. While mu-distortion by itself can constrain the amount of power on small scales, correlations between mu-distortion and temperature anisotropies can be used to test Gaussianity. In particular the muT cross correlation is proportional to the very squeezed limit of the primordial bispectrum and hence measures the local f_NL, …

The Canadian Hydrogen Intensity Mapping Experiment (CHIME) – a new tool to probe the dark energy driven expansion history of the universe from z=1-3 – Matt Dobbs
Tue. September 25th, 2012

The most surprising discovery in cosmology since Edwin Hubble observed the expansion of the Universe is that the rate of this expansion is accelerating. This either signals that a mysterious Dark Energy dominates the energy density of the Universe, or that our understanding of gravity on large scales is incorrect. The Canadian Hydrogen Intensity Mapping Experiment (CHIME) will produce the largest volume astronomical survey to date, potentially unlocking the mysteries of the dark-energy driven expansion history of the Universe. The CHIME telescope forms an image of the entire overhead sky each night by digitally processing the information received on a compact array of 2500 radio receivers.

Non-Gaussianity from general inflationary states – Nishant Agarwal
Tue. September 18th, 2012

I will describe the effects of non-trivial initial quantum states for inflationary fluctuations within the context of the effective field theory for inflation.
We find that besides giving rise to large non-Gaussianities from inflation, general initial states can also have interesting implications for the consistency relation of the bispectrum. In addition, they leave a distinct observable signature on the scale-dependence of the bias of dark matter halos. I will also discuss constraints on the initial state from current large scale structure data, including luminous red galaxies and quasars in the Sloan Digital Sky Survey sample.

Boosting the Universe: Observational consequences of our motion – Amanda Yoho
Tue. September 11th, 2012

The Cosmic Microwave Background (CMB), photons from the earliest epoch that are able to free stream towards us, provides a unique opportunity to learn about many properties of the universe we live in. Already, the temperature fluctuations of the CMB have been studied by the Wilkinson Microwave Anisotropy Probe (WMAP) and have allowed many cosmological parameters to be pinned down to within a percent error. However, there are many more mysteries to be uncovered by precise measurements of the polarization of these CMB photons and of weak lensing fields. Only with a robust understanding of the possible contaminants and astrophysical effects that can deform the measured fields will we be able to accurately characterize which models are favored over others.

The interplay between high and low redshift universe – Azadeh Moradinezhad Dizgah
Tue. September 4th, 2012

Supersymmetry, Naturalness, and the LHC: Where Do We Stand? – Matthew Reece
Tue. May 1st, 2012

The LHC has accumulated a large luminosity and has already begun ruling out a wide range of theoretical scenarios.
I will discuss the theoretical implications of current LHC searches for supersymmetry and the first tentative Higgs measurements. In particular, I will assess the current status of SUSY naturalness, and explain some ways in which searches for the scalar top quark might help to further constrain the parameter space.

Gravitational Wave Detection with Pulsars: the NANOGrav collaboration – Dan Stinebring
Tue. April 24th, 2012

The effort to detect long-wavelength gravitational waves with a pulsar timing array (PTA) is progressing well, with three major international groups intensifying their efforts and increasingly sharing data and techniques. *Your* PTA, the North American Nanohertz Observatory for Gravitational waves (NANOGrav), is making excellent progress. I will report on our recent results and also comment on my group's specialty, the effort to remove time-variable propagation delays through the ionized interstellar medium.

Hunting for de Sitter vacua in the String Landscape – Gary Shiu
Tue. April 17th, 2012

Results from observational cosmology suggest that our universe is currently accelerating. The simplest explanation is that we are living in a universe with a positive cosmological constant. In this talk, I will describe some recent attempts at constructing such solutions in string theory and discuss the difficulties one encounters in finding metastable de Sitter vacua. Thus, the requirement of positive cosmological constant and stability imposes strong constraints on the string theory landscape.

Bosonic and Fermionic Non-thermal Dark Matter Isocurvature Perturbations and Non-Gaussianities – Daniel Chung
Tue.
April 10th, 2012

Dark matter candidates in a broad class of non-thermal models produce primordial isocurvature perturbations and non-Gaussianities. We discuss the model dependence of such scenarios. In particular, fermionic superheavy dark matter requires non-gravitational interactions to be observationally interesting. We also present a general mathematical result regarding the cross correlation between the primordial isocurvature perturbations and curvature perturbations. This last result is of general interest for isocurvature phenomenology.

Ghost-free multi-metric interactions
Tue. April 3rd, 2012

The idea that the graviton may be massive has seen a resurgence of interest due to recent progress which has overcome its traditional problems. I will review this recent progress, and show how the theory can be extended to write consistent interactions coupling together multiple massive spin-2 fields.

Chromo-Natural Inflation – Peter Adshead
Tue. March 27th, 2012

I will describe a new model for inflation – Chromo-Natural Inflation – consisting of an axionic scalar field coupled to a set of three non-Abelian gauge fields. The model's novel requirement is that the gauge fields begin inflation with a rotationally invariant vacuum expectation value (VEV) that is preserved through identification of SU(2) gauge invariance with rotations in three dimensions. The gauge VEV interacts with the background value of the axion, leading to an attractor solution that exhibits slow-roll inflation even when the axion decay constant has a natural value (less than M_Pl). Assuming a sinusoidal potential for the axion, …

Testing the concordance cosmology with weak gravitational lensing – Ali Vanderveld
Tue.
March 20th, 2012

Weak gravitational lensing, whereby the images of background galaxies are distorted by foreground matter, can be a powerful cosmological probe if systematics are sufficiently controlled. In particular, I will show how we may use weak lensing to robustly test the standard cosmological constant-dominated "concordance model" of cosmology by using in-hand expansion history data to make predictions for future observations. I will then discuss one recent proposal for economically gathering the necessary data while minimizing systematics: the balloon-borne High Altitude Lensing Observatory (HALO).

An estimator for statistical anisotropy from the CMB bispectrum – Ema Dimstrogiovanni
Tue. February 28th, 2012

Various data analyses of the Cosmic Microwave Background (CMB) radiation present anomalous features that can be interpreted as indications of statistical isotropy breaking. Some models of inflation involving vector fields predict statistical anisotropy in the correlation functions of primordial curvature perturbations. We employ a simplified vector field model and parametrize the bispectrum of curvature fluctuations in such a way that all the information about statistical anisotropy is encoded in some coefficients lambda_{LM} (representing the ratio of the anisotropic to the isotropic bispectrum amplitudes). We compute an optimal estimator for these coefficients and their Fisher error. We predict a sensitivity for an experiment like Planck to the ratio of anisotropic to isotropic amplitudes of about 10% if f_NL is around 30.

Local Primordial non-Gaussianity in Large-scale Structure – Marilena LoVerde
Tue.
February 21st, 2012

Primordial non-Gaussianity is among the most promising of the few observational tests of physics at the inflationary epoch. At present non-Gaussianity is best constrained by the cosmic microwave background, but in the near term large-scale structure data may be competitive so long as the effects of primordial non-Gaussianity can be modeled through the non-linear process of structure formation. I will discuss recent work modeling effects of a few types of primordial non-Gaussianity on the large-scale halo clustering and the halo mass function. More specifically, I will compare analytic and N-body results for two variants of the curvaton model of inflation: (i) a "tau_NL" …

Inflation, or What? – William Kinney
Tue. February 14th, 2012

Cosmological inflation is the leading candidate theory for the physics of the early universe, and is in beautiful agreement with current cosmological data such as the WMAP Cosmic Microwave Background measurement. I consider alternatives to inflation with a critical eye, and present a simple argument showing that any model which matches the observed universe must have one of three properties: (1) accelerated expansion, (2) speed of sound faster than the speed of light, or (3) super-Planckian energy density.

Quantum Kinetics and Thermalization of Hawking Radiation – Dmitry Podolsky
Tue. February 7th, 2012

Hawking's discovery of black hole radiance along with Bekenstein's conjecture of the generalized second law of thermodynamics inspired a conceptually pleasing connection between gravity, thermodynamics and quantum theory. However, the discovery that the spectrum of the radiation is in fact thermal, together with the no-hair theorem, has brought along with it some undesirable consequences, most notably the information loss paradox.
There have been many proposals for the resolution of this paradox, the most natural being that during the time of collapse the radiation given off is not completely thermal and can carry small amounts of information with it.

Condensates and quasiparticles in inflationary cosmology – Daniel Boyanovsky
Mon. February 6th, 2012

Correlation functions during inflation feature infrared effects that could undermine a perturbative study. I will discuss self-consistent mechanisms of mass generation that regulate infrared physics, and introduce a method based on quantum optics to obtain the decay width of quantum states. Lack of energy conservation entails that EVERY particle acquires a width as a result of emission and absorption of superhorizon quanta, thus becoming a "quasiparticle". BLACKBOARD TALK

Gravitational Waves from Cosmological Phase Transitions – Tom Giblin
Tue. January 31st, 2012

Cosmological phase transitions occurred. I will talk about recent advances in modeling possible phase transitions when these transitions are mediated by scalar fields. I will discuss first- and second-order transitions, at various scales, and show how we can compute the background of stochastic gravitational waves produced during (and after) these transitions.

Spatially Covariant Theories of a Transverse, Traceless Graviton – Godfrey Miller
Tue. January 24th, 2012

General relativity is a generally covariant, locally Lorentz covariant theory of two transverse, traceless graviton degrees of freedom. According to a theorem of Hojman, Kuchar, and Teitelboim, modifications of general relativity must either introduce new degrees of freedom or violate the principle of local Lorentz covariance.
In this paper, we explore modifications of general relativity that retain the same graviton degrees of freedom, and therefore explicitly break Lorentz covariance. Motivated by cosmology, the modifications of interest maintain explicit spatial covariance. In spatially covariant theories of the graviton, the physical Hamiltonian density obeys an analogue of the renormalization group equation which encodes invariance under flow through the space of conformally equivalent spatial metrics.

Dark matter bounds from direct and indirect searches – Aravind Natarajan
Tue. November 22nd, 2011

I discuss ways of constraining dark matter properties using a combination of direct and indirect dark matter measurements. The DAMA, CoGeNT, and CRESST experiments have obtained tentative evidence for low mass WIMPs. I show that the CMB is a clean probe of low mass WIMPs, and that the WMAP+SPT measurements place competitive bounds on light WIMPs. I discuss how these dark matter bounds may be further improved by including other data sets, such as counts of galaxy clusters.

Light does not always travel on the light cone – Yi-Zen Chu
Tue. November 15th, 2011

Massless particles such as photons and gravitons do not travel solely on the null cone in a generic curved spacetime. They propagate at all speeds equal to and less than c. This fact does not appear to be well appreciated in cosmology, and its consequences deserve to be worked out to ensure we are interpreting observations correctly. A rather dramatic (and hypothetical) example would be the following: suppose a significant fraction of photons from a distant supernova travels slower than c; then we may be misled into thinking the SN is dimmer than it actually is, because some of the light has not arrived yet.
Holographic Quantum Quench – Sumit Das
Fri. November 11th, 2011

The holographic correspondence between non-gravitational field theories and gravitational theories in one higher dimension can be used to study non-equilibrium behavior of strongly coupled quantum field theories. One such phenomenon is that of quantum quench, where a coupling of the field theory is time dependent and typically asymptotes to constants at early and late times. In the gravity dual this can describe, under suitable circumstances, either black hole formation, or passage through a spacelike region of high curvature similar to a cosmological singularity. On one hand this has taught us about the meaning of cosmological singularities, while on the other hand it has thrown light on the process of thermalization in strongly coupled field theories.

A Paradise Island for Deformed Gravity – Florian Kuehnel
Tue. November 8th, 2011

I will discuss our recently-proposed model (hep-th/1106.3566) of deformations of general relativity that are consistent and potentially phenomenologically viable, since they respect cosmological backgrounds. These deformations have unique symmetries in accordance with unitarity requirements, and give rise to a curvature-induced self-stabilizing mechanism. Furthermore, our findings include the possibility of consistent and potentially phenomenologically viable deformations of general relativity that are solely operative on curved spacetime geometries, reducing to Einstein's theory on the Minkowski background. I will also comment on possible phenomenological implications.

Measuring the dark sector with clusters of galaxies – Douglas Clowe
Tue.
November 1st, 2011

Since Zwicky (1933), we have known that clusters of galaxies have gravitational potentials which are too large to be explained by the amount of visible baryons under the assumption of a Newtonian gravitational force law. This has led to competing hypotheses that either the masses of clusters are dominated by a non-baryonic form of matter or that gravity departs from a 1/r^2 force law on cluster scales. By using merging clusters of galaxies, I will show that the different types of matter in the clusters can be spatially separated and, by using gravitational lensing, I will prove, independent of any assumptions about the nature of the law of gravity, …

Carving Out the Space of Conformal Field Theories – David Simmons-Duffin
Fri. October 28th, 2011

Conformal Field Theories (CFTs) are theories that are symmetric under changes of distance scale, like a fractal or a Russian doll. They are basic building blocks of more general Quantum Field Theories, which can describe how nature works at its most fundamental level. Despite their importance, the range of possible behavior in CFTs is poorly understood, and often the most interesting theories resist calculation with conventional perturbative methods. However, over the last few years, new techniques have emerged for mapping out the space of these important theories. I'll explain how to use basic mathematical consistency conditions and techniques from optimization theory (a subfield of computer science), …

Understanding Chameleon Scalar Fields via Electrostatic Analogy – Kate Jones-Smith
Tue. October 18th, 2011

The late-time accelerated expansion of the universe could be caused by a scalar field that is screened on small scales, as in chameleon or symmetron scenarios.
We present an analogy between such scalar fields and electrostatics, which allows calculation of the chameleon field profile for general extended bodies. Interestingly, the field demonstrates a 'lightning rod' effect, where it becomes enhanced near the ends of a pointed or elongated object. Drawing from this correspondence, we show that non-spherical test bodies immersed in a background field will experience a net torque caused by the scalar field. This effect, with no counterpart in the gravitational case, …

How Asymmetric Dark Matter May Alter the Conditions of Stardom – Andrew Zentner
Tue. September 27th, 2011

Numerous recent experimental results have reinforced interest in a class of models dubbed "Asymmetric Dark Matter" (ADM), in which the relic dark matter density results from a particle-antiparticle asymmetry. Early models of this sort were invoked to explain the fact that the cosmic baryon and dark matter densities are of the same order, yet in the standard cosmology they are produced by distinct physical processes. In such models, the relic dark matter density results from an asymmetry (perhaps dark matter carries B-L charge), so there are no contemporary cosmic dark matter annihilations and no opportunity for indirect detection. Otherwise, these scenarios give essentially the same cosmological predictions as the standard weakly-interacting massive particle/cold dark matter paradigm, …

How the genome folds – Erez Lieberman Aiden
Fri. September 23rd, 2011

I describe Hi-C, a novel technology for probing the three-dimensional architecture of whole genomes by coupling proximity-based ligation with massively parallel sequencing.
Working with collaborators at the Broad Institute and UMass Medical School, we used Hi-C to construct spatial proximity maps of the human genome at a resolution of 1 Mb. These maps confirm the presence of chromosome territories and the spatial proximity of small, gene-rich chromosomes. We identified an additional level of genome organization that is characterized by the spatial segregation of open and closed chromatin to form two genome-wide compartments. At the megabase scale, the chromatin conformation is consistent with a fractal globule, …

Lumps and bumps in the early universe: (p)reheating and oscillons after inflation – Mustafa Amin
Tue. September 20th, 2011

Our understanding of the universe between the end of inflation and the production of light elements is incomplete. How did inflation end? What did the universe look like at the end of inflation? In this talk, I will discuss the different scenarios of (p)reheating: particle production at the end of inflation. I will then concentrate on a particular scenario: the fragmentation of the inflaton into localized, long-lived excitations of the inflaton field (oscillons), which end up dominating the energy density of the universe if couplings to other fields are weak. Oscillons are produced in a large class of inflationary models which are theoretically well motivated and observationally consistent with the cosmic microwave background anisotropies.

Massive gravitons and enhanced gravitational lensing – Mark Wyman
Tue. April 26th, 2011

The mystery of dark energy suggests that there is new gravitational physics at low energies and on long length scales. On the other hand, low mass degrees of freedom in gravity are strictly limited by observations within the solar system.
A compelling way to resolve this apparent contradiction is to add a galilean-invariant scalar field to gravity. Called galileons, these scalars have strong self-interactions near overdensities, like the solar system, that suppress their effects on the motion of massive particles. These non-linearities are weak on cosmological scales, permitting new physics to operate. Extending galilean invariance to the coupling of galileons to stress-energy, …

Learning about Aspects of Clusters and Cosmology from Weak and Strong Gravitational Lensing Approaches – Mandeep Gill
Tue. April 12th, 2011

I will cover several aspects of current astrophysics that can be probed by various regimes of lensing in simulations and data – from galaxy cluster substructure to what we can learn about cosmology from cluster weak lensing ensembles. Further, I will introduce a new approach to extracting information from strongly lensed arc images that I have been involved with in recent times, which is model-independent, has the potential to revolutionize approaches to strong lensing analyses, and is very complementary to weak lensing analyses. I will further briefly discuss initial lensing results from already-taken data of 6 clusters from the Large Binocular Telescope in Arizona, …

Thick-wall tunneling in a piecewise linear and quadratic potential – Pascal Vaudrevange
Tue. April 12th, 2011

After reviewing the basics of Coleman-De Luccia tunneling, especially in the thin-wall limit, I discuss an (almost) exact tunneling solution in a piecewise linear and quadratic potential. A comparison with the exact solution for a piecewise linear potential demonstrates the dependence of the tunneling rate on the exact shape of the potential.
Finally, I will mention applications when determining initial conditions for inflation in the landscape. Based on arXiv:1102.4742 [hep-th].

Gravitational wave astronomy in the next decade – Xavier Siemens
Tue. April 5th, 2011

In the next decade two types of gravitational wave experiments are expected to result in the direct detection of gravitational waves: advanced ground-based interferometric detectors and pulsar timing experiments. In my talk I will describe both types of experiments and their sensitivities to various types of gravitational wave sources. I will also discuss some of the impacts of these experiments on astronomy and cosmology.

Testing Dark Energy with Massive Galaxy Clusters – Michael Mortonson
Tue. March 29th, 2011

Existing observations of the cosmic expansion history place strong restrictions on the rate of large scale structure growth predicted by various dark energy models. In the simplest Lambda CDM scenario, current observations enable percent-level predictions of growth, which can be interpreted in terms of the expected abundance of massive galaxy clusters at high redshift. I will show that these predictions from current data set a firm upper limit on the cluster abundance in the more general class of quintessence models where dark energy is a canonical, minimally-coupled scalar field. While the most massive clusters known today appear to lie just below this limit, …

New observational power from halo bias – Sarah Shandera
Tue. March 22nd, 2011

Non-Gaussianity of the local type will be particularly well constrained by large scale structure through measurements of the power spectra of collapsed objects.
Motivated by properties of early universe scenarios that produce observationally large local non-Gaussianity, we suggest a generalized local ansatz and perform N-body simulations to determine the signatures in the bias of dark matter halos. The ansatz introduces two bispectral indices that characterize how the local non-Gaussianity changes with scale, and these generate two new signals in the bias. While analytic predictions agree qualitatively with the simulations, we find numerically a stronger observational signal than expected, which suggests that a better analytic understanding is needed to fully explain the consequences of primordial non-Gaussianity.

Constraining the cosmic growth history with large scale structure – Rachel Bean
Tue. March 15th, 2011

We consider how upcoming large scale structure surveys, measuring galaxy weak lensing, position and peculiar velocity correlations, in tandem with the CMB temperature anisotropies, will constrain dark energy when both the expansion history and growth of structure can be modified, as might arise if cosmic acceleration is due to modifications to GR. We consider an equation of state figure of merit parameter, and analogous figure of merit parameters for modified gravity, to quantify the relative constraints from CMB, galaxy position, lensing, and peculiar velocity observations and their cross correlations, independently and in tandem.

What to do with 350,000 astronomers – Chris Lintott
Fri. February 18th, 2011

Since its launch in 2007, the Galaxy Zoo project has involved hundreds of thousands of volunteers in the morphological classification of galaxies.
Project PI Chris Lintott will review the results – which include a new understanding of the importance of red spirals – and their implications for our understanding of galaxy formation. The project has now expanded to include tasks ranging from discovering planets through to lunar classification, and the talk will also discuss the potential of this 'citizen science' method to help scientists cope with massive modern data sets.

Astrophysics with Gravitational-Wave Detectors – Vuk Mandic
Tue. February 8th, 2011

Gravitational waves are predicted by the general theory of relativity to be produced by accelerating mass systems with a quadrupole moment. The amplitude of gravitational waves is expected to be very small, so the best chance of their direct detection lies with some of the most energetic events in the universe, such as mergers of two neutron stars or black holes, supernova explosions, or the Big Bang itself. I will review the status of current gravitational-wave detectors, such as the Laser Interferometer Gravitational-wave Observatory (LIGO), as well as some of the most recent results obtained using LIGO data. I will also discuss plans and expectations for the future generations of gravitational-wave detectors.

New and Old Massive Gravity – Claudia de Rham
Tue. February 1st, 2011

A new method for cosmological parameter estimation from Supernovae Type Ia data – Marisa March
Tue. January 18th, 2011

We present a new methodology to extract constraints on cosmological parameters from SNIa data obtained with the SALT lightcurve fitter. The power of our Bayesian method lies in its full exploitation of relevant prior information, which is ignored by the usual chi-square approach.
Using realistic simulated data sets, we demonstrate that our method outperforms the usual chi-square approach two-thirds of the time. A further benefit of our methodology is its ability to produce a posterior probability distribution for the intrinsic dispersion of SNe. This feature can also be used to detect hidden systematics in the data.

K-essence Interactions with Neutrinos: Flavor Oscillations without Mass – Christopher Gauthier
Tue. December 7th, 2010
In this talk we discuss a novel means of coupling neutrinos to a Lorentz-violating background k-essence field. K-essence is a model of dark energy which uses a non-canonical scalar field to drive the late-time accelerated expansion of the universe. We propose that neutrinos couple to the k-essence induced metric rather than the space-time metric. The immediate effect of this is to modify the energy-momentum relation of the neutrino. This implies that the neutrino velocity will in general differ from the speed of light, even if the neutrino is massless. Later we will see that k-essence can also induce neutrino oscillations even without a neutrino mass term.

Light from Cosmic Strings – Tanmay Vachaspati
Tue. November 16th, 2010

Testing the No-Hair Theorem with Astrophysical Black Holes – Dimitrios Psaltis
Tue. November 2nd, 2010
The Kerr spacetime of spinning black holes is one of the most intriguing predictions of Einstein's theory of general relativity. The special role this spacetime plays in the theory of gravity is encapsulated in the no-hair theorem, which states that the Kerr metric is the only realistic black-hole solution of the vacuum field equations. Recent and anticipated advances in the observations of black holes throughout the electromagnetic spectrum have secured our understanding of their basic properties while opening up new opportunities for devising tests of the Kerr metric. In this talk, I will show how imaging and spectroscopic observations of accreting black holes with current and future instruments can lead to the first direct test of the no-hair theorem with an astrophysical object.

Cosmological Constraints from Peculiar Velocities – Arthur Kosowsky
Fri. October 29th, 2010
Peculiar velocities of galaxies and clusters are induced during the formation of structure in the universe via gravitational forces. As such, they provide a potentially powerful route to constraining both the growth of structure and the expansion history of the universe. Traditional methods of velocity determination have not yet been able to measure velocities at cosmological distances with sufficient accuracy to allow cosmological constraints. I will discuss two possible methods of measuring peculiar velocities: directly via the kinematic Sunyaev-Zeldovich effect for galaxy clusters, and using distance measurements of type-Ia supernovae in future large surveys. I will discuss measurement prospects, and show that upcoming probes of mean pairwise velocity will have the potential to place significant constraints on both dark energy and modifications of gravity while limiting systematic errors.

IR issues in Inflation – Richard Holman
Fri. October 15th, 2010
I review some problems involving IR divergences in de Sitter space that give rise to behavior such as secular growth of fluctuations, and discuss the use of the Dynamical Renormalization Group as a tool to resum and reinterpret these divergences.
Time permitting, I'll also discuss some more recent work on the breakdown of the semiclassical approximation in de Sitter space.

The Angular Distribution of the Highest-Energy Cosmic Rays – Andrew Jaffe
Tue. October 12th, 2010

Bulk viscosity and the damping of neutron star oscillations – Mark Alford
Fri. October 8th, 2010
How do we learn about the phases of matter beyond nuclear density? They are to be found only in the interior of neutron stars, which are inaccessible and hard to observe. One approach is through the oscillations of neutron stars, which depend on the viscosity of their interior. If the viscosity is low enough then "r-mode" oscillations arise spontaneously and cause the star to spin down. Finding fast-spinning stars therefore puts limits on the viscosity and hence on the possible phases present in the interior of the star. This talk discusses non-linear effects which arise for large amplitude "suprathermal" …

CMB in a Box – Raul Abramo
Tue. September 28th, 2010
First, I will show that the line-of-sight solution to cosmic microwave anisotropies in Fourier space, even though formally defined for arbitrarily large wavelengths, leads to position-space solutions which only depend on the sources of anisotropies inside the past light-cone of the observer. This happens order by order in a series expansion in powers of the visibility function. Second, I will show that the Fourier-Bessel expansion of the physical fields (including the temperature and polarization momenta) is superior to the usual Fourier basis as a framework to compute the anisotropies. In that expansion, for each multipole $l$ there is a discrete tower of momenta $k_{i,l}$ (not a continuum) which can affect physical observables, …

Does Quantum Mechanics Imply Gravity? – Harsh Mathur
Tue. September 21st, 2010

Galileon Inflation and Non-Gaussianities – Andrew Tolley
Tue. September 7th, 2010
I will discuss a new class of inflationary models based upon the idea of Galileon fields, scalar fields that exhibit non-linearly realized symmetries. These models predict distinctive non-Gaussian features in the primordial power spectrum, and I will discuss how they relate to, and can be distinguished from, canonical inflation, k-inflation, ghost inflation, and DBI-inflationary models.

Michelson Lectures — High-Energy Physics with Low-Energy Symmetry Studies – David Hanneke
Fri. May 14th, 2010
Discrete symmetries — charge conjugation (C), parity inversion (P), time reversal (T), and their combinations — provide insight into the structure of our physical theories. Many extensions to the Standard Model predict symmetry violations beyond those already known. Since the first evidence of P-violation in the 1950s using cold atoms, low-energy, high-precision experiments have quantified existing violations and constrained further ones. In this lecture, I will describe several searches for discrete symmetry violations with low-energy experiments. T-violation, closely related to matter/antimatter asymmetry through the CPT theorem, is tightly constrained by searches for intrinsic electric dipole moments.
CPT-violation — the only combination of these symmetries obeyed by the entire Standard Model — …

Michelson Lectures — Cavity Control in a Single-Electron Quantum Cyclotron: An Improved Measurement of the Electron Magnetic Moment – David Hanneke
Thu. May 13th, 2010
Measurements of the electron magnetic moment (the "g-value") probe the electron's interaction with the fluctuating vacuum. With a quantum electrodynamics calculation, they provide the most accurate determination of the fine structure constant. Comparisons with independent determinations of the fine structure constant are the most precise tests of quantum electrodynamics and probe extensions to the Standard Model of particle physics. I will present two new measurements of the electron magnetic moment. The second, at a relative uncertainty of 0.28 parts per trillion, yields a value of the fine structure constant with a relative accuracy of 0.37 parts per billion, an uncertainty over 10 times smaller than that of the next-best methods.

Michelson Lectures — Optical Atomic Clocks – David Hanneke
Tue. May 11th, 2010
The most precise measurement techniques involve time, frequency, or a frequency ratio. For example, for centuries, accurate navigation has relied on precise timekeeping — a trend that continues with today's global positioning system. After briefly reviewing the current microwave frequency standards based on the hyperfine structure of cesium, I will describe work towards atomic clocks working at optical frequencies. Among these are standards based on trapped ions or on neutral atoms trapped in an optical lattice. A frequency comb allows the comparison of different optical frequencies and the linking of optical frequencies to more easily counted microwave ones. Though still in the basic research stage, …

Michelson Lectures — Entangled Mechanical Oscillators and a Programmable Quantum Computer: Adventures in Coupling Two-Level Systems to Quantum Harmonic Oscillators – David Hanneke
Mon. May 10th, 2010
The two-level system and the harmonic oscillator are among the simplest systems analyzed with quantum mechanics, yet they display a rich set of behaviors. Quantum information science is based on manipulating the states of two-level systems, called quantum bits or qubits. Coupling two-level systems to harmonic oscillators allows the generation of interesting motional states. When isolated from the environment, trapped atomic ions can take on both of these behaviors. The two-level system is formed from a pair of internal states, which lasers efficiently prepare, manipulate, and read out. The ions' motion in the trap is well described as a harmonic oscillator and can be cooled to the quantum ground state.

Cosmological Bubbles and Solitons: A Classic(al) Effect – Tom Giblin
Tue. April 27th, 2010
Cosmological bubble collisions arising from first-order phase transitions are a generic consequence of the Eternal Inflation scenario. I will present our computational strategy for generating and evolving these bubbles in 3+1 dimensions and in a self-consistently expanding background. I will show the existence of classical field transitions – the classical nucleation of bubbles during collisions – which can dramatically alter the canonical description of eternal inflation.
CP Violation in Bs->J/psi phi: Evidence for New Physics? – Karen Gibson
Tue. April 13th, 2010
CP violation in the Bs->J/psi phi system has been one of the most discussed topics in particle physics in the past two years, in large part due to anomalously high, although statistically limited, measurements of the CP-violating phase made by the Tevatron experiments. The measurement of this CP phase has been a highlight of the late Run II Tevatron physics effort, and its precise determination is the flagship analysis of the LHCb program. I will present the physics interest in CP violation in the Bs system, and give an overview of the past and present results from the Tevatron experiments, …

Quantum Effects in Gravitational Collapse of a Reissner-Nordström Domain Wall
Tue. April 6th, 2010
We will investigate the formation of RN black holes by studying the collapse of a charged, spherically symmetric domain wall. Utilizing the Functional Schrödinger formalism, we will also investigate time-dependent thermodynamic properties of the collapse and compare with the well-known theoretical results.

String theory cosmic strings – Dimitri P. Skliros
Tue. March 30th, 2010
I will discuss the first construction of coherent states in the covariant formalism for both open and closed strings, with applications to cosmic strings in mind. Furthermore, I provide an explicit map that relates three different descriptions of cosmic strings: classical strings, lightcone gauge quantum states, and covariant vertex operators. I will then go on to discuss applications and future directions: string amplitude computations with such vertices and, in particular, decays of (the phenomenologically promising) cosmic strings with non-degenerate cusps in a framework that naturally incorporates the effects of gravitational backreaction. Partly based on: http://arxiv.org/abs/0911.5354

Tunneling in Flux Compactifications – Jose Blanco-Pillado
Tue. March 23rd, 2010
We identify instantons representing several different transitions in a field theory toy model for string theory flux compactifications and describe the observational signatures of such processes.

Primordial magnetic fields: evolution and observable signatures – Tina Kahniashvili
Tue. March 16th, 2010
I will discuss the evolution of the primordial magnetic field, accounting for MHD instabilities in the early Universe. I will address different cosmological signatures of the primordial magnetic fields and will discuss the observational tests to limit the amplitude and correlation length of the magnetic fields, as well as their detection prospects.

ArDM Experiment – Carmen Carmona
Tue. March 2nd, 2010
The Argon Dark Matter (ArDM) project aims at operating a large noble-liquid detector to search for direct evidence of Weakly Interacting Massive Particles (WIMPs) as Dark Matter in the universe. It consists of a one-ton liquid argon detector able to independently read out ionization charge and scintillation light. I will describe the experimental concept and the physics performance of the ArDM experiment, which is presently under construction and commissioning on surface at CERN.
A Theory Program to Exploit Weak Gravitational Lensing to Constrain Dark Energy – Andrew Zentner
Fri. February 26th, 2010
Weak gravitational lensing is one of the most promising techniques to constrain the dark energy that drives the contemporary cosmic acceleration. I give an overview of the dark energy problem, focusing on the manner in which weak gravitational lensing can determine the nature of the dark energy. Bringing lensing constraints to fruition is challenging both observationally and theoretically. I will focus on the theoretical challenges. The most demanding of these is to make accurate predictions for the power spectrum of density fluctuations on nonlinear scales, including treatments of baryonic processes such as galaxy formation that have been neglected in much of the literature.

Shedding light on the nature of dark matter with gamma-rays – Jennifer Siegal-Gaskins
Tue. February 23rd, 2010
Detection of gamma rays from the annihilation or decay of dark matter particles is a promising method for identifying dark matter, understanding its intrinsic properties, and mapping its distribution in the universe. I will review recent results from the Fermi Gamma-ray Space Telescope and other experiments and discuss the constraints these place on particle dark matter models. I will also present a novel approach to extracting a dark matter signal from Fermi gamma-ray observations using the energy dependence of anisotropies in a sky map of the diffuse emission. The sensitivity of this technique and its prospects for robustly identifying a dark matter signal in Fermi data will be discussed.

Non-gaussianities and the Inflationary Initial State – Andrew Tolley
Fri. February 19th, 2010
The potential discovery of primordial non-gaussianities would revolutionize our understanding of early universe cosmology, giving a whole new perspective on the physics responsible for inflation. I will review the different possible physical mechanisms that can give rise to non-gaussianities, and discuss in detail those which are distinctive in telling us about the inflationary quantum state. In particular, I will show how consistency conditions coming from effective field theory can be used to constrain the level of non-gaussianity that we can hope to observe in future data.

Dark Matter via Many Copies of the Standard Model – Alex Vikman
Tue. February 16th, 2010
Recently it was realized that the strong coupling scale in gravity depends substantially on the number of different quantum fields present in nature. On the other hand, a gravity theory with an electroweak strong coupling scale could be responsible for a solution of the hierarchy problem. Consequently, it was suggested that the possible existence of very many hidden fields could stabilize the mass of the Higgs particle. In this talk I review a cosmological scenario based on the assumption that the Standard Model possesses a large number of copies. It is demonstrated that baryons in the hidden copies of the Standard Model can naturally account for the dark matter.

Hierarchy in the Phase Space and Dark Matter Astronomy – Niayesh Afshordi
Fri. February 12th, 2010
Understanding small scale structure in the dark matter distribution is important in interpreting many astrophysical observations, as well as dark matter (direct or indirect) detection searches.
With this motivation, I introduce a theoretical framework for describing the rich hierarchy of the phase space of cold dark matter haloes, due to gravitationally bound sub-structures, as well as tidal debris and caustics. I then argue that if/when we detect dark matter particles, a new era of Dark Matter Astronomy will be just around the corner.

Shading Lambda – Claudia de Rham
Tue. February 2nd, 2010
The idea of degravitation is to tackle the cosmological constant problem by modifying gravity at large distances such that a large cosmological constant does not backreact as much as anticipated from standard General Relativity. After reviewing the fundamental aspects of degravitation, I will present a new class of theories of massive gravity capable of exhibiting the degravitation behaviour. I will then comment on the stability of such models and show in the decoupling limit how theories of gravity with at least two additional helicity-0 excitations can provide a stable realization of degravitation.

Dark Matter Substructure in the Milky Way: Properties and Detection Prospects – Louie Strigari
Tue. January 26th, 2010
Cosmological observations have converged on a standard model of Lambda-Cold Dark Matter (LCDM), in which the Universe is dominated by yet unknown components of dark matter and dark energy. When confronted with observations of our own Milky Way, this theory of LCDM leads to the prediction of a significant population of bound, unseen dark matter substructures, ranging possibly from Earth mass scales up to observed dwarf galaxy mass scales. In this talk, I will discuss the theory of LCDM and substructure in the context of present and forthcoming deep galaxy surveys, and show how these observations may be used to provide detailed predictions for the abundance and mass spectrum of dark substructures.

On triviality of $\lambda\phi^{4}$ theory in $D=4$ – Dmitry Podolsky
Tue. January 19th, 2010
We introduce a new non-perturbative method suitable for analyzing scalar quantum field theories at strong coupling, based on a mapping between quantum field theories in $dS_{D}\times M_{N}$ spacetime and statistical field theories in Euclidean space $M_{N}$. Applying this method to $\lambda\phi^{4}$ theory in $dS_{D}\times E_{4}$ spacetime, we analyze the behavior of the 4-dimensional $\lambda\phi^{4}$ theory in the regime $\lambda\sim{}1$ and give a new argument in favor of triviality of the theory.

Pulsar Kicks With Active and Sterile Neutrinos – Leonard Kisslinger
Fri. December 4th, 2009
In 2007 my coworkers and I completed the calculation of the velocity given to a neutron star, in the period of 10-20 seconds after the gravitational collapse of a massive star, by active neutrinos. This year an analysis of neutrino data has shown that there exist sterile neutrinos with large mixing angles. We have calculated the velocity that the emission of such sterile neutrinos in the 0-10 second period would give to the neutron star (the pulsar). We are applying this to calculate the velocity of the neutron star that might have been formed by SN 1987A. We are also engaged in calculating sterile neutrino processes during a supernova collapse to see if the stalled shock can be unstalled.

Nongaussian Fluctuations from Particle Production During Inflation – Neil Barnaby
Tue. November 24th, 2009
In a variety of inflation models, the motion of the inflaton may trigger the production of some iso-curvature particles during inflation, for example via parametric resonance or a phase transition.
Inflationary particle production provides a new mechanism for generating cosmological perturbations (infra-red cascading) and can also slow the motion of the inflaton on a steep potential. Moreover, such models provide a novel example of non-decoupling of high scale physics during inflation. I will discuss the observational consequences of inflationary particle production, including the generation of features in the primordial power spectrum and large nongaussianities with a unique shape of bispectrum.

Gravitational Waves, Laser Interferometers and Multimessenger Astrophysics – Laura Cadonati
Tue. November 10th, 2009
The Laser Interferometer Gravitational-wave Observatory (LIGO) and its sister project Virgo are currently acquiring data, aiming at the first direct detection of gravitational waves. These elusive ripples in the fabric of space-time, carriers of information on the acceleration of large masses, are a key prediction of General Relativity; their detection will activate a fundamental, new probe into the universe. Sources of interest for LIGO/Virgo include the coalescence of compact binary systems, core-collapse supernovae and the stochastic background from the early universe, as well as multi-messenger coincident signatures with electromagnetic or neutrino counterparts. In this talk, I will present the status of ground-based gravitational wave detectors and review the most significant observational results obtained so far.

Three thoughts about black holes and cosmology – Latham Boyle
Tue. November 3rd, 2009
I will present three ideas about black holes and cosmology. First, I will discuss a way of understanding the simple patterns which emerge from the notoriously thorny numerical simulations of binary black hole merger, and some of the directions where this understanding may lead. Second, I will suggest a sequence of practical bootstrap tests designed to give sharp observational confirmation of the essential idea underlying the inflationary paradigm: that the universe underwent a period of accelerated expansion followed by a long period of decelerated expansion. Third, I will investigate a way that one might try to detect the strong bending of light rays in the vicinity of a black hole.

Using anisotropy to identify a dark matter signal in diffuse gamma-ray emission with Fermi – Jennifer Siegal-Gaskins
Tue. October 20th, 2009
Dark matter annihilation in Galactic substructure will produce diffuse gamma-ray emission of remarkably constant intensity across the sky, making it difficult to disentangle this Galactic dark matter signal from the extragalactic gamma-ray background. Recent studies have considered the angular power spectrum of the diffuse emission from various extragalactic source classes and from Galactic dark matter. I'll discuss these results and show how the energy dependence of anisotropies in the total measured diffuse emission could be used to confidently identify a signal from dark matter in Fermi data. Finally, I'll present new results demonstrating that anisotropy analysis could significantly extend the sensitivity of current indirect dark matter searches.

Measuring small scale CMB temperature and polarization anisotropies with the Atacama Cosmology Telescope – Mike Niemack Fri.
October 16th, 2009
The Atacama Cosmology Telescope (ACT) is a six-meter telescope on the Atacama plateau, Chile, that was built to characterize the cosmic microwave background (CMB) with arcminute resolution. Since 2008 ACT has been used to measure the temperature anisotropies in the CMB in three bands between 140 and 300 GHz with the largest arrays of transition-edge sensor (TES) bolometers ever fielded for CMB observations. Two of the primary science objectives for these measurements are: detecting galaxy clusters via the Sunyaev-Zel'dovich effect, which can be used to study the dark energy equation of state when combined with optical redshifts, and measuring the CMB power spectrum at high multipoles to improve constraints on cosmological parameters.

New Perspectives on Indirect, Astrophysical Dark Matter Limits – Andrew Zentner
Fri. October 9th, 2009
High-energy neutrinos from the annihilations of dark matter captured within the Sun are thought to be a relatively clean, indirect probe of dark matter physics. In addition, this probe is sensitive to the dark matter-proton cross section, so it can be used to cross-check direct searches, and it does not rely on a large annihilation cross section in order to be observed in near-term experiments such as IceCube. I will consider a modification of the standard scenario: dark matter that interacts strongly with itself, as has been proposed in several contexts. I show that viable models of self-interacting dark matter can lead to large boosts in the expected neutrino flux from the Sun, …

CMB Polarization Power Spectra from Two Years of BICEP Data – Cynthia Chiang
Tue. September 22nd, 2009
BICEP is a bolometric polarimeter designed to measure the inflationary B-mode polarization of the cosmic microwave background at degree angular scales. During three seasons of observing at the South Pole (2006–2008), BICEP mapped ~2% of the sky, chosen to be uniquely clean of polarized foreground emission. I will discuss the initial maps and angular power spectra derived from a subset of the data acquired during the first two years, and I will describe in detail the analysis methods and studies of potential systematic errors. BICEP measures the E-mode power spectrum with high precision at 21 < ell < 335 and detects the acoustic peak expected at ell ~ 140 for the first time.

Cryogenic Dark Matter Search: Current Results and Future Background Discrimination – Cathy Bailey
Tue. May 5th, 2009
The Cryogenic Dark Matter Search (CDMS) is searching for Weakly Interacting Massive Particles (WIMPs) with cryogenic germanium particle detectors. These detectors discriminate between nuclear-recoil candidate and electron-recoil background events by collecting both phonon and ionization energy from recoils in the detector crystals. The CDMS II experiment has completed analysis of the first data run with 30 semiconductor detectors at the Soudan Underground Laboratory, resulting in a world-leading WIMP-nucleon spin-independent cross section limit for WIMP masses above 44 GeV/c^2. As CDMS aims to achieve greater WIMP sensitivity, it is necessary to increase the detector mass and the discrimination between signal and background events.

String shots from a spinning black hole – Ted Jacobson
Fri. April 24th, 2009
The dynamics of relativistic current-carrying string loops moving axisymmetrically on the background of a Kerr black hole are characterized.
In one interesting type of motion, a loop can be ejected along the axis, with some internal elastic or rotational kinetic energy being converted into translational kinetic energy.

Fundamentals of the LHC – Johan Alwall
Tue. April 14th, 2009
In this introductory lecture I will present why we have built the LHC, and discuss the underlying physics of a hadron collider. This includes the fundamentals of QCD (the theory of the strong interaction), features such as jets and hadronization, and an introduction to the physics of the Standard Model, including electroweak symmetry breaking. The lecture will be concluded with a discussion of the problems with the Standard Model.

The curvaton inflationary model, non-Gaussianity and isocurvature – Maria Beltran
Tue. March 31st, 2009
The inflationary paradigm has become one of the most compelling candidates to explain the observed cosmological phenomena. However, the data is still inconclusive about the particular details of the inflationary model. Apart from the basic, single-field model, there exists a wide range of currently indistinguishable possibilities for the scalar field number, potential, and couplings during the early universe. In this talk I will review one of these extensions of the basic inflationary model, the curvaton model, where at least two scalar fields are present during inflation. I will revisit the constraints on the parameters of the model in light of the results of recent non-Gaussianity analyses and bounds on the cold dark matter isocurvature contribution in the primordial anisotropies of the CMB.

Large-Scale Structure in Modified Gravity – Roman Scoccimarro
Fri. March 27th, 2009
Cosmic acceleration may be due to modifications of general relativity (GR) at large scales, rather than dark energy. We use analytic techniques and N-body simulations to find out what observational signatures to expect in brane-induced gravity, with focus on new nonlinear effects not present in GR.

Dark Stars – Katie Freese
Tue. March 17th, 2009
We have proposed that the first phase of stellar evolution in the history of the Universe may be Dark Stars (DS), powered by dark matter heating rather than by nuclear fusion. Weakly Interacting Massive Particles, which may be their own antiparticles, collect inside the first stars and annihilate to produce a heat source that can power the stars. A new stellar phase results, a Dark Star, powered by dark matter annihilation as long as there is dark matter fuel, with lifetimes from millions to billions of years. We find that the first stars are very bright (a million times solar) diffuse puffy objects during the DS phase, …

Cascading Gravity and Degravitation – Claudia de Rham
Tue. March 3rd, 2009
Cascading gravity is an explicit realization of the idea of degravitation, where gravity behaves as a high-pass filter. This could explain why a large cosmological constant does not backreact as much as anticipated from standard General Relativity. The model relies on the presence of at least two infinite extra dimensions while our world is confined on a four-dimensional brane. Gravity is then four-dimensional at short distances and becomes weaker at larger distances.

Testing global isotropy and some interesting cosmological models with CMB – Amir Hajian
Tue. February 24th, 2009
The simplest models of the Universe predict global (statistical) isotropy on large scales in the observable Universe.
However there are a number of interesting models that predict existence of preferred directions. In this talk I will present results of using CMB anisotropy maps to test the global isotropy of the Universe on its largest scales, and will show how that can help us constrain interesting models such as topology of the Universe and anisotropic cosmological models (e.g. Bianchi models). I will also discuss the intriguing lack of power on large angular scales in the observed CMB maps and implications that it may have for cosmology. Continue reading… Testing global isotropy and some interesting cosmological models with CMB – Amir Hajian Hilltop Quintessence – Sourish Dutta Tue. February 17th, 2009 We examine hilltop quintessence models, in which the scalar field is rolling near a local maximum in the potential, and w is close to -1. We first derive a general equation for the evolution of the scalar field in the limit where w is close to -1. We solve this equation for the case of hilltop quintessence to derive w as a function of the scale factor; these solutions depend on the curvature of the potential near its maximum. Our general result is in excellent agreement (delta w < 0.5%) with all of the particular cases examined. It works particularly well (delta w < Continue reading… Hilltop Quintessence – Sourish Dutta Can the WMAP Haze really be a signature of annihilating neutralino dark matter? – Daniel Cumberbatch Tue. February 3rd, 2009 Observations by the Wilkinson Microwave Anisotropy Probe (WMAP) satellite have identified an excess of microwave emission from the centre of the Milky Way. It has been suggested that this {\it WMAP haze} emission could potentially be synchrotron emission from relativistic electrons and positrons produced in the annihilations of one (or more) species of dark matter particles. 
In this paper we re-calculate the intensity and morphology of the WMAP haze using a multi-linear regression involving full-sky templates of the dominant forms of galactic foreground emission, using two different CMB sky signal estimators. The first estimator is a posterior mean CMB map, Continue reading… Can the WMAP Haze really be a signature of annihilating neutralino dark matter? – Daniel Cumberbatch Multi-brane Inflation in String Theory – Amjad Ashoorioon Tue. January 27th, 2009 I will talk about two inflationary scenarios in which the cooperative behavior of multiple branes give rise to inflation. In the first one, which we call cascade inflation, assisted inflation is realized in heterotic M-theory and by non-perturbative interactions of N M5-branes. The features in the inflaton potential are generated whenever two M5-branes collide with the boundaries. The derived small-scale power suppression could serves as a possible explanation for the dearth of observed dwarf galaxies in the Milky Way halo. In the second one, the transverse dimension of coincident D3-branes, which are N-dimensional matrices, result in inflation. We discuss how various scenarios such as chaotic, Continue reading… Multi-brane Inflation in String Theory – Amjad Ashoorioon High temperature superfluidity in high energy heavy ion collisions at RHIC and forward physics with TOTEM at LHC – Tamas Csorgo Tue. January 13th, 2009 Five important milestones have been achieved in high energy heavy ion collisions utilitizing the Relativistic Heavy Ion Collider at BNL: – a new phenomena – which was proven to signal a new state of matter – this state of matter was found to be a perfect fluid, with temperatures reaching 2 terakelvins and more – the degrees of freedom were shown to be the quarks – and the kinematic viscosity of this matter at extemely high temperatures were found to be less than that of a superfluid 4He at the onset of superfluidity. 
I will summarize these milestones and some more recent novel results of the RHIC programme, and also outline an interesting new direction, …

Anthropy and entropy – Irit Maor
Tue. November 25th, 2008

On the Challenge to Unveil the Microscopic Nature of Dark Matter – Scott Watson
Tue. November 18th, 2008
Despite the successes of modern precision cosmology in measuring the macroscopic properties of dark matter, its microscopic nature still remains elusive. The LHC is expected to probe energies relevant for testing theories of electroweak symmetry breaking, and as a result may allow us to produce dark matter for the first time. Other indirect experiments, such as PAMELA, offer additional ways to probe the microscopic nature of dark matter through observations of cosmic rays. Results from a number of indirect detection experiments seem to suggest that our old views of the creation of dark matter may need to be revisited. This is also suggested by theories of electroweak symmetry breaking that are required to be well behaved at high energies and in the presence of gravity.

South Pole Telescope: From conception to first discovery – Zak Staniszewski
Tue. October 21st, 2008
The South Pole Telescope recently discovered three new galaxy clusters in its CMB maps via the Sunyaev–Zel'dovich (SZ) effect (Staniszewski et al. 2008). These are the first galaxy clusters discovered using this promising new technique. The number of galaxy clusters at a given redshift depends strongly on the expansion history of the universe as well as the relative abundances of matter, dark matter and dark energy during structure formation. The brightness of the SZ signal from a galaxy cluster is nearly redshift independent, making it a powerful tool for discovering galaxy clusters that were forming when dark energy was becoming important.

Primordial Nongaussianity and Large-Scale Structure – Dragan Huterer
Fri. October 17th, 2008
The near-absence of primordial nongaussianity is one of the basic predictions of slow-roll, single-field inflation, making measurements of nongaussianity fundamental tests of the physics of the early universe. I first review parametrizations of nongaussianity and briefly review the history of its measurements from the CMB and large-scale structure. I then present results from recent work in which the effects of primordial nongaussianity on the distribution of the largest virialized objects were studied numerically and analytically. We found that the bias of dark matter halos acquires a strong scale dependence in nongaussian cosmological models. Therefore, measurements of the scale dependence of the bias, using various tracers of large-scale structure, …

In Search of the Coolest White Dwarfs – Evalyn I. Gates
Tue. October 14th, 2008
Cool white dwarf stars are among the oldest objects in the Galaxy. These relics of an ancient stellar population offer a window into the early stages of the galaxy and its formation, and more data on the oldest and coolest white dwarfs may help resolve the interpretation of microlensing searches for MACHOs in the galactic halo. The Sloan Digital Sky Survey (SDSS) and the SEGUE program of SDSS-II are ideally suited to a search for these rare objects, and to date we have discovered 13 new ultracool white dwarfs – those with temperatures below 4000 K – constituting the majority of these faint stellar fossils.

The White Elephant: Upsilon Physics at the BaBar B-factory – Steve Sekula
Tue. October 7th, 2008
For a decade, the PEP-II/BaBar B-factory has been a flagship experiment in precision measurements in the flavor sector, notably in the decays of B and charm mesons. Before its shutdown in April, the B-factory took a new direction and secured the world's largest samples of Upsilon(3S) and Upsilon(2S) mesons and performed an extensive scan above the Upsilon(4S) resonance. I will talk about the motivation for this change of course and our new results in both the search for the ground state of bottomonium and the search for evidence of new physics at a low mass scale, including both the Higgs and dark matter.

Parameterizing dark energy – Zhiqi Huang
Tue. September 16th, 2008
Dark energy is parameterized by the time evolution of its equation of state w(z). For a very wide class of quintessence (and phantom) dark energy models, we parameterize w(z) with physical quantities related to the scalar field potential and initial conditions. Using a set of updated observational data including supernovae, the CMB, the galaxy power spectrum, weak lensing and the Lyman-alpha forest, we run Markov Chain Monte Carlo calculations to determine the likelihood of cosmological parameters, including the new dynamical parameters. The best-fit model is centered around the cosmological constant (flat potential), while many popular scalar field models are excluded at different levels.

The effect of dark matter halos on reionization and the H21 cm line – Aravind Natarajan
Fri. September 5th, 2008
If much of the dark matter in the Universe consists of WIMPs, their annihilation releases energy, some of which ionizes the IGM. We calculate the contribution to the optical depth due to particle annihilation in early halos. This allows us to place bounds on the dark matter particle mass. We also consider the effect of halos on the H21 cm background. It is shown that larger halos (~10^6 solar masses) contain enough hot hydrogen gas to produce a measurable H21 cm background. We present our conclusions.

Probing dark energy with cosmology – Roberto Trotta
Tue. May 6th, 2008
In order to pin down the fundamental nature of dark energy, and thus to understand what most of the Universe is actually made of, new and more precise observations are required, along with more efficient and reliable statistical techniques to interpret those observations correctly and to understand their implications for our theoretical models of the Universe. The outstanding challenge posed by the nature and properties of dark energy is giving rise to a flourishing of proposals for new observational campaigns. Type Ia supernovae, gravitational lensing, cluster counts and baryonic acoustic oscillations are some of the techniques available to study dark energy, …

Astrophysics and Particle Physics with IceCube – Tyce DeYoung
Tue. April 8th, 2008
The IceCube neutrino observatory under construction at the South Pole is designed to detect high energy (TeV–PeV) neutrino emission from astrophysical objects, such as the sources of galactic and extragalactic cosmic rays. Data is being taken with the partially built detector, now half complete with 40 strings and 2400 optical modules, and initial results are now available.
In addition to astrophysical studies, IceCube also has a broad particle physics program that will be enhanced by the addition of the IceCube Deep Core, a dense, contained subarray that will push IceCube's energy reach down to 10–20 GeV and improve its sensitivity to dark matter, …

UHECR Phenomenology – Glennys Farrar
Tue. March 18th, 2008
I will review some very general properties that must characterize any relativistic UHECR accelerator, and I will list some key observational constraints on the accelerators. In combination, these make it unlikely that any of the conventional source candidates can be solely responsible for the observed cosmic rays above about 60 EeV. I will describe a recently proposed new mechanism that is in excellent accord with the constraints and observations, and how it can be tested using UHECRs and GLAST.

Challenging the Cosmological Constant – Nemanja Kaloper
Thu. February 28th, 2008
We outline a dynamical dark energy scenario whose signatures may be simultaneously tested by astronomical observations and laboratory experiments. The dark energy is a field with slightly sub-gravitational couplings to matter, a logarithmic self-interaction potential with a scale tuned to ~10^-3 eV, as is usual in quintessence models, and an effective mass m_phi influenced by the environmental energy density. Among the signatures of this scenario may be a dark energy equation of state w not equal to -1, stronger gravity in dilute media, which may influence BBN and appear as an excess of dark matter, and sub-millimeter corrections to Newton's law, …

Observing Dark Energy with the Next Generation of Galaxy Surveys – Ofer Lahav
Tue. February 26th, 2008
The talk will discuss the design and forecasting for measuring properties of Dark Energy and Dark Matter from new deep imaging surveys, in particular the "Dark Energy Survey" and the DUNE satellite. The effect of the accuracy of photometric redshifts on the cosmological results will be assessed.

k-Essence: superluminal propagation, causality and emergent geometry – Alexander Vikman
Tue. February 19th, 2008
K-essence models – scalar field theories with non-quadratic kinetic terms – are considered candidates for dynamical dark energy and inflation. One of the most interesting features of these nonlinear theories is that perturbations around nontrivial backgrounds propagate with a speed different from the speed of light. In particular, superluminal propagation is possible. In my talk, I will review the k-essence paradigm, emphasizing the issues related to causality. I will show that superluminal propagation does not lead to any additional causal paradoxes over and above those already present in standard general relativity. I will end by presenting a model which allows one to obtain information from beyond the horizon of a black hole.

Physics Beyond the Horizon – Niayesh Afshordi
Tue. February 12th, 2008
The history of human knowledge is often highlighted by our efforts to explore beyond our apparent horizon. In this talk, I will describe how this challenge has now evolved into our quest to understand the physics at and beyond the cosmological horizon, some twenty orders of magnitude beyond Columbus's original plan. I also argue why the inflationary paradigm predicts the existence of non-trivial physics beyond the cosmological horizon, and how we can use the Integrated Sachs-Wolfe effect in the Cosmic Microwave Background to probe this physics, which includes the nature of gravity and primordial non-gaussianity on the horizon scale.

Demystifying the Large-Scale Structure and Evolution of the Cosmos – Constantinos Skordis
Tue. February 5th, 2008
In the last two decades, cosmology has undergone a revolution, with a large influx of high quality data. There is now a consensus cosmological standard model, Lambda-CDM, based on General Relativity as the theory of gravity, which requires only about 4% of the energy budget of the universe to be in known baryonic form. The rest is divided into two apparently distinct dark components: Cold Dark Matter (CDM) and the cosmological constant. The simplest explanation for CDM is a weakly interacting particle, still to be detected; the cosmological constant is the simplest term that can be added to the Einstein equations to give rise to the observed accelerated expansion of the universe, but it has no compelling explanation within our current understanding of fundamental physics.

Cosmological Unification of String Theories – Simeon Hellerman
Fri. January 18th, 2008
Recent developments have greatly extended our understanding of quantum gravity in cosmological environments. A new set of exact time-dependent solutions to the equations of motion of string theory has been found that interpolate among string theories of dramatically different character. These transitions dynamically alter features of the theory such as the degree of stability, the amount of supersymmetry, the number of dimensions of space itself, and the basic type of string. Taken together, these transitions fill out a web that unifies (almost) all known string theories into a single dynamical structure.

The Accelerating Universe: Landscape or Modified Gravity? – Sergei Dubovsky
Tue. January 15th, 2008
The most remarkable recent discovery in fundamental physics is that the Universe is undergoing accelerated expansion. Achieving a proper understanding of its physical origin forces us to make a hard choice between dynamical and environmental scenarios. The first approach predicts the existence of new long-distance physics in the gravitational sector, while the second relies on the existence of a vast landscape of vacua with different values of the cosmological constant. I will discuss the achievements and shortcomings of each approach and illustrate them with concrete examples.

Late Time Behavior of False Vacuum Decay – James Dent
Fri. December 7th, 2007
The late-time behavior of decaying states is examined with regard to its deviation from the usual exponential form of decay. We will look at the origins of this well-established result in quantum mechanics and discuss the issues that arise in a field theory setting. An increase in the survival probability of a metastable state at large times finds applications in the context of cosmology, namely with regard to eternal inflation and the string theory landscape.

What do WMAP and SDSS really tell about inflation? – Wessel Valkenburg
Tue. December 4th, 2007
We present new constraints on the Hubble function H(phi), and subsequently on the inflationary scalar potential V(phi), from WMAP 3-year data combined with the Sloan Luminous Red Galaxy survey (SDSS-LRG), using a new methodology which appears to be more generic, conservative and model-independent than in most of the recent literature, since it depends neither on the slow-roll approximation, nor on any extrapolation scheme for the potential beyond the observable e-fold range, nor on additional assumptions about the initial conditions for the inflaton velocity. Besides these new constraints, we will briefly discuss the accuracy of the slow-roll approximation in the light of present-day observations, …

Bekenstein-Sanders theory of modified gravity – Constantinos Skordis
Tue. November 27th, 2007

Gravitational Radiation from Supermassive Black Hole Binaries – Andrew Jaffe
Tue. November 20th, 2007
Evidence for Supermassive Black Holes at the centers of galaxy bulges, combined with the paradigm of hierarchical structure formation, implies the existence of binary Supermassive Black Holes. It is expected that these binaries will eventually coalesce in what would be the brightest gravitational-radiation events in the astrophysical universe. In this talk, we discuss the effect of the overall galaxy merger rate as well as dynamical processes at the centers of galaxies that might affect this scenario, in particular the so-called "final parsec problem", which indicates that a significant fraction of the binaries may stall before they can coalesce. I discuss the theoretical prospects for resolving this problem, …

Scanning Inflation – Pascal Vaudrevange
Tue. November 20th, 2007
The shapes of the primordial power spectra are the key quantities for unraveling the physics of the inflationary epoch. We propose a new framework for parametrizing the spectra of primordial scalar and tensor perturbations, stressing the statistical trajectory nature of the relevant quantities and the importance of priors, which can lead to spurious results such as an apparent detection of tensor modes. We clarify the impact of prior probabilities, demonstrate strategies to adjust the prior distributions and, as an example, investigate a model inspired by high-energy theory that exhibits intrinsic statistical elements.

Sterile neutrinos as subdominant warm dark matter – Dan Cumberbatch
Tue. November 13th, 2007
In light of recent findings which seem to disfavor a scenario with (warm) dark matter entirely constituted of sterile neutrinos produced via the Dodelson-Widrow (DW) mechanism, my colleagues and I investigated the constraints attainable for this mechanism by relaxing the usual hypothesis that the relic neutrino abundance must necessarily account for all of the dark matter. We first studied how to reinterpret the limits attainable from X-ray non-detection and Lyman-alpha forest measurements in the case that sterile neutrinos constitute only a fraction f_s of the total amount of dark matter. Then, assuming that sterile neutrinos are generated in the early universe solely through the DW mechanism, …

Baryogenesis, Electric Dipole Moments, and the Higgs Boson – Michael Ramsey-Musolf
Tue. October 30th, 2007
Explaining the predominance of visible matter over antimatter remains one of the outstanding puzzles at the interface of cosmology with particle and nuclear physics. Although the Standard Model cannot account for the matter-antimatter asymmetry, new physics at the electroweak scale may provide the solution.
In this talk, I discuss the general requirements for successful electroweak scale baryogenesis; recent theoretical work in computations of the matter-antimatter asymmetry; and implications for experimental searches for permanent electric dipole moments of the electron and neutron and for the Higgs boson at future colliders. Continue reading… Baryogenesis, Electric Dipole Moments, and the Higgs Boson – Michael Ramsey-Musolf Gravitational Breakthrough or Experimental Error? – Martin Tajmar Wed. October 24th, 2007 Accelerometer measurments indicate that a circular field is induced when the rotation rate of a Niobium superconducting ring changes. If found to be genuine, this would be the first-ever gravitational-like field induced by controllable means. The field is measured inside the ring and its magnitude and direction opposes the ring's angular acceleration. Since this observation does not match any theory, the emphasis is to carefully verify the observations. This seminar will describe the observations, experimental methods, and next-step options. This includes data from independent experiments conducted at the University of Canterbury, NZ, where the world's most accurate ring-laser-gyro was used to search for the noted effects, Continue reading… Gravitational Breakthrough or Experimental Error? – Martin Tajmar Extragalatic Cosmic Rays: a Prescription to Avoid Disaster – Corbin Covault Tue. October 16th, 2007 The origin of the highest energy cosmic rays has remained a persistent mystery for decades. Now we seem to be on the verge of getting a new handle on where in the universe these things come from. The Pierre Auger Observatory has been operating since 2004, and already we have some clear clues, including the energy spectrum and limits on photon flux that strongly suggest an extragalactic origin for the highest energy cosmic rays. 
More recently the unparalleled collecting area of Auger has been brought to bear on the question of potential correlations between particular astrophysical objects and cosmic ray arrival directions. Continue reading… Extragalatic Cosmic Rays: a Prescription to Avoid Disaster – Corbin Covault Dark matter, small-scale structure, and dwarf galaxies – Louie Strigari Tue. September 4th, 2007 The standard model of cold dark matter predicts the existence of thousands of small dark matter halos orbiting the Milky Way, and steep cusps in the central regions of dark matter halos. The low-luminosity, dark matter dominated dwarf galaxy population of the Milky Way provides an ideal laboratory for testing these predictions, and thus placing strong constraints on the nature of dark matter. I will show how present kinematic data from the galaxies tests solutions to the CDM 'missing satellites problem,' and how future astrometric data will reveal the presence of central density cores or cusps. I will also discuss how the kinematic data from these galaxies is able to provide strong constraints on the signal from cold dark matter particles annihilating into gamma-rays, Continue reading… Dark matter, small-scale structure, and dwarf galaxies – Louie Strigari String Gas Cosmology and Structure Formation – Robert Brandenberger Tue. April 24th, 2007 Understanding the very early universe is linked inextricably with understanding the resolution of cosmological singularities. I will discuss "string gas cosmology", one of the approaches making use of string theory to obtain an improved picture of the early universe cosmology. In particular, I will show that string gas cosmology can lead to a new structure formation scenario in which string thermodynamic fluctuations generate a scale-invariant spectrum of adiabatic fluctuations. 
Continue reading… String Gas Cosmology and Structure Formation – Robert Brandenberger The Origin of the Big Bang: the status of inflation after WMAP – Slava Mukhanov Fri. April 20th, 2007 I will discuss at a colloquium level the robust model independent predictions of inflation and compare these predictions with the results of the observations of the fluctuations of the cosmic mictrowave background radiation. Continue reading… The Origin of the Big Bang: the status of inflation after WMAP – Slava Mukhanov Prospects for a New Type of High Energy Physics Facility: a Muon Collider – Tom Roberts Fri. April 13th, 2007 In a few years, after Fermilab's Tevatron turns off and initial LHC results are available, the High Energy Physics community will be at a crossroads: what type of facility to consider next? Neither proton nor electron machines hold much prospect for advancing the energy frontier beyond the LHC. But recent innovations in manipulating muon beams make it possible to imagine a third type of facility for HEP: a muon collider. An energy frontier muon collider could potentially fit on the Fermilab site, opening a completely new window into fundamental particle processes. In addition to presenting the basic concept, this talk will discuss the challenges inherent in creating, Continue reading… Prospects for a New Type of High Energy Physics Facility: a Muon Collider – Tom Roberts Ongoing Mysteries in Astrophysics – Don Driscoll Wed. April 11th, 2007 We are at the brink of a Golden Age of Astrophysics with the promise of answers to many long-outstanding questions, including: What is the nature of Dark Matter? What source powers Active Galactic Nuclei? Where do Gamma-Ray Bursts come from? Where do the highest energy Cosmic Rays come from? With an unprecedented number of experiments both active and coming online, there is a real hope that many of these questions may be answered in the near future. 
I have been lucky enough to be associated with some of the world's most advanced astrophysical experiments. In this talk, I plan on detailing my life as an experimentalist and how my work has touched on some of these intriguing questions. Continue reading… Ongoing Mysteries in Astrophysics – Don Driscoll Probability in cosmology: from Bayes theorem to the anthropic principle – Roberto Trotta Tue. March 27th, 2007 Continue reading… Probability in cosmology: from Bayes theorem to the anthropic principle – Roberto Trotta EBEX, a CMB B-mode polarization experiment – Tomotake Matsumura Tue. March 20th, 2007 I present a balloon-borne cosmic microwave background (CMB) polarization experiment, E and B experiment(EBEX). EBEX is designed, i) to detect or set an upper limit (T/S less than 0.03) on the inflationary gravity-wave background polarization anisotropy signal (primordial B-mode), ii) to measure the CMB polarization anisotropy signal induced by gravitational lensing (lensing B-mode), and iii) to measure galactic dust emission (120 GHz – 450 GHz) in order to monitor foreground contamination. In this talk, I present the EBEX science goals as well as an instrument overview. In particular among a number of subsystems in EBEX, I discuss a half-wave plate polarimeter using a superconducting magnetic bearing. Continue reading… EBEX, a CMB B-mode polarization experiment – Tomotake Matsumura Warped Passages: Unravelling the Mysteries of the Universe's Hidden Dimensions Tue. March 20th, 2007 Host: NOTE: The event is free, but registration is required, at www.case.edu/events/dls/register.html Continue reading… Warped Passages: Unravelling the Mysteries of the Universe's Hidden Dimensions Voids of Dark Energy – Sourish Dutta Tue. March 6th, 2007 The present-day acceleration of the Universe is one of the greatest mysteries of modern cosmology. 
In the framework of general relativity, the expansion could be caused by either a "cosmological constant" or a dynamical dark energy component (DDE). In this talk I will describe a novel theoretical approach to distinguishing between these two possibilities, namely, via the clustering properties of DDE. By following the dynamical evolution of matter perturbations in a cosmic mix of matter and DDE, we find the very interesting result that the DDE tends to form voids in the vicinity of gravitationally collapsing matter. I will discuss these voids in detail…

Reconstructing dark energy using Maximum Entropy – Caroline Zunckel
Fri. March 2nd, 2007
Even in what has been termed an age of 'precision cosmology', certain anomalies on a range of astrophysical scales are observed and demand the existence of unseen types of matter or modifications to our current gravitational theory. In this talk the issue of the nature of the mysterious 'dark energy' is explored in a model-independent way. A maximum-entropy technique is developed and used to reconstruct the equation of state of dark energy within a Bayesian framework. The motivation for the use of the MaxEnt technique is the lack of good data points in comparison to the number of parameters required for a sufficient characterization of dark energy.

Do quantum excitations of the inflaton decay? – Cristian Armendariz-Picon
Fri. February 16th, 2007
The properties of the primordial perturbations seeded during a stage of inflation are determined by the quantum state of the inflaton. This state is usually assumed to be the "vacuum", since one expects excited states to decay into the state of lowest energy. In the talk I discuss whether this assumption holds in the presence of a short-distance cut-off.
I describe the calculation of transition probabilities between excited states and the vacuum, and discuss the implications of the results that I obtain.

Cosmic (super)strings: Gravitational wave bursts, stochastic background, and experimental constraints – Xavier Siemens
Tue. January 30th, 2007
I discuss gravitational wave experimental signatures (bursts and stochastic background) of cosmic strings. I will show burst rates that are substantially lower (about a factor of 1000) than previous estimates suggest and explain the disagreement. Initial LIGO is unlikely to detect bursts from field theoretic cosmic strings, though it may detect cosmic superstring bursts. I also compare the stochastic background produced by a network of strings with a wide range of experiments and indirect bounds. If the latest cosmic string simulation results are correct, then a large area of superstring parameter space is ruled out by pulsar timing observations.

Quantum cosmology and the conditions at birth of the universe – Serge Winitzki
Tue. January 23rd, 2007
Cosmology ultimately aims to explain the initial conditions at the beginning of time and the entire subsequent evolution of the universe. The "beginning of time" can be understood in the Wheeler-DeWitt approach to quantum gravity, where homogeneous universes are described by a Schroedinger equation with a potential barrier. Quantum tunneling through the barrier is interpreted as a spontaneous creation of a small (Planck-size) closed universe, which then enters the regime of cosmological inflation and reaches an extremely large size. After sufficient growth, the universe can be adequately described as a classical spacetime with quantum matter.
The initial quantum state of matter in the created universe can be determined by solving the Schroedinger equation with appropriate boundary conditions.

The life and death of dark matter halos: predictions for neutralino annihilation
Tue. December 12th, 2006
The concordance cosmological model predicts that structures in the Universe form via hierarchical merging, beginning with the smallest dark matter mini-halos. The mass of the smallest halo is set by the initial thermal motion of dark matter particles. After merging into larger systems and subsequent dynamical evolution, most halos lose between 50% and 99% of their mass, but an interesting fraction of dark matter remains in self-bound clumps at all mass scales. The smallest substructure has important implications for the detection of dark matter annihilation, predicted by SUSY models.

Aethereal Gravity – Brendan Foster
Tue. December 5th, 2006
Hints from quantum gravity suggest the existence of a preferred frame. One way to accommodate such a frame in general relativity without sacrificing general covariance is to couple the metric to a dynamical, timelike, unit-norm vector field – the "aether". I will discuss observational constraints on a class of such theories, with a focus on post-Newtonian effects and radiation from binary pulsar systems, and show that a subset remains viable.

The Quintessence Potential: Need for Features and Tracking? – Martin Sahlen
Tue. November 28th, 2006
We reconstruct the potential of a quintessence field from current observational data, including new supernova data, plus information from the cosmic microwave background and from baryon acoustic oscillations.
We model the potential using Padé approximant expansions as well as Taylor series, and use observations to assess the viability of the tracker hypothesis. Present data provide some insights into the shape of a presumptive quintessence potential, but also strengthen the model selection preference for the cosmological constant over evolving models. They also show some signs, though inconclusive, of favouring tracker models over non-tracker models under our assumptions.

Exploring the Dark Energy Domain – Dragan Huterer
Tue. November 21st, 2006
One of the great mysteries of modern cosmology is the origin and nature of dark energy – a smooth component that contributes about 70% of the total energy density in the universe and causes its accelerated expansion. Here I present results from a comprehensive study of a class of dark energy models, exploring their dynamical behavior using the method of flow equations and the Monte Carlo Markov Chain machinery that have previously been applied to inflationary models. I comment on the current and expected future constraints, insights into the dynamics of dark energy, figures of merit, and a classification of theoretical models.

Probing Dark Energy – Josh Frieman
Tue. November 14th, 2006

Black Hole Formation, Evaporation and the Information Loss Problem – Dejan Stojkovic
Tue. October 17th, 2006
We use the full quantum treatment to study the formation of a black hole as seen by an asymptotic observer. Using the Wheeler-DeWitt equation to describe a collapsing shell of matter (a spherical domain wall), we show that the black hole takes an infinite time to form in the quantum theory, just as in the classical treatment.
Asymptotic observers will therefore see a compact object but never see effects associated with an event horizon. To explore what signals such an observer would see, we study radiation of quantum fields in this background using two approaches: the functional Schroedinger method and an adaptation of Hawking's original calculation.

Nuclear astrophysics underground – Heide Costantini
Tue. October 3rd, 2006
Cross section measurements for quiescent stellar burning are hampered mainly by extremely low counting rates and cosmic background. Some of the main reactions of the H-burning phase have been measured at the LUNA facility (Laboratory for Underground Nuclear Astrophysics), taking advantage of the very low background environment of the underground Gran Sasso National Laboratory in Italy. The adopted experimental techniques will be presented together with the latest results on the 14N(p,g)15O reaction and the status of the ongoing 3He(4He,g)7Be experiment. Furthermore, a brief overview of the ALNA (Accelerator Laboratory for Nuclear Astrophysics underground) project, as a part of the future underground DUSEL laboratory in the USA…

Searching for double beta decay with the Enriched Xenon Observatory – Carter Hall
Tue. September 26th, 2006
Neutrinoless double beta decay has recently become a top priority for the global experimental neutrino physics program. Double beta decay has the potential to resolve the scale of the neutrino mass spectrum, and is also the only practical tool we have for understanding the particle/anti-particle nature of the neutrino. The Enriched Xenon Observatory (EXO) collaboration is developing sensitive searches for the double beta decay of Xenon-136.
Our first experiment, EXO-200, will be the largest double beta decay experiment ever attempted by an order of magnitude, and is rapidly being constructed. We are also pursuing R&D to realize a system to tag the daughter barium nucleus of the decay using the techniques of single-ion spectroscopy.

Positron annihilations at the Galactic Center: Generating more questions than answers – Hasan Yuksel
Tue. September 26th, 2006
The bulge of our Galaxy is illuminated by the 0.511 MeV gamma-ray line flux from annihilations of nonrelativistic positrons. The emission is strongly concentrated at the Galactic Center, in contrast to gamma-ray maps tracing nucleosynthesis (e.g., the 1.809 MeV line from decaying ^26Al) or cosmic ray processes (e.g., the 1-30 MeV continuum), which reveal a bright disk with a much less prominent central region. Central to resolving the origin of the positrons is the question of their injection energies, which range up to 100 MeV or even higher in recent astrophysical and exotic (requiring new particle physics) models. If positrons are generated at relativistic energies…

Michelson Postdoctoral Prize Lecture – Nicole Bell
Mon. May 1st, 2006
Astrophysical Neutrinos: Revealing Neutrino Properties at the Highest Energies

Accelerated expansion from structure formation – Syksy Rasanen
Tue. April 4th, 2006
I discuss the backreaction of inhomogeneities on the expansion of the universe. The average behaviour of an inhomogeneous spacetime is not given by the Friedmann-Robertson-Walker equations. The new terms in the exact equations hold the possibility of explaining the observed acceleration without a cosmological constant or new physics.
In particular, the coincidence problem may be solved by a connection with structure formation.

DEAP and CLEAN Detectors for Low-Energy Particle Astrophysics – Andrew Hime
Tue. March 7th, 2006
The unique properties of scintillation light in liquid neon and liquid argon make possible conceptually simple, massive, and highly sensitive detectors of low-energy solar neutrinos and cosmological dark matter. I will describe the program underway for the design and construction of two novel and complementary detectors dubbed DEAP (Dark matter Experiment with Argon and Pulse shape discrimination) and CLEAN (Cryogenic Low Energy Astrophysics with Neon).

In Search of Particle Dark Matter – Dan Hooper
Tue. February 28th, 2006
In recent years, we have learned a great deal about dark matter, but are still ignorant of its identity. The key to uncovering this mystery is likely to lie in some combination of direct and indirect detection techniques, as well as with collider experiments. In this talk, I will explore the ability of indirect detection experiments using anti-matter, neutrinos and gamma-rays to detect particle dark matter. I will summarize the current observational situation and project the reach of these endeavors in the coming years.

Galaxy Clustering in the SDSS Redshift Survey – Idit Zehavi
Tue. February 21st, 2006
The ongoing Sloan Digital Sky Survey (SDSS) is providing a wealth of information enabling extensive large-scale structure studies. I will present measurements of galaxy clustering with the SDSS redshift survey, using a sample of about 200,000 galaxies, and concentrating on the two-point correlation function.
The SDSS is particularly suitable for investigating the dependence of clustering on galaxy properties, and we focus on the dependence on color and on luminosity. We interpret the measurements using contemporary models of galaxy clustering, which help to elucidate the features of the observed correlation functions and provide insights on galaxy formation and the relation of galaxies and dark matter.

Cosmogenic Radioisotopes in Low Background Experiments – The WARP Experiment at Gran Sasso – Cristiano Galbiati
Tue. January 24th, 2006
I will discuss results from recent studies on the production of radioisotopes by muon-induced showers in neutrino detectors located deep underground. Cosmogenic radioisotopes represent one of the most significant and important classes of background for experiments on solar neutrinos. I will show how a detailed understanding of the production mechanisms of the radioisotopes can help in opening new windows of observation for low energy solar neutrinos (in particular, pep neutrinos). I will also review the status and the plans for the WARP experiment at Gran Sasso. WARP is a two-phase argon drift chamber designed for direct detection of WIMP Dark Matter.

TeV gamma-rays and the largest masses and annihilation cross sections of neutralino dark matter – Stefano Profumo
Tue. November 15th, 2005
Motivated by the interpretation of the recent results on the TeV gamma radiation from the Galactic center, including the new 2004 HESS data, as a by-product of dark matter particle annihilations, we address the question of the largest possible neutralino masses and pair annihilation cross sections in supersymmetric models.
Extending the parameter space of minimal models, such as the mSUGRA and the mAMSB scenarios, to general soft SUSY breaking Higgs masses gives access to the largest possible pair annihilation rates, corresponding to resonantly annihilating neutralinos with maximal gaugino-higgsino mixing. Adopting a model-independent approach, we provide analytical and numerical upper limits for the neutralino pair annihilation cross section.

Chaotic Processes in Planet Migration and Orbital Evolution – Fred Adams
Tue. November 8th, 2005
Nearly 150 extrasolar planets have been discovered to date, and their observed orbits display an unexpected diversity. This talk considers a collection of processes for planet migration and orbital evolution, including those operating on a range of time scales. In particular, we consider planet-planet scattering, the action of disk torques, scattering of solar systems with passing binary star systems, and the long term evolution of planetary systems. The result of this survey of processes provides an explanation for the orbital elements of observed planetary systems, places constraints on the birth aggregate of our solar system, and determines the fraction of binary star systems that allow for the long term stability of an Earth-like planet.

Prospects for Measuring nu-N Coherent Scattering at a Spallation Source
Tue. October 18th, 2005
Coherent neutral current neutrino-nucleus elastic scattering has never been observed. Although the cross-section is very high, nuclear recoil energies are very small. However, detection of the process may be possible for the new generation of low-threshold detectors.
A promising prospect for the first detection of this process is an experiment at a high flux stopped-pion neutrino source such as the SNS. I will present some preliminary rate calculations and discuss the physics reach of such an experiment.

On virialization with dark energy – Irit Maor
Tue. October 11th, 2005
We review the inclusion of dark energy into the formalism of spherical collapse, and the virialization of a two-component system made of matter and dark energy. We compare two approaches in the literature. The first assumes that only the matter component virializes, e.g. as in the case of a classic cosmological constant. The second approach allows the full system to virialize as a whole. We show that the two approaches give fundamentally different results for the final state of the system. This might be a differentiating signature between the classic cosmological constant, which cannot virialize, and a dynamical dark energy mimicking a cosmological constant.

Prospects for Measuring nu-N Coherent Scattering at a Spallation Source – Kate Scholberg
Tue. October 11th, 2005

Wormholes, Dark Energy, and the Null Energy Condition – Roman Buniy
Tue. October 4th, 2005
We show that violation of the null energy condition implies instability in a broad class of models, including classical gauge theories with scalar and fermionic matter as well as any perfect fluid. When applied to the dark energy, our results imply that w = p/rho is unlikely to be less than -1. As another application, Lorentzian (traversable) wormholes and time machines with semi-classical spacetimes are unstable to small perturbations.
Can black hole events from cosmic rays be observed at the Auger Observatory? – Dejan Stojkovic
Tue. September 27th, 2005
It has been argued that neutrinos originating from ultra-high energy cosmic rays produce black holes deep in the atmosphere in models with TeV-scale quantum gravity. Such black holes would initiate quasi-horizontal showers of particles far above the standard model rate, so that the Auger Observatory would observe hundreds of black hole events. This would provide the first opportunity for experimental study of microscopic black holes. However, any phenomenologically viable model with a low scale of quantum gravity must explain how to preserve protons from rapid decay mediated by virtual black holes. We argue that unless this is accomplished by the gauging of baryon or lepton number…

Quantum metric fluctuations in cosmological and black hole spacetimes – Albert Roura
Tue. September 20th, 2005
It is expected that a number of quantum aspects of the gravitational field and its interaction with the remaining matter fields can be studied within a low-energy effective field theory approach, provided that the typical scales involved are much larger than the Planck length. This has been considered in some detail for weak gravitational fields, but physically interesting situations often involve strong fields. Some non-equilibrium field theory methods which are particularly useful to address gravitational back reaction problems, such as the closed time path (CTP) formalism, will be briefly reviewed. I will then explain how to extract information on metric fluctuations and discuss applications to black hole and cosmological spacetimes.
What is the Cosmological Significance of a Discovery of Wimps at Colliders or in Direct Experiments? – Jacob Bourjaily
Tue. September 13th, 2005
Although a discovery of WIMPs either at colliders or in direct experiments would have enormous implications for our understanding of particle physics, it would imply less than one would like about our understanding of the dark matter in the universe or in the galactic halo: it surely is possible that the discovered particles account for only a little of the total dark matter. To establish the cosmological significance of a WIMP discovery, their density must be determined. I will show that data from neither hadron colliders nor direct detection experiments alone can be sufficient to determine the local or relic density of discovered WIMPs…

Boundary Localized Symmetry Breaking and Topological Defects – Matthew Martin
Fri. May 6th, 2005
I discuss the structure of topological defects in the context of recent extra dimensional models where the symmetry breaking terms are localized. These defects develop structure in the extra dimension which differs from the case where symmetry breaking is not localized. This new structure can lead to corrections to the mass scale of the defects which are not captured by the effective theory obtained by integrating out the extra dimension. I also consider the Higgsless model of symmetry breaking and show that no finite energy defects appear in some situations where they might have been expected.

The Ages of the Oldest Stars – Brian Chaboyer Tue.
April 26th, 2005
The ages of the oldest stars in the Milky Way yield a reliable lower limit to the age of the universe and provide important information on the early formation history of our Galaxy. I will provide an overview of the stellar age determination process, including a critical look at the uncertainties associated with determining the ages of stars. Evidence for a significant spread in ages among the old stars in the halo of the Milky Way will be presented and used to study the early formation history of our Galaxy. I will conclude by discussing the absolute age of the oldest stars and its implications for cosmology.

Gravity and Horizon Entropy – Ted Jacobson
Fri. April 8th, 2005
I will argue that if (i) the entanglement entropy density across any surface is a universal finite constant η, and (ii) local Lorentz symmetry holds, then the spacetime metric must satisfy the Einstein equation, with Newton's constant equal to 1/(4 hbar η). I will then discuss the nature of black hole entropy in light of this result.

Technique for WIMP dark matter detection using pulse-shape discrimination in noble liquids – Mark Boulay
Tue. March 29th, 2005
It has long been known that a large fraction of our universe is composed of non-luminous or dark matter. The effects of dark matter have been observed since the 1930s by studying velocity dispersions in galaxy clusters, and several direct searches for particle dark matter are ongoing. In this seminar I will present studies for the design of novel detectors for particle dark matter using scintillation pulse-shape discrimination in noble liquids. The design of a dual-purpose liquid neon detector (CLEAN) for dark matter and low-energy solar neutrino interactions, evaluated with Monte Carlo simulations, will be discussed.
The projected sensitivity for CLEAN is less than 10^-46 cm^2 for the spin-independent WIMP-nucleon cross-section…

Indirect signals from Dark Matter – Francesc Ferrer
Fri. March 4th, 2005
The only evidence so far for the presence of Dark Matter in our Galaxy is through its gravitational interactions. Several experiments, however, have recently observed the emission of gamma-rays from the Galactic Center that could be caused by the annihilation of Dark Matter particles. Candidates with masses ranging from the MeV to the ZeV will be explored, and constraints on their properties will be obtained by requiring that they account for the observed Galactic radiation.

A Geometric approach to Distinguish Between a New Source and Random Fluctuations: Applications to High-Energy Physics – Ramani S. Pilla
Fri. February 25th, 2005
One of the fundamental problems in the analysis of experimental data is determining the statistical significance of a putative signal. Such a problem can be cast in terms of classical "hypothesis testing", where a null hypothesis describes the background and an alternative hypothesis characterizes the signal as a perturbation of the background. This testing problem is often addressed by a chi-square goodness-of-fit or a likelihood ratio test (LRT) statistic. In general, the former does not yield good power in detecting the signal and the latter has lacked an analytically tractable reference distribution required to calibrate a test statistic. Pilla and Loader have introduced a new test statistic based on "perturbation theory"…

Ultra-high energy neutrinos – Mike Duvernois Tue.
February 22nd, 2005
The search for GZK neutrinos, and its connection to the highest-energy cosmic rays, will be discussed. In particular, we'll look at the current generation of astrophysical and cosmological neutrino search experiments (Auger, IceCube, and ANITA) and the next generation of Terraton detectors for neutrino measurements.

CMB/LSS correlation as a probe of dark energy – Levon Pogosian
Tue. February 15th, 2005
The recent detection of the Integrated Sachs-Wolfe effect via cross-correlation of the CMB with large scale structure provided another piece of evidence for the existence of Dark Energy. Although cross-correlation measurements are limited by large statistical uncertainties, they probe physical processes that are only weakly constrained by the CMB spectra and the SNIa luminosity curves. I will show that the cross-correlation data, combined with the CMB power spectra, can provide competitive constraints on certain properties of dark energy.

Brane cosmology with an anisotropic bulk – Dani Steer
Fri. February 11th, 2005
In the context of brane cosmology, a scenario where our universe is a 3+1-dimensional surface (the "brane") embedded in a five-dimensional spacetime (the "bulk"), we focus on geometries for which the brane is anisotropic though still homogeneous. The main question we address is the following: can an anisotropic brane be sourced by a perfect fluid? As opposed to standard 4D cosmology, we argue that this may only be possible for very specific perfect fluid sources.

The future of dark energy measurements – Dragan Huterer Tue.
February 1st, 2005
Evidence for the existence of some form of dark energy – a smooth component that causes the accelerated expansion of the universe and contributes about 70% of the total energy density – is by now very solid. However, despite thousands of published papers on the topic, essentially no progress has been made in understanding its nature and the underlying physical mechanism. In this talk I describe the prospects of several methods to measure the macroscopic properties of dark energy within the next decade. In addition to type Ia supernovae, these include weak and strong gravitational lensing, number counts of clusters of galaxies…

Theoretical Constraints on the Dark Energy Equation of State – Mark Trodden
Fri. January 28th, 2005
Modern cosmological observations indicate that the expansion of the universe is accelerating. This is typically described in terms of the equation of state parameter of a hypothetical new component of the cosmic energy budget, presumed to be driving the acceleration. Observations then provide bounds on this parameter. In this talk I will discuss theoretical limits on the values of this parameter. In the first part I will discuss the (dire) implications of inferring from the data that the equation of state parameter is less than -1. This may happen if cosmic acceleration is driven by an energy component that violates the energy conditions of general relativity.

Observing the Cosmic Infrared Background with Frequency Selective Bolometers – Thushara Perera
Tue. November 30th, 2004

Bayesian Analysis of the WMAP Data – Ben Wandelt Tue.
November 16th, 2004
The desire to solve the three cosmological conundra of dark matter, dark energy and initial conditions drives us to demand more from cosmological observations. We require methods that link observations to theory in a convenient and lossless way. I will discuss a Bayesian approach to the analysis of the cosmic microwave background that enables the statistically exact extraction of cosmological information from the CMB, and present our results from applying this methodology to the first year of WMAP data.

Inflation, strings and the CMB – Ana Achucarro
Tue. November 2nd, 2004
In the last year there has been a sudden renewal of interest in cosmic (super)string networks. I will explain why, and will discuss – in a non-technical way – some new cosmological models coming from superstring/supergravity theory, and how to constrain these models by their cosmic string production after inflation.

Possible evidence for spatial fluctuations in dark energy – Christopher Gordon
Tue. October 26th, 2004
The WMAP cosmic microwave background (CMB) first year data was anomalously smooth on the largest spatial scales. We have recently shown that spatial fluctuations in the dark energy, which is causing the expansion of the Universe to speed up, may partially cancel the fluctuations in the CMB on the largest scales. This would imply that the residual fluctuations that are observed on large scales would be due to the integrated Sachs-Wolfe effect, which is caused by the effect of large scale structure on the CMB at a redshift of about 1. We found that the current WMAP data provide a two-sigma detection of the dark energy fluctuations.
October 19th, 2004 Inflationary cosmology is a compelling model for the early universe, but until recently it has not been subject to precise experimental test. In the last year, new observations have made it possible not only to test the general predictions of inflation, but also to distinguish among (and rule out) particular models of inflation. I will discuss the status of inflationary cosmology in light of the most recent observations, and summarize what we can expect over the next few years. Continue reading… Confronting Inflation with Observation – William Kinney Physics of the black hole-brane interaction – Dejan Stojkovic Tue. October 12th, 2004 In models with extra dimensions that accommodate a TeV-scale gravity, small black holes that can be described by classical solutions of Einstein's equations can exist. We study interaction of such black holes with our world — a brane embedded in a higher dimensional space. In such a setup there exist a host of new phenomena that do not have analogs in usual 3+1-dim models. We specially discuss experimental signature which may help us distinguish between the various extra dimensional scenarios. Continue reading… Physics of the black hole-brane interaction – Dejan Stojkovic Racetrack Inflation – Jose Blanco-Pillado Sat. October 9th, 2004 Four dimensional effective actions of many of the currently studied extra-dimensional theories seem to contain massless scalar fields called moduli. Giving these fields a potential is crucial to make these theories compatible with observations. It is therefore natural to explore the possibility that before they settle down to the true minimum of their potentials these fields could be relevant for cosmology, in particular they could be the source of an inflationary expansion period of the universe. 
In this talk, I will review ealier attempts to follow these ideas and present a new model of topological modular inflation in the context of the recently develop flux compactifications within string theory. Continue reading… Racetrack Inflation – Jose Blanco-Pillado First Results from the CAPMAP Experiment – Phil Farese Tue. September 28th, 2004 CAPMAP is a dedicated 40 and 90 GHz CMB polarization experiment. Observing with a 7m radio telescope from Holmdale, NJ CAPMAP intends to measure the primary polarization of the CMB at small (60′-4′) angular scale where the signal is maximum. I will discuss the design of the experiment, results from its first season, and the full observing campaign intended to culminate this academic year. Continue reading… First Results from the CAPMAP Experiment – Phil Farese Affleck-Dine Leptogenesis Induced by the Flaton of Thermal Inflation – Wan-il Park Tue. September 14th, 2004 We propose a simple model in which MSSM plus neutrino mass term, (LH_u)^2 is supplemented by a minimal flaton sector to drive the thermal inflation, and make two crucial assumptions: the flaton vacuum expectation value generates the mu-term of the MSSM and m_L^2 +m_{H_u}^2<0. We show that our model leads to thermal inflation followed by Affleck- Dine leptogenesis along the LH_u flat direction. Continue reading… Affleck-Dine Leptogenesis Induced by the Flaton of Thermal Inflation – Wan-il Park Results from the Sudbury Neutrino Observatory Salt Phase and the Future of the SNO Detector – Darren Grant Tue. September 7th, 2004 The Sudbury Neutrino Observatory is a heavy water Cherenkov detector designed to be sensitive to the total flux of Boron- 8 solar neutrinos. The addition of NaCl to the detector enhances the Neutral Current signal, and therefore improves the measurement of the total solar flux. 
The open salt dataset, consisting of approximately 254 days of livetime, has been analysed using analytic probabiltiy density functions in an extented maximum likelihood calculation. The final Boron-8 model constrained result of this analysis give a Charged Current to Neutral Current ratio of 0.344 +/- 0.021(stat) +0.024/-0.035(syst). This talk will present an overview of this independent analysis of the SNO data. Continue reading… Results from the Sudbury Neutrino Observatory Salt Phase and the Future of the SNO Detector – Darren Grant BPS bounds of F- versus D-term strings and their cosmological implications – Filipe Freire Tue. August 24th, 2004 Supersymmetry seems to facilitate the bringing together of inflationary models with particle physics. We give an overview of inflation models in supersymmetric theories. These models often lead to the production of cosmic strings after inflation. The cosmological implication of the production of these strings strongly depends on whether they saturate the so-called BPS condition. We study a particular model where we show that the BPS condition is preserved at the quantum level. Do not be discouraged by some technical language used in the abstract, all that will hopefully be made clear in more physically transparent terms during the seminar. Continue reading… BPS bounds of F- versus D-term strings and their cosmological implications – Filipe Freire Solar Evidence for Neutrino Transition Magnetic Moments and Sterile Neutrinos – David Caldwell Fri. July 9th, 2004 While KamLAND apparently rules out Resonant-Spin-Flavor-Precession (RSFP) as an explanation of the solar neutrino deficit, the solar neutrino fluxes in the Cl and Ga experiments appear to vary with solar rotation. Added to this evidence, summarized here, a power spectrum analysis of the Super-Kamiokande (SK) data reveals significant variation in the flux matching a dominant rotation rate observed in the solar magnetic field in the same time period. 
Four frequency peaks, all related to this rotation rate, can be explained quantitatively. A recent SK paper reported no time variation of the flux, but showed the same peaks with statistically insignificant sensitivity, Continue reading… Solar Evidence for Neutrino Transition Magnetic Moments and Sterile Neutrinos – David Caldwell The Atacama Cosmology Telescope Project – Arthur Kosowsky Fri. June 11th, 2004 The Atacama Cosmology Telescope (ACT) is a custom-designed 6-meter microwave telescope employing superconducting bolometer array detectors, which will be located in the Atacama Desert of the Chilean Andes in 2006. It will provide maps of the cosmic microwave background at arcminute resolution and micro-Kelvin sensitivity over a hundred square degrees of sky. I will review the scientific motivation for building this instrument, explain some of the technologies which are necessary, and discuss plans for complementary astronomical observations. We aim to compile a catalog of 1000 galaxy clusters and redshifts, selected by their distortions of the microwave background. ACT will provide insights into a wide range of topics including the primordial spectrum of density fluctuations, Continue reading… The Atacama Cosmology Telescope Project – Arthur Kosowsky Terrestrial Mini-Bang: Transmuting a Color Glass Condensate into Quark Gluon Plasma at RHIC – Raju Venugopalan Tue. April 20th, 2004 The Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory is currently completing run 5. We discuss some of the remarkable and unexpected results emerging from experiments on Gold-Gold collisions at the ultrarelativistic energies of RHIC as well as results from Deuteron-Gold and Proton-Proton collisions at the same energies. 
Together, they provide a compelling (if not completely understood) picture of a) the quark-gluon matter produced at RHIC and, unexpectedly,b) a description of the matter constituting the wavefunction of a high energy hadron as a Color Glass Condensate. Continue reading… Terrestrial Mini-Bang: Transmuting a Color Glass Condensate into Quark Gluon Plasma at RHIC – Raju Venugopalan Octonions and Fermions – Corinne A. Manogue Wed. April 14th, 2004 Ten dimensional supersymmetric theories of physics such as superstring theory are at heart just higher dimensional generalizations of the Dirac equation. An enduring problem with these theories is how to reduce the spacetime dimension to the four we live in. I will describe a mechanism for reducing 10 spacetime dimensions to 4 without compactification, based on a generalization of the complex numbers known as the octonions. Applying this mechanism to the 10-dimensional Dirac equation leads to a treatment of both massive and massless particles on an equal footing. The resulting unified description has the correct particle spectrum to describe precisely 3 generations of leptons, Continue reading… Octonions and Fermions – Corinne A. Manogue Exoplanets, The Galactic Habitable Zone and the Age Distribution of Complex Life in the Milky Way – Charley Lineweaver Wed. April 7th, 2004 As we learn more about the Milky Way Galaxy, extrasolar planets and the evolution of life on Earth, qualitative discussions of the prerequisites for life in a Galactic context can become more quantitative. We model the evolution of the Milky Way Galaxy to trace the distribution in space and time of four prerequisites for complex life: the presence of a host star, enough heavy elements to form terrestrial planets, sufficient time for biological evolution and an environment free of life-extinguishing supernovae. 
We identify the Galactic habitable zone (GHZ) as an annular region between 7 and 9 kiloparsecs from the Galactic center that widens with time and is composed of stars that formed between 8 and 4 billion years ago. Continue reading… Exoplanets, The Galactic Habitable Zone and the Age Distribution of Complex Life in the Milky Way – Charley Lineweaver Cosmological magnetic fields vs. CMB – Tina Kahniashvili Tue. February 24th, 2004 The cosmological perturbations induced by primordial magnetic fields and its influence on cosmic microwave background (CMB) radiation will be discussed. In particular, CMB temperature anisotropies, polarization, and temperature-polarization cross correlations, as well as Faraday rotation effect will be presented. The possible observational CMB tests to detect primordial magnetic fields will be discussed. Continue reading… Cosmological magnetic fields vs. CMB – Tina Kahniashvili Looking for Dark Energy with the SDSS and WMAP – Ryan Scranton Tue. February 10th, 2004 We present measurements of the angular cross-correlation between luminous red galaxies from the Sloan Digital Sky Survey and the cosmic microwave background temperature maps from the Wilkinson Microwave Anisotropy Probe. Looking at a number of redshift slices and CMB bands, we find a statistically significant achromatic positive correlation between these data sets, consistent with the expected signal from the late Integrated Sachs-Wolfe effect. We do not detect any anti-correlation on small angular scales as would be produced by a large Sunyaev-Zel'dovich effect, although we do see evidence for some SZ effect in our highest redshift samples. Assuming the flat universe found by the WMAP survey, Continue reading… Looking for Dark Energy with the SDSS and WMAP – Ryan Scranton The Pierre Auger Observatory: A New Era Dawning in for Cosmic Rays – Corbin Covault Tue. February 3rd, 2004 We are apparently at a unique moment in the history of cosmic ray physics. 
The origin of Ultra-High Energy Cosmic Rays (UCECR) has persisted as a profound astrophysical mystery for decades. But recently, the two premiere experiments for the detection of UHECR (AGASA and HiRes Fly's Eye) have reported their best results — the culmination of many years of observations and analysis. These results might have been expected to provide key insight into to a new determination of the origin of cosmic rays, except for one fact: the two experiments, AGASA and HiRes have presented results that apparently contradict each other in several ways. Continue reading… The Pierre Auger Observatory: A New Era Dawning in for Cosmic Rays – Corbin Covault Cosmological Observatiions of the QCD and Electroweak Early Universe Phase Transitions – Leonard Kisslinger Tue. January 27th, 2004 TransitionsMy coworkers and I have shown that if the QCD phase transition, at about T=150 MeV, is first order, the bubble nucleation and collisions would produce magnetic effects, which would give polarization correlations of the Cosmic Microwave Background Radiation distinct from those predicted by other theoretical cosmological studies. The Electroweak phase transition at T=Higgs Mass is first order in the minimal supersymmetric model, with the mass of the stop (partner of the top quark) being of the order of the Higgs. Applying this theory we are studying magnetic fields generated during the EW phase transition as seeds for galactic magnetic fields. Continue reading… Cosmological Observatiions of the QCD and Electroweak Early Universe Phase Transitions – Leonard Kisslinger
CommonCrawl
\begin{document} \author{Vlatko Vedral} \title{High Temperature Macroscopic Entanglement} \address{Optics Section, Blackett Laboratory, Imperial College London,\\ Prince Consort Road SW7 2BZ, London, United Kingdom} \date{\today} \maketitle \begin{abstract} In this paper I intend to show that macroscopic entanglement is possible at high temperatures. I analyze multipartite entanglement produced by the $\eta$ pairing mechanism, which features strongly in the fermionic lattice models of high $T_c$ superconductivity. This problem is shown to be equivalent to calculating multipartite entanglement in totally symmetric states of qubits. I demonstrate that we can conclusively calculate the relative entropy of entanglement within any subset of qubits in an overall symmetric state. Three main results then follow. First, I show that the condition for superconductivity, namely the existence of the off diagonal long range order (ODLRO), does not depend on two-site entanglement, but only on classical correlations as the sites become more and more distant. Secondly, the entanglement that does survive in the thermodynamical limit is the entanglement of the total lattice and, at half filling, it scales with the log of the number of sites. It is this entanglement that will exist at temperatures below the superconducting critical temperature, which can currently be as high as $160$ Kelvin. Thirdly, I prove that a complete mixture of symmetric states does not contain any entanglement in the macroscopic limit. On the other hand, the same mixture of symmetric states possesses the same two-qubit entanglement features as the pure states involved, in the sense that the mixing does not destroy entanglement for a finite number of qubits, although it does decrease it. Maximal mixing of symmetric states also does not destroy ODLRO and classical correlations. 
I discuss various other inequalities between different entanglements as well as generalizations to the subsystems of any dimensionality (i.e. higher than spin half). \end{abstract} \section{Introduction} Entanglement is currently one of the most researched phenomena in physics. Often shrouded in mystery, its basic premise is quite simple - entanglement is a correlation between distant particles that exists outside of any description offered by classical physics. Whilst this may at first glance seem an innocuous statement, in reality it is anything but. Predictions from the theory of entanglement have confounded some of the greatest minds in science. Einstein famously dubbed it spukhafte Fernwirkungen: ``spooky action at a distance". As we look deeper into the fabric of nature this ``spooky" connection between particles is appearing everywhere, and its consequences are affecting the very (macroscopic) world that we experience. At an implementational level, using entanglement researchers have succeeded in teleporting information between two parties, designing cryptographic systems that cannot be broken and speeding up computations that would classically take a much longer time to execute \cite{Nielsen}. Even though these applications have generated significant interest, I believe we have only scratched the ``tip of the iceberg" in terms of what entanglement is, and indeed what we can do with it. Whilst entanglement is experimentally pretty much beyond dispute in microscopic systems - such as two photons or two atoms - many people find it difficult to accept that this phenomenon can exist and even have effects macroscopically. Based on our everyday intuition we would, for example, find it very hard to believe that two cats or two human beings can be quantum entangled. Yet quantum physics does not tell us that there is any limitation to the existence of entanglement. 
It can, in principle and as far as we understand, be present in systems of any size and under many different external conditions. The usual argument against seeing macroscopic entanglement is that large systems have a large number of degrees of freedom interacting with the rest of the universe and it is this interaction that is responsible for destroying entanglement. If we can exactly tell the state that a system is in, then this system cannot be entangled with any other system. In everyday life, objects exist at room (or comparable) temperatures so their overall state is quantum mechanically described by a very mixed state (this mixing due to temperature is, of course, also due to the interaction with a large ``hot" environment). Mixing states that are entangled, in general, reduces entanglement and ultimately all entanglement vanishes if the temperature is high enough. The question then is how high is the highest temperature before we no longer see any entanglement? And how large can the body be so that entanglement is still present? Can we, for example, have macroscopic entanglement at room temperature? Entanglement has recently been shown to affect macroscopic properties of solids, such as their magnetic susceptibility and heat capacity, but at a very low (critical) temperature \cite{Nature}. This extraordinary result demonstrates that entanglement can have a significant effect in the macroscopic world. The basic reason for this dependence is simple. Magnetic susceptibility is proportional to the correlation between nuclear spins in the solid. As we said before, entanglement offers a higher degree of correlation than anything allowed by classical physics and the corresponding quantum susceptibility - which fully agrees with experimental results \cite{Nature} - is higher than that predicted by using just classical correlations (for further theoretical support for this see my article in \cite{NJPhys}). 
It is now very important to go beyond this low temperature regime and experimentally test entanglement at higher and higher temperatures. Thinking that high temperature entanglement is linked with (perhaps even responsible for) some other high temperature quantum phenomena, such as high temperature superconductivity, is tempting. After all, superconductivity is a manifestation of the existence of the off diagonal long range order (ODLRO) \cite{Yang1} which is a form of correlation that still persists in the thermodynamical (macroscopic) limit. However, it is not immediately obvious that this correlation contains any quantum entanglement. My main intention in this paper is to show that it does. This correlation contains multipartite entanglement between all electron pairs in the superconductor. To calculate this we need to be able to quantify entanglement exactly and be able to discriminate entanglement from any form of classical correlation. A great deal of effort has gone into theoretically understanding and quantifying entanglement \cite{Vedral1}. There are a large number of different proposed measures; the different measures capture different aspects of entanglement. In this paper we will be interested in a measure that is based on the (asymptotic) distinguishability of entangled states from separable (disentangled) states known as the relative entropy of entanglement \cite{PRL,Vedral4}. The main advantage of this measure is that it is easily defined for any number of systems of any dimensionality, which is not the case for entanglement of formation or distillation \cite{Vedral1}. I have argued that a number of results in quantum information and computation follow from the relative entropy function \cite{Vedral1}. There is, unfortunately, no closed form for the relative entropy of entanglement, but this measure can still be computed for a large class of relevant states such as the pure bipartite states, Werner states and many others \cite{Vedral4}. 
Most recently, Wei et al \cite{Wei} have succeeded in obtaining a formula for the relative entropy of entanglement for totally symmetric pure states of any number $n$ of qubits using a very simple and elegant argument (some partial results have been obtained previously in this direction using different methods by Plenio and Vedral \cite{Vedral2}, but only for three-qubit symmetric states). I will use and extend these results further with the idea of applying them to a specific model of a superconductor. The purpose of this paper is to investigate possible links between high temperature entanglement and high temperature superconductivity with the intention of showing that entanglement can persist at higher temperatures. I analyze a particular mechanism - the $\eta$-pairing of electrons due to Yang \cite{Yang2} - that was originally proposed to explain high temperature superconductivity. The chief difference between this pairing mechanism and the usual Bardeen, Cooper and Schrieffer (BCS) electron pairing \cite{BCS} for (low temperature) superconductivity is that, in the former, electrons positioned at the same site are paired, while in the latter, electrons forming Cooper pairs are separated by a certain finite average distance (the so-called coherence length, typically of the order of hundreds of nanometers). The physical reason behind electron pairing is also thought to be different in a high temperature superconductor, but I do not wish to enter into these details here (see e.g. \cite{Plakida}). I will, however, look at the $\eta$ model in a different way, using totally symmetric states, and this will make calculating entanglement easier. Wei et al. \cite{Wei} have recently made very important steps in calculating the relative entropy of entanglement for symmetric states. 
I extend their approach to calculating the relative entropy of entanglement for mixed symmetric states arising from tracing over some qubits in pure states, and apply it to understanding various relations between the entanglement of subsets of qubits and the total entanglement. I show that although two-site entanglement disappears as the distance between sites diverges (a conclusion also reached by Zanardi and Wang in a different way \cite{Zanardi}), the total entanglement still persists in the thermodynamical limit. Furthermore, it scales logarithmically with the number of qubits. Therefore, it is this total entanglement that should be compared with ODLRO and not the two-site entanglement. While the two-site entanglement vanishes thermodynamically, two-site classical correlations are still present and so is the entanglement between two clusters of qubits (two-cluster entanglement in $\eta$ states has also been analysed by Fan \cite{Fan}). I show that all aspects of my analysis can easily be generalised to systems of spin higher than one half. My hope is that this work - which is really just a first step in exploring high temperature entanglement - will be extended to different models with states other than symmetric and that this will allow us a much more complete understanding of entanglement and the role it plays in the macroscopic world. \section{$\eta$-pairing in Superconductivity} The model that I describe now consists of a number of lattice sites, each of which can be occupied by fermions having spin up or spin down internal states. Let us introduce fermion creation and annihilation operators, $c^{\dagger}_{i,s}$ and $c_{i,s}$ respectively, where the subscript $i$ refers to the $i$th lattice site and $s$ refers to the value of the spin, $\uparrow$ or $\downarrow$. Since fermions obey the Pauli exclusion principle, we can have at most two fermions attached to one and the same site. 
The $c$ operators therefore satisfy the anticommutation relations: \begin{equation} \{c_{i,s},c^{\dagger}_{j,t}\} = \delta_{ij}\delta_{st} \end{equation} and $c$'s and $c^{\dagger}$'s anticommute as usual. (Some general features of fermionic entanglement - arising mainly from the Pauli exclusion principle - have been analysed in \cite{Zanardi,Vedral3,rest,Schliemann}). We need only assume that our model has an interaction which favors the formation of Cooper pairs of fermions of opposite spin at each site \cite{Yang2}. The actual Hamiltonian is not relevant for my present purposes. It suffices to say that Yang originally considered the Hubbard model, for which the $\eta$ states are eigenstates (but none of them is a ground state \cite{Yang2}). A generalisation of the Hubbard model was presented in \cite{Korepin}, and in a specific regime of this new model the $\eta$ states do become lowest energy eigenstates (a fact that will become relevant when we talk about high temperature entanglement). Both these models have been used to simulate high-temperature superconductivity, since in high-temperature superconducting materials the coherence length of each Cooper pair is on average much smaller than for a normal superconductor. Suppose, now, that there are $n$ sites and suppose, further, that we introduce an operator $\eta^{\dagger}$ that creates a coherent superposition of a Cooper pair in each of the lattice sites, \begin{equation} \eta^{\dagger} = \sum_{i=1}^n c^{\dagger}_{i,\uparrow} c^{\dagger}_{i,\downarrow} \; . \end{equation} The $\eta^{\dagger}$ operator can be applied to the vacuum a number of times, each time creating a new coherent superposition. However, the number of applications, $k$, cannot exceed the number of sites, $n$, since we cannot have more than one pair per site due to the exclusion principle. 
I now introduce the following basis \begin{equation} |k,n-k\rangle := \frac{1}{\sqrt{n \choose k}} (\eta^{\dagger})^k |0\rangle \; , \end{equation} where the factor in front is just the necessary normalisation. Here, the vacuum state $|0\rangle$ is annihilated by all $c$ operators, $c_{i,s} |0\rangle = 0$. We note in passing that the originally defined $\eta$ operators can also have phase factors dependent on the location of the site on the lattice. We can have a set of operators like \begin{equation} \eta_q = \sum_{j=1}^n e^{iqj} c^{\dagger}_{j,\uparrow} c^{\dagger}_{j,\downarrow} \; . \end{equation} All the states generated with any $\eta_q$ from the vacuum will be shown to have the same amount of entanglement, so that the extra phases will be ignored in the rest of the paper (i.e. we will only consider the $q=0$ states). We can think of the $\eta$ states in the following way. Suppose that $k=2$. Then this means that we will be creating two $\eta$-pairs in total, but they cannot be created in the same lattice site. The state $|2,n-2\rangle$ is therefore a symmetric superposition of all combinations of creating two pairs at two different sites. Let us, for the moment, use the label $0$ when the site is unoccupied and $1$ when it is occupied. Then the state $|2,n-2\rangle$ is \begin{equation} |2,n-2\rangle =\frac{1}{\sqrt{n \choose 2}} (|\underbrace{00\cdots0}_{n-2}\underbrace{11}_{2}\rangle + \cdots + |\underbrace{11}_{2}\underbrace{00\cdots0}_{n-2}\rangle) \end{equation} i.e. it is an equal superposition of states containing $2$ states $|1\rangle$ and $n-2$ states $|0\rangle$. These states, due to their high degree of symmetry, are much easier to handle than general arbitrary superpositions and we can compute entanglement for them between any number of sites. Note that in this description each site effectively holds one quantum bit, whose $0$ signifies that the site is empty and $1$ signifies that the site is full. 
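As a quick numerical sanity check (my own illustration, not part of the original derivation; the function name is arbitrary), the basis state $|k,n-k\rangle$ can be enumerated directly for small $n$ as an equal superposition of all ${n \choose k}$ occupation patterns with exactly $k$ occupied sites, which also verifies the normalisation factor:

```python
from itertools import combinations
from math import comb, sqrt

def eta_basis_state(n, k):
    """Amplitudes of |k, n-k>: an equal superposition of all C(n, k)
    patterns with exactly k occupied sites (label 1) out of n."""
    amp = 1.0 / sqrt(comb(n, k))
    return {
        tuple(1 if i in occ else 0 for i in range(n)): amp
        for occ in combinations(range(n), k)
    }

psi = eta_basis_state(4, 2)
print(len(psi))                                     # C(4,2) = 6 patterns
print(round(sum(a * a for a in psi.values()), 12))  # norm = 1.0
```

Such a direct enumeration is only feasible for small lattices, but it makes the combinatorial structure behind the ${n \choose k}^{-1/2}$ prefactor explicit.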
The main characteristic of $\eta$ states is the existence of the off diagonal long range order (ODLRO), which implies various superconducting features, such as the Meissner effect and flux quantisation \cite{Nieh}. The ODLRO is defined by the off diagonal matrix elements of the two-site reduced density matrix remaining finite in the limit when the distance between the sites diverges. Namely, \begin{equation} \lim_{|i-j|\rightarrow \infty} \langle c^{\dagger}_{j,\uparrow} c^{\dagger}_{j,\downarrow} c_{i,\downarrow} c_{i,\uparrow} \rangle \longrightarrow \alpha \label{ODLRO} \end{equation} where $\alpha$ is a constant (independent of $n$). I will show that although the existence of off diagonal matrix elements does not guarantee the existence of entanglement between the two sites, it does guarantee the existence of multi-site entanglement between all the sites. Note that here, by ``correlations" I mean correlations between the number of electrons positioned at different sites $i$ and $j$. Namely, we are looking at the probability of one site being occupied (empty) given that the other site is occupied (empty). This is different from spin-spin correlations, which would look at the occurrences of both electron spins being up or down, or one being up and the other being down \cite{Vedral3}. \section{General Description of Symmetric States} The states I will analyze here will always be of the form \begin{equation} |\Psi (n,k)\rangle \equiv |k, n-k\rangle := \frac{1}{\sqrt{n \choose k}}\,\hat{S}\,|\underbrace{00\cdots0}_{n-k}\underbrace{1\cdots1}_{k}\rangle \end{equation} where $\hat{S}$ is the total symmetrisation operator and the string contains $k$ ones, in agreement with the definition of $|k,n-k\rangle$ above. We will also consider mixtures of these states, which become relevant when we talk about systems at finite temperatures. Symmetric states arise, for example, in the Dicke model in which $n$ atoms simultaneously interact with a single mode of the electro-magnetic field \cite{Dicke}. 
They are, furthermore, very important as they happen to be eigenstates of many models in solid state physics, and, in particular, they are eigenstates of the Hubbard and related models supporting the $\eta$ pairing mechanism. The analysis presented in this paper will be applicable to any of these systems and not just the $\eta$ model. The $\eta$ mechanism will be significant here because of its potential to support high temperature entanglement. I would now like to start to compute the entanglement between every pair of qubits (sites) in the above state $|\Psi (n,k)\rangle$. A simpler task would be first to tell if and when every pair of qubits in a totally symmetric state is entangled. For this, we need only compute the reduced two-qubit density matrix, which can be written as: \begin{equation} \sigma_{12}(k) = a |00\rangle \langle 00| + b |11\rangle \langle 11| + 2c |\psi^{+}\rangle \langle \psi^{+}| \end{equation} where $|\psi^{+}\rangle = (|01\rangle + |10\rangle)/\sqrt{2}$ and \begin{eqnarray} a & = & \frac{{n-2 \choose k}}{{n \choose k}} = \frac{(n-k)(n-k-1)}{n(n-1)}\\ b & = & \frac{{n-2 \choose k-2}}{{n \choose k}} = \frac{k(k-1)}{n(n-1)}\\ c & = & \frac{{n-2 \choose k-1}}{{n \choose k}} = \frac{k(n-k)}{n(n-1)} \; . \end{eqnarray} We can easily check that $a+b+2c =1$ and so the state is normalized. This density matrix is the same no matter how far the two sites are from each other, since the state is symmetric, and must therefore be identical for all pairs of qubits. We can easily test the Peres-Horodecki (partial transposition) condition \cite{Peres} for separability of this state. A state is entangled if and only if it is inseparable, which leads to the state $\sigma_{12} (k)$ being entangled if and only if \begin{equation} a + b - \sqrt{(a-b)^2 + 4c^2} < 0 \; , \end{equation} which leads to \begin{equation} (k-1) (n-k-1) < k (n-k) \; . \end{equation} This inequality is satisfied for all $n\ge 2$ (two qubits or more) and $1 \le k \le n-1$. 
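These coefficients and the Peres-Horodecki test are straightforward to check numerically. In the sketch below (a hedged illustration; the function names are mine, and the weights are written directly as the probabilities of the two sites being both empty, both occupied, or in the symmetric cross term), the normalisation and the inseparability condition can be verified for both small and large lattices:

```python
from math import sqrt

def two_site_weights(n, k):
    """Weights of the reduced two-site state of |k, n-k>: probability that
    both sites are empty, both occupied, and the cross (|psi+>) weight c."""
    both_empty = (n - k) * (n - k - 1) / (n * (n - 1))
    both_occupied = k * (k - 1) / (n * (n - 1))
    cross = k * (n - k) / (n * (n - 1))
    return both_empty, both_occupied, cross

def is_pair_entangled(n, k):
    a, b, c = two_site_weights(n, k)
    # Peres-Horodecki: the partial transpose has a negative eigenvalue
    # exactly when a + b - sqrt((a - b)**2 + 4*c**2) < 0
    return a + b - sqrt((a - b) ** 2 + 4 * c ** 2) < 0

a, b, c = two_site_weights(6, 3)
print(round(a + b + 2 * c, 12))             # normalization: 1.0
print(is_pair_entangled(6, 3))              # True
print(is_pair_entangled(10**4, 5 * 10**3))  # still True, by a tiny margin
```

Squaring both sides of the Peres condition reproduces $(k-1)(n-k-1) < k(n-k)$, which the last call confirms holds even for large $n$, though only marginally.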
So, apart from the case when the total state is of the form $|00\ldots0\rangle$ or $|11\ldots1\rangle$, there is always two-qubit entanglement present in symmetric states. Note, however, that in the limit of $n$ and $k$ becoming large - no matter what their ratio may be - the value of the left hand side approaches the value of the right hand side and entanglement thus disappears. This is a very interesting property of symmetric states and we will be able to quantify it exactly in the next section. An important point to make is that the two point correlation function used in the calculation of the ODLRO in eq. (\ref{ODLRO}) is, in fact, just one of the sixteen numbers we need for the full two-site density matrix (the number of independent real parameters is actually fifteen, because of normalisation). In our simplified case of symmetric states in the $\eta$-pairing model, this off diagonal element is equal to $c$. However, for the density matrix we still need to know $a$ and $b$, and these numbers clearly affect the amount of entanglement. Imagine, for example, the situation where $a=b$. Then the condition for entanglement is that $a-c <0$, which does not hold if $a\ge c$, and such a density matrix is certainly possible. So, the first lesson is that two-site entanglement is not the same as the existence of ODLRO, and therefore two-site entanglement is not relevant for superconductivity. This does not mean, of course, that there is no entanglement in the whole of the lattice. In the next section, I will calculate exactly this. We will determine the relative entropy of entanglement for all symmetric states and all their substates. I will be able to extend the method of Wei et al \cite{Wei} and analyze many relationships between various subsets of symmetric states, including the amount of entanglement in any subset of qubits (or sites). 
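The decoupling of ODLRO from two-site entanglement can also be illustrated numerically. At half filling ($k=n/2$, where $a=b$) the off diagonal element $c$ tends to the constant $1/4$, so ODLRO survives, while the Peres test value $2(a-c)$ stays negative but shrinks like $1/n$, so two-site entanglement fades. A minimal sketch (my own check, using the closed forms for the coefficients):

```python
def odlro_and_peres_margin(n):
    """Half filling k = n/2: return the ODLRO element c and the Peres
    test value 2*(a - c), negative iff the two sites are entangled."""
    k = n // 2
    a = k * (k - 1) / (n * (n - 1))   # weight of |00><00| (equal to that of |11><11| here)
    c = k * (n - k) / (n * (n - 1))   # off diagonal (ODLRO) element
    return c, 2 * (a - c)

for n in (10, 100, 1000, 10**6):
    c, margin = odlro_and_peres_margin(n)
    print(n, round(c, 8), margin)
# c -> 1/4 (ODLRO persists) while the margin -> 0 from below (entanglement vanishes)
```

Algebraically the margin equals $-1/(n-1)$ at half filling, which is exactly the vanishing advantage quantified in the next section.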
\section{Relative Entropy of Entanglement for Symmetric States} The symmetric states are very convenient for studying various features of multipartite entanglement simply because, as we already indicated, we can compute exactly the relative entropy of entanglement for any reduced state including the total symmetric state for any $n$ and $k$. It is expected that, because they possess a high degree of symmetry, they will also display a high degree of entanglement. It is precisely for this reason that they are suitable to allow the existence of entanglement at high temperatures. This will now be analyzed in detail. I first introduce the relative entropy of entanglement. The relative entropy of entanglement measures the distance between a state and the nearest disentangled (separable) state. If ${\cal D}$ is the set of all disentangled states (i.e. states of the form $\sum_i p_i \rho^i_1 \otimes \rho^i_2\ldots\otimes \rho^i_n$, where $p_i$ is any probability distribution), the measure of entanglement for a state $\sigma$ is then defined as \begin{equation} E({\sigma}):= \min_{\rho \in \cal D}\,\,\, S(\sigma || \rho)\; , \label{measure} \end{equation} where $S(\sigma || \rho) = tr (\sigma \log \sigma - \sigma \log \rho)$ is the relative entropy between the density matrices $\sigma$ and $\rho$. In order to compute this measure for any state $\sigma$ we need to be able to find its closest disentangled state $\rho$. Finding this closest state is, in general, still an open problem; however, it has recently been solved for pure symmetric states by Wei et al \cite{Wei}.
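As an illustrative sketch (the helper below is my own, not from the text), the relative entropy $S(\sigma||\rho)$ of eq. (\ref{measure}) can be evaluated numerically from the spectral decompositions of the two density matrices, with the usual conventions $0\log 0 = 0$ and $S=\infty$ when the support of $\sigma$ is not contained in that of $\rho$:

```python
# Quantum relative entropy S(sigma||rho) = tr(sigma log sigma - sigma log rho)
# for Hermitian density matrices, natural logarithms.
import numpy as np

def relative_entropy(sigma, rho, tol=1e-12):
    """S(sigma||rho); returns inf if supp(sigma) is not inside supp(rho)."""
    ws, _ = np.linalg.eigh(sigma)
    wr, Vr = np.linalg.eigh(rho)
    # tr sigma log sigma = sum_i w_i log w_i, with 0 log 0 := 0
    t1 = sum(w * np.log(w) for w in ws if w > tol)
    # tr sigma log rho = sum_j log(l_j) <v_j| sigma |v_j>
    t2 = 0.0
    for lj, vj in zip(wr, Vr.T):
        overlap = float(np.real(vj.conj() @ sigma @ vj))
        if lj > tol:
            t2 += np.log(lj) * overlap
        elif overlap > tol:
            return np.inf   # sigma has weight outside supp(rho)
    return t1 - t2
```

For instance, $S(|0\rangle\langle 0|\,||\,\openone/2) = \log 2$, and $S(\sigma||\sigma)=0$ for any state.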
Wei et al showed that a convenient and intuitive way of writing the closest disentangled state to the symmetric state $|k, n-k\rangle$ is \cite{Wei}: \begin{equation} \rho = \frac{1}{2\pi} \int_0^{2\pi} d\phi |\phi^{\otimes n}\rangle \langle \phi^{\otimes n}| \label{close} \; , \end{equation} where \begin{equation} |\phi^{\otimes n}\rangle = (\sqrt{k/n} |0\rangle + \sqrt{(n-k)/n} e^{i\phi} |1\rangle)^{\otimes n} \end{equation} is the tensor product of $n$ states each of which is a superposition of the states $|0\rangle$ and $|1\rangle$ with probabilities $k/n$ and $1-k/n$ respectively. This $\rho$ was proved to achieve the minimum of the relative entropy by showing that it saturates an independently obtained lower bound. The relative entropy of entanglement of the total state is now easily computed. Since $\sigma = |k,n-k\rangle\langle k,n-k|$ is a pure state, $tr \sigma \log \sigma = 0$ and we only need to compute $-\langle k,n-k| \log \rho |k,n-k\rangle$, which is equal to \begin{equation} E(|k, n-k\rangle) = - \log {n \choose k} + k \log \frac{n}{k} + (n-k) \log \frac{n}{n-k}\; . \end{equation} Note that entanglement is largest when $n=2k$ as is intuitively expected (i.e. the largest number of terms is then present in the expansion of the state in terms of the computational basis states). Then, for large $n$, it can be seen that the amount of entanglement grows as \begin{equation} E(|n/2, n/2\rangle) \approx \frac{1}{2} (\log n + 2) \end{equation} and so (in the leading order) entanglement grows logarithmically with the number of qubits in the state. To obtain this formula I have used Stirling's approximation for the factorial \begin{equation} n! \approx \sqrt{2\pi}\, n^{n+1/2}e^{-n} \approx 2.507\, n^{n+1/2}e^{-n} \; . \end{equation} Most results in this paper will asymptotically have the form $\alpha \log n + \beta$ where $\alpha >0$ and $\beta$ are constants that will usually be omitted as we only care about the general form of the behaviour.
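The closed form for $E(|k,n-k\rangle)$ and its logarithmic growth at half filling are easy to check numerically (natural logarithms; the function names below are my own). Doubling $n$ at half filling should increase the entanglement by $\frac{1}{2}\log 2$ in leading order:

```python
# Closed-form entanglement of the symmetric state and its log-growth.
from math import lgamma, log

def log_binom(n, k):
    # log of the binomial coefficient via the log-gamma function
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def E_symmetric(n, k):
    """E(|k,n-k>) = -log C(n,k) + k log(n/k) + (n-k) log(n/(n-k))."""
    return -log_binom(n, k) + k * log(n / k) + (n - k) * log(n / (n - k))
```

For example, $E_{\rm symmetric}(2000,1000) - E_{\rm symmetric}(1000,500) \approx \frac{1}{2}\log 2$, confirming the $\frac{1}{2}\log n$ leading behaviour.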
I now return to the question of different phases introduced between different elements of the superposition in the symmetric states. Let us consider states of the form \begin{equation} |1,n-1,\theta\rangle = |00..1\rangle + e^{i\theta} |00..10\rangle + \ldots + e^{i(n-1)\theta} |10..0\rangle \; , \end{equation} where we have $k=1$ ones and $n-1$ zeroes and $\theta$ is any phase. The simplest way of seeing that entanglement does not depend on the phase $\theta$ is to define a new basis at the $m$th site as $|\tilde{0}\rangle = |0\rangle , |\tilde{1}\rangle = \exp \{i(m-1) \theta\} |1\rangle $. This way the phases have been absorbed into the basis states and the resulting state is, in the tilde basis, \begin{equation} |1,n-1,\theta\rangle = |\tilde{0}\tilde{0}..\tilde{1}\rangle + |\tilde{0}\tilde{0}..\tilde{1}\tilde{0}\rangle + \ldots + |\tilde{1}\tilde{0}..\tilde{0}\rangle\; . \end{equation} The amount of entanglement must therefore be independent of any phase difference of the above type and this is, of course, true for symmetric states with any number of zeroes and ones. All considerations from this point onwards will therefore immediately apply to all these states with different phases. We can also compute the two-site relative entropy of entanglement exactly. The closest disentangled state is in this case the same as in eq. (\ref{close}) with $n=2$. In the computational basis we have \begin{equation} \rho = \left(\frac{k}{n}\right)^2 |00\rangle \langle 00| + \left(\frac{n-k}{n}\right)^2 |11\rangle \langle 11| + \left(\frac{2k(n-k)}{n^2}\right) |\psi^{+}\rangle \langle \psi^{+}|\; .
\label{closest} \end{equation} That this is a minimum can be seen from the fact that the relative entropy of the state of two qubits is: \begin{eqnarray} S(\sigma ||\rho) & = & -S(\sigma) - 2c \langle \psi^{+} | \log \rho |\psi^{+}\rangle - a \langle 00 | \log \rho |00\rangle - b \langle 11 | \log \rho |11\rangle \\ &\ge & -S(\sigma) - 2c \log \langle \psi^{+} | \rho |\psi^{+}\rangle - a \log \langle 00 | \rho |00\rangle - b \log \langle 11 | \rho |11\rangle \; , \end{eqnarray} the inequality following from concavity of the $\log$ function. Suppose now that $\rho$'s only non-zero elements are $\rho_{00}=\langle 00 | \rho |00\rangle,\rho_{11}=\langle 11 | \rho |11\rangle$ and $\rho_{++} = \langle \psi^{+} | \rho |\psi^{+}\rangle$. Given that it has to be separable, meaning that $2\sqrt{\rho_{00}\rho_{11}}\ge \rho_{++}$ (which follows from the Peres-Horodecki criterion), and that, at the same time, it has to be closest to $\sigma$, we can conclude that $\rho_{00} = (k/n)^2$. The other entries of $\rho$ then follow. To prove that $\rho$ is the minimum in a rigorous fashion, we need to show that any variation of the type $(1-x)\rho + x \omega$ where $\omega$ is any separable state leads to a higher relative entropy (a method similar to \cite{Vedral4}). Since relative entropy is a convex function, this means that \begin{equation} \frac{d}{dx} S(\sigma ||(1-x)\rho + x \omega) \ge 0 \; . \end{equation} In fact, since relative entropy is convex in the second argument it is enough to assume that $\omega$ is just a product state. For $a > 0$, $ \log a = \int_0^\infty {at - 1\over a + t} {dt \over 1 + t^2}$, and thus, for any positive operator $A$, $ \log A = \int_0^\infty {At - 1\over A + t} {dt \over 1 + t^2}$. Let $f(x, \omega) = S(\sigma||(1-x) \rho + x \omega)$.
Then \begin{eqnarray} {\partial f \over \partial x}(0, \omega) & = & -\lim_{x \rightarrow 0} \mbox{Tr}\bigg \{{\sigma (\log ((1-x) \rho + x \omega) - \log \rho)\over x}\bigg \} \nonumber \\ & = & \mbox{Tr}\bigg \{ \sigma \int_0^\infty (\rho + t)^{-1} ( \rho - \omega) (\rho +t)^{-1} dt \bigg \} \nonumber \\ & = & 1 -\int_0^\infty \mbox{Tr}\big( \sigma (\rho + t)^{-1} \omega (\rho +t)^{-1} \big) dt \nonumber \\ & = & 1 - \int_0^\infty \mbox{Tr}\big( (\rho + t)^{-1} \sigma (\rho + t)^{-1} \omega \big) dt \; . \end{eqnarray} For our minimal guess $\rho$ in eq. (\ref{closest}) we can then write \begin{eqnarray} {\partial f \over \partial x}(0, \omega) - 1 & = & - \mbox{Tr} \bigg \{ \omega \int_0^\infty (\rho + t)^{-1} \sigma (\rho + t)^{-1} dt \bigg \} \nonumber \\ & = & - \frac{n}{n-1} \frac{k-1}{k} \langle 00|\omega|00\rangle - \frac{n}{n-1} \frac{n-k-1}{n-k} \langle 11|\omega|11\rangle \nonumber \\ & - & \frac{n}{n-1} \langle \psi^+|\omega|\psi^+\rangle \; , \end{eqnarray} where we have used the fact that $\int_0^\infty (p + t)^{-2} dt = p^{-1}$. The nonnegative expression multiplying the overall minus sign is less than or equal to unity if $\omega = |\alpha\beta\rangle\langle\alpha\beta|$ (i.e. a product state), and so it follows that \begin{eqnarray} \left|{\partial f \over \partial x}(0, \omega) - 1\right| & \leq & 1 \;\; . \end{eqnarray} Thus it also follows that ${\partial f \over \partial x} (0, |\alpha\beta\rangle\langle\alpha\beta|) \geq 0$. But any separable state can be written in the form $\omega = \sum_i r_i |\alpha^i\beta^i\rangle\langle\alpha^i\beta^i|$ and so \begin{equation} {\partial f \over \partial x}(0, \omega) = \sum_i r_i {\partial f \over \partial x}(0, |\alpha^i\beta^i\rangle\langle\alpha^i\beta^i|) \geq 0 \; . \end{equation} And this confirms that $\rho$ is the minimum since the gradient is nonnegative in every direction towards a separable state.
Therefore the relative entropy of entanglement between any two sites is: \begin{eqnarray} E_{12} & = & a\log a + b\log b + 2c\log 2c \nonumber \\ & - & a \log \left(\frac{k}{n}\right)^2 - b \log \left(\frac{n - k}{n}\right)^2 - 2c \log \left(\frac{2k(n-k)}{n^2}\right) \nonumber \\ & = & \log\left( \frac{n}{n-1} \right)+ \frac{k(k-1)}{n(n-1)} \log \left( \frac{k-1}{k}\right) + \frac{(n-k)(n-k-1)}{n(n-1)} \log \left(\frac{n-k-1}{n-k}\right)\; . \end{eqnarray} We see that when $n,k,n-k \rightarrow \infty$, then $E_{12} \rightarrow 0$ as it should be from our discussion of the separability criterion. This can be thought of as one way of recovering the ``quantum to classical'' correspondence in the limit of large number of systems present in the state: locally, between any two sites, entanglement does vanish, although globally, and as will be seen in more detail, entanglement still persists. Entanglement of any number of qubits, $l\le k$, can also be calculated using the same method. The state after we trace out all but $l$ qubits is given by \begin{equation} \sigma_l = \sum_{i=0}^l {l\choose l-i} \frac{{n-l \choose k-i}}{{n \choose k}} |i,l-i\rangle\langle i,l-i| \; . \end{equation} The closest disentangled state is given by \begin{equation} \rho_l = \sum_{i=0}^l {l\choose i} \left(\frac{k}{n}\right)^{l-i} \left(\frac{n-k}{n}\right)^{i} |i,l-i\rangle\langle i,l-i| \; , \end{equation} as can be shown by the above method. The relative entropy of entanglement is now given by \begin{equation} E_l = \sum_{i=0}^l {l\choose l-i} \frac{{n-l \choose k-i}}{{n \choose k}} \log \left\{ {l\choose l-i}\frac{{n-l \choose k-i}}{{n \choose k}} (\frac{n}{k})^{l-i}(\frac{n}{n-k})^{i} {l \choose i}^{-1}\right\} \label{lqubitent}\; . \end{equation} This is a very interesting quantity as it allows us to speak about entanglement involving any number of qubits. What do we expect from it? We expect that entanglement grows exponentially with $l$, for a fixed total number of qubits, $n$.
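Both the two-site result and the general $l$-qubit expression can be checked numerically. In the sketch below (helper names are mine) the closed form for $E_{12}$ is compared against eq. (\ref{lqubitent}) evaluated at $l=2$ for the symmetric case $k=n/2$, and $E_l$ is confirmed to be a Kullback-Leibler divergence between two normalized probability distributions, hence nonnegative:

```python
# Check of E_12 (closed form) against E_l of eq. (lqubitent) at l = 2,
# and of the nonnegativity of E_l; natural logarithms throughout.
from math import comb, log

def comb0(n, k):
    # binomial coefficient that vanishes outside 0 <= k <= n
    return comb(n, k) if 0 <= k <= n else 0

def E12(n, k):
    val = log(n / (n - 1))
    if k >= 2:
        val += k * (k - 1) / (n * (n - 1)) * log((k - 1) / k)
    if n - k >= 2:
        val += (n - k) * (n - k - 1) / (n * (n - 1)) * log((n - k - 1) / (n - k))
    return val

def E_l(n, k, l):
    # eigenvalues of sigma_l and of the closest disentangled state rho_l
    sig = [comb0(l, l - i) * comb0(n - l, k - i) / comb(n, k) for i in range(l + 1)]
    rho = [comb(l, i) * (k / n) ** (l - i) * ((n - k) / n) ** i for i in range(l + 1)]
    assert abs(sum(sig) - 1) < 1e-12 and abs(sum(rho) - 1) < 1e-12
    return sum(s * log(s / r) for s, r in zip(sig, rho) if s > 0)
```

At the symmetric point, e.g. $n=10$, $k=5$, the two expressions agree to machine precision, and $E_l \ge 0$ holds for all tested $(n,k,l)$, as it must for a relative entropy.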
This can be confirmed using the Stirling formula. Note that entanglement grows at this rate even though the states we are talking about are mixed, since $n-l$ qubits have been traced out. Another way of seeing why entanglement grows exponentially with the number of qubits included for a total fixed number of qubits, is to look at the opposite regime. For any finite fixed $l$, we should have that in the large $n,k,n-k$ limit the amount of entanglement between $l$ qubits tends to zero. This decrease with larger and larger $n$ happens at an exponential rate. \section{Classical Versus Quantum Correlations} In this section I would like to investigate the relationship between classical and quantum correlations for symmetric states, and the relation of both to the already introduced concept of ODLRO. First of all, it is clear that in the limit of $n\rightarrow \infty$ all bipartite (or two-site) entanglement disappears (this was seen both from the Peres-Horodecki criterion and from the direct computation of the relative entropy). In spite of this, the ODLRO still exists and the two quantities are therefore not related. In other words, two-site entanglement is not relevant for superconductivity. However, the main point of this section is that the two-site classical correlations still survive in the limit of $n\rightarrow \infty$. In order to show this, let us, first of all, define bipartite classical correlations. A quantum state can have zero amount of entanglement, but still have non-zero classical correlations. An example is the state $(|00\rangle\langle 00|+ |11\rangle\langle 11|)/2$.
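The example just mentioned can be made concrete with a short numerical sketch (my own illustration): the equal mixture of $|00\rangle$ and $|11\rangle$ has a positive partial transpose, and is therefore separable for two qubits, yet a computational-basis measurement on the first qubit leaves the second in a pure state, so the classical correlations equal $\log 2$:

```python
# Separable yet classically correlated: rho = (|00><00| + |11><11|)/2.
import numpy as np

rho = np.zeros((4, 4))
rho[0, 0] = rho[3, 3] = 0.5                # basis order |00>,|01>,|10>,|11>

# partial transpose over the second qubit; PPT <=> separable for two qubits
pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
min_eig = np.linalg.eigvalsh(pt).min()     # >= 0, hence separable

# classical correlations: entropy of B before minus after measuring A
w = np.array([0.5, 0.5])                   # marginal of B is maximally mixed
S_before = float(-(w * np.log(w)).sum())   # = log 2 (nats)
S_after = 0.0                              # B is pure given each outcome of A
C = S_before - S_after
```

Here $C = \log 2 > 0$ while the entanglement vanishes, illustrating the separation between the two notions.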
Classical correlations between systems $A$ and $B$ in the state $\sigma_{AB}$ can be defined as \cite{Henderson} \begin{equation} C_A (\sigma_{AB}):= \max_{A_i^{\dagger}A_i} S(\sigma_B) - \sum_i p_i S(\sigma_B^i)=\max_{A_i^{\dagger}A_i}\sum_i p_i S(\sigma_B^i||\sigma_B)\; , \end{equation} where $\sigma_B^i=tr_A \sigma_{AB}^i$, $\sigma_{AB}^i=A_i \sigma_{AB}A_i^{\dagger}$, and $\sum_i A_i^{\dagger}A_i=1$ is the most general measurement on system $A$. The same can be defined with the most general measurement performed on $B$, so that we obtain \begin{equation} C_B (\sigma_{AB}): = \max_{B_i^{\dagger}B_i} S(\sigma_A) - \sum_i p_i S(\sigma_A^i)=\max_{B_i^{\dagger}B_i}\sum_i p_i S(\sigma_A^i||\sigma_A) \; . \end{equation} The physical motivation behind the above definition is the following: classical correlations between $A$ and $B$ tell us how much information we can obtain about $A$ ($B$) by performing measurements on $B$ ($A$). It is the (maximum) difference between the entropy of $A$ ($B$) before and after the measurement on $B$ ($A$) is performed. There is some evidence that $C_A=C_B$ \cite{Henderson}, but this equality will not be relevant here. Now, applying this measure of classical correlations to the two-site reduced density matrix from the overall symmetric state, $\sigma_{12}$, we obtain \begin{eqnarray} C & = & -a\log a - b\log b - c\log c + \frac{1}{2} ((a+c/2)\log (a+c/2) + (b+c/2)\log (b+c/2)) \nonumber\\ & = & (r-2r^2) \log r + ((1-r) - 2(1-r)^2) \log (1-r) - 2r(1-r) \log 2r(1-r) \end{eqnarray} where $r=k/n$ is the fraction of ones in the state (the so-called filling factor in any ``Cooper pair'' lattice model, including the $\eta$ model). We now see that at half filling - when ODLRO is maximal - the classical two-site correlations also survive asymptotically since $C_A=C_B=0.5$. Therefore, all the correlations between any two sites are here due to classical correlations.
Note, incidentally, that we cannot have the situation in which entanglement exists between two parties, while at the same time classical correlations vanish. Quantum correlations presuppose the existence of classical correlations. This, of course, relies on the fact that entanglement is defined in a reasonable way, namely that when we talk about two-site entanglement we must trace the other sites out. We are not allowed to perform measurements on other sites and condition the remaining entanglement on them. Measurements that generate entanglement are, first of all, unrealistic for a macroscopic object which thermalizes very quickly. Even if we were to allow such measurements, then the state after them will still have classical correlations of at least the same magnitude as entanglement. So, it cannot be that entanglement is important for the issues of superconductivity, phase transitions, condensation, etc., and that classical correlations are not. As an example, let us take the ``maximum singlet fraction'' in the two-site density matrix $\sigma_{12}$ as our definition of entanglement. This is the maximum fraction of a maximally entangled state in the state $\sigma_{12}$, which is in this case equal to $c$, and this is the same as ODLRO. So, if the maximum singlet fraction is used to measure entanglement, then entanglement also persists in the thermodynamical limit. In fact, as will be shown later, this measure also survives when we mix symmetric states, because it is a linear measure. The maximum singlet fraction, however, is not a realistic measure of entanglement as it is not easily accessible experimentally, which is why we do not use it in this paper. In order to make our analysis more complete we also show how to calculate mutual information \cite{Vedral1} for symmetric states. This quantity tells us about the total (quantum plus classical) correlations in a given state.
Mutual information is equal to the relative entropy between the state itself and the product of individual qubit density matrices, obtained by tracing out all the other qubits. This product state is easily written down to be: \begin{equation} \rho_{prod} = \left(\frac{k}{n} |0\rangle\langle 0| + \frac{n-k}{n} |1\rangle\langle 1|\right)^{\otimes n}\; . \end{equation} The mutual information is now given by \begin{equation} I (|k,n-k\rangle) = n \left(-\frac{k}{n}\log \frac{k}{n} - \frac{n-k}{n} \log \frac{n-k}{n}\right)\; , \label{mutual} \end{equation} and this is basically just the sum of individual qubit entropies. Since the qubit entropy (the quantity in brackets in the above equation) is a finite quantity for a given ratio $r=k/n$, the total mutual information grows linearly with the number of qubits $n$. Furthermore, since entanglement grows as $\log n$, we conclude that classical correlations grow roughly as $n-\log n$ (for this conclusion to be exact, classical and quantum correlations as defined here would have to add up to mutual information; while this is true for some states \cite{Henderson}, it is certainly not true in general). The fact that classical correlations and mutual information survive the thermodynamical limit does not imply that there is no meaning left for entanglement when it comes to superconductivity and ODLRO. Only now, we must talk either about the bipartite entanglement between two clusters of sites (to be computed in the next section) or the multipartite entanglement between all sites. Since the overall state across all sites is pure in our considerations so far, this means that two-site non-vanishing classical correlations (or equivalently ODLRO) must imply entanglement between two clusters, each of which contains one of the sites and such that the union of the two clusters is the whole lattice.
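The different scaling of total correlations and entanglement is easy to confirm numerically; the sketch below (function names mine, natural logarithms) evaluates eq. (\ref{mutual}) and the pure-state entanglement at half filling:

```python
# Mutual information (total correlations) grows linearly with n, while the
# relative entropy of entanglement grows only logarithmically.
from math import lgamma, log

def mutual_info(n, k):
    r = k / n
    return n * (-r * log(r) - (1 - r) * log(1 - r))

def E_total(n, k):
    log_binom = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return -log_binom + k * log(n / k) + (n - k) * log(n / (n - k))
```

At $n=1000$, $k=500$ one finds $I = n\log 2 \approx 693$ against $E \approx 3.7$, so classical correlations dominate by a factor of order $n/\log n$.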
This simply must be the case, since, otherwise, if the clusters were not entangled, the total state would be a product of the states of individual clusters, and this means that even classical correlations would be zero, which is a contradiction. Furthermore, the fact that any two such clusters are entangled, must mean that the multipartite entanglement also exists, since this entanglement is by definition larger than any bipartite entanglement (as, for multipartite entanglement, we are looking for the closest separable state over all sites, rather than just over the two clusters). I now quantify these various relations a bit more precisely. \section{Various Other Relations Between Entanglements} In this section I will discuss some other results that can be derived from our knowledge of symmetric states so far. Some of the results will not necessarily be directly relevant for the main theme of the paper - high temperature entanglement - but this section is a natural place to present them. The discussion about high temperature entanglement in the rest of the paper can be understood without reading this section. I will fully return to the main topic in the next section. The first important question to be addressed here is the following. Suppose we look at the entanglement between one qubit and the remaining $n-1$ qubits, both in total and individually. We would expect that the total one-versus-rest entanglement is larger than the sum of the individual two-qubit entanglements. The logic behind this conclusion is that by looking at entanglements individually we always lose something from the total entanglement, i.e. the operation of tracing reduces entanglement.
This translates into the following inequality: \begin{equation} (n-1) E_{12} \le E_{1:(2,3..n)} \; , \end{equation} where \begin{equation} E_{1:(2,3..n)} = -\frac{k}{n} \log \left( \frac{k}{n}\right) - \frac{n-k}{n} \log \left(\frac{n-k}{n}\right) \end{equation} is basically the same as the entropy of every qubit in the symmetric state. We can prove this inequality by noting that it holds for $k=1$ and $2k=n$ (the extreme points), and because of the monotonicity and continuity of both sides it has to hold in general. The aforementioned inequality has a very important implication which shows that the bipartite entanglement in the symmetric state is always bounded from above by \begin{equation} E_{12} \le \frac{E_{1:(2,3..n)}}{(n-1)} \le \frac{1}{n-1} \approx \frac{1}{n} \end{equation} the second inequality following from the fact that the entanglement between one qubit and the rest is equal to the entropy of that qubit and that can at most be $\log 2 = 1$. Therefore, while the total entanglement of the symmetric state increases with $\log n$, the two qubit entanglement decreases as $1/n$. Here we see most directly how it is possible to have the emergence of (only) classical correlations between constituents even though globally entanglement increases. There are many other open questions related to this one. We can repeat the same calculation for any fixed number of qubits. We can check whether a cluster of qubits is, for example, more entangled with another cluster of qubits in total than the sum of all the entanglements between their individual elements. Some of these may not be easy questions to answer in general. I would now like to calculate the entanglement between $l$ qubits and the remaining $n-l$ qubits.
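The bound $(n-1) E_{12} \le E_{1:(2,3..n)}$ can be swept numerically over a range of $(n,k)$; the sketch below (helpers are mine, natural logarithms) finds no violations:

```python
# Numerical sweep of (n-1) E_12 <= H(k/n), the single-qubit entropy.
from math import log

def E12(n, k):
    val = log(n / (n - 1))
    if k >= 2:
        val += k * (k - 1) / (n * (n - 1)) * log((k - 1) / k)
    if n - k >= 2:
        val += (n - k) * (n - k - 1) / (n * (n - 1)) * log((n - k - 1) / (n - k))
    return val

def H(r):
    # binary entropy in nats
    return -r * log(r) - (1 - r) * log(1 - r)

violations = [(n, k) for n in range(3, 61) for k in range(1, n)
              if (n - 1) * E12(n, k) > H(k / n) + 1e-12]
```

The list of violations is empty for all $3 \le n \le 60$ and $1 \le k \le n-1$, consistent with the argument from the extreme points.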
Since the whole state that we are now examining is pure, the relative entropy of entanglement is given by the entropy of the $l$ qubits: \begin{equation} S_{12...l} = -\sum_{i=0}^l {l\choose l-i} \frac{{n-l \choose k-i}}{{n \choose k}} \log\left\{ {l\choose l-i} \frac{{n-l \choose k-i}}{{n \choose k}} \right\} \; . \end{equation} What are the properties of this expression when we take the various asymptotic limits? How is this quantity related to other entanglements calculated here? We expect that for the half filling, $n/k=2$, and $n,l\rightarrow \infty$, the entropy becomes $\log l$, since we basically have a maximal mixture in the symmetric subspace of $l$ qubits. This can be confirmed by a simple application of the Stirling approximation formula used before. The result is in agreement with the fact that total entanglement grows at the rate of the log of the number of qubits, since two cluster entanglement is a lower bound for the total entanglement in the state between all the qubits. The last question I address is the relationship between the lower and higher order entanglement in the symmetric states. More precisely, the question is: if we add all the entanglements up to and including $m$ qubits, is this quantity larger or smaller than the amount of entanglement of $m+1$ qubits? Mathematically, this translates into the following two possible inequalities: \begin{equation} \sum_{i=1}^m E_i \le E_{m+1} \; \mbox{or} \; \sum_{i=1}^m E_i \ge E_{m+1} \end{equation} where $E_m$ is given in eq. (\ref{lqubitent}). We already know that for $n=3$ and $k=1,2$, and $l=3$ we have the equality in the above, namely $E_3 = E_1 + E_2$ \cite{Vedral2}. From this result alone it is not clear which way to expect the inequality to be. Numerical examples show us that, in fact, both results are possible. If we check the inequality for $n=100, k=50$ and $l=4$ for example, then the left hand side is smaller than the right hand side and the first inequality holds.
For $n=100, k=50$ and $l=30$, on the other hand, the left hand side is larger than the right hand side and the second inequality is satisfied. It is an interesting and open question to investigate the point of the cross-over when the two sides become equal to each other. \section{Thermal Entanglement and Superconductivity} There is a critical temperature beyond which any superconductor becomes a normal conductor. The basic idea behind computing this temperature according to BCS is the following. At a very low temperature, only the ground state of the system is populated and for a superconductor this state involves a collection of Cooper pairs with different momenta values around the Fermi surface. This state can be, somewhat loosely, thought of as a Cooper pair condensate, and it is this condensation that is the key to superconductivity. Initially it took a long time to understand how the pairs are formed, since electrons repel each other and therefore should not be bound together. The attraction is provided by electrons interacting with the positive ions left in the lattice. We can think of one electron moving and dragging the lattice along, which then pulls other electrons, thereby providing the necessary attraction \cite{BCS}. When the temperature starts to increase, the Cooper pairs start to break up, leading to the transition to the normal conductor. What this ``breaking up'' means is that states above the ground state start to get populated by electrons, and these are states where an electron is created with say momentum $k$ and spin up, but no electron is created in the $-k$ momentum state.
From the BCS analysis this critical temperature can be calculated to be of the form \cite{BCS} \begin{equation} T_c \approx \frac{\hbar \omega}{k} e^{-1/\lambda} \end{equation} where $\hbar \omega$ is the energy shell around the Fermi surface which is engaged in the formation of Cooper pairs, $k$ is the Boltzmann constant and $\lambda$ is a parameter equal to the product of the electron density at the Fermi surface $N(0)$ and the effective electronic attractive coupling, $V$. The critical temperature formula is valid in the weak coupling regime where $\lambda = N(0) V \ll 1$. The formula for the critical temperature is usually used for other mechanisms of electron pairing, and not just coupling via the phonon lattice modes as in the BCS model \cite{BCS}. Importantly for us, the formula also features in models for explaining and designing high $T_c$ superconducting materials. If the attraction, say between an electron and a hole, is of the order of Coulomb forces, $\hbar \omega \approx 1\, \mbox{eV}$, and for the weak coupling of, say, $\lambda = 0.2$, the critical temperature we obtain is of the order of $100K$. So if the material is below this temperature, it is then superconducting. Anything above $70-90 K$ is considered to be high temperature superconductivity, since it can be achieved by cooling with liquid nitrogen (which is a standard and easy method of cooling). What seems to be the mechanism behind high temperature superconductivity is the fact that the energy gap between the ground superconducting, electron-pair state, and the excited states is large enough not to be easily excited as the temperature increases well beyond zero temperature. The exact way in which this is achieved is still an open question. In the models mentioned here the ground state is one of the symmetric states from the previous sections. Therefore, we can conclude that as long as we have high temperature superconductivity, the total state should also be macroscopically entangled.
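Plugging the quoted numbers into the critical-temperature formula is a one-line computation (values below are the ones from the text; $k_B$ expressed in eV/K). The estimate comes out at roughly $80\,$K, i.e. of order $10^2\,$K as quoted:

```python
# Order-of-magnitude check of T_c = (hbar*omega / k_B) * exp(-1/lambda).
from math import exp

K_B = 8.617333e-5      # Boltzmann constant in eV / K
hbar_omega = 1.0       # pairing energy scale in eV (Coulomb-like attraction)
lam = 0.2              # weak coupling, lambda = N(0) V

T_c = (hbar_omega / K_B) * exp(-1.0 / lam)   # about 78 K
```

Such a material would therefore superconduct under ordinary liquid-nitrogen cooling.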
Superconductivity, and hence entanglement, can currently exist at temperatures of about $160$ Kelvin. I would now like to explicitly calculate and show how entanglement disappears as the temperature increases for any model having the $\eta$ pairing state as the ground state. For this, we need to be able to describe other states that would be mixed in with the symmetric $\eta$ states as the temperature increases. They, of course, depend on the actual Hamiltonian. For instance, in the Hubbard model in \cite{Yang2}, states of the type \begin{equation} \xi^{\dagger}_a |0\rangle = \sum_{i} c^{\dagger}_{i,\downarrow}c^{\dagger}_{i+a,\uparrow}|0\rangle \end{equation} are important; here we create a spin singlet state but at sites separated by the distance $a\neq 0$. If we have $2k$ electrons in total, then $2k-2$ would be paired in the lowest energy state, and the remaining two electrons would not be. This would give us the state of the form \begin{equation} |\xi\rangle := \eta^{k-1} \xi^{\dagger}_a |0\rangle \; . \end{equation} Note that this state is a symmetric combination of states which have $k-1$ electron pairs distributed among $n$ sites and the last electron pair is in two different sites separated by the distance $a$. These two sites are different from the other $k-1$ sites due to Pauli's exclusion principle. Even higher states are obtained by having two electron pairs existing outside of the symmetric state and so on. The exact form of these, as noted before, depends on the exact form of the Hamiltonian. Even simple Hamiltonians are frequently very difficult to diagonalize and their eigenstates are still by and large unknown. Given this, it may be difficult to calculate the exact amount of entanglement when, at finite temperature, the ground state is mixed with higher energy states.
I will, therefore, make a simplifying assumption that, if the ground state is $|k,n-k\rangle$, the higher energy states can be written as $|k-1,n-k+1\rangle$, $|k-2,n-k+2\rangle$ and so on. All these will in fact be assumed to be symmetric and I will ignore the extra unpaired electrons as far as entanglement is concerned (they will only contribute to the eigenvalue of energy as it were). This assumption leads us to consider mixtures of symmetric states. The symmetric states will be mixed with probabilities in accordance to Boltzmann's exponential law, or the Fermi-Dirac law if we talk about $\eta$ pairs. Which distribution we use will be immaterial for our argument. The total state, $\sigma_T$, is \begin{equation} \sigma_T = \sum_{k=0}^n p_k |\Psi (k,n)\rangle \langle \Psi (k,n)| \end{equation} where, in the case of $\eta$ pairs, the probabilities are \begin{equation} p_i = \frac{1}{e^{E_i/kT} +1} \end{equation} where $p_i$ is the probability of occupying the $i$th energy level. The reduced two-site state can be calculated to be \begin{equation} \sigma_{12} = \sum_{k=0}^n p_k \sigma_{12} (k) \; . \end{equation} The condition for inseparability now becomes \begin{equation} \sum_{k,l} p_{k} p_{l} k(n-l)\{(n-k)l - (k-1)(n-l-1)\} >0 \; . \end{equation} We see that the thermal averaging is in a sense inconsequential for the existence of entanglement as the factors $p_{k}p_{l}$ are products of probabilities and are always non-negative. For the inequality to hold (i.e. to have non-zero bipartite entanglement present) we need that $1\le k,l \le n-1$. This is the same condition as before when the total state was pure. Thus, surprisingly, the condition for inseparability is completely independent of temperature (although, two-site states do become separable in the macroscopic limit even at zero temperature, as noted before). We now look at the entanglement of the symmetric mixed state as a whole. Can we still calculate the relative entropy of entanglement?
This is in general very difficult to do for multiparty mixed states, and some partial methods for upper bounds have only been presented recently \cite{NJPhys}. We conjecture that the closest disentangled state is now the thermal average of the closest disentangled states for individual $k$'s (this, I believe, is the same as the conjecture in \cite{Wei}, for which Wei et al have offered a great deal of ``circumstantial evidence''; for example, closest separable states have to possess the same symmetry as the entangled states for which they minimise the relative entropy \cite{Werner}). I believe that this bound is exact and that this can be proven using methods for calculating two-site entanglement, but I have not been able to show this yet. Even if this is not true, my method at least gives us a very good upper bound which is sufficient to show how total entanglement vanishes as $T$ becomes high. The relative entropy of entanglement is then bounded by the right hand side of the inequality \begin{equation} E (\sigma_T) \le \sum_k p_k \log p_k - \sum_k p_k \langle \Psi (k,n)| \log \left( \sum_l p_l \rho_l \right) |\Psi (k,n)\rangle \end{equation} where $\rho_l$ is the closest disentangled state to the pure symmetric state containing $l$ ones and $n-l$ zeroes. We have already seen that \begin{equation} \rho_l = \sum_{i=1}^l {l\choose i}\left(\frac{k}{n}\right)^{l-i}\left(\frac{n-k}{n}\right)^{i} |\Psi (l,n)\rangle \langle \Psi (l,n)| \; , \end{equation} so that \begin{eqnarray} & E (\sigma_T) & \le \sum_k p_k \log p_k - \nonumber \\ & - & \sum_k p_k \langle \Psi (k,n)| \log \left\{\sum_l p_l \sum_{i=1}^l {l \choose i} \left(\frac{k}{n}\right)^{l-i} \left(\frac{n-k}{n}\right)^{i}\right\} |\Psi (l,n)\rangle \langle \Psi (l,n)| |\Psi (k,n)\rangle \nonumber \\ & = & - \sum_k p_k \log \sum_{i=1}^k {k \choose i} \left(\frac{k}{n}\right)^{k-i} \left(\frac{n-k}{n}\right)^{i}\; . \end{eqnarray} The interesting conclusion here is the following.
Suppose that we are at a high temperature and that all symmetric states are equally likely, meaning that $p_k = 1/(n+1)$ for all values of $k$ (basically, our state is an equal mixture of all symmetric states). This is, of course, an approximation to the true density matrix, but it becomes more and more accurate as the temperature increases, and it ceases to be valid when states other than the symmetric ones become mixed in. The (upper bound to the) entanglement is then given by \begin{equation} E (\sigma_{T\rightarrow \infty}) \le \frac{1}{n+1} \log \sum_{i=1}^k {k \choose i} \left(\frac{k}{n}\right)^{k-i} \left(\frac{n-k}{n}\right)^{i} \; . \end{equation} The quantity inside the logarithm tends to $n^2$ as $n$ becomes large, so that the entanglement scales as $\log n/n$. This is to be expected: the entanglement grows as $\log n$ with $n$, but the mixedness grows linearly with the number of states involved, $n+1$. Therefore, in the thermodynamical limit, the overall mixed-state entanglement also disappears. This has to happen eventually, of course, if we believe that entanglement is intimately linked with superconductivity, since superconductivity also vanishes at sufficiently high temperatures. One kind of entanglement that does survive the thermodynamical high temperature limit is the average of the entanglements of the individual symmetric states. This average entanglement is given by \begin{equation} E_{avr} = \sum_k p_k E(|k,n-k\rangle) = \frac{1}{2}\sum_k p_k \log \frac{k(n-k)}{n} \; . \end{equation} Note that if all probabilities go as $1/n$, i.e. the symmetric state is maximally mixed, then the entanglement scales as $\log n$ (the same as for a pure state at half filling). This is expected, as there are $n+1$ states, each with entanglement proportional to $\log n$, so, on average, the entanglement also goes as $\log n$. 
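This logarithmic growth of the average is easy to confirm numerically. The sketch below is my own illustration, not from the paper: it uses natural logarithms, uniform $p_k = 1/(n+1)$, and the convention that the product states $k=0$ and $k=n$ contribute zero entanglement, and it checks the direct sum against a closed form in terms of the log-Gamma function.

```python
import math

def avg_entanglement(n: int) -> float:
    """Uniform average over the n+1 symmetric states of E(|k,n-k>) = (1/2) log(k(n-k)/n).

    The product states k = 0 and k = n carry no entanglement and contribute zero.
    Natural logarithms are used; the choice of base only rescales the result.
    """
    return sum(0.5 * math.log(k * (n - k) / n) for k in range(1, n)) / (n + 1)

def avg_entanglement_closed(n: int) -> float:
    # sum_{k=1}^{n-1} log(k(n-k)/n) = 2 log((n-1)!) - (n-1) log n
    return (2.0 * math.lgamma(n) - (n - 1) * math.log(n)) / (2 * (n + 1))

for n in (10, 100, 1000):
    assert abs(avg_entanglement(n) - avg_entanglement_closed(n)) < 1e-8
assert avg_entanglement(10) < avg_entanglement(100) < avg_entanglement(1000)
```

For $n = 10, 100, 1000$ the average indeed grows logarithmically with $n$, consistent with the scaling claimed above.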
However, this average entanglement, as we argued before, is not a good measure, as it requires us to be able to address the symmetric states individually and discriminate them from each other. This is not just difficult in practice, but is in fact frequently even impossible in principle. It is interesting to note that the ODLRO does survive the mixing of symmetric states. Even when we have an equal mixture of all symmetric states, the average ODLRO is given by \begin{equation} \frac{1}{n+1} \sum_{k=0}^n \frac{k}{n}\frac{(n-k)}{n} = \frac{1}{2} - \frac{1}{6}\frac{2n+1}{n} \rightarrow \frac{1}{6} \end{equation} where the arrow indicates the limit for large $n$. Of course, at sufficiently high temperatures the system will leave the subspace of symmetric states and other states will also start to contribute. This eventually does lead to the vanishing of ODLRO, but the total entanglement and the ODLRO may still disappear at different temperatures. To calculate this exactly, we would need a much more detailed model and a more extensive and careful calculation, which lie outside the scope of the present paper. (Note: the same conclusions hold for the maximum singlet fraction in the two-site density matrix, which also survives the mixing in the thermodynamical limit; this is, unfortunately and as pointed out before, not a suitable measure of entanglement in our setting). I conclude by showing that the total correlations - quantum and classical - as quantified by the mutual information \cite{Vedral1}, can also easily be calculated for thermal mixtures of symmetric states. Let us assume again that the symmetric states are maximally mixed and each appears with probability $1/(n+1)$. Then the mutual information is given by \begin{equation} I = \frac{n}{n+1} \sum_k \left(-\frac{k}{n}\log \frac{k}{n} - \frac{n-k}{n} \log \frac{n-k}{n}\right) - \log(n+1) \; . \end{equation} For large $n$ this expression reduces to \begin{equation} I \rightarrow n - \log n \; . 
\end{equation} Since we know that thermal entanglement disappears in this limit, it is natural that the mutual information equals the classical correlations, and this coincides with the conclusion following eq. (\ref{mutual}). \section{D-dimensional Symmetric States} Extensions of all the considerations in this paper to $D$ dimensions are straightforward (similar generalizations to higher dimensional symmetric states were also considered in \cite{Wei}). We should be able to reproduce all the above results in the generalized form, such that instead of qubits we have qutrits, and so on. The generic symmetric state would now be written as \begin{equation} |n_1,n_2,\ldots,n_d\rangle \label{higherspin} \end{equation} and it would be a totally symmetrized state of $n_1$ states $|1\rangle$, $n_2$ states $|2\rangle$ and so on (it is also realistic to assume that the total number of particles is conserved). This could, for example, represent higher spin fermions which can occupy different lattice sites, as in the rest of the paper. The closest state to the one in eq. (\ref{higherspin}) in terms of the relative entropy is a mixture of the states of the type \begin{equation} (\sqrt{n_1/N}|1\rangle + e^{i\phi} \sqrt{n_2/N}|2\rangle + \ldots + e^{i (d-1)\phi} \sqrt{n_d/N}|d\rangle)^{\otimes n} \; , \end{equation} with the phase $\phi$ completely randomized as before. Knowledge of this closest state allows us to compute the relative entropy of entanglement of any number of subsystems of this system. All other results follow in exactly the same way. Nothing fundamental changes in higher dimensions, which is why I will not say anything more on this topic. \section{Discussion and Conclusions} In this paper I have analyzed the $\eta$-pairing mechanism which leads to eigenstates of the Hubbard and similar models used in explaining high temperature superconductivity. 
I have shown that they correspond to multi-qubit symmetric states, where the qubit is made up of an empty and a full site (a two-electron spin singlet state). I have also shown how to calculate entanglement and classical correlations for such states. For pure states, the entanglement of the total state increases at the rate $\log n$ with the number of qubits $n$, while the two-site entanglement vanishes at the rate $1/n$. The two-site classical correlations, on the other hand, persist in the thermodynamical limit. So, for pure states, the ODLRO can be associated with the total entanglement or with the two-site classical correlations, but not with the two-site entanglement. I have also demonstrated that the total entanglement for maximally mixed symmetric states disappears at the rate $(\log n)/n$. Various mutual information measures, which quantify the total amount of correlations in a given state, have also been computed and shown to be consistent with the calculations of classical and quantum correlations. There are many interesting issues raised by this work. Even if a consensus is reached on the correct model for high $T_c$ superconductivity, and this is shown to contain multipartite electron entanglement - which we have argued for in this paper - we are still left with the question of being able to extract and use this entanglement. At present there are no methods of extraction. Perhaps we can somehow extract electrons from the superconductor and then use them for quantum teleportation or other forms of quantum information processing. It is presently believed that in order to perform reliable and scalable quantum computation we may need to be at very low temperatures, but the existence of high temperature macroscopic entanglement may challenge this dogma. Be that as it may, I believe that the argument in favor of the existence of high temperature entanglement does show that entanglement may be much more ubiquitous than is presently thought. 
This may force us to push the boundary between the classical and the quantum world, and to take seriously the concept that quantum mechanics is indeed universal and should be applied at all levels of complexity, independently of the number or, indeed, the nature of the particles involved. \noindent {\bf Acknowledgements}. I would like to thank S. Bose, {\v C}. Brukner, H. Fan, A. J. Fisher, J. Hartley, C. Lunkes, C. Rogers, P. Scudo and T.-C. Wei for useful discussions concerning this and related subjects. I am grateful to T.-C. Wei and H. Fan for communicating their results to me prior to publication. \end{document}
(a) s = 3t4, ds dt (b) y = 7x3, dy dx (c) r = 0.4θ5 ...The purpose of this Collection of Problems is to be an additional learning resource for students who are taking a di erential calculus course at Simon Fraser University. The Collection contains problems given at Math 151 - Calculus I and Math 150 - Calculus I With Review nal exams in the period 2000-2009. The problems areMaterial Derivative of Structural Point Material Derivative of Contact Point Material Derivative of Natural Coordinate at Contact DSA FORMULATION FOR CONTACT PROBLEM n 0 n 0 0 1 2 0 (n ) ( ) ( ) (), d d τ τ τ= x V x V x z x x 2 c c 0 c n n c 0 c n c 0 c 0 c (n ) ( ) ( ), d d τ τ τ= x V x z x t x ξ =()()+ − − +()() c,ξ + c,ξ T c c n ... Ombudsman insurance complaint formDerivatives Derivative Applications Limits Integrals Integral Applications Integral Approximation Series ODE Multivariable Calculus Laplace Transform Taylor/Maclaurin Series Fourier Series. Functions. Line Equations Functions Arithmetic & Comp. Conic Sections Transformation. Matrices & Vectors. Kinematics of Fluid Flow (Ch. 3) • Streamlines, pathlines, and convective (material) derivative • Translations, Deformation, and Rotation of a fluid element Material Derivative Example This example involves a ball being thrown straight up into the air. The usual description of its position, \(y\), at any time, \(t\), is \[ y = Y + v_o t - {1 \over 2} g \, t^2 \] and its velocity is \[ v = {dy \over dt} = v_o - g \, t \] and its acceleration is \[ a = {dv \over dt} = -g \] The derivative of a sum is the sum of the derivatives: For example, Product Rule for Derivatives. IV. Quotient Rule for Derivatives. Many students remember the quotient rule by thinking of the numerator as "hi," the demoninator as "lo," the derivative as "d," and then singing. "lo d-hi minus hi d-lo over lo-lo". 
[collapse] see in the example, it allows an efficient computation of the shape derivative without using the material derivative but some additional differentiability of the Lagrangian. In Section 4, we apply the results of Section 3 to a non-linear transmission problem. We present a mini-mization problem with penalization and its shape differentiability. 2. 1.1.2 Material trajectories and derivatives The integral curves of u are called material trajectories and they are given by the vector-valued functions X(t) that solve the initial-value problem t ∈ [0,T]: dX dt = u(X(t),t)andX(0) = X 0 (1.5) with some initial position X 0.Thus,X(t) is the trajectory during the time If Re˝ 1 and S˝ 1 then we should be able to neglect the material derivative term on the LHS of (3) which would yield the Stokes equations (in the absence of body forces): 0 = − 1 ρ ∇p+ν∇2u (6) 0 = ∇ ·u (7) Solutions of the Stokes equations will be focus of this chapter. 2.3 Poiseuille Flow How to update rockwell firmware Oct 28, 1998 · In addition, the equation has the potential to be applied to many complex acoustic problems, because the derived equation is regularized by using the integral identity that incorporates the one-dimensional propagating wave and its material derivative. The validity of the formulations is demonstrated through examples having regular shapes such ... ! ii!!! Chapter!5.!!Conservation!Laws! ! ! ! ! ! !!!!!103!! 5.1.!EquationofContinuity!! ! ! ! ! !!!!!103!! 5.2.!!Mass!Conservation!for!Material!Volume! ! ! !!!!!104! American government chapter 1 vocab quizlet John lewis target market 2020 Among us female imposter x male reader 2021 polaris rzr xp turbo review The material derivative of a scalar field φ( x, t) and a vector field u( x, t) is defined respectively as: where the distinction is that is the gradient of a scalar, while is the tensor derivative of a vector. 
In case of the material derivative of a vector field, the term v•∇u can both be interpreted as v•(∇u) Doosan forklift reviews Gateway church scandal Can am outlander quiet exhaust Maple creek miniature schnauzer rescue
CommonCrawl
Visual saliency detection for RGB-D images under a Bayesian framework

Songtao Wang (ORCID: orcid.org/0000-0003-2203-1572), Zhen Zhou, Wei Jin & Hanbing Qu

IPSJ Transactions on Computer Vision and Applications, volume 10, Article number: 1 (2018)

In this paper, we propose a saliency detection model for RGB-D images based on the deep features of RGB images and depth images within a Bayesian framework. By analysing 3D saliency in the case of RGB images and depth images, the class-conditional mutual information is computed to measure the dependence of deep features extracted using a convolutional neural network; then, the posterior probability of RGB-D saliency is formulated by applying Bayes' theorem. By assuming that the deep features follow Gaussian distributions, a discriminative mixed-membership naive Bayes (DMNB) model is used to calculate the final saliency map. The Gaussian distribution parameters of the DMNB model can be estimated using a variational inference-based expectation maximization algorithm. Experimental results on RGB-D images from the NLPR and NJU-DS400 datasets show that the proposed model performs better than existing models. Saliency detection is a fundamental problem in computer vision that aims to highlight visually salient regions or objects in an image. Le Callet and Niebur introduced the concepts of overt and covert visual attention and the concepts of bottom-up and top-down processing [1]. Visual attention models have been successfully applied in many domains, including multimedia delivery, visual retargeting, quality assessment of images and videos, medical imaging, and 3D image applications [1]. Today, with the development of 3D display technologies and devices, various applications are emerging for 3D multimedia, such as 3D video retargeting [2], 3D video quality assessment [3, 4], and so forth.
Overall, the emerging demand for visual attention-based applications for 3D multimedia has increased the need for computational saliency detection models for 3D multimedia content. Salient object detection has attracted a lot of interest in computer vision [5]. Numerous efforts have been devoted to designing different low-level saliency cues for 2D saliency detection, such as contrast-based features and background priors. Because human attention is preferentially attracted by regions that contrast highly with their surroundings, contrast-based features (such as colour, edge orientation, or texture contrast) have a crucial role in deriving salient objects [6]. The background prior leverages the fact that most salient objects are located far from image boundaries [7]. Based on the assumption that non-salient regions (i.e. the background) can be explained by a low-rank matrix, salient objects can also be defined as sparse noise in a certain feature space where the input image is represented as a low-rank matrix [8]. Most existing computational visual saliency models follow a bottom-up framework that generates an independent saliency map in each selected visual feature space and combines them in an appropriate way. To address these problems, Li et al. proposed a saliency map computational model based on tensor analysis [9]. Recently introduced sensing technologies, such as the Microsoft Kinect, provide excellent ability and flexibility for capturing RGB-D images. In addition to RGB information, depth has been shown to be one of the practical cues for extracting saliency. Furthermore, Ju et al. proposed a novel saliency method that works on depth images based on the anisotropic centre-surround difference [10]. In contrast to saliency detection for 2D images, the depth factor must be considered when performing saliency detection for RGB-D images.
Depth cues provide additional important information about content in the visual field and can therefore also be considered relevant features for saliency detection. With the additional depth information, RGB-D co-saliency detection, which is an emerging and interesting issue in saliency detection, aims to discover the common salient objects in a set of RGB-D images [11]. Stereoscopic content carries important additional binocular cues for enhancing human depth perception [12, 13]. Therefore, two important challenges when designing 3D saliency models are how to estimate saliency from depth cues and how to combine the saliency from depth features with that of other 2D low-level features. In this paper, we propose a new computational saliency detection model based on the deep features of RGB images and depth images within a Bayesian framework. The main contributions of our approach consist of two aspects: (1) to estimate saliency from depth cues, we propose a depth feature based on a convolutional neural network (CNN) trained by supervision transfer, and (2) by assuming that the deep features of RGB images and depth images are conditionally independent given the classes, the discriminative mixed-membership naive Bayes (DMNB) [14] model is used to calculate the final saliency map by applying Bayes' theorem.

Related work

In this section, we provide a brief survey and review of RGB-D saliency detection methods. These methods all contain a stage in which 2D saliency features are extracted. However, depending on how they use depth information in their computational models, they can be classified into three categories:

Depth-weighting models

This type of model uses depth information to weight a 2D saliency map and compute the final saliency map for RGB-D images via feature map fusion [15–18]. Fang et al.
proposed a novel 3D saliency detection framework based on colour, luminance, texture, and depth contrast features, and they designed a new fusion method to combine the feature maps to obtain the final saliency map for RGB-D images [15]. In [16], colour contrast features and depth contrast features were calculated to construct an effective multi-feature fusion for generating saliency maps, and multi-scale enhancement was performed on the saliency map to further improve the detection precision, with a focus on 3D salient object detection. Ciptadi et al. proposed a novel computational model of visual saliency that incorporates depth information and demonstrated the method by explicitly constructing 3D layout and shape features from depth measurements [17]. Iatsun et al. proposed a 3D saliency model relying on 2D saliency features jointly with depth obtained from monocular cues, in which 3D perception is significantly based on monocular cues [18]. The models in this category combine 2D features with a depth feature to calculate the final saliency map, but they do not include a depth saliency map in their computation processes.

Depth-pooling models

This type of model simply combines depth saliency maps with traditional 2D saliency maps to obtain saliency maps for RGB-D images [19–22]. Peng et al. provided a simple fusion framework that combines existing RGB-produced saliency with new depth-induced saliency: the former is estimated from existing RGB models, whereas the latter is based on a multi-contextual contrast model [19]. Ren et al. presented a two-stage 3D salient object detection framework, which first integrates the contrast region with the background, depth, and orientation priors to achieve a saliency map and then reconstructs the saliency map globally [20]. Xue et al.
proposed an effective visual object saliency detection model via RGB and depth cues with mutually guided manifold ranking and obtained the final result by fusing RGB and depth saliency maps [21]. Wang et al. proposed two different ways to integrate depth information in the modelling of 3D visual attention, where the measures of depth saliency are derived from the eye movement data obtained from an eye tracking experiment using synthetic stimuli [22]. The models in this category rely on the existence of "depth saliency maps". These depth saliency maps are finally combined with 2D saliency maps using a saliency map pooling strategy to obtain the final 3D saliency map.

Learning-based models

Rather than using a depth saliency map directly, this type of model uses machine learning techniques to construct a 3D saliency detection model for RGB-D images based on extracted 2D features and depth features [23–26]. Inspired by the recent success of machine learning techniques in constructing 2D saliency detection models, Fang et al. proposed a learning-based model for RGB-D images using a linear SVM [23]. Zhu et al. proposed a learning-based approach for extracting saliency from RGB-D images, in which discriminative features can be automatically selected by learning several decision trees based on the ground truth, and those features are further utilized to search for salient regions via the predictions of the trees [24]. Bertasius et al. developed an EgoObject Representation, which encodes these characteristics by incorporating shape, location, size, and depth features from an egocentric RGB-D image, and trained a random forest regressor to predict the saliency of a region using the ground-truth salient object [25]. Qu et al. designed a new CNN to fuse different low-level saliency cues into hierarchical features for automatically detecting salient objects in RGB-D images [26].
Most existing approaches for 3D saliency detection either treat the depth feature as an indicator to weight the RGB saliency map [15–18] or consider the 3D saliency map as the fusion of the saliency maps of these low-level features [19–22]. It is not clear how to integrate 2D saliency features with depth-induced saliency features in a better way, and linearly combining the saliency maps produced by these features cannot guarantee better results. Some other, more complex combination algorithms have been proposed. These methods combine the depth-induced saliency map with the 2D saliency map either directly [19] or in a hierarchical way to calculate the final RGB-D saliency map [20]. However, because they are restricted by the computed saliency values, these saliency map combination methods are not able to correct incorrectly estimated salient regions. From the above description, the key to 3D saliency detection models is determining how to integrate the depth cues with traditional 2D low-level features. In this paper, we focus on how to integrate RGB information and the additional depth information for RGB-D saliency detection. Saliency-map-level integration is not optimal because it is restricted by the determined saliency values. Instead, we incorporate colour and depth cues at the feature level within a Bayesian framework.

The proposed approach

In this section, we introduce a method that integrates the colour saliency probability with the depth saliency probability computed from Gaussian distributions based on deep features and yields a prediction of the final 3D saliency map using the DMNB model within a Bayesian framework. The general architecture of the proposed framework is presented in Fig. 1.

Fig. 1: The flowchart of the proposed model. The framework of our model consists of two stages: the training stage, which includes a depth CNN trained for feature learning and a generative process for saliency, and the testing stage.
From a pair of RGB and depth images, our model extracts deep features using a colour CNN and a depth CNN, respectively, and performs saliency prediction using the DMNB model [14] within a Bayesian framework. In this work, we perform experiments based on the NLPR dataset in [19]. First, we train a CNN model for depth images by teaching the network to reproduce the mid-level semantic representation learned from RGB images for which there are paired images. Then, deep features of the RGB and depth images are extracted by a CNN. Second, the class-conditional mutual information (CMI) is computed to measure the dependence of the deep features of the RGB and depth images; then, the posterior probability of RGB-D saliency is formulated by applying Bayes' theorem. These two features complement each other in detecting 3D saliency cues from different perspectives and, when combined, yield the final 3D saliency value. By assuming that the deep features follow Gaussian distributions, the parameters of the Gaussian distributions can be estimated in the DMNB model using a variational inference-based expectation maximization (EM) algorithm.

Feature extraction using CNN

Most existing saliency detection methods focus on how to design low-level saliency cues or model background priors. Low-level saliency cues alone do not produce good saliency detection results, particularly when salient objects are present in a low-contrast background with confusing visual cues. Objects in a low-contrast background cannot be classified as salient based on low-level saliency cues or background priors alone, but they are semantically salient in high-level cognition as they are distinct in object categories. Due to its capability of learning high-level semantic features, a CNN is effective for estimating the saliency maps of images and has been used for saliency detection [27, 28].
A CNN is able to generate representative and discriminative hyper-features, rather than relying on hand-designed heuristic features for saliency. To better detect semantically salient objects, it is important to use high-level knowledge of object categories. We employ deep convolutional neural networks to model the saliency of objects in RGB images and depth images. As shown in Fig. 2, the upper branch of our saliency detection pipeline is a deep CNN architecture with global context for RGB images, and the lower branch is a deep CNN architecture with global context for depth images. For RGB images, the Clarifai [29] model is adopted as the baseline model, and a task-specific pre-training scheme is designed to make the global-context modelling suitable for saliency detection [27]. We use a CNN similar to the Clarifai model for saliency detection, with pre-training using supervision transfer [30] for the limited labelled depth images. The supervision transfer occurs at the penultimate layer of the global context model. Taking the output of the penultimate layer of the two global context models as input, the DMNB model is trained to classify background and saliency, indicating the probabilities of whether a centred superpixel is in the background or belongs to a salient object.

Fig. 2: Architecture for supervision transfer. (a) The architecture of the Clarifai model, where Relu denotes a rectified linear function relu(x)=max(x,0), which rectifies the feature maps, ensuring they are always positive; lrn denotes a local response normalization layer; and Dropout is used in the fully connected layers with a rate of 0.5 to prevent the CNN from overfitting. (b) Upper branch: Deep CNN-based global-context modelling for RGB saliency detection with a superpixel-centred window padded with the mean pixel value of the RGB training dataset.
Lower branch: Deep CNN-based global-context modelling for depth saliency detection with a superpixel-centred window padded with the mean pixel value of the depth training dataset. We train a CNN model for depth images by teaching the network to reproduce the mid-level semantic representation learned from RGB images for which there are paired images. The supervision transfer occurs at the penultimate layer of the global context model. For the loss function, we use the L2 distance.

Deep features of RGB image

Superpixel segmentation is first performed on RGB-D images [31], and the input of the global-context CNN is a superpixel-centred large context window that includes the full RGB image. Regions that exceed the image boundaries are padded with the mean pixel value of the RGB training dataset. The padded images are then warped to 227 × 227 × 3 as input, where the three dimensions represent width, height, and number of channels. With this normalization and padding scheme, the superpixel to be classified is always located at the centre of the RGB image, and the spatial distribution of the global context is normalized in this way. Moreover, it ensures that the input covers the entire range of the original RGB image. We refer readers to [27] for further details.

Deep features of depth image

We demonstrate how we transfer supervision from RGB images to depth images, as obtained from a range sensor such as the Microsoft Kinect, for the downstream task of saliency detection. We consider the domain of RGB images as \(\mathcal {M}_{s}\), for which there is a large dataset of labelled images \(D_{s}\), and we treat depth images as \(\mathcal {M}_{d}\), with limited labelled data \(D_{d}\), for which we would like to train a rich representation for saliency detection. We use convolutional neural networks as our layered rich representation. For our layered image representation models, we use CNNs with the network architecture from the Clarifai model.
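The superpixel-centred padding scheme described above (mean-value padding so that the superpixel sits at the window centre, followed by warping to 227 × 227 × 3) can be sketched as follows. This helper is illustrative rather than the authors' implementation, and it omits the final resize step:

```python
import numpy as np

def superpixel_window(image, center, mean_pixel):
    """Build a superpixel-centred context window covering the full image.

    Regions outside the image boundaries are filled with the dataset mean
    pixel, so the superpixel centre always sits in the middle of the
    window. Hypothetical helper illustrating the padding scheme in the
    text; the returned window would then be warped (resized) to
    227 x 227 x 3 before entering the CNN.
    """
    h, w, c = image.shape
    cy, cx = center
    pad = max(h, w)
    canvas = np.empty((h + 2 * pad, w + 2 * pad, c), dtype=image.dtype)
    canvas[:] = mean_pixel                    # out-of-image regions get the mean
    canvas[pad:pad + h, pad:pad + w] = image  # paste the original image
    # Crop a square window of side 2*pad centred on the superpixel; it
    # always covers the entire range of the original image.
    cy, cx = cy + pad, cx + pad
    return canvas[cy - pad:cy + pad, cx - pad:cx + pad]
```

Because the crop side equals twice the larger image dimension, the window always contains the whole original image, matching the requirement that the input covers the entire range of the original RGB image.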
We denote the deep features of the RGB image as a corresponding \(K\)-layered rich representation \(\Phi =\{\phi ^{i}_{\mathcal {M}_{s}, D_{s}}, \forall i \in \, [\!1 \cdots K]\}\), where \(\phi ^{i}_{\mathcal {M}_{s}, D_{s}}\) is the \(i\)th layer of the Clarifai model for modality \(\mathcal {M}_{s}\) that has been trained on labelled images from dataset \(D_{s}\). Now, we want to learn the deep features of depth images from modality \(\mathcal {M}_{d}\), for which we do not have access to a large dataset of labelled depth images. We have already hand-designed an appropriate CNN architecture \(\Psi =\{\psi ^{i}_{\mathcal {M}_{d}}, \forall i \in [1 \cdots L]\}\) from the Clarifai model. The task is then to effectively learn the parameters associated with the various operations in the CNN architecture without having access to a large set of annotated images for modality \(\mathcal {M}_{d}\). The scheme for training the depth CNN for depth images of modality \(\mathcal {M}_{d}\) is to learn the parameters of CNN \(\Psi\) such that the feature vectors from \(\psi ^{L}_{\mathcal {M}_{d}}(I_{d})\) for image \(I_{d}\) match the feature vectors from \(\psi ^{i^{*}}_{\mathcal {M}_{s},D_{s}}(I_{s})\) for its image pair \(I_{s}\) in modality \(\mathcal {M}_{s}\) for some chosen and fixed layer \(i^{*}\in [1 \cdots K]\). By paired images, we mean a set of images of the same scene in two different modalities. We denote these parameters of the CNN, \(W^{[1 \cdots L]}_{d}=\{ \boldsymbol {w}^{i}_{d}, \forall i \in \, [\!1 \cdots L] \}\), to be learned by supervision transfer from layer \(i^{*}\) in \(\Phi\) of modality \(\mathcal {M}_{s}\) to layer \(L\) in \(\Psi\) of modality \(\mathcal {M}_{d}\): $$\begin{array}{*{20}l} \min\limits_{W^{[1\cdots L]}_{d}} \sum\limits_{(I_{s},I_{d})\in U_{s,d}} f(\psi^{L}_{\mathcal{M}_{d}}(I_{d}),\phi^{i^{*}}_{\mathcal{M}_{s}, D_{s}}(I_{s})) \end{array} $$ where \(U_{s,d}\) denotes the NLPR dataset, which includes paired images from modalities \(\mathcal {M}_{s}\) and \(\mathcal {M}_{d}\).
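For a fixed set of paired images, the minimization above reduces to summing squared distances between the two networks' feature vectors. A minimal sketch of the objective with \(f(\cdot)=||\cdot||^{2}_{2}\), assuming the feature extractors themselves are provided elsewhere:

```python
import numpy as np

def transfer_loss(depth_feats, rgb_feats):
    """Supervision-transfer objective: sum over paired images (I_d, I_s)
    of the squared L2 distance between the depth network's layer-L
    features and the (frozen) RGB network's layer-i* features.

    Both inputs are (num_pairs, feature_dim) arrays; the feature vectors
    are assumed to have been computed by the two CNNs.
    """
    diff = depth_feats - rgb_feats
    return float(np.sum(diff * diff))
```

In training, this scalar would be minimized with respect to the depth network's parameters \(W^{[1\cdots L]}_{d}\) by backpropagation while the RGB network stays fixed.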
For the loss function \(f\), we use the L2 distance between the feature vectors, \(f(\cdot)=||\cdot ||^{2}_{2}\). Then, the deep features of depth images are extracted by CNN \(\Psi\).

Bayesian framework for saliency detection

Let the binary random variable \(\boldsymbol{z}_{s}\) denote whether a point belongs to the salient class. Given the observed deep features of the RGB image \(\boldsymbol{x}_{c}\) and the deep features of the depth image \(\boldsymbol{x}_{d}\) at that point, we formulate saliency detection as a Bayesian inference problem to estimate the posterior probability at each pixel of the RGB-D image: $$ p(\boldsymbol{z}_{s}|\boldsymbol{x}_{c},\boldsymbol{x}_{d}) = \frac{p(\boldsymbol{z}_{s},\boldsymbol{x}_{c},\boldsymbol{x}_{d})}{p(\boldsymbol{x}_{c},\boldsymbol{x}_{d})} $$ where \(p(\boldsymbol{z}_{s}|\boldsymbol{x}_{c},\boldsymbol{x}_{d})\) is shorthand for the probability of predicting whether a pixel is salient, \(p(\boldsymbol{x}_{c},\boldsymbol{x}_{d})\) is the likelihood of the observed deep features of the RGB and depth images, and \(p(\boldsymbol{z}_{s},\boldsymbol{x}_{c},\boldsymbol{x}_{d})\) is the joint probability of the latent class and observed features, defined as \(p(\boldsymbol{z}_{s},\boldsymbol{x}_{c},\boldsymbol{x}_{d})=p(\boldsymbol{z}_{s})p(\boldsymbol{x}_{c},\boldsymbol{x}_{d}|\boldsymbol{z}_{s})\). In this paper, the class-conditional mutual information (CMI) is used as a measure of the dependence between the two features \(\boldsymbol{x}_{c}\) and \(\boldsymbol{x}_{d}\), defined as \(I(\boldsymbol{x}_{c},\boldsymbol{x}_{d}|\boldsymbol{z}_{s})=H(\boldsymbol{x}_{c}|\boldsymbol{z}_{s})+H(\boldsymbol{x}_{d}|\boldsymbol{z}_{s})-H(\boldsymbol{x}_{c},\boldsymbol{x}_{d}|\boldsymbol{z}_{s})\), where \(H(\boldsymbol{x}_{c}|\boldsymbol{z}_{s})\) is the class-conditional entropy of \(\boldsymbol{x}_{c}\), defined as \(-\sum _{i} p(\boldsymbol {z}_{s}=i)\sum _{\boldsymbol {x}_{c}}p(\boldsymbol {x}_{c}|\boldsymbol {z}_{s}=i)\log p(\boldsymbol {x}_{c}|\boldsymbol {z}_{s}=i)\). Mutual information is zero when \(\boldsymbol{x}_{c}\) and \(\boldsymbol{x}_{d}\) are mutually independent given the class \(\boldsymbol{z}_{s}\) and increases with the level of dependence, reaching its maximum when one feature is a deterministic function of the other. Indeed, the independence assumption becomes more accurate with decreasing entropy, which yields asymptotically optimal performance of the naive Bayes classifier [32].
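The class-conditional mutual information above can be estimated from training samples by histogramming the features within each class and plugging the empirical distributions into the entropy terms. The sketch below assumes scalar features and is illustrative rather than the authors' implementation:

```python
import numpy as np

def class_conditional_mi(xc, xd, z, bins=8):
    """Plug-in estimate of I(x_c; x_d | z_s) from samples, via
    I = H(x_c|z) + H(x_d|z) - H(x_c, x_d|z), with per-class histograms.

    xc, xd: 1-D arrays of scalar feature values; z: class labels.
    Illustrative estimator; bin count is an arbitrary choice here.
    """
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    mi = 0.0
    for label in np.unique(z):
        mask = z == label
        pz = mask.mean()                      # p(z_s = label)
        joint, _, _ = np.histogram2d(xc[mask], xd[mask], bins=bins)
        joint = joint / joint.sum()           # empirical joint within the class
        hc = entropy(joint.sum(axis=1))       # H(x_c | z = label)
        hd = entropy(joint.sum(axis=0))       # H(x_d | z = label)
        hcd = entropy(joint.ravel())          # H(x_c, x_d | z = label)
        mi += pz * (hc + hd - hcd)
    return mi
```

Comparing this estimate against the threshold \(\tau\) then decides whether the conditional independence factorization may be applied.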
The visual result for class-conditional mutual information between the deep features of RGB images and depth images on the NLPR dataset is shown in Fig. 5. We employ a CMI threshold \(\tau\) to discover feature dependencies. When the CMI between the deep features of the RGB and depth images is less than \(\tau\), we assume that \(\boldsymbol{x}_{c}\) and \(\boldsymbol{x}_{d}\) are conditionally independent given the class \(\boldsymbol{z}_{s}\), that is, \(p(\boldsymbol{x}_{c},\boldsymbol{x}_{d}|\boldsymbol{z}_{s})=p(\boldsymbol{x}_{c}|\boldsymbol{z}_{s})p(\boldsymbol{x}_{d}|\boldsymbol{z}_{s})\). This entails the assumption that the distribution of the deep features of RGB images does not change with the deep features of depth images. Thus, the pixel-wise saliency posterior is given by \(p(\boldsymbol{z}_{s}|\boldsymbol{x}_{c},\boldsymbol{x}_{d})\propto p(\boldsymbol{z}_{s})p(\boldsymbol{x}_{c}|\boldsymbol{z}_{s})p(\boldsymbol{x}_{d}|\boldsymbol{z}_{s})\).

Generative model for saliency estimation

Given the graphical model of DMNB for saliency detection shown in Fig. 3, the generative process for \(\{\boldsymbol{x}_{1:N},\boldsymbol{y}\}\) following the DMNB model can be described as follows (Algorithm 1), where Dir(·) is shorthand for a Dirichlet distribution, Mult(·) is shorthand for a multinomial distribution, \(\boldsymbol{x}_{1:N}=(\boldsymbol{x}_{c},\boldsymbol{x}_{d})\), \(\boldsymbol{z}_{1:N}=\boldsymbol{z}_{s}=(\boldsymbol{z}_{c},\boldsymbol{z}_{d})\), \(N\) is the number of features, and \(\boldsymbol{y}\) is the label that indicates whether the pixel is salient.

Fig. 3: Graphical models of DMNB for saliency estimation. \(\boldsymbol{y}\) and \(\boldsymbol{x}\) are the corresponding observed states, and \(\boldsymbol{z}\) is the hidden variable, where each feature \(\boldsymbol{x}_{j}\) is assumed to have been generated from one of \(C\) Gaussian distributions with means \(\{\mu _{jk},[j]_{1}^{N}\}\) and variances \(\{\sigma _{jk}^{2},[j]_{1}^{N}\}\), and \(\boldsymbol{y}\) is either 0 or 1, indicating whether the pixel is salient.
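Under the factorization \(p(\boldsymbol{x}_{c},\boldsymbol{x}_{d}|\boldsymbol{z}_{s})=p(\boldsymbol{x}_{c}|\boldsymbol{z}_{s})p(\boldsymbol{x}_{d}|\boldsymbol{z}_{s})\), the pixel-wise posterior can be evaluated in log space for numerical stability. The following sketch uses one-dimensional Gaussian class-conditionals; the per-class means and variances here are placeholders for the estimated model parameters:

```python
import numpy as np

def saliency_posterior(xc, xd, prior, params):
    """Pixel-wise posterior p(z_s = 1 | x_c, x_d) under the naive Bayes
    factorization with Gaussian class-conditional densities.

    prior[k] is p(z_s = k); params[k] = (mu_c, var_c, mu_d, var_d) for
    class k in {0, 1}. All names are illustrative.
    """
    def log_gauss(x, mu, var):
        return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

    log_post = []
    for k in (0, 1):
        mu_c, var_c, mu_d, var_d = params[k]
        log_post.append(np.log(prior[k])
                        + log_gauss(xc, mu_c, var_c)
                        + log_gauss(xd, mu_d, var_d))
    log_post = np.stack(log_post)
    log_post -= log_post.max(axis=0)   # subtract the max for numerical stability
    post = np.exp(log_post)
    return post[1] / post.sum(axis=0)
```

Applying this to every pixel (or superpixel) produces a saliency map whose values are posterior probabilities in [0, 1].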
The marginal distribution of (x1:N,y) is $$\begin{array}{*{20}l} p(\boldsymbol{x}_{1:N},\boldsymbol{y}|\alpha,\Omega,\eta)\,=\,\int \!p(\theta|\alpha)\!\left(\prod\limits_{j=1}^{N}\sum\limits_{\boldsymbol{z}_{j}} p(\boldsymbol{z}_{j}|\theta)p(\boldsymbol{x}_{j}|\boldsymbol{z}_{j},\Omega_{j})p(\boldsymbol{y}|\boldsymbol{z}_{j},\eta)\right)\!d\theta \end{array} $$ where θ is the prior distribution over C components, \(\Omega =\{(\mu _{jk},\sigma _{jk}^{2}),[j]_{1}^{N},[k]_{1}^{C}\}\) are the parameters for the distributions of N features, and \(p(\boldsymbol {x}_{j}|\boldsymbol {z}_{j},\Omega _{j})\triangleq \mathcal {N}(\boldsymbol {x}_{j}|\mu _{jk},\sigma _{jk}^{2})\). In two-class classification, y is either 0 or 1 generated from Bern(y|η). Because the DMNB model assumes a generative process for both the labels and features, we use both \(\mathcal {X}=\{(\boldsymbol {x}_{ij}),[i]_{1}^{\mathcal {M}},[j]_{1}^{N}\}\) and \(\mathcal {Y}=\{\boldsymbol {y}_{i},[i]_{1}^{\mathcal {M}}\}\) as a collection of \(\mathcal {M}\) superpixels in trained images from the generative process to estimate the parameters of the DMNB model such that the likelihood of observing \((\mathcal {X},\mathcal {Y})\) is maximized. In practice, we may find a proper C using the Dirichlet process mixture model (DPMM)[33]. The DPMM thus provides a nonparametric prior for the parameters of a mixture model that allows the number of mixture components to increase as the training set increases, as shown in Fig. 6. Due to the latent variables, the computation of the likelihood in Eq. 3 is intractable. In this paper, we use a variational inference method, which alternates between obtaining a tractable lower bound to the true log-likelihood and choosing the model parameters to maximize the lower bound. 
By directly applying Jensen's inequality [14], the lower bound to log p(y,x1:N|α,Ω,η) is given by $$\begin{array}{*{20}l} \log p(\boldsymbol{y},\boldsymbol{x}_{1:N}|\alpha,\Omega,\eta)&\geq\boldsymbol{E}_{q}(\log p(\boldsymbol{y},\boldsymbol{x}_{1:N},\boldsymbol{z}_{1:N}|\alpha,\Omega,\eta))\\&+\boldsymbol{H}(q(\boldsymbol{z}_{1:N},\theta|\gamma,\phi)) \end{array} $$ Noting that x1:N and y are conditionally independent given z1:N, we use a variational distribution: $$ q(\boldsymbol{z}_{1:N},\theta|\gamma,\phi)=q(\theta|\gamma)\prod\limits_{j=1}^{N}q(\boldsymbol{z}_{j}|\phi) $$ where q(θ|γ) is a C-dimensional Dirichlet distribution for θ and q(z j |ϕ) is a discrete distribution for z j . We use \(\mathcal {L}\) to denote the lower bound: $$ {{}\begin{aligned} \mathcal{L}&=\boldsymbol{E}_{q}[\log p(\theta|\alpha)]+\boldsymbol{E}_{q}[\log p(\boldsymbol{z}_{1:N}|\theta)] \\ &\quad +\boldsymbol{E}_{q}[\log p(\boldsymbol{x}_{1:N}|\boldsymbol{z}_{1:N},\Omega)] \\ &\quad -\boldsymbol{E}_{q}[\! \log q(\theta)] - \boldsymbol{E}_{q}[\! \log q(\boldsymbol{z}_{1:N})]+\boldsymbol{E}_{q}[\! \log p(\boldsymbol{y}|\boldsymbol{z}_{1:N},\eta)] \end{aligned}} $$ where \(\boldsymbol {E}_{q}[\log p(\boldsymbol {y}|\boldsymbol {z}_{1:N},\eta)]\geq \sum _{k=1}^{C}\phi _{k}(\eta _{k}\boldsymbol {y}-\frac {e^{\eta _{k}}}{\xi })-(\frac {1}{\xi }+\log \xi)\) and ξ>0 is a newly introduced variational parameter. 
Maximizing the lower-bound function \(\mathcal {L}(\gamma _{k},\phi _{k},\xi ;\alpha,\Omega,\eta)\) with respect to the variational parameters yields updated equations for γ k , ϕ k and ξ as follows: $$\begin{array}{*{20}l} \phi_{k}\propto e^{(\Psi(\gamma_{k})-\Psi(\sum_{l=1}^{C}\gamma_{l})+\frac{1}{N}(\eta_{k}\boldsymbol{y}_{i}-\frac{e^{\eta_{k}}}{\xi}-\sum_{j=1}^{N}\frac{(\boldsymbol{x}_{ij}-\mu_{jk})^{2}}{2\sigma_{jk}^{2}}))} \end{array} $$ $$ \gamma_{k}=\alpha + N\phi_{k} $$ $$ \xi=1+{\sum\nolimits}_{k=1}^{C}\phi_{k}e^{\eta_{k}} $$ The variational parameters (γ∗,ϕ∗,ξ∗) from the inference step provide the optimal lower bound to the log-likelihood of (x i ,y i ), and maximizing the aggregate lower bound \(\sum _{i=1}^{\mathcal {M}}\mathcal {L}(\gamma ^{*},\phi ^{*},\xi ^{*},\alpha,\Omega,\eta)\) over all data points with respect to α, Ω and η, respectively, yields the estimated parameters. For μ, σ and η, we have \(\mu _{jk}=\frac {\sum _{i=1}^{\mathcal {M}}\phi _{ik}\boldsymbol {x}_{ij}}{\sum _{i=1}^{\mathcal {M}}\phi _{ik}}\), \(\sigma _{jk}=\frac {\sum _{i=1}^{\mathcal {M}}\phi _{ik}(\boldsymbol {x}_{ij}-\mu _{jk})^{2}}{\sum _{i=1}^{\mathcal {M}}\phi _{ik}}\), and \(\eta _{k}=\log (\frac {\sum _{i=1}^{\mathcal {M}}\phi _{ik}\boldsymbol {y}_{i}}{\sum _{i=1}^{\mathcal {M}}\frac {\phi _{ik}}{\xi _{i}}})\). Based on the variational inference and parameter estimation updates, it is straightforward to construct a variational EM algorithm to estimate (α,Ω,η). Starting with an initial guess (α0,Ω0,η0), the variational EM algorithm alternates between two steps, as follows (Algorithm 2). 
$$\begin{array}{*{20}l} {}\left(\gamma_{i}^{(t)},\phi_{i}^{(t)},\xi_{i}^{(t)}\!\right)\!\!&= \arg \!\max\limits_{\gamma_{i},\phi_{i},\xi_{i}}\mathcal{L}\!\left(\!\gamma_{i},\phi_{i},\xi_{i};\alpha^{(t-1)}\!,\Omega^{(t-1)},\eta^{(t-1)}\right) \end{array} $$ $$\begin{array}{*{20}l} {}\left(\alpha^{(t)},\Omega^{(t)},\eta^{(t)}\right)\!&= \arg\max\limits_{(\alpha,\Omega,\eta)}\sum\limits_{i=1}^{\mathcal{M}} \mathcal{L}\left(\gamma_{i}^{(t)},\phi_{i}^{(t)},\xi_{i}^{(t)};\alpha,\Omega,\eta\right) \end{array} $$ After obtaining the DMNB model parameters from the EM algorithm, we can use η to perform saliency prediction. Given the feature x1:N, we have $$ \begin{aligned} &\boldsymbol{E}[\log p(\boldsymbol{y}|\boldsymbol{x}_{1:N},\alpha,\Omega,\eta)]=&&\left\{ \begin{array}{rcl} &\eta^{T}\boldsymbol{E}[\overline{\boldsymbol{z}}]-\boldsymbol{E}[\log(1+e^{\eta^{T}\overline{\boldsymbol{z}}})]&{\boldsymbol{y}=1}\\ &0-\boldsymbol{E}[\log(1+e^{\eta^{T}\overline{\boldsymbol{z}}})]&{\boldsymbol{y}=0}\\ \end{array} \right. \end{aligned} $$ where \(\overline {\boldsymbol {z}}\) is an average of z1:N over all of the observed features. The computation for \(\boldsymbol {E}[\overline {\boldsymbol {z}}]\) is intractable; therefore, we again introduce the distribution q(z1:N,θ) and calculate \(\boldsymbol {E}_{q}[\overline {\boldsymbol {z}}]\) as an approximation of \(\boldsymbol {E}[\overline {\boldsymbol {z}}]\). In particular, \(\boldsymbol {E}_{q}[\overline {\boldsymbol {z}}]=\phi \); therefore, we only need to compare \(\eta^{T}\phi\) with 0. Experimental evaluation Evaluation datasets In this section, we conduct experiments to demonstrate the performance of our method. We use the NLPR dataset and the NJU-DS400 dataset to evaluate the performance of the proposed model, as shown in Table 1. 
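The variational update equations for φ_k, γ_k and ξ given earlier, together with the prediction rule η^Tφ > 0, can be sketched as follows. This is an illustrative pure-Python sketch (digamma is approximated locally; shapes and names are assumptions, not the authors' code):

```python
import math

def digamma(x):
    # Ψ(x) via the recurrence Ψ(x) = Ψ(x+1) - 1/x plus an asymptotic series
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1.0 / 12 - f * (1.0 / 120 - f / 252))

def variational_step(x_i, y_i, alpha, mu, sigma2, eta, n_iter=50):
    """Coordinate ascent for (γ, φ, ξ) with model parameters (α, Ω, η) fixed.

    mu[j][k], sigma2[j][k]: Gaussian parameters of feature j under component k.
    """
    C, N = len(eta), len(x_i)
    phi = [1.0 / C] * C
    gamma = [alpha + N * p for p in phi]
    xi = 1.0 + sum(p * math.exp(e) for p, e in zip(phi, eta))
    for _ in range(n_iter):
        dg_sum = digamma(sum(gamma))
        log_phi = []
        for k in range(C):
            quad = sum((x_i[j] - mu[j][k]) ** 2 / (2.0 * sigma2[j][k])
                       for j in range(N))
            log_phi.append(digamma(gamma[k]) - dg_sum
                           + (eta[k] * y_i - math.exp(eta[k]) / xi - quad) / N)
        m = max(log_phi)                              # normalise in log space
        w = [math.exp(v - m) for v in log_phi]
        s = sum(w)
        phi = [v / s for v in w]                      # φ_k update
        gamma = [alpha + N * p for p in phi]          # γ_k = α + N φ_k
        xi = 1.0 + sum(p * math.exp(e) for p, e in zip(phi, eta))  # ξ update
    return gamma, phi, xi

def predict_salient(eta, phi):
    """Label a pixel salient iff η^T φ > 0, i.e. σ(η^T φ) > 0.5."""
    return 1 if sum(e * p for e, p in zip(eta, phi)) > 0.0 else 0
```

In a full EM loop these updates form the E-step for each superpixel, after which (α, Ω, η) are re-estimated as in the closed-form expressions above.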
The NLPR dataset [19] includes 1000 images of diverse scenes in real 3D environments, where the ground-truth was obtained by requiring five participants to select regions where objects are present, i.e. the salient regions were marked by hand. The NJU-DS400 dataset [10] includes 400 images of different scenes, where the ground-truth was obtained by four volunteers labelling the salient object masks. Table 1 Comparison of the benchmark and existing 3D saliency detection datasets We analyse the 3D saliency situation in RGB-D images based on human judgement. In terms of the NLPR dataset [19], the 3D saliency is decided jointly by RGB images and depth images, as shown in Fig. 4. For each selected image pair from the NLPR dataset, three participants are asked to draw a rectangle around the most attention-grabbing region at first glance in the RGB image and the depth image, respectively. The 3D saliency situation is determined by thresholding the overlap ratio between the rectangle and the corresponding ground truth salient object mask. We use the Intersection over Union (IOU) to measure the match between the bounding boxes and the ground truth. The IOU threshold is set at 0.5. The 3D saliency situation in RGB-D images follows three conditions: 3D saliency situation in RGB-D images. a Colour-depth saliency: both RGB images and depth images are salient. b Colour saliency: only RGB images are salient. c Depth saliency: only depth images are salient Colour-depth saliency, in which both IOU values of RGB images and depth images are more than the IOU threshold, defined as \(\mathcal {D}^{b}=\{\mathcal {I}^{b}_{c}, \mathcal {I}^{b}_{d} \}\), where \(\mathcal {I}^{b}_{c}\) and \(\mathcal {I}^{b}_{d}\) denote RGB images and depth images, respectively. 
Colour saliency, in which only IOU values of RGB images are more than the IOU threshold and IOU values of depth images are less than the IOU threshold, defined as \(\mathcal {D}^{c}=\{\mathcal {I}^{c}_{c}, \mathcal {I}^{c}_{d} \}\), where \(\mathcal {I}^{c}_{c}\) and \(\mathcal {I}^{c}_{d}\) denote RGB images and depth images, respectively. Depth saliency, in which only IOU values of depth images are more than the IOU threshold and IOU values of RGB images are less than the IOU threshold, defined as \(\mathcal {D}^{d}=\left \{\mathcal {I}^{d}_{c}, \mathcal {I}^{d}_{d} \right \}\), where \(\mathcal {I}^{d}_{c}\) and \(\mathcal {I}^{d}_{d}\) denote RGB images and depth images, respectively. We removed RGB-D image pairs with severely overlapping salient objects and this leaves us with 992 images out of 1000 images from NLPR dataset. The image proportion of the three conditions about 3D saliency in RGB-D images is shown in Table 2. In the NLPR RGB-D dataset, most of the regions are 3D salient regions in the RGB images and depth images, namely, the colour-depth saliency ratio reaches 76.7%, which is much higher than the colour saliency situation and the depth saliency situation. These split datasets are used for training and evaluation. Table 2 3D saliency situation in terms of the NLPR dataset There are currently no specific and standardized measures for computing the similarity between the fixation density maps and saliency maps created using computational models in 3D situations. Nevertheless, there is a range of different measures that are widely used to perform comparisons of saliency maps for 2D content. We introduce two types of measures to evaluate algorithm performance on the benchmark. The first one is the gold standard: F-measure. The second is the precision-recall (PR) curve. 
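The IOU computation and the resulting three-way categorisation of image pairs described above can be sketched as follows (boxes as (x1, y1, x2, y2) tuples; names are illustrative):

```python
def iou(a, b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def saliency_situation(iou_rgb, iou_depth, tau=0.5):
    """Assign an RGB-D pair to one of the three 3D saliency situations."""
    if iou_rgb > tau and iou_depth > tau:
        return "colour-depth"
    if iou_rgb > tau:
        return "colour"
    if iou_depth > tau:
        return "depth"
    return "neither"
```

Pairs falling into "neither" (e.g. severely overlapping salient objects) are the ones removed from the dataset.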
A continuous saliency map can be converted into a binary mask using a threshold, resulting in a pair of precision and recall values when the binary mask is compared against the ground truth. A PR curve is then obtained by varying the threshold from 0 to 1. The PR curve indicates the mean precision and recall of the saliency map at various thresholds. We follow the default setup of the MC procedure from [27] for training the depth CNN using the Caffe CNN library [34]. For training the depth CNN using supervision transfer, we copy the weights from the RGB CNN [27] that was pre-trained on ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) 2014 [35] and fine-tuned for saliency detection on the MSRA10K dataset [36] to initialize this network, set the base learning rate to 0.001, and reduce it by a factor of 10 every 1000 iterations; unlike the default setup, we fine-tune all the layers. We randomly select 600 depth images \(\mathcal {I}^{b}_{d}\) for training and 100 for validation from \(\mathcal {D}^{b}\). From each depth image, we select an average of 200 superpixels, and in total, approximately 120 thousand input windows for training and 20 thousand for validation are generated. We label a patch as salient if 50% of the pixels in this patch are salient; otherwise, it is labelled as non-salient. Training of the depth CNN for 10 thousand iterations takes 60 h without a GPU. Parameter settings A summary of the parameters in this paper is shown in Table 3. To evaluate the quality of the proposed approach, we divided the datasets into two subsets according to their CMI values, and we kept 20% of the data for testing purposes and trained on the remaining 80% whose CMI values are less than the CMI threshold τ. As shown in Fig. 5, we compute the CMI for all of the RGB-D images, and the parameter τ is set to 0.2, which is a heuristically determined value. 
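The threshold sweep that produces the PR curve described above can be sketched as follows (flattened saliency map and ground-truth mask; names are illustrative):

```python
def pr_curve(saliency, gt, thresholds):
    """Precision/recall pairs from binarising a saliency map at each threshold.

    saliency: list of per-pixel scores in [0, 1]; gt: list of 0/1 labels.
    """
    pts = []
    for t in thresholds:
        tp = sum(1 for s, g in zip(saliency, gt) if s >= t and g)
        fp = sum(1 for s, g in zip(saliency, gt) if s >= t and not g)
        fn = sum(1 for s, g in zip(saliency, gt) if s < t and g)
        precision = tp / (tp + fp) if tp + fp else 1.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        pts.append((precision, recall))
    return pts
```

Sweeping `thresholds` over, say, 0.0 to 1.0 in small steps traces out the full curve.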
Table 3 The parameters and their settings in this paper Visual result for class-conditional mutual information between deep features of RGB images and depth images on the NLPR dataset, where blue star denotes the CMI value and red rectangle denotes the CMI histogram. The 3D saliency situation in RGB-D images is defined as colour-depth saliency, colour saliency, and depth saliency. a Colour-depth saliency. b Colour saliency. c Depth saliency We initialize the model parameters using all data points and their labels in the training set in Algorithm 1. In particular, we use the mean and standard deviation of the data points in each class to initialize Ω and the ratio of data points in different classes to initialize α i . The effect of the parameters The parameter C in Algorithm 1 is set according to the training set based on DPMM, as shown in Fig. 6. The appropriate number of mixture components to use in the DMNB model for saliency estimation is generally unknown, and DPMM provides an attractive alternative to current methods. In practice, we find the initial number of components C using the DPMM based on 90% of the training set, and then we perform a cross validation with a range of C by holding out 10% of the training data as the validation data. Visual result for the number of components C in the DMNB model: generative clusters vs DPMM clustering. a Generative clusters for NLPR image datasets, where green and red denote the distributions of salient and non-salient features, respectively. b DPMM clustering for NLPR image datasets, where the number of colours and shapes of the points denote the number of components C. The appropriate number of mixture components to use in the DMNB model for saliency estimation is generally unknown, and DPMM provides an attractive alternative to current methods. We find that C=24 using DPMM on the NLPR dataset We use 10-fold cross-validation with the parameter C for DMNB models. 
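The initialisation described above (per-class mean and standard deviation for Ω, class ratios for α) can be sketched for a single scalar feature as follows (an illustrative sketch, not the authors' code):

```python
def init_params(xs, ys):
    """Per-class mean/variance (for Ω) and class ratios (for α) from labels."""
    classes = sorted(set(ys))
    mu, var, alpha = {}, {}, {}
    for c in classes:
        pts = [x for x, y in zip(xs, ys) if y == c]
        m = sum(pts) / len(pts)
        mu[c] = m
        var[c] = sum((p - m) ** 2 for p in pts) / len(pts)
        alpha[c] = len(pts) / len(xs)
    return mu, var, alpha
```

In the N-feature case the same statistics would be computed independently per feature dimension.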
In a 10-fold cross-validation, we divide the dataset evenly into 10 parts, one of which is selected as the validation set, and the remaining 9 parts are used as the training set. The process is repeated 10 times, with each part used once as the validation set. We use perplexity as the measurement for comparison. The generative models are capable of assigning a log-likelihood logp(x i ) to each observed data point x i . Based on the log-likelihood scores, we compute the perplexity of the entire dataset as \(\text {perplexity}=\exp \left (-\sum _{i=1}^{M} \frac {\log p(\boldsymbol {x}_{i})}{M}\right)\), where M is the number of data points. The perplexity is a monotonically decreasing function of the log-likelihood, implying that a lower perplexity is better (particularly on the test set) since the model can explain the data better. We calculate the perplexity for results on the validation set and training set, as shown in Fig. 7. Finally, for all the experiments described below, the parameter C was fixed at 24, and no user fine-tuning was performed. Cross validation. We use 10-fold cross-validation with the parameter C for DMNB models. The C found using DPMM was adjusted over a wide range in a 10-fold cross-validation Compared methods Let us compare our saliency model (BFSD) with a number of existing state-of-the-art methods, including graph-based manifold ranking (GMR)[7]; multi-context deep learning (MC)[27]; multiscale deep CNN (MDF)[28]; anisotropic centre-surround difference (ACSD)[10]; saliency detection at low-level, mid-level, and high-level stages (LMH)[19]; and exploiting global priors (GP)[20], among which GMR, MC and MDF are developed for RGB images, LMH and GP for RGB-D images, and ACSD for depth images. All of the results are produced using the public codes that are offered by the authors of the previously mentioned literature reports. Qualitative experiment Colour-depth saliency In this case, both RGB images and depth images are salient. 
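The perplexity measure used above for model selection is a direct transform of the per-point log-likelihoods:

```python
import math

def perplexity(log_likelihoods):
    """perplexity = exp(-sum_i log p(x_i) / M); lower is better."""
    return math.exp(-sum(log_likelihoods) / len(log_likelihoods))
```

As a sanity check, a model that assigns uniform probability 1/k to every point has perplexity exactly k.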
The comparison of the state-of-the-art approaches is presented in Fig. 8. As shown in the first and seventh rows of Fig. 8, the salient object has a high colour contrast with the background; thus, RGB saliency methods are able to correctly detect salient objects. GMR fails to detect many pixels on the prominent objects because it does not define the pseudo-background accurately, e.g. the third row in Fig. 8. As shown, the proposed method can accurately locate the salient objects and produce nearly equal saliency values for the pixels within the target objects. Visual comparison of the saliency detection in the colour-depth saliency situation in terms of the NLPR dataset. a RGB.b Depth. c Ground truth. d ACSD. e GMR. f MC. g MDF. h LMH. i GP. j BFSD Colour saliency In this case, only RGB images are salient. The comparison of the state-of-the-art approaches is presented in Fig. 9. ACSD works on depth images on the assumption that salient objects tend to stand out from the surrounding background, which takes relative depth into consideration. ACSD performs worse when the salient object lies in the same plane as the background, e.g. the third row in Fig. 9. It is challenging because most of the salient objects share similar depth as the background. Consequently, depth saliency methods perform relatively worse than RGB saliency methods in terms of precision. Ren et al. proposed two priors, which are the normalized depth prior and the global-context surface orientation prior [20]. Because their approach uses the two priors, it has problems when such priors are invalid, e.g. the first row in Fig. 9. Figure 9 shows that the proposed method consistently outperforms all the other saliency methods. Visual comparison of the saliency detection in the colour saliency situation in terms of the NLPR dataset. a RGB. b Depth. c Ground truth. d ACSD. e GMR. f MC. g MDF. h LMH. i GP. j BFSD Depth saliency In this case, only depth images are salient. 
The comparison of the state-of-the-art approaches is presented in Fig. 10. When a salient object shares a similar colour with the background, it is difficult for existing RGB models to extract saliency. With the help of depth information, a salient object can easily be detected by the proposed RGB-D method. In particular, when the salient object shares similar object categories, e.g. the first row in Fig. 10, MC and MDF generate unsatisfying results without depth cues. ACSD is not designed for such complex scenes but rather single dominant-object depth images. By providing an accurate depth map, the LMH and GP methods perform well in both precision and recall. LMH uses a simple fusion framework that takes advantage of both depth and appearance cues from the low, mid, and high levels. The background is nicely excluded; however, many pixels on the salient object are not detected as salient, e.g. the second row in Fig. 10. Figure 10 also shows that the proposed method consistently outperforms all the other saliency methods. Visual comparison of the saliency detection in the depth saliency situation in terms of the NLPR dataset. a RGB. b Depth. c Ground truth. d ACSD. e GMR. f MC. g MDF. h LMH. i GP. j BFSD Quantitative evaluation Our algorithm is implemented in MATLAB v7.12 and tested on an Intel Core(TM) i5-6400 CPU with 8 GB of RAM. A simple computational comparison is shown in Table 4 in terms of the NLPR dataset without a GPU. The run time of ACSD is for per depth image; GMR, MC and MDF are for per RGB image; and LMH, GP and BFSD are for per RGB-D image pair. Note that there are many works left for computational optimization, including optimization of prior parameters and algorithm optimization for variable inference during the prediction process. Table 4 Comparison of the average run time (seconds) on the NLPR dataset The quantitative comparisons on the NLPR dataset are shown in Figs. 11 and 12. As shown in Fig. 
11a, b, although the PR curves are very similar in the colour-depth saliency situation and the colour saliency situation, Fig. 11c shows that the proposed method is superior compared to MC and MDF in the depth saliency situation. The LMH method, which uses Bayesian fusion to fuse depth and RGB saliency by simple multiplication, has lower performance compared to the GP method, which uses the Markov random field model as a fusion strategy, as shown in Fig. 11b, c. LMH and GP achieve better performances than ACSD by using fusion strategies, as shown in Fig. 11. LMH and GP achieve better performances than GMR by using fusion strategies in the depth saliency situation, as shown in Fig. 11c; however, LMH and GP achieve lower performances in the colour saliency situation, as shown in Fig. 11b. The PR curves demonstrate that the proposed 3D saliency detection model performs better than do the compared methods overall, as shown in Fig. 11d. We also provide the F-measure values for several compared methods in Table 5, which shows that the proposed RGB-D method is superior to the existing methods in terms of F-measure values. This result is mainly because the deep features of RGB-D images extracted by CNNs enhance the consistency and compactness of salient patches. The PR curves of different saliency detection models in terms of the NLPR dataset. a Colour-depth saliency. b Colour saliency. c Depth saliency. d The overall The F-measures of different saliency detection models when used on the NLPR dataset. a Colour-depth saliency. b Colour saliency. c Depth saliency. d The overall Table 5 Comparison of the F-measure on the NLPR dataset As shown in Fig. 12c, in the depth saliency situation, the RGB saliency methods perform relatively worse than the RGB-D saliency methods in terms of precision. However, in the colour saliency situation, the ACSD and LMH methods do not perform well in both precision and recall. 
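The F-measure reported in Table 5 combines precision and recall; the weighting β² = 0.3 below is the conventional choice in saliency benchmarks and is an assumption here, since the text does not state its β:

```python
def f_measure(precision, recall, beta2=0.3):
    """F_beta = (1 + beta^2) * P * R / (beta^2 * P + R).

    beta2 = 0.3 (precision-weighted) is assumed, following common saliency
    evaluation practice; the paper does not state its value.
    """
    if precision == 0.0 and recall == 0.0:
        return 0.0
    return (1.0 + beta2) * precision * recall / (beta2 * precision + recall)
```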
Although the simple late fusion strategy achieves improvements in the depth saliency situation, as shown in Fig. 12c, it still suffers from inconsistency in the homogeneous foreground regions in the colour saliency situation, as shown in Fig. 12b, which may be attributed to treating the appearance and depth correspondence cues in an independent manner. In the colour-depth saliency situation, due to the capability of learning high-level semantic features, MC and MDF perform relatively better than the LMH and GP methods in terms of F-measure. Although the recall values are very similar, Fig. 12b, c show that the proposed method improves the precision and F-measure when compared to MC and MDF. Our approach consistently detects the pixels on the dominant objects within a Bayesian framework with higher accuracy to resolve the issue. Figure 12 shows that the proposed method performs favourably against the existing algorithms with higher precision, recall values, and F-measure scores on the NLPR dataset. Supervision transfer vs fine-tuning This section investigates the effectiveness of different depth CNN learning strategies. It was demonstrated that fine-tuning a deep CNN model for image classification with the target task (e.g. object detection) data can significantly improve the performance of the target task [37]. Supervision transfer enables learning of rich representations from a large labelled modality as a supervisory signal for training representations for a new unlabelled paired modality and can be used as a pre-training procedure for new modalities with limited labelled data. However, the fine-tuning task and the supervision transfer task differ in the following aspects. (1) Input data. The fine-tuning task takes the labelled depth images as inputs, while the supervision transfer task requires the paired RGB and depth images. Fine-tuning solves the problem of domain adaptation within the same modality. 
In contrast, supervision transfer here tackles the problem of domain adaptation across different modalities. (2) The adapted layer. The fine-tuning task adapts the last soft-max layer to the same modality data, while supervision transfer happens at an arbitrary internal layer for a new image modality. Particularly, deep model structures at the fine-tuning stage are only different in the last fully connected layer for predicting labels. Supervision transfer here allows for transfer of supervision at arbitrary semantic levels. Due to the "data-hungry" nature of CNNs, the existing training data is insufficient for training; therefore, we employed supervision transfer to resolve this issue. We evaluate the performance of the depth CNN model with different training strategies on the NLPR dataset. We randomly select 600 depth images \(\mathcal {I}^{b}_{d}\) for training and 100 for validation from \(\mathcal {D}^{b}\). We show detailed experimental results for supervision transfer from RGB to depth images compared with fine-tuning with depth images, as shown in Fig. 13. We use the Clarifai network that has been trained on labelled images in the ImageNet dataset, and use the mid-level representation learned by the CNN as a supervisory signal to train a CNN on depth images. Note that the output of the penultimate layer of the depth CNN is indeed a feature vector for saliency detection. The technique for transferring supervision results in improvements in performance for the end task of saliency detection on the NLPR dataset, where we improve from 1.5 to 1.9% when using both RGB and depth images together, compared with fine-tuning using just the depth images. From the results on the NLPR dataset in Fig. 13, we can conclude that supervision transfer outperforms the conventional fine-tuning method, which validates the effectiveness of the proposed supervision transfer approach for saliency detection. 
Evaluation of the depth CNN learned using different training strategies on the NLPR dataset. a F-measure scores on the NLPR dataset for evaluation of the supervision transfer (ST) strategy and the fine-tuning (FT) strategy. b–f Qualitative comparison between the ST strategy and the FT strategy for 10k iterations Fusion strategy comparison Despite the demonstrated success of deep features extracted from RGB images and depth images, no single feature is effective for all scenarios as they define saliency from different perspectives. The combination of different features might be a good solution to visual saliency detection for RGB-D images. However, manually designing an interaction mechanism for integrating inherently different saliency features is a challenging problem. The qualitative comparisons and quantitative comparisons of the different fusion strategies using the deep CNN features are shown in Figs. 14, 15, and 16, respectively. CSM means colour saliency map, which is produced by deep features of the colour CNN. DSM means depth saliency map, which is produced by deep features of the depth CNN. We add and multiply the CSM with the DSM, and these results are denoted CSM + DSM and CSM × DSM. As shown in Fig. 14, neither the simple linear fusion nor the weighting method is able to recover the salient object. Both the simple linear fusion and the weighting method suffer from inconsistency in the homogeneous foreground regions and lack precision around object boundaries, which may be ascribed to treating the colour and depth correspondence cues in an independent manner. Our approach consistently detects the pixels on the dominant objects within a Bayesian framework with higher accuracy to resolve the issue. Figure 15 shows that the Bayesian fusion performs favourably compared with the linear fusion and the weighting method, with higher precision and recall on the NLPR dataset. 
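The two baseline fusion strategies compared above, linear combination (CSM + DSM) and multiplicative weighting (CSM × DSM), amount to simple pixel-wise operations on the two maps (equal weights in the linear case are an assumption for this sketch):

```python
def fuse(csm, dsm, mode):
    """Pixel-wise fusion of a colour saliency map and a depth saliency map."""
    if mode == "+":      # linear combination (equal weights assumed here)
        return [0.5 * (c + d) for c, d in zip(csm, dsm)]
    if mode == "x":      # multiplicative weighting
        return [c * d for c, d in zip(csm, dsm)]
    raise ValueError("unknown fusion mode: " + mode)
```

Both operators treat the two cues independently per pixel, which is exactly the limitation the Bayesian fusion is meant to address.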
Although the simple late fusion strategy achieves improvements, it still suffers from inconsistency because it ignores the strong complementarities between appearance and depth correspondence cues. To address this problem, we adopt an integration method based on training a generative model. Visual comparison of different fusion strategies in terms of the NLPR dataset. a RGB. b Depth. c Ground truth. d CSM. e DSM. f CSM + DSM. g CSM × DSM. h BFSD. The symbol " + " indicates a linear combination strategy, and the symbol " × " indicates a weighting method based on multiplication The PR curves of different fusion strategies in terms of the NLPR dataset. a Colour-depth saliency. b Colour saliency. c Depth saliency. d The overall. The symbol " + " indicates a linear combination strategy, and the symbol " × " indicates a weighting method based on multiplication The F-measure scores of different fusion strategies in terms of the NLPR dataset. a Colour-depth saliency. b Colour saliency. c Depth saliency. d The overall. The symbol " + " indicates a linear combination strategy, and the symbol " × " indicates a weighting method based on multiplication Cross-dataset generalization In this section, we evaluate the generalization performance of BFSD. To test how well the performance of our proposed method generalizes to a different dataset for detecting salient objects in RGB-D images, we evaluate it on the NJU-DS400 [10]. As discussed in the experimental settings, the images of the NJU-DS400 are collected in different scenarios. We directly test the performance on the NJU-DS400 dataset with the model learned on the NLPR dataset. The results are shown in Figs. 17 and 18. In the NJU-DS400 dataset, we do not have experimental results for the LMH and GP methods due to the lack of depth information, which is required by their codes. Although the model is trained on the NLPR dataset, it outperforms all other previous methods based on the F-measure scores and PR curves. 
This clearly demonstrates the generalization performance of our proposed method and robustness to dataset biases. Our model explores high-level information of RGB-D images to investigate semantics-driven attention to 3D content, and has much stronger generalization capability. Though the Gaussian distributions of the DMNB model provide better performance than the compared methods on the NJU-DS400 dataset, different numbers of mixture components would impair the generalization capability of this mixture model, especially in the case of multiple scene types. Visual comparison of the saliency detection in terms of the NJU-DS400 dataset. a RGB. b Depth. c Ground truth. d ACSD. e GMR. f MC. g MDF. h BFSD Experimental results on the NJU-DS400 dataset compared with previous works. a The F-measure scores. b The PR curves. F-measure scores and PR curves show the superior generalization ability of the BFSD framework Failure cases Figure 19 presents more visual results and some failure cases of our proposed method on the NLPR dataset. By comparing these images, we find that semantic information is more helpful when the salient object shares very similar colour and depth information with the background. Figure 20 presents additional visual results and a failure case of our proposed method on the NJU-DS400 dataset. We find that although our method is able to highlight the overall salient objects, the generated coarse maps may confuse some small foreground or background regions if they have a similar appearance. Our method may fail when the salient object shares very similar colour and depth information with the background in a global context. Some failure cases in terms of the NLPR dataset. a RGB. b Depth. c Ground truth. d BFSD Some failure cases in terms of the NJU-DS400 dataset. a RGB. b Depth. c Ground truth. 
d BFSD Because our approach requires training on large datasets to adapt to specific environments, properly tuning the parameters for new tasks is important to the performance of the DMNB model. The DMNB model performs classification in one shot via a combination of mixed-membership models and logistic regression, where the results may depend on different choices of C. The learned parameters will clearly perform well on the specific stimuli but not necessarily on a new testing set. Thus, the weakness of the proposed method is that, to obtain reasonable performances, we train our saliency model on the training set for a specific C. This problem could be addressed by using Dirichlet process mixture models to find a proper C for new datasets. In this study, we propose a learning-based 3D saliency detection model for RGB-D images that considers the deep features of RGB images and depth images within a Bayesian framework. To better detect semantically salient objects, we employ a deep CNN to model the saliency of objects in RGB images and depth images. Rather than simply combining a depth map with 2D saliency maps as in previous studies, we propose a computational saliency detection model for RGB-D images based on the DMNB model. The experiments verify that the deep features of depth images can serve as a helpful complement to the deep features of RGB images within a Bayesian framework. Compared with other competing 3D models, the experimental results from a public RGB-D saliency dataset demonstrate the improved performance of the proposed model over other strategies. As future work, we are considering improving the feature representation of the depth images. We are considering representing the depth image by three channels (horizontal disparity, height above ground, and angle with gravity) [38] for saliency detection because this representation allows the CNN to learn stronger features than by using disparity alone. 
We are also considering the application of our 3D saliency detection model in RGB-D object detection problems, e.g. 3D object proposals.
NLPR dataset: http://sites.google.com/site/rgbdsaliency
NJU-DS400 dataset: http://mcg.nju.edu.cn/en/resource.html
Le Callet P, Niebur E (2013) Visual attention and applications in multimedia technology. Proc IEEE 101(9):2058–2067. https://doi.org/10.1109/JPROC.2013.2265801.
Wang J, Fang Y, Narwaria M, Lin W, Callet PL (2014) Stereoscopic image retargeting based on 3d saliency detection In: The IEEE International Conference on Acoustics, Speech and Signal Processing, 669–673. IEEE, Florence.
Kim H, Lee S, Bovik C (2014) Saliency prediction on stereoscopic videos. IEEE Trans Image Process 23(4):1476–1490. https://doi.org/10.1109/TIP.2014.2303640.
Zhang Y, Jiang G, Yu M, Chen K (2010) Stereoscopic visual attention model for 3d video In: The 16th International Conference on Multimedia Modeling, 314–324. Springer, Chongqing.
Borji A, Cheng M, Hou Q, Jiang H, Li J (2017) Salient object detection: a survey. arXiv preprint arXiv:1411.5878.
Borji A, Cheng M, Jiang H, Li J (2015) Salient object detection: a benchmark. IEEE Trans Image Process 24(12):5706–5722. https://doi.org/10.1109/TIP.2015.2487833.
Yang C, Zhang L, Lu H, Ruan X, Yang M (2013) Saliency detection via graph based manifold ranking In: The IEEE Conference on Computer Vision and Pattern Recognition, 3166–3173. IEEE, Portland.
Peng H, Li B, Ling H, Hu W, Xiong W, Maybank SJ (2017) Salient object detection via structured matrix decomposition. IEEE Trans Pattern Anal Mach Intell 39(4):818–832. https://doi.org/10.1109/TPAMI.2016.2562626.
Li B, Xiong W, Hu W (2012) Visual saliency map from tensor analysis In: Proceedings of Twenty-Sixth AAAI Conference on Artificial Intelligence, 1585–1591. AAAI, Toronto.
Ju R, Ge L, Geng W, Ren T, Wu G (2014) Depth saliency based on anisotropic centre-surround difference In: IEEE International Conference Image Processing, 1115–1119. IEEE, Paris.
Song H, Liu Z, Xie Y, Wu L, Huang M (2016) RGBD co-saliency detection via bagging-based clustering. IEEE Sig Process Lett 23(12):1722–1726. https://doi.org/10.1109/LSP.2016.2615293
Lang C, Nguyen T, Katti H, Yadati K, Kankanhalli M, Yan S (2012) Depth matters: influence of depth cues on visual saliency. In: The 12th European Conference on Computer Vision, 101–105. Springer, Florence
Desingh K, Madhava K, Rajan D, Jawahar C (2013) Depth really matters: improving visual salient region detection with depth. In: The British Machine Vision Conference, 98.1–98.11. BMVA, Bristol
Shan H, Banerjee A, Oza N (2009) Discriminative mixed-membership models. In: IEEE International Conference on Data Mining, 466–475. IEEE, Miami
Fang Y, Wang J, Narwaria M, Le Callet P, Lin W (2014) Saliency detection for stereoscopic images. IEEE Trans Image Process 23(6):2625–2636. https://doi.org/10.1109/TIP.2014.2305100
Wu P, Duan L, Kong L (2015) RGB-D salient object detection via feature fusion and multi-scale enhancement. In: Chinese Conference on Computer Vision, 359–368. Springer, Xi'an
Ciptadi A, Hermans T, Rehg J (2013) An in depth view of saliency. In: The British Machine Vision Conference, 9–13. BMVA, Bristol
Iatsun I, Larabi M, Fernandez-Maloigne C (2014) Using monocular depth cues for modeling stereoscopic 3D saliency. In: IEEE International Conference on Acoustics, Speech and Signal Processing, 589–593. IEEE, Florence
Peng H, Li B, Hu W, Ji R (2014) RGBD salient object detection: a benchmark and algorithms. In: The 13th European Conference on Computer Vision, 92–109. Springer, Zurich
Ren J, Gong X, Yu L, Zhou W (2015) Exploiting global priors for RGB-D saliency detection. In: IEEE Conference on Computer Vision and Pattern Recognition Workshops, 25–32. IEEE, Boston
Xue H, Gu Y, Li Y, Yang J (2015) RGB-D saliency detection via mutual guided manifold ranking. In: IEEE International Conference on Image Processing, 666–670. IEEE, Quebec
Wang J, DaSilva M, Le Callet P, Ricordel V (2013) Computational model of stereoscopic 3D visual saliency. IEEE Trans Image Process 22(6):2151–2165. https://doi.org/10.1109/TIP.2013.2246176
Fang Y, Lin W, Fang Z, Lei J, Le Callet P, Yuan F (2014) Learning visual saliency for stereoscopic images. In: IEEE International Conference on Multimedia and Expo Workshops, 1–6. IEEE, Chengdu
Zhu L, Cao Z, Fang Z, Xiao Y, Wu J, Deng H, Liu J (2015) Selective features for RGB-D saliency. In: Conference on Chinese Automation Congress, 512–517. IEEE, Wuhan
Bertasius G, Park H, Shi J (2015) Exploiting egocentric object prior for 3D saliency detection. arXiv preprint arXiv:1511.02682
Qu L, He S, Zhang J, Tian J, Tang Y, Yang Q (2017) RGBD salient object detection via deep fusion. IEEE Trans Image Process 26(5):2274–2285. https://doi.org/10.1109/TIP.2017.2682981
Zhao R, Ouyang W, Li H, Wang X (2015) Saliency detection by multi-context deep learning. In: IEEE Conference on Computer Vision and Pattern Recognition Workshops, 1265–1274. IEEE, Boston
Li G, Yu Y (2016) Visual saliency detection based on multiscale deep CNN features. arXiv preprint arXiv:1609.02077
Zeiler M, Fergus R (2014) Visualizing and understanding convolutional networks. In: The 13th European Conference on Computer Vision, 818–833. Springer, Zurich
Gupta S, Hoffman J, Malik J (2015) Cross modal distillation for supervision transfer. arXiv preprint arXiv:1507.00448
Wang S, Zhou Z, Qu H, Li B (2016) Visual saliency detection for RGB-D images with generative model. In: The 13th Asian Conference on Computer Vision, 20–35. Springer, Taipei
Rish I (2001) An empirical study of the naive Bayes classifier. J Univ Comput Sci 3(22):41–46
Blei D, Jordan M (2006) Variational inference for Dirichlet process mixtures. Bayesian Anal 1(1):121–143
Jia Y (2013) Caffe: an open source convolutional architecture for fast feature embedding. http://caffe.berkeleyvision.org/. Accessed 2013
Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M, Berg AC, Li F (2014) ImageNet large scale visual recognition challenge. Int J Comput Vis 115(3):211–252
Cheng M, Mitra N, Huang X, Torr P, Hu S (2014) Global contrast based salient region detection. IEEE Trans Pattern Anal Mach Intell 37(3):569–582. https://doi.org/10.1109/TPAMI.2014.2345401
Girshick R, Donahue J, Darrell T, Malik J (2013) Rich feature hierarchies for accurate object detection and semantic segmentation. arXiv preprint arXiv:1311.2524
Gupta S, Girshick R, Arbelaez P, Malik J (2014) Learning rich features from RGB-D images for object detection and segmentation. In: The 13th European Conference on Computer Vision, 345–360. Springer, Zurich

This work was supported in part by the Beijing Municipal special financial project (PXM2016_278215_000013, ZLXM_2017C010) and by the Innovation Group Plan of Beijing Academy of Science and Technology (IG201506C2).

The Higher Educational Key Laboratory for Measuring and Control Technology and Instrumentations of Heilongjiang Province, Harbin University of Science and Technology, Harbin, 150080, China: Songtao Wang & Zhen Zhou
Research Center for Artificial Intelligence and Big Data Analysis, Beijing Academy of Science and Technology, Beijing, 100094, China: Wei Jin & Hanbing Qu

SW took charge of the system coding, the experiments, the data analysis, and the writing of the whole paper excluding the variational EM algorithm part in subsection 3.3. ZZ served as advisor for the paper presentation and the experiment design. WJ took charge of the data analysis presentation as well as the English revision. HQ took charge of the coding and writing of the variational EM algorithm part in subsection 3.3. All authors read and approved the final manuscript. Correspondence to Zhen Zhou.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Wang, S., Zhou, Z., Jin, W. et al. Visual saliency detection for RGB-D images under a Bayesian framework. IPSJ T Comput Vis Appl 10, 1 (2018). doi:10.1186/s41074-017-0037-0

Keywords: Bayesian fusion; generative model; saliency detection; RGB-D images
ΔP

ΔP (Delta P) is a mathematical term symbolizing a change (Δ) in pressure (P).

Uses
• Young–Laplace equation
• Darcy–Weisbach equation

Darcy–Weisbach equation
Given that the head loss hf expresses the pressure loss Δp as the height of a column of fluid, $\Delta p=\rho \cdot g\cdot h_{f}$ where ρ is the density of the fluid. The Darcy–Weisbach equation can also be written in terms of pressure loss: $\Delta p=f\cdot {\frac {L}{D}}\cdot {\frac {\rho V^{2}}{2}}$

Lung compliance
In general, compliance is defined by the change in volume (ΔV) versus the associated change in pressure (ΔP), or ΔV/ΔP: $Compliance={\frac {\Delta V}{\Delta P}}$

During mechanical ventilation, compliance is influenced by three main physiologic factors:
1. Lung compliance
2. Chest wall compliance
3. Airway resistance

Lung compliance is influenced by a variety of primary abnormalities of lung parenchyma, both chronic and acute. Airway resistance is typically increased by bronchospasm and airway secretions. Chest wall compliance can be decreased by fixed abnormalities (e.g. kyphoscoliosis, morbid obesity) or more variable problems driven by patient agitation while intubated.[1]

Calculating compliance on minute volume (VE): ΔV is always defined by tidal volume (VT), but ΔP is different for the measurement of dynamic vs. static compliance.

Dynamic compliance (Cdyn)
$C_{dyn}={\frac {V_{T}}{\mathrm {PIP-PEEP} }}$
where PIP = peak inspiratory pressure (the maximum pressure during inspiration), and PEEP = positive end expiratory pressure. Alterations in airway resistance, lung compliance and chest wall compliance influence Cdyn.

Static compliance (Cstat)
$C_{stat}={\frac {V_{T}}{P_{plat}-PEEP}}$
where Pplat = plateau pressure. Pplat is measured at the end of inhalation and prior to exhalation using an inspiratory hold maneuver. During this maneuver, airflow is transiently (~0.5 sec) discontinued, which eliminates the effects of airway resistance.
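The two compliance formulas above translate directly into a small calculation. The sketch below is illustrative only: the function names and the numeric ventilator readings are hypothetical, not from the article.

```python
def dynamic_compliance(tidal_volume_ml, pip_cmh2o, peep_cmh2o):
    """Cdyn = VT / (PIP - PEEP), in mL/cmH2O."""
    return tidal_volume_ml / (pip_cmh2o - peep_cmh2o)

def static_compliance(tidal_volume_ml, pplat_cmh2o, peep_cmh2o):
    """Cstat = VT / (Pplat - PEEP), in mL/cmH2O."""
    return tidal_volume_ml / (pplat_cmh2o - peep_cmh2o)

# Hypothetical breath: VT = 500 mL, PIP = 30, Pplat = 25, PEEP = 5 (all cmH2O)
cdyn = dynamic_compliance(500, 30, 5)   # 500 / 25 = 20.0 mL/cmH2O
cstat = static_compliance(500, 25, 5)   # 500 / 20 = 25.0 mL/cmH2O
```

Because Pplat never exceeds PIP for the same breath, Cstat computed this way is always at least as large as Cdyn.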
Pplat is never greater than PIP and is typically 3–5 cmH2O lower than PIP when airway resistance is normal.

See also
• Pressure measurement
• Pressure drop
• Head loss

References
1. Dellamonica J, Lerolle N, Sargentini C, Beduneau G, Di Marco F, Mercat A, et al. (2011). "PEEP-induced changes in lung volume in acute respiratory distress syndrome. Two methods to estimate alveolar recruitment". Intensive Care Med. 37 (10): 1595–604. doi:10.1007/s00134-011-2333-y. PMID 21866369. S2CID 36231036.

External links
• Delta P, Diving Pressure Hazard
\begin{document} \title{The formulas of coefficients of sum and product of $p$-adic integers \\ with applications to Witt vectors} \author{Kejian Xu\inst{1} \and Zhaopeng Dai\inst{2} \and Zongduo Dai\inst{3}} \institute{College of Mathematics, Qingdao University, China,\\ \email{[email protected]} \and Institute of System Science, Academy of Mathematics and System Science,\\ Chinese Academy of Sciences, China,\\ \email{[email protected]} \and State Key Laboratory of Information Security, Graduate School of Chinese Academy of Sciences, China,\\ \email{[email protected]}} \maketitle {\bf Abstract} The explicit formulas of the operations, in particular addition and multiplication, of $p$-adic integers are presented. As applications of the results, first the explicit formulas of the operations of Witt vectors with coefficients in $\mathbb{F}_{2}$ are given; then, through solving a problem of Browkin about the transformation between the coefficients of a $p$-adic integer expressed in the ordinary least residue system and the numerically least residue system, similar formulas for Witt vectors with coefficients in $\mathbb{F}_{3}$ are obtained. \section{Introduction} For any two $p$-adic integers $a,b \in \mathbb{Z}_{p}$, assume that we have the $p$-adic expansions: $$a=a_0+a_{1}p+a_{2}p^{2}+\cdots +a_{n}p^{n}+\ldots$$ $$b=b_0+b_{1}p+b_{2}p^{2}+\cdots +b_{n}p^{n}+\ldots$$ $$a+b=c_0+c_{1}p+c_{2}p^{2}+\cdots +c_{n}p^{n}+\ldots$$ $$-a=d_0+d_{1}p+d_{2}p^{2}+\cdots +d_{n}p^{n}+\ldots$$ $$ab=e_0+e_{1}p+e_{2}p^{2}+\cdots +e_{n}p^{n}+\ldots$$ Then we have the following problem. {\bf{Problem}} {\it For any $t$, express $c_{t}, d_{t}, e_{t}$ by polynomials over $\mathbb{F}_{p}$ in $a_{0},a_{1},\cdots, a_{t}; b_{0}, b_{1}, \cdots, b_{t}$.} In this paper, this problem is investigated. In Sections 2 and 3, we write out the polynomials for $c_{t}$ and $d_{t}$ explicitly.
In Section 4, we deal with the case of $ab$, which is rather complicated, and we give an expression for $e_{t}$, which reduces the problem to one about certain kinds of partitions of the integer $p^{t}$. As an application, we apply the results to the operations on Witt vectors ([1]). Let $R$ be an associative ring. The so-called Witt vectors are vectors $(a_{0},a_{1},\cdots)$, $a_{i}\in R$, with the addition and the multiplication defined as follows: $$(a_{0},a_{1},\ldots)\dot{+}(b_{0},b_{1},\ldots)=(S_{0}(a_{0},b_{0}),S_{1}(a_{0},a_{1};b_{0},b_{1}),\ldots)$$ $$(a_{0},a_{1},\ldots)\dot{\times}(b_{0},b_{1},\ldots)=(M_{0}(a_{0},b_{0}),M_{1}(a_{0},a_{1};b_{0},b_{1}),\ldots),$$ where $S_{n}, M_{n}$ are rather complicated polynomials in $\mathbb{Z}[x_{0},x_{1},\ldots,x_{n};y_{0},y_{1},\ldots,y_{n}]$, which can be uniquely, but only recursively, determined by the Witt polynomials (see [1]). Up to now it seems too involved to find patterns for simplified forms of $S_{n}$ and $M_{n}$ for all $n$, and therefore no explicit expressions for $S_{n}$ and $M_{n}$ have been given yet. It is well known that all Witt vectors with respect to the addition $\dot{+}$ and the multiplication $\dot{\times}$ defined above form a ring, called the ring of Witt vectors with coefficients in $R$ and denoted by $\mathbf{W}(R)$. A similar problem is whether the addition and the multiplication of Witt vectors can be expressed explicitly. From [1] it is well known that we have the canonical isomorphism $$\mathbf{W}(\mathbb{F}_{p})\cong \mathbb{Z}_{p},$$ which is given by $$(a_{0},a_{1},\ldots, a_{i},\ldots)\longmapsto \sum_{i=0}^{\infty}\tau (a_{i})p^{i},$$ where $\tau$ is the Teichm\"{u}ller lifting. By this isomorphism, the operations on $\mathbb{Z}_{p}$ can be transmitted to those on $\mathbf{W}(\mathbb{F}_{p})$. But here the elements of $\mathbb{Z}_{p}$ are expressed with respect to the multiplicative residue system $\tau(\mathbb{F}_{p})$, not the ordinary least residue system $\{0,1, \ldots, p-1\}$.
So, for $p>5$ the operations on $\mathbb{Z}_{p}$ and hence on $\mathbf{W}(\mathbb{F}_{p})$ do not coincide with the ordinary operations of $p$-adic integers. While in the case of $p=2$, we have $\tau(\mathbb{F}_{2})=\{0,1\}$, that is, the two residue systems coincide. Hence, our results in the case of $p=2$ imply that the operations on Witt vectors in $\mathbf{W}(\mathbb{F}_{2})$ can be written explicitly. As for the case of $p=3$, we have $\tau(\mathbb{F}_{3})=\{-1,0,1\}$, but it is difficult to apply our results directly to $\mathbf{W}(\mathbb{F}_{3})$. However, in a recent private communication, Browkin considered the transformation between the coefficients of a $p$-adic integer expressed in the ordinary least residue system and the numerically least residue system, and proposed the following problem, which provides us with a way to apply our results to $\mathbf{W}(\mathbb{F}_{3})$. {\bf{Browkin's problem}} \ {\it Let $p$ be an odd prime. Every $p$-adic integer $c$ can be written in two forms: $$c=\sum_{i=0}^{\infty}a_{i}p^{i}=\sum_{j=0}^{\infty}b_{j}p^{j},$$ where $a_{i}$ and $b_{j}$ belong respectively to the sets $$\{0,1,\ldots,p-1\} \ \mbox{ and } \ \{0, \pm1,\pm2,\ldots,\pm\frac{p-1}{2}\}.$$ Obviously every $b_{j}$ is a polynomial in $a_{0},a_{1},\ldots,a_{j}$ (and conversely). Can we write these polynomials explicitly?} In Section 5 of this paper, we solve Browkin's problem, that is, we present the required polynomials. As an application, in Section 6 we write the operations of $\mathbf{W}(\mathbb{F}_{3})$ explicitly.
\section{Addition} By convention, for the empty set $\phi$, we let $\prod_{i\in \phi}=1.$ {\bf Theorem 2.1.} {\it Assume that $$ A=\sum_{i=0}^{r} a_{i} p^{i}, B =\sum_{i=0}^{r} b_{i} p^{i}, A+B =\sum_{i=0}^{r+1} c_{i} p^{i},$$ where $a_{i} , b_{i}, c_{i} \in \{ 0,1,\ldots, p-1 \} $ and $r\geq 1.$ Then $c_{0} = a_{0}+b_{0}\,( \hbox{\rm{mod}} \,p),$ and for $1\leq t\leq r+1,$ $$c_{t}=a_{t}+b_{t}+\sum_{i=0}^{t-1}\left(\sum_{k=1}^{p-1}\left(\begin{array}{c} a_{i} \\ k\\ \end{array} \right)\left(\begin{array}{c} b_{i} \\ p-k\\ \end{array} \right) \right) \prod_{j=i+1} ^{t-1}\left(\begin{array}{c} a_{j}+b_{j} \\ p-1\\ \end{array} \right)( \hbox{\rm{mod}} p).$$} {\bf Proof} In order to prove our result, we need the following two lemmas. {\bf Lemma 2.2.} (Lucas) {\it If $ A=\sum_{i=0}^{r} a_{i} p^{i}, $ $ B =\sum_{i=0}^{r} b_{i} p^{i},$ $ 0 \leq a_{i} < p \, ,\,$ $ 0 \leq b_{i} < p \, ,$ then $$ \left(\begin{array}{c} A \\ B \\ \end{array} \right) = \prod_{i=0}^{r}\left( \begin{array}{c} a_{i} \\ b_{i}\\ \end{array} \right) ( \hbox{\rm{mod}} \,\, p ). $$ In particular $$a_{t}= \left( \begin{array}{c} A \\ p^{t}\\ \end{array} \right ) ( \hbox{\rm{mod}} \,\, p ), \ \ \ \forall \ t. $$} For the convenience of readers, we include a short proof. In $\mathbb{F}_{p}[z]$ we have $$\sum_{t=0}^{A}\left( \begin{array}{c} A \\ t\\ \end{array} \right ) z^{t}=(1+z)^{A}=\prod_{i=0}^{r}(1+z)^{a_{i}p^{i}}$$ $$=\prod_{i=0}^{r}(1+z^{p^{i}})^{a_{i}} =\prod_{i=0}^{r}\sum_{j=0}^{p-1}\left( \begin{array}{c} a_{i} \\ j\\ \end{array} \right )z^{jp^{i}} $$ $$\ \ \ \ \ \ \ \ \ \ \ \ \ =\sum_{\begin{array}{c} (j_{0},\ldots, j_{r})\\ 0\leq j_{i}\leq p-1\end{array}}\left( \begin{array}{c} a_{0} \\ j_{0}\\ \end{array}\right)\left( \begin{array}{c} a_{1} \\ j_{1}\\ \end{array}\right)\cdots \left( \begin{array}{c} a_{r} \\ j_{r}\\ \end{array}\right)z^{\sum_{i=0}^{r}j_{i}p^{i}}.$$ Comparing coefficients of $z^{B}$ in both sides we get the lemma. 
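Before continuing with the proof, the digit formula of Theorem 2.1 can be sanity-checked numerically. The sketch below is not part of the paper's argument; it evaluates the claimed formula with exact integer binomial coefficients (for which $\binom{a}{k}=0$ when $k>a$) and compares it exhaustively against ordinary base-$p$ addition for $p=3$, $r=2$.

```python
from math import comb

def digits(n, p, length):
    """Base-p digits of n, least significant first: n = sum d_i p^i."""
    return [(n // p**i) % p for i in range(length)]

def c_digit(a, b, t, p):
    """Digit c_t of A+B as claimed in Theorem 2.1, reduced mod p."""
    if t == 0:
        return (a[0] + b[0]) % p
    s = a[t] + b[t]
    for i in range(t):
        # Inner sum over k of C(a_i, k) * C(b_i, p-k), the "carry source" at place i,
        inner = sum(comb(a[i], k) * comb(b[i], p - k) for k in range(1, p))
        # propagated through places i+1, ..., t-1 by the factors C(a_j + b_j, p-1).
        for j in range(i + 1, t):
            inner *= comb(a[j] + b[j], p - 1)
        s += inner
    return s % p

# Exhaustive check for p = 3, r = 2 (digits a_0..a_r, plus the carry digit c_{r+1}).
p, r = 3, 2
for A in range(p**(r + 1)):
    for B in range(p**(r + 1)):
        a, b = digits(A, p, r + 2), digits(B, p, r + 2)
        assert [c_digit(a, b, t, p) for t in range(r + 2)] == digits(A + B, p, r + 2)
```

For $p=2$ the inner sum collapses to $a_{i}b_{i}$ and the product to $\prod(a_{j}+b_{j})$, recovering the formula of Corollary 2.4.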
{\bf Lemma 2.3.} $\left( \begin{array}{c} A+B \\ t\\ \end{array} \right ) =\sum_{\lambda+\mu=t}\left( \begin{array}{c} A\\ \lambda\\ \end{array} \right )\left( \begin{array}{c} B \\ \mu\\ \end{array} \right ).$ In fact, we have $$\sum_{t}\left( \begin{array}{c} A+B \\ t\\ \end{array} \right ) z^{t}=(1+z)^{A+B}=(1+z)^{A}(1+z)^{B}$$ $$\ \ \ \ \ \ \ \ \ \ \ \ \ \ =\sum_{\lambda}\left( \begin{array}{c} A\\ \lambda\\ \end{array} \right )z^{\lambda}\sum_{\mu}\left( \begin{array}{c} B \\ \mu\\ \end{array} \right )z^{\mu}=\sum_{t}\left (\sum_{\lambda+\mu=t}\left( \begin{array}{c} A\\ \lambda\\ \end{array} \right )\left( \begin{array}{c} B \\ \mu\\ \end{array} \right )\right ) z^{t}.$$ Then, the lemma follows from comparing coefficients of $z^{t}$ in both sides. Now, we turn to the proof of the theorem. By the two lemmas, we have $$c_{t}=a_{t}+b_{t}+\sum_{\lambda+\mu=p^{t}, p^{t-1}\parallel \lambda}\left(\begin{array}{c} A \\ \lambda \\ \end{array} \right)\left(\begin{array}{c} B \\ \mu\\ \end{array} \right)+\sum_{i=0}^{t-2}\sum_{\lambda+\mu=p^{t}, p^{i}\parallel \lambda}\left(\begin{array}{c} A \\ \lambda \\ \end{array} \right)\left(\begin{array}{c} B \\ \mu\\ \end{array} \right)( \hbox{\rm{mod}} \ p).$$ Let $$\lambda=\lambda_{i}p^{i}+\lambda_{i+1}p^{i+1}+\ldots+\lambda_{t-1}p^{t-1},$$ where $1\leq \lambda_{i} \leq p-1, 0\leq \lambda_{j} \leq p-1$ for $i+1\leq j \leq t-1.$ Then $$\mu=p^{t}-\lambda=(p-\lambda_{i})p^{i}+(p-1-\lambda_{i+1})p^{i+1}+\ldots+(p-1-\lambda_{t-1})p^{t-1}. 
$$ Consequently, by Lucas lemma, we have in $\mathbb{F}_{p}$ $$\left(\begin{array}{c} A \\ \lambda\\ \end{array} \right)=\left(\begin{array}{c} a_{i} \\ \lambda_{i}\\ \end{array} \right)\prod_{j=i+1} ^{t-1}\left(\begin{array}{c} a_{j} \\ \lambda_{j}\\ \end{array} \right), \ \ \left(\begin{array}{c} B \\ \mu\ \end{array} \right)=\left(\begin{array}{c} b_{i} \\ p-\lambda_{i}\\ \end{array} \right)\prod_{j=i+1} ^{t-1}\left(\begin{array}{c} b_{j} \\ p-1-\lambda_{j}\\ \end{array} \right),$$ $$\sum_{\lambda+\mu=p^{t}, p^{t-1}\parallel \lambda}\left(\begin{array}{c} A \\ \lambda \\ \end{array} \right)\left(\begin{array}{c} B \\ \mu\\ \end{array} \right)=\sum_{i=1}^{p-1}\left(\begin{array}{c} a_{t-1} \\ i \\ \end{array} \right)\left(\begin{array}{c} b_{t-1} \\ p-i \\ \end{array} \right).$$ Therefore $$\sum_{\lambda+\mu=p^{t}, p^{i}\parallel \lambda}\left(\begin{array}{c} A \\ \lambda \\ \end{array} \right)\left(\begin{array}{c} B \\ \mu\\ \end{array} \right)=\sum_{\lambda_{i}=1}^{p-1}\sum_{\lambda_{i+1}=0}^{p-1}\cdots \sum_{\lambda_{t-1}=0}^{p-1}\left(\begin{array}{c} a_{i} \\ \lambda_{i}\\ \end{array} \right)\left(\begin{array}{c} b_{i} \\ p-\lambda_{i}\\ \end{array} \right)\prod_{j=i+1} ^{t-1}\left(\begin{array}{c} a_{j} \\ \lambda_{j}\\ \end{array} \right)\left(\begin{array}{c} b_{j} \\ p-1-\lambda_{j}\\ \end{array} \right).$$ $$=\sum_{\lambda_{i}=1}^{p-1}\left(\begin{array}{c} a_{i} \\ \lambda_{i}\\ \end{array} \right)\left(\begin{array}{c} b_{i} \\ p-\lambda_{i}\\ \end{array} \right) \sum_{\lambda_{i+1}=0}^{p-1}\left(\begin{array}{c} a_{i+1} \\ \lambda_{i+1}\\ \end{array} \right)\left(\begin{array}{c} b_{i+1} \\ p-1-\lambda_{i+1}\\ \end{array} \right) \cdots \sum_{\lambda_{t-1}=0}^{p-1}\left(\begin{array}{c} a_{t-1} \\ \lambda_{t-1}\\ \end{array} \right)\left(\begin{array}{c} b_{t-1} \\ p-1-\lambda_{t-1}\\ \end{array} \right)$$ To all of these sums but the first we apply Lemma 2.3 and we get $$\sum_{\lambda_{i}=1}^{p-1}\left(\begin{array}{c} a_{i} \\ \lambda_{i}\\ 
\end{array} \right)\left(\begin{array}{c} b_{i} \\ p-\lambda_{i}\\ \end{array} \right)\cdot \prod_{j=i+1}^{t-1}\left(\begin{array}{c} a_{j}+ b_{j} \\ p-1\\ \end{array} \right).$$ Therefore $$c_{t}=a_{t}+b_{t}+\sum_{k=1}^{p-1}\left(\begin{array}{c} a_{t-1} \\ k \\ \end{array} \right)\left(\begin{array}{c} b_{t-1} \\ p-k \\ \end{array} \right)+\sum_{i=0}^{t-2}\left(\sum_{k=1}^{p-1}\left(\begin{array}{c} a_{i} \\ k\\ \end{array} \right)\left(\begin{array}{c} b_{i} \\ p-k\\ \end{array} \right)\right)\prod_{j=i+1} ^{t-1}\left(\begin{array}{c} a_{j}+b_{j} \\ p-1\\ \end{array} \right)$$ $$=a_{t}+b_{t}+\sum_{i=0}^{t-1}\left(\sum_{k=1}^{p-1}\left(\begin{array}{c} a_{i} \\ k\\ \end{array} \right)\left(\begin{array}{c} b_{i} \\ p-k\\ \end{array} \right) \right) \prod_{j=i+1} ^{t-1}\left(\begin{array}{c} a_{j}+b_{j} \\ p-1\\ \end{array} \right)( \hbox{\rm{mod}} \ p).\ \ \ \ \ \ \ \ \ \ \ \ $$ $ \Box $ {\bf Corollary 2.4. } {\it Assume that $$a=\sum_{i=0}^{\infty}a_{i}p^{i}, b=\sum_{i=0}^{\infty}b_{i}p^{i}, a+b=\sum_{i=0}^{\infty}c_{i}p^{i} \in \mathbb{Z}_{p}, $$ with $a_{i}, b_{i}, c_{i}\in \{0,1,\ldots,p-1\}.$ Then $c_{0} = a_{0}+b_{0} (\hbox{\rm{mod}}\,p)$, and for $t\geq 1,$ $$c_{t}=a_{t}+b_{t}+\sum_{i=0}^{t-1}\left(\sum_{j=1}^{p-1}\left(\begin{array}{c} a_{i} \\ j\\ \end{array} \right)\left(\begin{array}{c} b_{i} \\ p-j\\ \end{array} \right)\right )\prod_{j=i+1} ^{t-1}\left(\begin{array}{c} a_{j}+b_{j} \\ p-1\\ \end{array} \right)( \hbox{\rm{mod}} \ p).$$ In particular, if $p=2,$ then we have $c_{0} = a_{0}+b_{0}( \hbox{\rm{mod}} \,2)$, and for $t\geq 1,$ $$c_{t} = a_{t}+b_{t}+\sum_{i=0}^{t-1}a_{i}b_{i}\prod_{j=i+1}^{t-1}(a_{j}+b_{j})( \hbox{\rm{mod}} \ 2).$$ } $ \Box $ {\bf Corollary 2.5.} {\it Assume that $a=\sum_{i=0}^{\infty}a_{i}2^{i} \in \mathbb{Z}_{2}$ and $n\geq1.$} (i) {\it If $2^{n}a=\sum_{i=0}^{\infty}c_{i}2^{i} \in \mathbb{Z}_{2},$ then $c_{t}=0, 0\leq t < n$ and $c_{t}=a_{t-n}(\hbox{\rm{mod}}\,2)$ for $t\geq n.$ } (ii) {\it If 
$(2^{n}+1)a=\sum_{i=0}^{\infty}c_{i}2^{i} \in \mathbb{Z}_{2},$ then $ c_{t}=a_{t}, 0\leq t \leq n-1, c_{n}=a_{n}+a_{0} ( \hbox{\rm{mod}} \ 2)$ and for $t\geq n+1,$ $$c_{t}=a_{t}+a_{t-n}+\sum_{i=n}^{t-1}a_{i}a_{i-n}\prod_{j=i+1}^{t-1}(a_{j}+a_{j-n})( \hbox{\rm{mod}} \ 2).$$} $\Box $ {\bf Corollary 2.6.} {\it Assume that $a=\sum_{i=0}^{\infty}a_{i}3^{i} \in \mathbb{Z}_{3}.$ If $2a=\sum_{i=0}^{\infty}c_{i}3^{i} \in \mathbb{Z}_{3},$ then $c_{0}=-a_{0} ( \hbox{\rm{mod}} \,3) $ and for $t\geq 1,$ $$c_{t}=-a_{t}+\sum_{i=0}^{t-1} a_{i}(1-a_{i}) \prod_{j=i+1} ^{t-1}a_{j}(2a_{j}-1)( \hbox{\rm{mod}} \ 3).$$} $\Box $ \section{Minus} {\bf Theorem 3.1.} \ {\it Let $A=\sum_{i=0}^{r} a_{i} p^{i}.$ Assume that $$ -A=\sum_{i=0}^{r} d_{i} p^{i}\, ( \hbox{\rm{mod}} \, p^{r+1}),$$ where $d_{i}\in \{0,1,\ldots,p-1 \}. $ Then $d_{0}=-a_{0} (\hbox{\rm{mod}}\,p)$ and for $1\leq t \leq r$ $$d_{t}=-a_{t}-1+ \prod_{i=0}^{t-1}(1-a_{i}^{p-1})( \hbox{\rm{mod}} \, p).$$} {\bf Proof } \ \ Clearly, we can assume that $A\neq 0.$ In this case, there exists an $s$ such that $a_{s}\neq 0 $ but $a_{i}=0 $ for $ i < s .$ This implies that $$d_{t}=\left\{\begin{array}{ll} -a_{t} \ ( \hbox{\rm{mod}} \, p), & \mbox{if} \ t\leq s; \\ -a_{t}-1 \ (\hbox{\rm{mod}} \, p), & \mbox{if} \ t> s, \end{array}\right.$$ which is equivalent to $$d_{t}=\{\begin{array}{c} -a_{t} \ (\hbox{\rm{mod}} \, p), \ \ \ \ \ \mbox{if} \ (a_{0}, a_{1},\ldots, a_{t-1})=(0, 0, \ldots, 0); \\ -a_{t}-1 \ (\hbox{\rm{mod}} \, p), \mbox{if} \ (a_{0}, a_{1},\ldots, a_{t-1})\neq(0, 0, \ldots, 0).
\end{array}$$ Take $f(a_{0},a_{1},\ldots, a_{t-1})=-1+\prod_{i=0}^{t-1}(1-a_{i}^{p-1})(\hbox{\rm{mod}} \, p).$ Clearly $$f(a_{0},a_{1},\ldots, a_{t-1})=\{ \begin{array}{c} 0 \ (\hbox{\rm{mod}} \, p), \ \ \mbox{if} \ (a_{0}, a_{1},\ldots, a_{t-1})=(0, 0, \ldots, 0); \\ -1 (\hbox{\rm{mod}} \, p), \mbox{if} \ (a_{0}, a_{1},\ldots, a_{t-1})\neq(0, 0, \ldots, 0)\end{array}.$$ Therefore, $$d_{t}=-a_{t}+f(a_{0},a_{1},\ldots, a_{t-1})=-a_{t}-1+ \prod_{i=0}^{t-1}(1-a_{i}^{p-1})(\hbox{\rm{mod}} \, p).$$ $\Box$ {\bf Corollary 3.2.} \ {\it Assume that $$a=\sum_{i=0}^{\infty}a_{i}p^{i}, -a=\sum_{i=0}^{\infty}d_{i}p^{i} \in \mathbb{Z}_{p}, $$ with $a_{i}, d_{i}\in \{0,1,\ldots,p-1\}.$ Then $d_{0}=-a_{0} (\hbox{\rm{mod}} \,p)$ and for $t\geq1$ $$d_{t}=-a_{t}-1+ \prod_{i=0}^{t-1}(1-a_{i}^{p-1})(\hbox{\rm{mod}} \, p).$$ If $p=2,$ then $d_{0}=a_{0},$ and for $t\geq1,$ $$ d_{t} = a_{t} + 1+ \prod_{i=0}^{t-1}(1+a_{i}) (\hbox{\rm{mod}} \,2).$$ } $\Box$ {\bf Remark 3.4} The problems considered in this section and in Corollary 2.5 and 2.6 were suggested to us by Browkin. \section{Multiplication} {\bf 4.1. Fundamental lemma} {\bf 4.1.1. Fundamental polynomials} Let $$\mathbb{K}=\{\underline{k}=(k_{1},\ldots,k_{l},\ldots,k_{p-1}): k_{l}\geq 0, 0 \leq \sum _{l=1}^{p-1}k_{l}\leq p-1\}.$$ Clearly $\underline{0}=(0,\ldots,0)\in \mathbb{K}.$ Let $$\mathbb{K}^{(r+1)^{2}}=\underbrace{\mathbb{K}\times \mathbb{K}\times\cdots \times\mathbb{K}}_{(r+1)^{2}},$$ and write $\underline{\underline{0}}=(\underline{0},\ldots,\underline{0})\in \mathbb{K}^{(r+1)^{2}} .$ For any $\underline{k}=(k_{1},\ldots,k_{l},\ldots,k_{p-1})\in \mathbb{K}, \underline{k}\neq \underline{0},$ define $$\pi_{\underline{k}} (x,y) =\frac{ y(y-1)\cdots (y-\sum_{l=1}^{p-1} k_{l} +1 )}{ k_{1} !\cdots k_{p-1}!}\prod_{l=1}^{p-1}\left( \frac{ x(x-1)\cdots (x-l +1 )}{l!}\right ) ^{k_{l}} \ ( \hbox{\rm{mod}} \, p),$$ and for $\underline{k}=\underline{0},$ define $\pi_{\underline{k}} (x,y)=1$. 
Let $\mathbf{I}=\{(i,j) : 0\leq i,j \leq r\},$ and let $\underline{x}=(x_{0},\ldots, x_{r}), \underline{y}=(y_{0},\ldots, y_{r}).$ Then for $\underline{\underline{k}}=(\ldots,\underline{k}_{i,j},\ldots)\in \mathbb{K}^{(r+1)^{2}}$ with $\underline{k}_{i,j}=(k_{i,j,1},\ldots, k_{i,j,p-1}),$ we define the function $$\pi_{\underline{\underline{k}}}(\underline{x},\underline{y})=\prod_{(i,j)\in \mathbf{I}}\pi_{\underline{k}_{i,j}} (x_{i},y_{j}),$$ and the norm $$\|\underline{\underline{k}} \|=\sum_{(i,j)\in \mathbf{I}}\left(\sum _{l=1}^{p-1}lk_{i,j,l}\right )p^{i+j}.$$ Clearly, $\pi_{\underline{\underline{k}}}(\underline{x},\underline{y})$ is a polynomial in $x_{0},\ldots, x_{r}; y_{0},\ldots, y_{r}.$ {\bf Lemma 4.1.} {\it Assume that $\underline{0}\neq \underline{k}\in \mathbb{K}.$ Let $0\leq a \leq p-1, 0\leq b \leq p-1.$ Then we have $\pi_{\underline{k}}(a,b)=0,$ if one of the following cases occurs. (i) $ab=0;$ (ii) there exists an $l,$ such that $l>a$ and $k_{l}>0;$ (iii) $\sum_{l=1}^{p-1}k_{l}>b.$} {\bf Proof} It can be checked directly. $\Box$ {\bf Lemma 4.2.} {\it Assume that $\underline{\underline{0}}\neq \underline{\underline{k}}=(\ldots,\underline{k}_{i,j},\ldots)\in \mathbb{K}^{(r+1)^{2}}.$ Let $\underline{a}=(a_{0},a_{1},\ldots,a_{r})$ and $\underline{b}=(b_{0},b_{1},\ldots,b_{r}).$ Then we have $$\pi_{\underline{\underline{k}}}(\underline{a},\underline{b})=0,$$ if one of the following cases occurs. (i) there exists $(i,j)\in \mathbf{I}$ such that $a_{i}b_{j}=0$ and $\underline{k}_{i,j}\neq \underline{0};$ (ii) there exist $(i,j)\in \mathbf{I}, l> a_{i}$, such that $k_{i,j,l}>0;$ (iii) there exists $(i,j)\in \mathbf{I},$ such that $\sum_{l=1}^{p-1}k_{i,j,l}> b_{j}.$} {\bf Proof} It follows from Lemma 4.1. $\Box$ {\bf 4.1.2.
Fundamental lemma} {\bf Lemma 4.3.} {\it Assume that $$A=\sum_{i=0}^{r}a_{i}p^{i}, \ \ B=\sum_{i=0}^{r}b_{i}p^{i}, \ \ AB=\sum_{i=0}^{2r+1}e_{i}p^{i}.$$ Then $e_{0} = a_{0}b_{0}(\hbox{\rm{mod}}\,p)$ and for $1\leq t \leq 2r+1,$ $$ e_{t} = \sum_{\begin{array}{c} \underline{\underline{k}}\in \mathbb{K}^{(t+1)^{2}}\\ \| \underline{\underline{k}}\| = p^{t} \end{array} } \pi _{\underline{\underline{k}}}( \underline{a},\underline{b})\ \ ( \hbox{\rm{mod}} \, p), $$ where $\underline{a}=(a_{0},a_{1},\ldots,a_{t})$ and} $\underline{b}=(b_{0},b_{1},\ldots,b_{t}).$ {\bf Proof} Define $$ \mathbf{I}(\underline{a}, \underline{b})= \{ (i,j) \in \mathbf{I} : 0\leq i,j \leq t, a_{i}b_{j}\neq0\}.$$ For any integers $0< a,b < p$, define the subset of $\mathbb{K}:$ $$\mathbb{K}(a,b)=\{\underline{k}=(k_{1},\ldots,k_{l},\ldots,k_{a},0,\ldots,0)\in \mathbb{K} : k_{l}\geq 0, 1\leq \sum _{l=1}^{a}k_{l}\leq b\}.$$ Note that $\underline{0}\notin \mathbb{K}(a,b).$ We will denote $\underline{k}=(k_{1},\ldots,k_{a},0,\ldots,0)$ simply by $(k_{1},\ldots,k_{a}).$ Then, for $\underline{k}=(k_{1},\ldots,k_{a})\in \mathbb{K}(a,b),$ clearly we have $$\pi_{\underline{k}} (a,b)= \left( \begin{array}{c} b \\ \underline{k} \end{array}\right )\prod_{l=1}^{a}\left( \begin{array}{c} a \\ l \end{array}\right ) ^{k_{l}} \ ( \hbox{\rm{mod}} \, p),$$ where $$\left( \begin{array}{c} b \\ \underline{k} \end{array}\right )=\frac{ b!}{k_{1}!\cdots k_{a} !(b-\sum_{l=1}^{a} k_{l} )!}.$$ For $\phi \neq S\subseteq \mathbf{I}(\underline{a}, \underline{b}),$ define the subset of $\mathbb{K}^{(t+1)^{2}}$: $$\mathbb{K}_{S}(\underline{a},\underline{b})=\{(\ldots,\underline{k}_{i,j},\ldots)\in \mathbb{K}^{(t+1)^{2}} : \underline{k}_{i,j}\in \mathbb{K}(a_{i},b_{j}), (i,j)\in S; \, \underline{k_{i,j}}=\underline{0}, (i,j)\notin S \}.$$ If $\underline{\underline{k}}=(\ldots,\underline{k}_{i,j},\ldots)\in\mathbb{K}_{S}(\underline{a},\underline{b}) $ with $\underline{k_{i,j}}=(k_{i,j,1},k_{i,j,2},\ldots, k_{i,j, a_{i}})\in 
\mathbb{K}(a_{i},b_{j}),$ then it is easy to show that $$\pi_{\underline{\underline{k}}}(\underline{a}, \underline{b})=\prod_{(i,j)\in S}\pi_{\underline{k}_{i,j}}(a_{i}, b_{j}) ( \hbox{\rm{mod}} \, p).$$ and $$\|\underline{\underline{k}} \|=\sum_{(i,j)\in S}\left(\sum _{l=1}^{a_{i}}lk_{i,j,l}\right)p^{i+j}.$$ Now, we have $$ \sum_{ 0 \leq \lambda \leq AB} \left( \begin{array}{c} AB \\ \lambda \\ \end{array} \right)z^{\lambda } = ( 1 + z)^{AB} = \prod_{\begin{array}{c} 0 \leq i \leq t \\ a_{i}\neq 0 \end{array} }(1+z^{p^{i}})^{a_{i}B} $$ $$ = \prod_{(i,j)\in \mathbf{I}(\underline{a}, \underline{b}) }\left(1+\sum_{l=1}^{a_{i}}\left( \begin{array}{c} a_{i} \\ l \end{array}\right )z^{lp^{i+j}}\right) ^{b_{j}} \ \ \ \ \ \ \ \ \ \ \ \ \ \ $$ $$\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\prod_{(i,j)\in \mathbf{I}(\underline{a}, \underline{b})} \left( 1+\sum _{\underline{k} \in \mathbb{K}(a_{i},b_{j}) } \left( \begin{array}{c} b_{j} \\ \underline{k} \end{array}\right )\prod_{l=1}^{a_{i}}\left( \begin{array}{c} a_{i} \\ l \end{array}\right ) ^{k_{l}} z^{\sum_{l=1}^{a_{i}}lk_{l}p^{i+j}} \right ) $$ $$ \ \ \ \ = \prod_{(i,j)\in \mathbf{I}(\underline{a}, \underline{b})} \left( 1+\sum _{ \underline{k} \in \mathbb{K}(a_{i},b_{j})} \pi_{\underline{k}}(a_{i},b_{j}) z^{\sum_{l=1}^{a_{i}}lk_{l}p^{i+j}} \right ) $$ $$= 1 + \sum_{\phi\neq S\subseteq \mathbf{I}(\underline{a}, \underline{b})}\sum_{ \underline{\underline{k}} = ( \cdots , \underline{k}_{i,j},\cdots ) \in \mathbb{K}_{S} (\underline{a},\underline{b}) } \prod_{(i,j) \in S} \pi _{\underline{k}_{i,j}} (a_{i}, b_{j})\cdot z^{\sum_{(i,j) \in S}(\sum_{l=1}^{a_{i}}lk_{i,j,l})p^{i+j}}$$ $$ = 1 + \sum_{\phi\neq S \subseteq \mathbf{I}(\underline{a}, \underline{b})}\sum_{\begin{array}{c} \underline{\underline{k}} \in \mathbb{K}_{S}(\underline{a},\underline{b} ) \end{array} }\pi_{ \underline{\underline{k}} }(\underline{a},\underline{b} )z^{\| \underline{\underline{k}} \|}( \hbox{\rm{mod}} \, p). 
$$ Comparing the coefficients of both sides and letting $\lambda=p^{t}$, from the Lucas lemma we have $$e_{t}=\left ( \begin{array}{c}AB \\ p^{t} \\ \end{array} \right)=\sum_{ \phi \neq S \subseteq \mathbf{I}(\underline{a}, \underline{b})}\sum_{\begin{array}{c} \underline{\underline{k}} \in \mathbb{K}_{S}(\underline{a},\underline{b} )\\ \| \underline{\underline{k}} \|=p^{t} \end{array} }\pi_{\underline{\underline{k}} }(\underline{a},\underline{b} )=\sum_{\begin{array}{c} \underline{\underline{k}}\in \mathbb{K}^{(t+1)^{2}}\\ \| \underline{\underline{k}}\| = p^{t} \end{array} } \pi _{\underline{\underline{k}}}( \underline{a},\underline{b})\ \ ( \hbox{\rm{mod}} \, p).$$ The last step follows from Lemma 4.2. $\Box$ {\bf 4.2. Multiplication formula} {\bf 4.2.1. $T_{p}$-partitions} \ Now we shall give a simpler formula for $e_{t}.$ Let $\mathbb{K}^{*}=\mathbb{K}\backslash \{\underline{0}\}$ and $K:=| \mathbb{K}^{*}|.$ Then $|\mathbb{K}| =K+1$ and we can write the elements of $\mathbb{K}$ as $\underline{k}(j), 0\leq j \leq K,$ in particular, let $\underline{k}(0)=\underline{0}$ for convenience.
So $$\mathbb{K}^{*}=\{\underline{k}(j): 1\leq j \leq K\}.$$ For $\underline{k}=(k_{1},\ldots,k_{l},\ldots,k_{p-1})\in \mathbb{K},$ define $$w(\underline{k})=\sum_{j=1}^{p-1}jk_{j}.$$ In the following, we fix the vector: $$\underline{w}:=(w(\underline{k}(1)),w(\underline{k}(2)),\ldots, w(\underline{k}(K))).$$ For $\underline{l}=(l_{1}, l_{2}, \ldots, l_{K})\in \mathbb{N}^{K}$ (the cartesian product of $\mathbb{N},$ the set of non-negative integers), the size of $\underline{l}$ is defined as $$| \underline{l}|=\sum_{j=1}^{K}l_{j},$$ and the inner product of $\underline{w}$ and $\underline{l}$ is defined as $$\underline{w}\cdot \underline{l}=\sum_{j=1}^{K}w(\underline{k}(j))l_{j}.$$ For an integer $n\geq 0,$ a $T_{p}$-partition of $n$ is defined as $$n=\sum_{j=0}^{t}(\underline{w}\cdot {\underline{l}}_{j})p^{j}, \ {\underline{l}}_{j} \in \mathbb{N}^{K}, 0\leq | \underline{l}_{j}| \leq 1+j.$$ This partition is also written as $$\underline{\underline{l}}=(\underline{l}_{0},\ldots,\underline{l}_{m},\ldots,\underline{l}_{t}), 0\leq | \underline{l}_{m}| \leq 1+m.$$ We will use the symbol $\mathbf{L}_{p}(t)$ to denote the set of all possible $T_{p}$-partitions of $p^{t},$ that is, $$\mathbf{L}_{p}(t)=\{\underline{\underline{l}}=(\underline{l}_{0},\ldots,\underline{l}_{m},\ldots,\underline{l}_{t}): \sum_{j=0}^{t}(\underline{w}\cdot {\underline{l}}_{j})p^{j}=p^{t}, 0\leq | \underline{l}_{m}| \leq 1+m\}.$$ If $p=2,$ then $K=1$ and $\underline{l}_{m}$ is only a non-negative integer, so we can write $\underline{l}_{m}=l_{m}.$ Clearly $l_{0}=0.$ Hence, for $p=2,$ we have $$\mathbf{L}_{2}(t)=\{\underline{\underline{l}}=(l_{1},\ldots,l_{k},\ldots, l_{t}): \sum_{ k=1}^{t} l_{k} 2^{k} =2^{t}, 0 \leq l_{k} \leq k+1 \}.$$ If $p=3,$ then $K=5$ and we have $$\mathbb{K}^{*}=\{\underline{k}(1)=(1,0), \underline{k}(2)=(0,1),\underline{k}(3)=(2,0), \underline{k}(4)=(1,1), \underline{k}(5)=(0,2)\},$$ and therefore $\underline{w}=(1,2,2,3,4).$ Hence, for $p=3,$ we have 
$$\mathbf{L}_{3}(t)=\{\underline{\underline{l}}=(\underline{l}_{0},\ldots,\underline{l}_{k},\ldots,\underline{l}_{t}): \sum_{k=0}^{t}(l_{k1}+2l_{k2}+2l_{k3}+3l_{k4}+4l_{k5})3^{k}=3^{t},$$ $$ \ \ \ \ \ \ \ \ \ \ \ 0\leq | \underline{l}_{k}| \leq 1+k\},$$ where $\underline{l}_{k}=(l_{k1},l_{k2},l_{k3},l_{k4},l_{k5}), 0\leq k \leq t.$ {\bf 4.2.2 Partitions of $\mathbf{I(m)}$ and symmetric polynomials} \ Let $\mathbf{I}(m)=\{i: 0\leq i \leq m\}, \ 0\leq m \leq t.$ For $\underline{l}=(l_{1}, \ldots, l_{j},\ldots, l_{K})\in \mathbb{N}^{K}$ with $ | \underline{l}| \leq 1+m,$ we call $\underline{S}=(S_{1},\ldots, S_{j},\ldots, S_{K})$ an $\underline{l}$-partition of $\mathbf{I}(m),$ if it satisfies $$S_{j}\subseteq \mathbf{I}(m),\ |S_{j}|=l_{j}, $$ $$\ S_{j}\cap S_{j^{\prime}}=\phi, \ \forall\ j\neq j^{\prime}, 1\leq j,j^{\prime}\leq K.$$ The set of all possible $\underline{l}$-partitions of $\mathbf{I}(m)$ is denoted by $\mathbf{I}(m,\underline{l}),$ that is, $$\mathbf{I}(m,\underline{l})=\{(S_{1}, S_{2},\ldots, S_{K}): S_{j}\subseteq \mathbf{I}(m),\ |S_{j}|=l_{j}, \ S_{j}\cap S_{j^{\prime}}=\phi, \ $$ $$\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \forall\ j\neq j^{\prime}, 1\leq j,j^{\prime}\leq K\}.$$ Defining $l_{0}:=1+m-\sum_{j=1}^{K}l_{j},$ we get $$|\mathbf{I}(m,\underline{l})|=\frac{(1+m)!}{l_{0}!l_{1}!\cdots l_{K}!}$$ For a given integer $m, 0\leq m\leq t,$ and $\underline{l}=(l_{1},\ldots, l_{j}, \ldots, l_{K})\in \mathbb{N}^{K}$ with $|\underline{l}| \leq 1+m,$ define the function $$\tau_{\underline{l}}(x_{0},\ldots,x_{m}; y_{0},\ldots,y_{m})=\sum_{\underline{S}=(S_{1},\ldots, S_{j},\ldots, S_{K})\in \mathbf{I}(m, \underline{l})}\prod_{j=1}^{K}\prod_{i\in S_{j}}\pi_{\underline{k}(j)}(x_{i},y_{m-i}).$$ Clearly, $\tau_{\underline{l}}(x_{0},\ldots,x_{m}; y_{0},\ldots,y_{m})$ is a polynomial which is symmetric with respect to the pairs $\{(x_{i}, y_{m-i}): 0\leq i\leq m\}$, that is, it is invariant under the permutations of the pairs. 
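The sets $\mathbf{L}_{2}(t)$ and $\mathbf{L}_{3}(t)$ just defined can be enumerated directly from their defining conditions. As an illustration (ours, not part of the paper), here is a brute-force enumeration in Python; the function names are our own, and the search is exponential in $t$, so it is only meant for small $t$.

```python
from itertools import product

def t2_partitions(t):
    # L_2(t): tuples (l_1, ..., l_t) with sum(l_k * 2**k) == 2**t and
    # 0 <= l_k <= k + 1 (for p = 2 each "vector" l_k is a single integer, l_0 = 0).
    ranges = [range(k + 2) for k in range(1, t + 1)]
    return [ls for ls in product(*ranges)
            if sum(l * 2 ** k for k, l in zip(range(1, t + 1), ls)) == 2 ** t]

def t3_partitions(t):
    # L_3(t): tuples (l_0, ..., l_t) of vectors in N^5 with |l_m| <= 1 + m,
    # weight vector w = (1, 2, 2, 3, 4), and sum((w . l_m) * 3**m) == 3**t.
    w = (1, 2, 2, 3, 4)
    def vectors(m):  # all l in N^5 with |l| <= 1 + m
        return [v for v in product(range(m + 2), repeat=5) if sum(v) <= 1 + m]
    return [ls for ls in product(*(vectors(m) for m in range(t + 1)))
            if sum(sum(wi * li for wi, li in zip(w, l)) * 3 ** m
                   for m, l in enumerate(ls)) == 3 ** t]

print(t2_partitions(2))      # the two T_2-partitions of 2^2
print(len(t3_partitions(1)))
```

For instance, $\mathbf{L}_{2}(2)=\{(2,0),(0,1)\}$, corresponding to $2\cdot 2 = 4$ and $1\cdot 4 = 4$.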
When $p=2,$ we have $K=1, \mathbb{K}=\{0,1\}$ and hence $\underline{k}(1)=1$ as well as $l:=l_{1}=\underline{l}.$ So we have $$\tau_{\underline{l}}(x_{0},\ldots,x_{m}; y_{0},\ldots,y_{m})=\sum_{0\leq i_{1}< \cdots < i_{l}\leq m}\prod _{k=1}^{l}x_{i_{k}}y_{m-i_{k}}=\tau _{l }(x_{0}y_{m},x_{1}y_{m-1},\cdots ,x_{m}y_{0}),$$ where $\tau _{l}(X_{0},X_{1},\cdots ,X_{m})$ denotes the $l$-th elementary symmetric polynomial of $X_{0},X_{1}, \cdots ,X_{m}.$ When $p=3,$ we have the ordered set $\mathbb{K}^{*}=\{(1,0), (0,1),(2,0), (1,1), (0,2)\}.$ It is easy to check that when $x_{i}, y_{j}\in \mathbb{F}_{3},$ as polynomial functions we have $$\tau_{\underline{l}}(x_{0},\ldots,x_{m}; y_{0},\ldots,y_{m})=\sum_{\underline{S}=(S_{1}, \ldots , S_{5})\in \mathbf{I}(m, \underline{l})}f_{\underline{S}}(x_{0},x_{1},\ldots,x_{m};y_{0},y_{1},\ldots,y_{m}),$$ where $$f_{\underline{S}}(x_{0},x_{1},\ldots,x_{m};y_{0},y_{1},\ldots,y_{m})=\prod_{ i_{1}\in S_{1}}x_{i_{1}}y_{m-i_{1}}\prod_{ i_{2}\in S_{2}} x_{i_{2}}(1-x_{i_{2}})y_{m-i_{2}}$$ $$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cdot\prod_{ i_{3}\in S_{3}}x_{i_{3}}^{2}y_{m-i_{3}}(1-y_{m-i_{3}})\prod_{ i\in S_{4}\cup S_{5}}x_{i}(1-x_{i})y_{m-i}(y_{m-i}-1).$$ {\bf 4.2.3. Multiplication formula} {\bf Theorem 4.4.} {\it Assume that $$A=\sum_{i=0}^{r}a_{i}p^{i}, \ \ B=\sum_{i=0}^{r}b_{i}p^{i}, \ \ AB=\sum_{i=0}^{2r+1}e_{i}p^{i}.$$ Then $e_{0} = a_{0}b_{0} (\hbox{\rm{mod}}\,p)$ and for $1\leq t \leq 2r+1,$ $$ e_{t} = \sum_{\underline{\underline{l}}=(\underline{l}_{0},\ldots,\underline{l}_{k},\ldots,\underline{l}_{t} )\in \mathbf{L}_{p}(t)}\prod_{k=0}^{t}\tau_{\underline{l}_{k}}(a_{0},\ldots,a_{k}; b_{0},\ldots, b_{k})\ \ ( \hbox{\rm{mod}} \, p).
$$} {\bf Proof} For $\underline{\underline{k}}=( \cdots , \underline{k}_{i,j},\cdots )\in \mathbb{K}^{(t+1)^{2}},$ let $$\underline{\underline{S}}(\underline{\underline{k}})=(\underline{S}_{0},\ldots,\underline{S}_{m},\ldots,\underline{S}_{t}), \ \underline{S}_{m}=(S_{m,1},\ldots, S_{m,j},\ldots,S_{m,K}),$$ $$\underline{\underline{l}}(\underline{\underline{k}})=(\underline{l}_{0},\ldots,\underline{l}_{m},\ldots, \underline{l}_{t}), \ \underline{l}_{m}=(l_{m,1},\ldots, l_{m,j},\ldots,l_{m,K}),$$ where $$S_{m,j}=\{i: 0\leq i \leq m, \underline{k}_{i,m-i}=\underline{k}(j)\}, \ \ |S_{m,j}|=l_{m,j}.$$ Clearly, we have $$S_{m,j}\subseteq \mathbf{I}(m), \ S_{m,j}\cap S_{m,j^{\prime}}=\phi, \ \forall\ j\neq j^{\prime},$$ and $$|\underline{l}_{m}|=\sum_{j=1}^{K}l_{m,j}\leq 1+m.$$ So $\underline{S}_{m}\in \mathbf{I}(m, \underline{l}_{m}),$ and therefore $$\underline{\underline{S}}(\underline{\underline{k}})\in \mathbf{I}(0, \underline{l}_{0})\times\mathbf{I}(1, \underline{l}_{1})\times \cdots \times \mathbf{I}(t, \underline{l}_{t}). $$ We need the following two lemmas. {\bf Lemma 4.5.} {\it $\|\underline{\underline{k}} \|=p^{t}$ if and only if} $\underline{\underline{l}}(\underline{\underline{k}})\in \mathbf{L}_{p}(t).$ In fact, noting that $w(\underline{0})=0,$ we have $$\|\underline{\underline{k}}\|=\sum_{0\leq i,j \leq t}w(\underline{k}_{i,j})p^{i+j}=\sum_{0 \leq m\leq t}\left(\sum_{0\leq i\leq m}w(\underline{k}_{i,m-i})\right)p^{m}$$ $$=\sum_{0 \leq m\leq t}\left(\sum_{\begin{array}{c} 0\leq i\leq m, \underline{k}_{i,m-i}\neq \underline{0}\end{array}}w(\underline{k}_{i,m-i})\right)p^{m}$$ $$\ =\sum_{0 \leq m\leq t}\left(\sum_{1\leq j\leq K}\sum_{i\in S_{m,j}}w(\underline{k}(j))\right)p^{m} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $$ $$\ \ \ \ \ \ \ \ \ =\sum_{0 \leq m\leq t}\left(\sum_{1\leq j\leq K}l_{m,j}w(\underline{k}(j))\right)p^{m}=\sum_{0 \leq m\leq t}(\underline{w}\cdot\underline{l}_{m})p^{m},$$ as required. 
{\bf Lemma 4.6.} {\it For a fixed $(\underline{l}_{0},\ldots,\underline{l}_{m},\ldots,\underline{l}_{t})\in \mathbf{L}_{p}(t),$ we have the bijection:} $$\{\underline{\underline{k}}\in \mathbb{K}^{(1+t)^{2}}: \underline{\underline{l}}(\underline{\underline{k}})=(\underline{l}_{0},\ldots,\underline{l}_{m},\ldots,\underline{l}_{t})\}\longrightarrow \mathbf{I}(0,\underline{l}_{0})\times\cdots \times \mathbf{I}(t,\underline{l}_{t})$$ $$\underline{\underline{k}}\longmapsto \underline{\underline{S}}(\underline{\underline{k}})$$ Now, we turn to the proof of the theorem. From Lemma 4.3, 4.5 and 4.6, we have $$e_{t}=\sum_{\begin{array}{c} \underline{\underline{k}}\in \mathbb{K}^{(t+1)^{2}}\\ \| \underline{\underline{k}}\| = p^{t} \end{array} } \pi _{\underline{\underline{k}}}( \underline{a},\underline{b})=\sum_{\begin{array}{c} \underline{\underline{k}}\in \mathbb{K}^{(t+1)^{2}}\\ \underline{\underline{l}}(\underline{\underline{k}})\in \mathbf{L}_{p}(t) \end{array} } \pi _{\underline{\underline{k}}}( \underline{a},\underline{b})$$ $$=\sum_{\underline{\underline{l}}\in \mathbf{L}_{p}(t)}\sum_{\begin{array}{c} \underline{\underline{k}}\in \mathbb{K}^{(t+1)^{2}}\\ \underline{\underline{l}}(\underline{\underline{k}})=(\underline{l}_{0},\ldots,\underline{l}_{m},\ldots,\underline{l}_{t}) \end{array} }\pi _{\underline{\underline{k}}}( \underline{a},\underline{b})$$ $$=\sum_{\underline{\underline{l}}\in \mathbf{L}_{p}(t)}\sum_{(\underline{S}_{0},\ldots,\underline{S}_{m},\ldots,\underline{S}_{t}) \in \prod_{ m=0}^{ t}\mathbf{I}(m, \underline{l}_{m})}\prod_{m=0}^{t}\prod_{j=1}^{K}\prod_{i\in S_{m,j}}\pi_{\underline{k}(j)}(a_{i},b_{m-i})$$ $$=\sum_{\underline{\underline{l}}\in \mathbf{L}_{p}(t)}\prod_{m=0}^{t}\sum_{\underline{S}_{m}\in \mathbf{I}(m, \underline{l}_{m})}\prod_{j=1}^{K}\prod_{i\in S_{m,j}}\pi_{\underline{k}(j)}(a_{i},b_{m-i}) \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $$ $$=\sum_{\underline{\underline{l}}\in 
\mathbf{L}_{p}(t)}\prod_{m=0}^{t}\tau_{\underline{l}_{m}}(a_{0},\ldots, a_{m}; b_{0}, \ldots, b_{m}) \ (\hbox{\rm{mod}} \, p). \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $$ $\Box$ {\bf Corollary 4.7.} {\it Assume that $$a=\sum_{i=0}^{\infty}a_{i}p^{i}, b=\sum_{i=0}^{\infty}b_{i}p^{i}, ab=\sum_{i=0}^{\infty}e_{i}p^{i}, $$ with $a_{i}, b_{i}, e_{i}\in \{0,1,\ldots,p-1\}.$ Then $e_{0} = a_{0}b_{0}\,( \hbox{\rm{mod}}\,p)$ and for $t\geq 1 ,$ $$ e_{t} = \sum_{\underline{\underline{l}}=(\underline{l}_{0},\ldots,\underline{l}_{k},\ldots,\underline{l}_{t} )\in \mathbf{L}_{p}(t)}\prod_{k=0}^{t}\tau_{\underline{l}_{k}}(a_{0},\ldots,a_{k}; b_{0},\ldots, b_{k}) ( \hbox{\rm{mod}} \, p). $$ In particular, if $p=2,$ we have $e_{0} = a_{0}b_{0} (\hbox{\rm{mod}}\,2)$ and for $t\geq 1,$ $$e_{t} = \sum_{(l_{1},\ldots,l_{t})\in \mathbf{L}_{2}(t)} \prod_{ 1 \leq k \leq t} \tau _{l_{k} }(a_{0}b_{k},a_{1}b_{k-1},\cdots ,a_{k}b_{0}) (\hbox{\rm{mod}} \, 2 ); $$ if $p=3,$ we have $e_{0} = a_{0}b_{0} (\hbox{\rm{mod}}\,3)$ and for $t\geq 1,$ $$ e_{t}=\sum_{(\underline{l}_{0},\ldots,\underline{l}_{k},\ldots,\underline{l}_{t} )\in \mathbf{L}_{3}(t)}\prod_{k=0}^{t} \sum_{\underline{S}=(S_{1}, \ldots , S_{5})\in \mathbf{I}(k, \underline{l}_{k})}f_{\underline{S}}(a_{0},a_{1},\ldots,a_{k};b_{0},b_{1},\ldots,b_{k})\ ( \hbox{\rm{mod}} \, 3),$$ where $$f_{\underline{S}}(a_{0},a_{1},\ldots,a_{k};b_{0},b_{1},\ldots,b_{k})=\prod_{ i_{1}\in S_{1}}a_{i_{1}}b_{k-i_{1}}\prod_{ i_{2}\in S_{2}} a_{i_{2}}(1-a_{i_{2}})b_{k-i_{2}}$$ $$\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cdot\prod_{ i_{3}\in S_{3}}a_{i_{3}}^{2}b_{k-i_{3}}(1-b_{k-i_{3}})\prod_{ i\in S_{4}\cup S_{5}}a_{i}(1-a_{i})b_{k-i}(b_{k-i}-1).$$} $\Box$ {\bf Remark 4.8.} (i) We can give an algorithm to determine the set $\mathbf{L}_{2}(t).$ (ii) For $p=2,$ we once gave a rather complicated proof for the addition formula by simplifying the well-known recursion formulas for the addition of Witt vectors(see [1]), but we did 
not know whether a similar approach is possible for the multiplication formula. After reading that complicated proof, Browkin found a simple but quite different proof for our addition formula in the case of $p=2$ (see [2]). The present proofs, in particular those for the results in this section, were largely inspired by the following fact in Lucas' lemma: $$a_{t}= \left( \begin{array}{c} A \\ p^{t}\\ \end{array} \right ) ( \hbox{\rm{mod}} \,\, p ),$$ which was first pointed out in [3]. This fact was also used in [4]. {\bf Question 4.9.} Can the expression for $e_{t}$ be simplified further? \section{Transformation of coefficients} In this section we solve Browkin's problem. First, we define the required polynomials as follows. $$f_{t}(x_{0},x_{1},\ldots,x_{t-1}):=\sum_{\lambda=0}^{ t-1 } \{\sum_{c=1}^{\frac{p-1}{2}}[(x_{\lambda}+c)^{p-1}-1]\}\prod_{\lambda < i < t}(1-x_{i}^{p-1}),$$ $$g_{t}(y_{0},y_{1},\ldots,y_{t-1}):=\sum_{\lambda=0}^{ t-1 } \{\sum_{c=\frac{p+1}{2}}^{ p-1 }[1- (y_{\lambda}-c )^{p-1}]\}\prod_{\lambda < i < t}[1-\left(y_{i}-\frac{p-1}{2}\right)^{p-1}],$$ where we also have the convention that $\prod_{i\in \phi}=1$ for the empty set $\phi.$ {\bf Theorem 5.1.} {\it Assume that $p\geq 3$ is a prime. Let $$A=\sum_{i=0}^{\infty}a_{i}p^{i}=\sum_{j=0}^{\infty}b_{j}p^{j} \in \mathbb{Z}_{p}, $$ with $a_{i}\in \{0, \pm1,\pm2,\ldots,\pm\frac{p-1}{2}\}$ and $b_{j}\in \{0,1,\ldots,p-1\}.$ Then $$b_{t}=a_{t}+f_{t}(a_{0},a_{1},\ldots,a_{t-1})\ (\hbox{\rm{mod}} \, p). \eqno(5.1)$$ $$a_{t}=b_{t}+g_{t}(b_{0},b_{1},\ldots,b_{t-1})\ (\hbox{\rm{mod}} \, p). \eqno(5.2)$$} {\bf Proof} \ We first prove (5.1). To begin, we define an index sequence. Let $j_{0}=-1$ for the initial value.
If after $k-1$ rounds ($k\geq 1$) we have $j_{k-1},$ then we go on with the following two steps: i) Let $$i_{k}=\{\begin{array}{c} \infty, \ \ \mbox{if} \ \{i : j_{k-1}< i, -\frac{p-1}{2}\leq a_{i}\leq -1\}=\phi; \ \ \ \ \ \\ \mbox{min} \{i : j_{k-1}< i, -\frac{p-1}{2}\leq a_{i}\leq -1\}, \ \mbox{otherwise}. \end{array}$$ If $i_{k}=\infty,$ then the index sequence is completed; otherwise, go on with the next step: ii) Let $$j_{k}=\{\begin{array}{c} \infty, \ \ \mbox{if} \ \{i : i_{k}< i, 1 \leq a_{i}\leq \frac{p-1}{2} \}=\phi; \ \ \ \ \ \\ \mbox{min} \{i : i_{k}< i, 1 \leq a_{i}\leq \frac{p-1}{2}\}, \ \mbox{otherwise}. \end{array}$$ If $j_{k}=\infty,$ the index sequence is completed; otherwise, go on with the $(k+1)$-th round. For $k\geq 1 $ we define $$b^{\prime}_{i}=a_{i}, j_{k-1}< i < i_{k}, \ \ \mbox{and} \ \ b^{\prime}_{i_{k}}=p+a_{i_{k}}. \eqno(5.3)$$ $$b^{\prime}_{i}=a_{i}-1+p, i_{k}<i<j_{k}, \ \mbox{and} \ \ b^{\prime}_{j_{k}}=a_{j_{k}}-1. \eqno(5.4)$$ It is easy to check that $0\leq b^{\prime}_{t}< p $ for any $t.$ We will denote $$I_{k}=\sum_{j_{k-1}< i\leq i_{k}}a_{i}p^{i}, \ \ \ J_{k}=\sum_{i_{k}< i\leq j_{k}}a_{i}p^{i}, \ \forall k\geq 1.$$ When $i_{k}=\infty,$ from (5.3) we have $$I_{k}=\sum_{j_{k-1}< i\leq i_{k}=\infty}a_{i}p^{i}=\sum_{j_{k-1}< i <i_{k}=\infty}a_{i}p^{i}=\sum_{j_{k-1}< i}b^{\prime}_{i}p^{i}.\eqno(5.5)$$ When $i_{k}<\infty,$ from (5.3) we have $$I_{k}=\sum_{j_{k-1}< i\leq i_{k}}a_{i}p^{i}=\sum_{j_{k-1}< i <i_{k}}b^{\prime}_{i}p^{i}+b^{\prime}_{i_{k}}p^{i_{k}}-p^{1+i_{k}}=\sum_{j_{k-1}< i \leq i_{k}}b^{\prime}_{i}p^{i}-p^{1+i_{k}}.\eqno(5.6)$$ When $j_{k}=\infty,$ from (5.4) we have $$-p^{1+i_{k}}+J_{k}=\sum_{i_{k}< i}(p-1)p^{i}+\sum_{i_{k}< i \leq j_{k}=\infty}a_{i}p^{i}=\sum_{i_{k}< i}(a_{i}+p-1)p^{i}=\sum_{i_{k}< i}b^{\prime}_{i}p^{i}. 
\eqno(5.7)$$ When $j_{k}<\infty,$ from (5.4) we have $$-p^{1+i_{k}}+J_{k}=\sum_{i_{k}< i}(p-1)p^{i}+\sum_{i_{k}< i \leq j_{k}}a_{i}p^{i} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $$ $$\, =\sum_{i_{k}<i< j_{k}}(a_{i}+p-1)p^{i}+[a_{j_{k}}+\sum_{0\leq i}(p-1)p^{i}]p^{j_{k}}$$ $$=\sum_{i_{k}< i <j_{k}}(a_{i}+p-1)p^{i}+(a_{j_{k}}-1)p^{j_{k}} \ \ \ \ \ \ \ \ \ \ \ \ \ $$ $$=\sum_{i_{k}< i \leq j_{k}}b^{\prime}_{i}p^{i}. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \eqno(5.8)$$ When $j_{k}=\infty,$ from (5.6)(5.7) we have $$I_{k}+J_{k}=\sum_{j_{k-1}<i}b^{\prime}_{i}p^{i}.\eqno(5.9)$$ When $j_{k}<\infty,$ from (5.6)(5.8) we have $$I_{k}+J_{k}=\sum_{j_{k-1}<i\leq j_{k}}b^{\prime}_{i}p^{i}.\eqno(5.10)$$ It is easy to see that $$A=\left \{\begin{array}{c} I_{1}+J_{1}+\cdots +I_{k-1}+J_{k-1}+I_{k},\ \mbox{if} \ i_{k}=\infty;\\ I_{1}+J_{1}+\cdots +I_{k}+J_{k},\ \mbox{if} \ j_{k}=\infty;\ \ \ \ \ \ \ \ \ \ \ \ \ \\ \sum_{k\geq 1}(I_{k}+J_{k}),\ \mbox{otherwise}. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \end{array}\right.
$$ Discussing the three cases respectively, from (5.5)-(5.10) we have $$A=\sum_{i\geq 0}b^{\prime}_{i}p^{i}.$$ By the definition of the index sequence, for $ k\geq1 $ clearly we have a) if $j_{k-1}< t \leq i_{k},$ then $0\leq a_{t-1}\leq \frac{p-1}{2},$ and $(a_{0},a_{1},\ldots,a_{t-1})$ is not of the form $(\ast, \ldots, \ast, -c,\underbrace{0,\ldots,0}_{m})$ with $m\geq 0$ and $ 1\leq c \leq \frac{p-1}{2};$ b) if\, $i_{k}< t \leq j_{k},$ then $-\frac{p-1}{2}\leq a_{t-1}\leq 0,$ and $(a_{0},a_{1},\ldots,a_{t-1})$ is of the form $(\ast, \ldots, \ast, -c, \underbrace{0,\ldots,0}_{m})$ with $m\geq 0$ and $ 1\leq c \leq \frac{p-1}{2}.$ Hence, for $ k\geq1 $ we have $i_{k}< t \leq j_{k} $ if and only if $ (a_{0},a_{1},\ldots,a_{t-1})$ is of the form $(\ast, \ldots, \ast, -c, \underbrace{0,\ldots,0}_{m})$ with $ m\geq 0$ and $ 1\leq c \leq \frac{p-1}{2}.$ Note that we have modulo $p$ : $$f_{t}(a_{0},a_{1},\ldots,a_{t-1})=\{ \begin{array}{c} -1, \ \mbox{if} \ (a_{0},a_{1},\ldots,a_{t-1})=(\ast, \ldots, \ast, -c, 0,\ldots,0), 1\leq c \leq \frac{p-1}{2};\\ 0, \ \mbox{otherwise}. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \end{array} $$ So $$a_{t}+f_{t}(a_{0},a_{1},\ldots,a_{t-1})=\{\begin{array}{c} a_{t} \ (\hbox{\rm{mod}} \, p), \ \mbox{if} \ j_{k-1}< t\leq i_{k},k\geq 1;\ \ \\ a_{t}-1 \ ( \hbox{\rm{mod}} \, p), \ \mbox{if} \ i_{k}< t\leq j_{k},k\geq 1. \end{array}$$ Therefore, from (5.3)(5.4), we have $$a_{t}+f_{t}(a_{0},a_{1},\ldots,a_{t-1})=b^{\prime}_{t}\ (\hbox{\rm{mod}} \, p). \eqno(5.11)$$ By the uniqueness, we have $b_{i}=b^{\prime}_{i}$ for any $i,$ so (5.1) follows from $(5.11).$ In a similar way, we can prove $(5.2).$ Similarly, define an index sequence. Let $j_{0}=-1$ for the initial value. 
If after $k-1$ rounds ($k\geq 1$) we have $j_{k-1},$ then we go on with the following two steps: i) Let $$i_{k}=\{\begin{array}{c} \infty, \ \ \mbox{if} \ \{i : j_{k-1}< i, \frac{p-1}{2}\leq b_{i}\leq p-1\}=\phi; \ \ \ \ \ \\ \mbox{min} \{i : j_{k-1}< i, \frac{p-1}{2}\leq b_{i}\leq p-1\}, \ \mbox{otherwise}. \end{array}$$ If $i_{k}=\infty,$ then the index sequence is completed; otherwise, go on with the next step: ii) Let $$j_{k}=\{\begin{array}{c} \infty, \ \ \mbox{if} \ \{i : i_{k}< i, 0 \leq b_{i}< \frac{p-1}{2} \}=\phi; \ \ \ \ \ \\ \mbox{min} \{i : i_{k}< i, 0 \leq b_{i}< \frac{p-1}{2}\}, \ \mbox{otherwise} . \end{array}$$ If $j_{k}=\infty,$ the index sequence is completed; otherwise, go on with the $(k+1)$-th round. For $k\geq 1$ we define $$a^{\prime}_{i}=b_{i}, j_{k-1}< i < i_{k}, \ \ \mbox{and} \ \ a^{\prime}_{i_{k}}=b_{i_{k}}-p. \eqno(5.12)$$ $$a^{\prime}_{i}=b_{i}+1-p, i_{k}<i<j_{k}, \ \mbox{and} \ \ a^{\prime}_{j_{k}}=b_{j_{k}}+1. \eqno(5.13)$$ It is easy to check that $-\frac{p-1}{2}\leq a^{\prime}_{t}\leq \frac{p-1}{2} $ for any $t.$ For $ k\geq 1,$ let $$I_{k}=\sum_{j_{k-1}< i\leq i_{k}}b_{i}p^{i}, \ \ \ J_{k}=\sum_{i_{k}< i\leq j_{k}}b_{i}p^{i}.$$ When $i_{k}=\infty,$ from (5.12) we have $$I_{k}=\sum_{j_{k-1}< i\leq i_{k}=\infty}b_{i}p^{i}=\sum_{j_{k-1}< i}a^{\prime}_{i}p^{i}.\eqno(5.14)$$ When $i_{k}<\infty,$ from (5.12) we have $$I_{k}=\sum_{j_{k-1}< i\leq i_{k}}b_{i}p^{i}=\sum_{j_{k-1}< i <i_{k}}a^{\prime}_{i}p^{i}+a^{\prime}_{i_{k}}p^{i_{k}}+p^{1+i_{k}}=\sum_{j_{k-1}< i \leq i_{k}}a^{\prime}_{i}p^{i}+p^{1+i_{k}}.\eqno(5.15)$$ When $j_{k}=\infty,$ from (5.13) we have $$p^{1+i_{k}}+J_{k}=-\sum_{i_{k}< i}(p-1)p^{i}+\sum_{i_{k}< i \leq j_{k}=\infty}b_{i}p^{i}=\sum_{i_{k}< i}a^{\prime}_{i}p^{i}.
\eqno(5.16)$$ When $j_{k}<\infty,$ from (5.13) we have $$p^{1+i_{k}}+J_{k}=-\sum_{i_{k}< i}(p-1)p^{i}+\sum_{i_{k}< i \leq j_{k}}b_{i}p^{i} \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $$ $$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\sum_{i_{k}<i< j_{k}}(b_{i}-p+1)p^{i}+(b_{j_{k}}+1)p^{j_{k}}-p^{1+j_{k}}-\sum_{j_{k} < i}(p-1)p^{i}$$ $$=\sum_{i_{k}< i \leq j_{k}}a^{\prime}_{i}p^{i}. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \eqno(5.17)$$ Then, similarly from (5.14)-(5.17), we have $$A=\sum_{i\geq 0}a^{\prime}_{i}p^{i}.$$ By the definition of the index sequence, for $k\geq1$ we have: a) if $j_{k-1}< t \leq i_{k}, $ then $0\leq b_{t-1}\leq \frac{p-1}{2},$ and $(b_{0},b_{1},\ldots,b_{t-1})$ is not of the form $(\ast, \ldots, \ast, c,\underbrace{\frac{p-1}{2},\ldots,\frac{p-1}{2}}_{m})$ with $m\geq 0$ and $ \frac{p-1}{2}< c < p\, ;$ b) if \,$i_{k}< t \leq j_{k},$ then $\frac{p-1}{2}\leq b_{t-1} < p\, ,$ and $(b_{0},b_{1},\ldots,b_{t-1})$ is of the form $(\ast, \ldots, \ast, c,\underbrace{\frac{p-1}{2},\ldots,\frac{p-1}{2}}_{m})$ with $m\geq 0$ and $ \frac{p-1}{2}< c < p\, .$ Therefore, for $k\geq1$ we have that $i_{k}< t \leq j_{k} $ if and only if $ (b_{0},b_{1},\ldots,b_{t-1})$ is of the form $(\ast, \ldots, \ast, c,\underbrace{\frac{p-1}{2},\ldots,\frac{p-1}{2}}_{m})$ with $m\geq 0$ and $ \frac{p-1}{2}< c < p.$ Note that we have modulo $p$ : $$g_{t}(b_{0},b_{1},\ldots,b_{t-1})=\{ \begin{array}{c} 1, \ \mbox{if} \ (b_{0},b_{1},\ldots,b_{t-1})=(\ast, \ldots, \ast, c,\frac{p-1}{2},\ldots,\frac{p-1}{2}), \frac{p-1}{2}< c < p;\\ 0 , \ \mbox{otherwise} .
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \end{array} $$ So $$b_{t}+g_{t}(b_{0},b_{1},\ldots,b_{t-1})=\{\begin{array}{c} b_{t} \ (\hbox{\rm{mod}} \, p), \ \mbox{if} \ j_{k-1}< t\leq i_{k},k\geq 1;\ \ \ \ \ \ \ \ \ \ \ \\ b_{t}+1 \ (\hbox{\rm{mod}} \, p), \ \mbox{if} \ i_{k}< t\leq j_{k},k\geq 1. \ \ \end{array}$$ Hence $$b_{t}+g_{t}(b_{0},b_{1},\ldots,b_{t-1})=a^{\prime}_{t}\ (\hbox{\rm{mod}} \, p). \eqno (5.18)$$ As above, by uniqueness we know that (5.2) follows from (5.18). $\Box$ {\bf An alternative proof} After reading the previous version of this paper, Browkin gave an alternative proof of Theorem 5.1. Here we only give a sketch of his proof of the equality (5.1). Let $\sum_{i=0}^{\infty}a_{i}p^{i}=\sum_{i=0}^{\infty}b_{i}p^{i},$ where $a_{i}\in \{0, \pm1,\pm2,\ldots,\pm\frac{p-1}{2}\}, b_{i}\in \{0, 1, \ldots, p-1\}.$ For $k\geq 0$ denote $$A_{k}:=\sum_{i=0}^{k}a_{i}p^{i},\ \ \ \ B_{k}:=\sum_{i=0}^{k}b_{i}p^{i}.$$ Clearly, for any $k\geq 0$, $A_{k}, B_{k}$ satisfy $A_{k}\equiv B_{k} (\hbox{\rm{mod}} \, p^{k+1}).$ We have $$ \mid A_{k}\mid< p^{k+1} \ \ \mbox{and} \ \ 0\leq B_{k}< p^{k+1}.\eqno (\ast)$$ In fact, we have $$\mid A_{k}\mid\leq \sum_{i=0}^{k}\mid a_{i}\mid p^{i}\leq \frac{p-1}{2}\sum_{i=0}^{k} p^{i}=\frac{1}{2}(p^{k+1}-1)< p^{k+1}$$ and $$0\leq B_{k}= \sum_{i=0}^{k} b_{i} p^{i}\leq (p-1)\sum_{i=0}^{k} p^{i}=p^{k+1}-1< p^{k+1}.$$ From $(\ast),$ it follows that $$-p^{k+1}< -A_{k}\leq B_{k}-A_{k}\leq B_{k}+\mid A_{k}\mid< p^{k+1},$$ so we have $B_{k}-A_{k}=0$ or $p^{k+1}.$ More precisely $$B_{k}=A_{k} \ \mbox{if} \ A_{k}\geq 0; \ \ B_{k}=A_{k}+p^{k+1}\ \mbox{if} \ A_{k}< 0.
\eqno (\ast\ast)$$ From this, we know that $b_{0}\equiv a_{0}(\hbox{\rm{mod}} p).$ Now, we determine $b_{k} (\hbox{\rm{mod}} p)$ for $k\geq 1.$ i) Assume that $A_{k-1}\geq 0.$ Then from $(\ast\ast)$ we have $A_{k-1}=B_{k-1}.$ If $A_{k}\geq 0,$ then $A_{k}=B_{k}$ similarly, so $$A_{k-1}+a_{k}p^{k}=A_{k}=B_{k}=B_{k-1}+b_{k}p^{k},$$ therefore $b_{k}=a_{k};$ if $A_{k}<0,$ then by $(\ast\ast)$ we have $B_{k}=A_{k}+p^{k+1},$ and so $$B_{k-1}+b_{k}p^{k}=B_{k}=A_{k}+p^{k+1}=A_{k-1}+a_{k}p^{k}+p^{k+1},$$ which implies $b_{k}=a_{k}+p\, .$ ii) Assume that $A_{k-1}< 0.$ If $A_{k}\geq 0,$ then from $(\ast\ast)$ we get $$A_{k-1}+p^{k}+b_{k}p^{k}=B_{k-1}+b_{k}p^{k}=B_{k}=A_{k}=A_{k-1}+a_{k}p^{k},$$ therefore $b_{k}=a_{k}-1; $ if $A_{k}< 0,$ then from $(\ast\ast)$ we get $$A_{k-1}+p^{k}+b_{k}p^{k}=B_{k-1}+b_{k}p^{k}=B_{k}=A_{k}+p^{k+1}=A_{k-1}+a_{k}p^{k}+p^{k+1},$$ therefore $b_{k}=a_{k}+p-1\equiv a_{k}-1 (\hbox{\rm{mod}} p)$. Thus we have proved: $$b_{k}-a_{k}\equiv \{\begin{array}{c} -1 (\hbox{\rm{mod}} p), \ \mbox{if}\ A_{k-1} < 0\, ; \\ 0 \ \ (\hbox{\rm{mod}} p), \ \mbox{otherwise} .\end{array}$$ Now we express these conditions by means of polynomials.
Let $$A_{k-1}=\sum_{i=0}^{k-1}a_{i}p^{i}, \ \mbox{where} \ a_{k-1}=a_{k-2}=\ldots =a_{m+1}=0, a_{m}\neq 0,$$ for some $m, 0\leq m\leq k-1,$ assuming $A_{k-1}\neq 0.$ From $A_{k-1}=A_{m}=A_{m-1}+a_{m}p^{m}$ and $\mid A_{m-1}\mid < p^{m}$ we conclude that $A_{k-1}< 0$ if and only if $a_{m}< 0,$ which is equivalent to $a_{m}\in \{-1,-2, \ldots, -\frac{p-1}{2}\}.$ So we get $$b_{k}-a_{k}\equiv \{\begin{array}{c} -1 (\hbox{\rm{mod}} p), \ \mbox{if} \ (a_{0},a_{1},\ldots,a_{k-1})=(\ast, \ldots, \ast, -c, 0,\ldots,0); \ \\ 0 \ \ (\hbox{\rm{mod}} p), \ \mbox{otherwise} , \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \end{array}$$ where $1\leq c \leq \frac{p-1}{2}.$ From the proof of Theorem 5.1, we know that $f_{k}(a_{0},a_{1},\ldots,a_{k-1})$ has the same property as $b_{k}-a_{k},$ so we have $$b_{k}=a_{k}+f_{k}(a_{0},a_{1},\ldots,a_{k-1})\ (\hbox{\rm{mod}} \, p). $$ $\Box$ {\bf Corollary 5.2.} {\it Let $$A=\sum_{i=0}^{\infty}a_{i}3^{i}=\sum_{j=0}^{\infty}b_{j}3^{j} \in \mathbb{Z}_{3}, $$ with $a_{i}\in \{0, \pm1\}$ and $b_{j}\in \{0,1,2 \}.$ Then $$b_{t}=a_{t}+\sum_{0\leq \lambda < t } a_{\lambda}(a_{\lambda}-1)\prod_{\lambda < i < t}(1-a_{i}^{2})\ (\hbox{\rm{mod}} \, 3).$$ $$a_{t}=b_{t}+\sum_{0\leq \lambda < t } b_{\lambda}(1-b_{\lambda})\prod_{\lambda < i < t}b_{i}(2-b_{i})\ (\hbox{\rm{mod}} \, 3).$$} $\Box$ We can also give formulas for the sum and the product of $p$-adic integers with respect to the numerically least residue system $\{0,\pm1,\pm 2,\ldots, \pm\frac{p-1}{2}\}$. Define $$a_{t}^{\vee}:=a_{t}+ \sum_{\lambda=0}^{ t-1 } \{\sum_{c=1}^{\frac{p-1}{2}}[(a_{\lambda}+c)^{p-1}-1]\}\prod_{\lambda < i < t}(1-a_{i}^{p-1}),$$ $$b_{t}^{\wedge}:=b_{t}+\sum_{\lambda=0}^{ t-1 } \{\sum_{c=\frac{p+1}{2}}^{ p-1 }[1- (b_{\lambda}-c )^{p-1}]\}\prod_{\lambda < i < t}[1-\left(b_{i}-\frac{p-1}{2}\right)^{p-1}],$$ where $a_{i}\in \{0, \pm1,\pm2,\ldots,\pm\frac{p-1}{2}\}$ and $b_{j}\in \{0,1,\ldots,p-1\}.$ {\bf Theorem 5.3. } {\it Let $p$ be an odd prime.
Assume that $$a=\sum_{i=0}^{\infty}a_{i}p^{i}, b=\sum_{i=0}^{\infty}b_{i}p^{i}, -a=\sum_{i=0}^{\infty}d_{i}p^{i}, a+b=\sum_{i=0}^{\infty}c_{i}p^{i}\in \mathbb{Z}_{p}, ab=\sum_{i=0}^{\infty}e_{i}p^{i}, $$ with $a_{i}, b_{i}, c_{i}, d_{i}\in \{0,\pm1,\pm 2,\ldots, \pm\frac{p-1}{2}\}.$ Then} (i) {\it $c_{0} = a_{0}+b_{0} (\hbox{\rm{mod}}\,p)$ and for $t\geq 1,$ $$c_{t}=a_{t}+b^{\vee}_{t}+\sum_{i=0}^{t-1}\left(\sum_{j=1}^{p-1}\left(\begin{array}{c} \frac{p-1}{2}+a_{i} \\ j\\ \end{array} \right)\left(\begin{array}{c} b_{i}^{\vee} \\ p-j\\ \end{array} \right)\right )\prod_{j=i+1} ^{t-1}\left(\begin{array}{c} \frac{p-1}{2}+a_{j}+b^{\vee}_{j} \\ p-1\\ \end{array} \right)(\hbox{\rm{mod}} \ p).$$ In particular, if $p=3,$ then $c_{0} = a_{0}+b_{0}^{\vee}\, (\hbox{\rm{mod}}\,3)$ and for $t\geq 1,$ $$c_{t}=a_{t}+b^{\vee}_{t}-\sum_{i=0}^{t-1}[(a_{i}+1)(a_{i}+b_{i}^{\vee}-1)b_{i}^{\vee}]\prod_{j=i+1} ^{t-1}\left(\begin{array}{c} a_{j}+b_{j}^{\vee}+1 \\ 2\\ \end{array} \right)\, (\hbox{\rm{mod}}\,3).$$} (ii) {\it $d_{0}=-a_{0}^{\vee} (\hbox{\rm{mod}} \,p)$ and for $t\geq1$ $$d_{t}=-a_{t}^{\vee}-1+ \prod_{i=0}^{t-1}(1-{a_{i}^{\vee}}^{p-1})( \hbox{\rm{mod}} \, p).$$ In particular, if $p=3,$ then $d_{0}=-a_{0}^{\vee} (\hbox{\rm{mod}} \,3)$ and for $t\geq1$ $$d_{t}=-a_{t}^{\vee}-1+ \prod_{i=0}^{t-1}(1-{a_{i}^{\vee}}^{2})(\hbox{\rm{mod}} \, 3).$$} (iii) {\it $e_{0} = (a_{0}^{\vee}b_{0}^{\vee})^{\wedge} ( \hbox{\rm{mod}} \,p)$ and for $t\geq 1 ,$ $$ e_{t} =\left( \sum_{\underline{\underline{l}}=(\underline{l}_{0},\ldots,\underline{l}_{k},\ldots,\underline{l}_{t} )\in \mathbf{L}_{p}(t)}\prod_{k=0}^{t}\tau_{\underline{l}_{k}}(a_{0}^{\vee},\ldots,a_{k}^{\vee}; b_{0}^{\vee},\ldots, b_{k}^{\vee})\right)^{\wedge} ( \hbox{\rm{mod}} \, p).
$$} {\bf Proof} (i) From Theorem 5.1, we have $$a+b=\sum_{i=0}^{\infty}a_{i}p^{i}+\sum_{i=0}^{\infty}b^{\vee}_{i}p^{i}=\sum_{i=0}^{\infty}\left(\frac{p-1}{2}+a_{i}\right)p^{i}+\sum_{i=0}^{\infty}b^{\vee}_{i}p^{i}-\sum_{i=0}^{\infty}\left(\frac{p-1}{2}\right)p^{i}.$$ Note that $\frac{p-1}{2}+a_{i}, b^{\vee}_{i} \in \{0,1,\ldots,p-1\}. $ Let $$\sum_{i=0}^{\infty}\left(\frac{p-1}{2}+a_{i}\right)p^{i}+\sum_{i=0}^{\infty}b^{\vee}_{i}p^{i}=\sum_{i=0}^{\infty}c_{i}^{\prime}p^{i}, \ c_{i}^{\prime}\in \{0,1,\ldots,p-1\}.$$ Then by the addition formula we have $$c_{t}^{\prime}=\frac{p-1}{2}+a_{t}+b^{\vee}_{t}+\sum_{i=1}^{p-1}\left(\begin{array}{c} \frac{p-1}{2}+a_{t-1} \\ i \\ \end{array} \right)\left(\begin{array}{c} b^{\vee}_{t-1} \\ p-i \\ \end{array} \right)$$ $$+\sum_{i=0}^{t-2}\left(\sum_{j=1}^{p-1}\left(\begin{array}{c} \frac{p-1}{2}+a_{i} \\ j\\ \end{array} \right)\left(\begin{array}{c} b_{i}^{\vee} \\ p-j\\ \end{array} \right)\right )\prod_{j=i+1} ^{t-1}\left(\begin{array}{c} \frac{p-1}{2}+a_{j}+b^{\vee}_{j} \\ p-1\\ \end{array} \right)(\hbox{\rm{mod}} \ p).$$ Clearly $c_{t}=c_{t}^{\prime}-\frac{p-1}{2}.$ (ii) It follows from Theorem 5.1 and Theorem 3.1. (iii) It follows from Theorem 5.1, Corollary 2.4 and Corollary 4.7. $\Box$ \section{Applications to Witt vectors} Now, we apply the above results to $(\mathbf{W}(\mathbb{F}_{p}),\dot{+}, \dot{\times})$, the ring of Witt vectors with coefficients in $\mathbb{F}_{p}.$ Let $\dot{-}$ denote the additive inverse (minus) of Witt vectors. {\bf Theorem 6.1. } {\it Let $a=(a_{0}, a_{1},\ldots,a_{n},\ldots), b=(b_{0}, b_{1},\ldots,b_{n},\ldots)\in \mathbf{W}(\mathbb{F}_{2})$.
If in $ \mathbf{W}(\mathbb{F}_{2})$ $$a\dot{+}b=(c_{0}, c_{1},\ldots,c_{n},\ldots),$$ $$ \dot{-}a=(d_{0}, d_{1},\ldots,d_{n},\ldots),$$ $$a\dot{\times}b=(e_{0}, e_{1},\ldots,e_{n},\ldots),$$ then in $\mathbb{F}_{2}$ we have} (i) {\it $c_{0}=a_{0}+b_{0}$ and for} $t\geq1,$ $$c_{t} = a_{t}+b_{t}+\sum_{i=0}^{t-1}a_{i}b_{i}\prod_{j=i+1}^{t-1}(a_{j}+b_{j}).$$ (ii) {\it $d_{0}=a_{0},$ and for} $t\geq1,$ $$ d_{t} = a_{t} + 1+ \prod_{i=0}^{t-1}(1+a_{i}).$$ (iii) {\it $e_{0}=a_{0}b_{0},$ and for} $t\geq 1,$ $$e_{t} = \sum_{(l_{1},\ldots,l_{t})\in \mathbf{L}_{2}(t) } \prod_{ 1 \leq k \leq t} \tau _{l_{k} }(a_{0}b_{k},a_{1}b_{k-1},\cdots ,a_{k}b_{0}). $$ {\bf Proof} It follows from Corollary 2.4 and 4.7. $\Box$ When $p=3$, $a_{t}^{\vee}$ and $ b_{t}^{\wedge}$ become $$a_{t}^{\vee}= a_{t}+\sum_{0\leq \lambda < t } a_{\lambda}(a_{\lambda}-1)\prod_{\lambda < i < t}(1-a_{i}^{2}),$$ $$b_{t}^{\wedge}=b_{t}+\sum_{0\leq \lambda < t } b_{\lambda}(1-b_{\lambda})\prod_{\lambda < i < t}b_{i}(2-b_{i}) $$ with $a_{i}\in \{0, \pm1\}$ and $b_{j}\in \{0,1,2\},$ and then we have: {\bf Theorem 6.2. 
} {\it Let $a=(a_{0}, a_{1},\ldots,a_{n},\ldots), b=(b_{0}, b_{1},\ldots,b_{n},\ldots)\in \mathbf{W}(\mathbb{F}_{3})$. If in $ \mathbf{W}(\mathbb{F}_{3})$ $$a\dot{+}b=(c_{0}, c_{1},\ldots,c_{n},\ldots),$$ $$ \dot{-}a=(d_{0}, d_{1},\ldots,d_{n},\ldots),$$ $$a\dot{\times}b=(e_{0}, e_{1},\ldots,e_{n},\ldots),$$ then in $\mathbb{F}_{3}$ we have} (i) {\it $c_{0} = a_{0}+b^{\vee}_{0}$ and for} $t\geq 1,$ $$c_{t}=a_{t}+b^{\vee}_{t}-\sum_{i=0}^{t-1}[(a_{i}+1)(a_{i}+b_{i}^{\vee}-1)b_{i}^{\vee}]\prod_{j=i+1} ^{t-1}\left(\begin{array}{c} a_{j}+b_{j}^{\vee}+1 \\ 2\\ \end{array} \right).$$ (ii) {\it $d_{0}=-a_{0}^{\vee}$ and for $t\geq1$ $$d_{t}=-a_{t}^{\vee}-1+ \prod_{i=0}^{t-1}(1-{a_{i}^{\vee}}^{2}).$$} (iii) $e_{0} = (a_{0}^{\vee}b_{0}^{\vee})^{\wedge} $ {\it and for} $t\geq 1 ,$ $$ e_{t}=\left (\sum_{(\underline{l}_{0},\ldots,\underline{l}_{k},\ldots,\underline{l}_{t} )\in \mathbf{L}_{3}(t)}\prod_{k=0}^{t} \sum_{\underline{S}=(S_{1}, \ldots , S_{5})\in \mathbf{I}(k, \underline{l}_{k})}f^{\vee}_{\underline{S}}(a_{0},a_{1},\ldots,a_{k};b_{0},b_{1},\ldots,b_{k})\right)^{\wedge},$$ where $$f^{\vee}_{\underline{S}}( a_{0},a_{1},\ldots,a_{k};b_{0},b_{1},\ldots,b_{k})=\prod_{ i_{1}\in S_{1}}a_{i_{1}}^{\vee}b_{k-i_{1}}^{\vee}\prod_{ i_{2}\in S_{2}} a_{i_{2}}^{\vee}(1-a_{i_{2}}^{\vee})b_{k-i_{2}}^{\vee}\ \ \ \ \ \ \ \ \ \ \ \ $$ $$\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cdot\prod_{ i_{3}\in S_{3}}{a_{i_{3}}^{\vee}}^{2}b^{\vee}_{k-i_{3}}(1-b^{\vee}_{k-i_{3}}) \cdot\prod_{ i\in S_{4}\cup S_{5}}a^{\vee}_{i}(1-a^{\vee}_{i})b^{\vee}_{k-i}(b^{\vee}_{k-i}-1).$$ {\bf Proof} \ It follows from Corollary 2.4, Corollary 4.7 and Theorem 5.3 (see [1]). $\Box$ {\bf Remark 6.3.} (i) We can also write out for Witt vectors the results corresponding to Corollaries 2.5 and 2.6. (ii) The formulas given in Theorem 6.2, in particular for $e_{t}$, are admittedly very complicated, but they are completely explicit and follow a clear pattern.
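Since $\mathbf{W}(\mathbb{F}_{2})$ is isomorphic to $\mathbb{Z}_{2}$, with the Witt coordinates playing the role of binary digits, the formulas of Theorem 6.1 can be checked numerically against ordinary integer arithmetic on truncated digit strings. The following Python sketch (ours, for illustration only; all names are our own) does exactly this for the addition, negation and multiplication formulas.

```python
from itertools import combinations, product
from random import randrange

def bits(n, width):
    # low `width` binary digits of the non-negative integer n, least significant first
    return [(n >> i) & 1 for i in range(width)]

def t2_partitions(t):
    # the set L_2(t) of T_2-partitions of 2^t
    ranges = [range(k + 2) for k in range(1, t + 1)]
    return [ls for ls in product(*ranges)
            if sum(l * 2 ** k for k, l in zip(range(1, t + 1), ls)) == 2 ** t]

def tau(l, xs):
    # l-th elementary symmetric polynomial of the 0/1 values xs, reduced mod 2
    return sum(1 for c in combinations(xs, l) if all(c)) & 1

def witt_add(a, b):
    # Theorem 6.1(i): c_t = a_t + b_t + sum_i a_i b_i prod_{j=i+1}^{t-1} (a_j + b_j)
    c = []
    for t in range(len(a)):
        s = (a[t] + b[t]) & 1
        for i in range(t):
            term = a[i] & b[i]
            for j in range(i + 1, t):
                term &= (a[j] + b[j]) & 1
            s ^= term
        c.append(s)
    return c

def witt_neg(a):
    # Theorem 6.1(ii): d_t = a_t + 1 + prod_{i<t} (1 + a_i)
    d, prod = [a[0]], 1
    for t in range(1, len(a)):
        prod &= (1 + a[t - 1]) & 1
        d.append((a[t] + 1 + prod) & 1)
    return d

def witt_mul(a, b):
    # Theorem 6.1(iii): e_t via L_2(t) and elementary symmetric polynomials
    e = [a[0] & b[0]]
    for t in range(1, len(a)):
        s = 0
        for ls in t2_partitions(t):
            term = 1
            for k in range(1, t + 1):
                term &= tau(ls[k - 1], [a[i] & b[k - i] for i in range(k + 1)])
            s ^= term
        e.append(s)
    return e

W = 6
for _ in range(25):
    x, y = randrange(2 ** W), randrange(2 ** W)
    assert witt_add(bits(x, W), bits(y, W)) == bits((x + y) % 2 ** W, W)
    assert witt_neg(bits(x, W)) == bits((-x) % 2 ** W, W)
    assert witt_mul(bits(x, W), bits(y, W)) == bits((x * y) % 2 ** W, W)
print("Theorem 6.1 verified on random truncated 2-adic digit strings")
```

The truncation is harmless here because, in each of the three formulas, the digit at position $t$ depends only on the digits at positions $\leq t$.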
{\bf Question 6.4.} Can we give similar formulas for $\mathbf{W}(\mathbb{F}_{p})$ for a prime $p>3$? {\bf Acknowledgment} We are grateful to J. Browkin for many helpful suggestions, in particular for showing us these problems. {\begin{center} {\bf References} \end{center}} [1] J. P. Serre, {\it Local Fields}, Springer-Verlag, New York Heidelberg Berlin, 1979. [2] J. Browkin, {\it The sum of dyadic numbers}, preprint. [3] F. J. MacWilliams and N. J. A. Sloane, {\it The Theory of Error-Correcting Codes}, North-Holland Publishing Company, 1977. [4] Bao Li and Zongduo Dai, {\it A general result and a new lower bound of linear complexity for binary sequences derived from sequences over $\mathbb{Z}_{2^e}$}, preprint. \end{document}
Ten-of-diamonds decahedron

In geometry, the ten-of-diamonds decahedron is a space-filling polyhedron with 10 faces: 2 opposite rhombi with orthogonal major axes, connected by 8 identical isosceles triangle faces. Although it is convex, it is not a Johnson solid because its faces are not composed entirely of regular polygons. Michael Goldberg named it after a playing card, as a 10-faced polyhedron with two opposite rhombic (diamond-shaped) faces. He catalogued it in a 1982 paper as 10-II, the second in a list of 26 known space-filling decahedra.[1]

Faces: 8 triangles, 2 rhombi
Edges: 16
Vertices: 8
Symmetry group: D2d, order 8
Dual polyhedron: skew-truncated tetragonal disphenoid
Properties: space-filling

Coordinates

If the space-filling polyhedron is placed in a 3-D coordinate grid, the coordinates for the 8 vertices can be given as: (0, ±2, −1), (±2, 0, 1), (±1, 0, −1), (0, ±1, 1).

Symmetry

The ten-of-diamonds has D2d symmetry, which projects as order-4 dihedral (square) symmetry in two dimensions. It can be seen as a triakis tetrahedron, with two pairs of coplanar triangles merged into rhombic faces. The dual is similar to a truncated tetrahedron, except two edges from the original tetrahedron are reduced to zero length, making pentagonal faces. The dual polyhedron can be called a skew-truncated tetragonal disphenoid, where 2 edges along the symmetry axis are completely truncated down to the edge midpoint. 
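The element counts quoted in this article (for the ten-of-diamonds, its dual, and the related solids) can be sanity-checked against Euler's polyhedron formula, v − e + f = 2, which every convex polyhedron satisfies. A quick check in Python:

```python
# (name, vertices, edges, faces), using the counts quoted in the article
solids = [
    ("ten-of-diamonds decahedron", 8, 16, 10),
    ("skew-truncated tetragonal disphenoid (dual)", 10, 16, 8),  # dual swaps v and f
    ("triakis tetrahedron", 8, 18, 12),
    ("truncated tetrahedron", 12, 18, 8),
]
for name, v, e, f in solids:
    # Euler's formula for sphere-like polyhedra: v - e + f = 2
    assert v - e + f == 2, name
print("all element counts are consistent")
```

Note how duality leaves the check invariant: swapping vertices and faces (8, 16, 10 → 10, 16, 8) preserves v − e + f.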
Comparison of element counts (from the projections table):
Ten of diamonds: v=8, e=16, f=10
Triakis tetrahedron (related): v=8, e=18, f=12
Dual (skew-truncated tetragonal disphenoid): v=10, e=16, f=8
Truncated tetrahedron (related): v=12, e=18, f=8

Honeycomb

Ten-of-diamonds honeycomb
Schläfli symbol: dht1,2{4,3,4}
Cell: ten-of-diamonds
Vertex figures: dodecahedron, tetrahedron
Space group (Fibrifold, Coxeter): I3 (204), 8−o, [[4,3+,4]]
Dual: alternated bitruncated cubic honeycomb
Properties: cell-transitive

The ten-of-diamonds is used in the honeycomb dht1,2{4,3,4}, the dual of the alternated bitruncated cubic honeycomb ht1,2{4,3,4}. Since the alternated bitruncated cubic honeycomb fills space with pyritohedral icosahedra and tetragonal disphenoidal tetrahedra, the vertex figures of this honeycomb are their duals – pyritohedra and tetragonal disphenoids. Cells can be seen as the cells of the tetragonal disphenoid honeycomb with alternate cells removed and augmented into neighboring cells by a center vertex. The rhombic faces in the honeycomb are aligned along 3 orthogonal planes.

Related honeycombs:
Uniform, t1,2{4,3,4}: bitruncated cubic honeycomb of truncated octahedral cells
Dual, dt1,2{4,3,4}: tetragonal disphenoid honeycomb
Alternated, ht1,2{4,3,4}: honeycomb of icosahedra and tetrahedra
Dual alternated, dht1,2{4,3,4}: ten-of-diamonds honeycomb
(Honeycomb structure orthogonally viewed along the cubic plane.)

Related space-filling polyhedra

The ten-of-diamonds can be dissected in an octagonal cross-section between the two rhombic faces. It is a decahedron with 12 vertices, 20 edges, and 10 faces (4 triangles, 4 trapezoids, 1 rhombus, and 1 isotoxal octagon). Michael Goldberg labels this polyhedron 10-XXV, the 25th in a list of space-filling decahedra.[2] The ten-of-diamonds can be dissected as a half-model on a symmetry plane into a space-filling heptahedron with 6 vertices, 11 edges, and 7 faces (6 triangles and 1 trapezoid). 
Michael Goldberg identifies this polyhedron as a triply truncated quadrilateral prism, type 7-XXIV, the 24th in a list of heptagonal space-fillers.[3] It can be further dissected as a quarter-model by another symmetry plane into a space-filling hexahedron with 6 vertices, 10 edges, and 6 faces (4 triangles, 2 right trapezoids). Michael Goldberg identifies this polyhedron as an ungulated quadrilateral pyramid, type 6-X, the 10th in a list of space-filling hexahedra.[4]

Dissected models in symmetric projections:
Decahedral half model: symmetry C2v (order 4), v=12, e=20, f=10
Heptahedral half model: symmetry Cs (order 2), v=6, e=11, f=7
Hexahedral quarter model: symmetry C2 (order 2), v=6, e=10, f=6

Rhombic bowtie

Faces: 16 triangles, 2 rhombi
Edges: 28
Vertices: 12
Symmetry group: D2h, order 8
Properties: space-filling

Pairs of ten-of-diamonds can be attached as a nonconvex bow-tie space-filler, called a rhombic bowtie for its cross-sectional appearance. The two right-most symmetric projections below show the rhombi edge-on at the top and bottom, with a middle neck where the two halves are connected. The 2D projections can look convex or concave. It has 12 vertices, 28 edges, and 18 faces (16 triangles and 2 rhombi) within D2h symmetry. These paired cells stack more easily as inter-locking elements. Long sequences of these can be stacked together in 3 axes to fill space.[5] The 12 vertex coordinates in a 2-unit cube (further augmentations on the rhombi can be done with a 2-unit translation in z):

(0, ±1, −1), (±1, 0, 0), (0, ±1, 1), (±1/2, 0, −1), (0, ±1/2, 0), (±1/2, 0, 1)

(Bow-tie model of two ten-of-diamonds, in skew and symmetric projections.)

See also

• Elongated gyrobifastigium

References

1. Goldberg, Michael. "On the space-filling decahedra." Structural Topology, 1982. Type 10-II.
2. Goldberg, Michael. "On the space-filling decahedra." Type 10-XXV.
3. Goldberg, Michael. "On the space-filling heptahedra." Geometriae Dedicata, June 1978, Volume 7, Issue 2, pp. 175–184. Type 7-XXIV.
4.
Goldberg, Michael. "On the space-filling hexahedra." Geometriae Dedicata, June 1977, Volume 6, Issue 1, pp. 99–108. Type 6-X.
5. Reid, Robert and Steed, Anthony. "Bowties: A Novel Class of Space Filling Polyhedron." 2003.
• Koch, Elke. Wirkungsbereichspolyeder und Wirkungsbereichsteilungen zu kubischen Gitterkomplexen mit weniger als drei Freiheitsgraden (Efficiency polyhedra and efficiency partitions for cubic lattice complexes with fewer than three degrees of freedom). Dissertation, University of Marburg/Lahn, 1972. Model 10/8–1, 28–404.
Forum — Daily Challenge

Module 0 Day 1 Challenge Part 1

Someone little gave me several answers today to the question, "What's the area of this triangle?" At first I was a bit shocked, appalled and dismayed at these answers... However, after thinking about it a while, I guess it's a very natural thing for a little person to say! A triangle looks like it should be \(\color{red}\text{easy}\) to calculate the area of. It's an easy shape, in the same way that a rectangle or a square is "easy": But if I could just multiply two of the sides of a triangle to get its area, then what if I draw the triangles in a weird way? What if I draw a very short and skinny triangle, like this? Does it really seem like the area of a triangle is related to the product of its sides? I can draw a triangle with the same two sides, but skinnier and skinnier, until its area is almost \(0.\) My friend sort of "remembered" the formula for the area of a triangle, except she remembered the formula for a right triangle's area. She understood that a right triangle is half of the bounding rectangle that surrounds it. But she didn't understand exactly why the formula was true, so she had trouble relating it to a non-right triangle. $$ \textcolor{red}{\text{Area of a right triangle}} = \frac{1}{2} \text{ } \text{ base } \times \text{ height} $$ Now, why is the area of the second triangle equal to that of the first? She cuts the first right triangle and says it's because the two mini-triangles are the same size. Is that really true? No, it's not! They look the same, but their lengths are actually different. The longest side of the left triangle is a diagonal, which is longer than the longest side of the right triangle, which is just the long side of the rectangle. Also, the altitudes (heights) of the triangles are different. 
The altitude on the left, colored yellow, is shorter than the height of the rectangle, so it is shorter than the altitude of the right triangle. There's a different reason why both triangles inscribed in the box have the same area. It's because you can split the box into two smaller boxes, each of which is half taken up by a mini-triangle. Thus we say the area of a triangle is equal to \( \color{red}\frac{1}{2} \times \text{ } \text{ base } \times \text{ height}, \) where the height is the height of the triangle's bounding rectangle. We can interpret any of the three sides of the triangle as the base; just remember to draw a bounding rectangle and find the height of this rectangle. This altitude is always perpendicular ( \(\perp\) ) to the base. Since each of the three sides can be a different base, there are three formulas for the area of a triangle.

Counting Numbers

Hi @delightfulllama! Thanks a lot for your feedback! We are happy that you enjoy our Daily Challenge Course and find it useful. We try to create our course materials and structure so that we teach not just how to solve one particular problem, but how one should think about it, in order to solve many other problems, as well as to learn different tricks and strategies that could be useful in the future! Making mistakes is normal! The question is how to use your mistakes to improve yourself and move forward. We are looking forward to hearing from you again!

Seating Probability

That's a good question! It actually doesn't matter if the circle is symmetric or not, or what shape the table is, unless the table is a strange shape like this: The problem with this table is that A could be considered as sitting next to E, and D could be considered as sitting next to F. As long as the table preserves the "neighbor arrangement" of the chairs, it doesn't matter what the shape is. Please don't hesitate to ask if you have any more questions!
\begin{document} \title{Mechanism Design for Public Projects via Neural Networks} \author{ Guanhua Wang, Runqi Guo\\ School of Computer Science\\ University of Adelaide\\ Australia\\ \And Yuko Sakurai\\ National Institute of Advanced\\ Industrial Science and Technology\\ Japan\\ \And Ali Babar, Mingyu Guo\\ School of Computer Science\\ University of Adelaide\\ Australia\\ } \maketitle \begin{abstract} We study mechanism design for nonexcludable and excludable binary public project problems. We aim to maximize the expected number of consumers and the expected social welfare. For the nonexcludable public project model, we identify a sufficient condition on the prior distribution for the conservative equal costs mechanism to be the optimal strategy-proof and individually rational mechanism. For general distributions, we propose a dynamic program that solves for the optimal mechanism. For the excludable public project model, we identify a similar sufficient condition for the serial cost sharing mechanism to be optimal for $2$ and $3$ agents. We derive a numerical upper bound. Experiments show that for several common distributions, the serial cost sharing mechanism is close to optimality. The serial cost sharing mechanism is not optimal in general. We design better performing mechanisms via neural networks. Our approach involves several technical innovations that can be applied to mechanism design in general. We interpret the mechanisms as price-oriented rationing-free (PORF) mechanisms, which enables us to move the mechanism's complex (\emph{e.g.}, iterative) decision making off the network, to a separate program. We feed the prior distribution's analytical form into the cost function to provide quality gradients for training. We use supervision to manual mechanisms as a systematic way for initialization. Our approach of ``supervision and then gradient descent'' is effective for improving manual mechanisms' performances. 
It is also effective for fixing constraint violations for heuristic-based mechanisms that are infeasible. \end{abstract} \keywords{Mechanism Design; Neural Networks; Public Projects} \section{Introduction}\label{sec:intro} Many multiagent system applications (\emph{e.g.}, crowdfunding) are related to the public project problem. The public project problem is a classic economic model that has been studied extensively in both economics and computer science~\cite{Mas-Colell1995:Microeconomic,Moore2006:General,Moulin1988:Axioms}. Under this model, a group of agents decide whether or not to fund a \emph{nonrivalrous} public project --- when one agent consumes the project, it does not prevent others from using it. We study both the \textbf{nonexcludable} and the \textbf{excludable} versions of the \emph{binary} public project problem. The binary decision is either to build or not. If the decision is not to build, then no agents can consume the project. For the \emph{nonexcludable} version, once a project is built, all agents can consume it, including those who do not pay. For example, if the public project is an open source software project, then once the project is built, everyone can consume it. For the \emph{excludable} version, the mechanism has the capability to exclude agents from the built project. For example, if the public project is a swimming pool, then we could impose the restriction that only some agents (\emph{e.g.}, the paying agents) have access to it. Our aim is to design mechanisms that maximize \emph{expected} performances. We consider two design objectives. One is to maximize the \textbf{expected number of consumers} (expected number of agents who are allowed to consume the project).\footnote{For the nonexcludable public project model, this is simply to maximize the probability of building, as the number of consumers is always the total number of agents if the project is built. 
} The other objective is to maximize the agents' \textbf{expected social welfare} (considering payments). It should be noted that for some settings, we obtain the same optimal mechanism under these two different objectives. In general, the optimal mechanisms differ. We argue that maximizing the expected number of consumers is \emph{more fair} in some application scenarios. When maximizing the social welfare, the main focus is to ensure the high-valuation agents are served by the project, while low-valuation agents have much lower priorities. On the other hand, if the objective is to maximize the expected number of consumers, then low-valuation agents are as important as high-valuation agents. Guo~\emph{et al.}~\cite{Guo2018:Cost} studied an objective that is very similar to maximizing the expected number of consumers. The authors studied the problem of crowdfunding security information. There is a premium time period. If an agent pays more, then she receives the information earlier. If an agent pays less or does not pay, then she incurs a time penalty --- she receives the information slightly delayed. The authors' objective is to minimize the expected delay. If every agent either receives the information at the very beginning of the premium period, or at the very end, then minimizing the expected delay is equivalent to maximizing the expected number of consumers. The public project is essentially the premium period. It should be noted that when crowdfunding security information, it is desirable to have more agents protected, whether their valuations are high or low. Hence, in this application domain, maximizing the number of consumers is more suitable than maximizing social welfare. However, since any delay that falls \emph{strictly} inside the premium period is not valid for our \emph{binary} public project model, the mechanisms proposed in~\cite{Guo2018:Cost} do not apply to our setting. 
With slight technical adjustments, we adopt the existing characterization results from Ohseto~\cite{Ohseto2000:Characterizations} for \emph{strategy-proof} and \emph{individually rational} mechanisms for both the nonexcludable and the excludable public project problems. Before summarizing our results, we introduce the following notation. We assume the agents' valuations are drawn independently and identically from a known distribution, with $f$ being the probability density function. For the nonexcludable public project problem, we propose a sufficient condition for the \emph{conservative equal costs mechanism}~\cite{Moulin1994:Serial} to be optimal. For maximizing the expected number of consumers, $f$ being \emph{log-concave} is a sufficient condition. For maximizing social welfare, besides log-concavity, we propose a condition on $f$ called \emph{welfare-concavity}. For distributions not satisfying the above conditions, we propose a dynamic program that solves for the optimal mechanism. For the excludable public project problem, we also propose a sufficient condition for the \emph{serial cost sharing mechanism}~\cite{Moulin1994:Serial} to be optimal. Our condition only applies to cases with $2$ and $3$ agents. For $2$ agents, the condition is identical to the nonexcludable version. For $3$ agents, we also need $f$ to be nonincreasing. For more agents, we propose a numerical technique for calculating the objective upper bounds. For a few example log-concave distributions, including common distributions like uniform and normal, our experiments show that the serial cost sharing mechanism is close to optimality. Without log-concavity, the serial cost sharing mechanism can be far away from optimality. We propose a neural network based approach, which successfully identifies better performing mechanisms. Mechanism design via deep learning/neural networks has been an emerging topic~\cite{Golowich2018:Deep, Duetting2019:Optimal,Shen2019:Automated,Manisha2018:Learning}. 
Duetting \emph{et al.}~\cite{Duetting2019:Optimal} proposed a general approach for revenue maximization via deep learning. The high-level idea is to manually construct often complex network structures for representing mechanisms for different auction types. The cost function is the negative of the revenue. By minimizing the cost function via gradient descent, the network parameters are adjusted, which leads to better performing mechanisms. The mechanism design constraints (such as strategy-proofness) are enforced by adding a penalty term to the cost function. The penalty is calculated by sampling the type profiles and adding together the constraint violations. Due to this setup, the final mechanism is only approximately strategy-proof. The authors demonstrated that this technique scales better than the classic mixed integer programming based automated mechanism design approach~\cite{Conitzer2002:Complexity}. Shen \emph{et al.}~\cite{Shen2019:Automated} proposed another neural network based mechanism design technique, involving a seller's network and a buyer's network. The seller's network provides a menu of options to the buyers. The buyer's network picks the utility-maximizing menu option. An exponential-sized hard-coded buyer's network is used (\emph{e.g.}, for every discretized type profile, the utility-maximizing option is pre-calculated and stored in the network). The authors mostly focused on settings with only one buyer. Our approach is different from previous approaches, and it involves three technical innovations, which have the potential to be applied to mechanism design in general. \noindent \emph{Calculating mechanism decisions off the network by interpreting mechanisms as price-oriented rationing-free (PORF) mechanisms~\cite{Yokoo2003:Characterization}:} A mechanism often involves binary decisions (\emph{e.g.}, for an agent, depending on whether her valuation is above the price offered to her, we end up with different situations). 
A common way to model binary decisions on neural networks is by using the \emph{sigmoid} function (or similar activation functions). A mechanism may involve a complex decision process, which makes it difficult or impractical to model via \emph{static} neural networks. For example, for our setting, a mechanism involves \emph{iterative} decision making. We could stack multiple sigmoid functions to model this. However, stacking sigmoid functions leads to vanishing gradients and significant numerical errors. Instead, we rely on the PORF interpretation: every agent faces a set of options (outcomes with prices) determined by the other agents. We single out a randomly chosen agent $i$, and draw a sample of \emph{the other agents' types $v_{-i}$}. We use a separate program (off the network) to calculate the options $i$ would face. For example, the separate program can be any Python function, so it is trivial to handle complex and iterative decision making. We no longer need to construct complex network structures like the approach in~\cite{Duetting2019:Optimal} or resort to exponential-sized hard-coded buyer networks like the approach in~\cite{Shen2019:Automated}. After calculating $i$'s options, we link the options together using terms that carry gradients. One effective way to do this is by making use of the prior distribution as discussed below. \noindent \emph{Feeding prior distribution into the cost function:} In conventional machine learning, we have access to a finite set of samples, and the process of machine learning is essentially to infer the true probability distribution of the samples. For existing neural network mechanism design approaches~\cite{Duetting2019:Optimal,Shen2019:Automated} (as well as this paper), it is assumed that the prior distribution is known. After calculating agent $i$'s options, we make use of $i$'s distribution to figure out the probabilities of all the options, and then derive the expected objective value from $i$'s perspective. 
We assume that the prior distribution is continuous. If we have the \emph{analytical form} of the prior distribution, then the probabilities can provide quality gradients for our training process. This is due to the fact that probabilities are calculated based on neural network outputs. In summary, we combine both samples and distribution in our cost function. We also have an example showing that even if the distribution we provide is not $100\%$ accurate, it is still useful. (Sometimes, we do not have the analytical form of the distribution. We can then use an analytical approximation instead.) \noindent \emph{Supervision to manual mechanisms as initialization:} We start our training by first conducting supervised learning. We teach the network to mimic an existing manual mechanism, and then leave it to gradient descent. This is essentially a systematic way to improve manual mechanisms.\footnote{Of course, if the manual mechanism is already optimal, or is ``locally optimal'', then the gradient descent process may fail to find improvement.} In our experiments, besides the \emph{serial cost sharing mechanism}, we also considered two heuristic-based manual mechanisms as starting points. One heuristic is feasible but not optimal, and the gradient descent process is able to improve its performance. The second heuristic is not always feasible, and the gradient descent process is able to fix the constraint violations. Supervision to manual mechanisms is often better than random initializations. For one thing, the supervision step often pushes the performance to a state that is already somewhat close to optimality. It may take a long time for random initializations to catch up. In computational expensive scenarios, it may never catch up. Secondly, supervision to a manual mechanism is a systematic way to set good initialization point, instead of trials and errors. 
It should be noted that for many conventional deep learning application domains, such as computer vision, well-performing manual algorithms do not exist. Fortunately, for mechanism design, we often have simple and well-performing mechanisms to be used as starting points. \section{Model Description} $n$ agents need to decide whether or not to build a public project. The project is \emph{binary} (build or not build) and \emph{nonrivalrous} (the cost of the project does not depend on how many agents are consuming it). We normalize the project cost to $1$. Agent $i$'s type $v_i\in[0,1]$ represents her private valuation for the public project. We assume that the $v_i$ are drawn \emph{i.i.d.} from a known prior distribution. Let $F$ and $f$ be the CDF and PDF, respectively. We assume that the distribution is continuous and $f$ is differentiable. \begin{itemize} \item For the nonexcludable public project model, agent $i$'s valuation is $v_i$ if the project is built, and $0$ otherwise. \item For the excludable public project model, the outcome space is $\{0,1\}^n$. Under outcome $(a_1,a_2,\ldots,a_n)$, agent $i$ consumes the public project if and only if $a_i=1$. If for all $i$, $a_i=0$, then the project is not built. As long as $a_i=1$ for some $i$, the project is built. \end{itemize} We use $p_i\ge 0$ to denote agent $i$'s payment. We require that $p_i=0$ for all $i$ if the project is not built and $\sum p_i=1$ if the project is built. An agent's payment is also referred to as her \emph{cost share} of the project. An agent's utility is $v_i-p_i$ if she gets to consume the project, and $0$ otherwise. We focus on \emph{strategy-proof} and \emph{individually rational} mechanisms. We study two objectives. One is to maximize the expected number of consumers. The other is to maximize the social welfare. 
\section{Characterizations and Bounds} We adopt a list of existing characterization results from~\cite{Ohseto2000:Characterizations}, which characterizes strategy-proof and individually rational mechanisms for both nonexcludable and excludable public project problems. A few technical adjustments are needed for the existing characterizations to be valid for our problem. The characterizations in~\cite{Ohseto2000:Characterizations} were not proved for quasi-linear settings. However, we verify that the assumptions needed by the proofs are valid for our model setting. One exception is that the characterizations in~\cite{Ohseto2000:Characterizations} assume that every agent's valuation is strictly positive. This does not cause issues for our objectives as we are maximizing for expected performances and we are dealing with continuous distributions.\footnote{Let $M$ be the optimal mechanism. If we restrict the valuation space to $[\epsilon,1]$, then $M$ is Pareto dominated by a unanimous/largest unanimous mechanism $M'$ for the nonexcludable/excludable setting. The expected performance difference between $M$ and $M'$ vanishes as $\epsilon$ approaches $0$. Unanimous/largest unanimous mechanisms are still strategy-proof and individually rational when $\epsilon$ is set to exactly $0$.} We are also safe to drop the \emph{citizen sovereign} assumption mentioned in one of the characterizations\footnote{If a mechanism always builds, then it is not individually rational in our setting. If a mechanism always does not build, then it is not optimal.}, but not the other two minor technical assumptions called \emph{demand monotonicity} and \emph{access independence}. \subsection{Nonexcludable Mech. Characterization} \begin{definition}[Unanimous mechanism~\cite{Ohseto2000:Characterizations}] There is a constant cost share vector $(c_1,c_2,\ldots,c_n)$ with $c_i\ge 0$ and $\sum c_i=1$. The mechanism builds if and only if $v_i\ge c_i$ for all $i$. 
Agent $i$ pays exactly $c_i$ if the decision is to build. The unanimous mechanism is strategy-proof and individually rational. \end{definition} \begin{theorem}[Nonexcludable mech. characterization~\cite{Ohseto2000:Characterizations}] For the nonexcludable public project model, if a mechanism is strategy-proof, individually rational, and citizen sovereign, then it is weakly Pareto dominated by an unanimous mechanism. \noindent Citizen sovereign: Build and not build are both possible outcomes. \end{theorem} Mechanism $1$ weakly Pareto dominates Mechanism $2$ if every agent weakly prefers Mechanism $1$ under every type profile. \begin{example}[Conservative equal costs mechanism~\cite{Moulin1994:Serial}] An example unanimous mechanism works as follows: we build the project if and only if every agent agrees to pay $\frac{1}{n}$. \end{example} \subsection{Excludable Mech. Characterization} \begin{definition}[Largest unanimous mechanism~\cite{Ohseto2000:Characterizations}] For every nonempty coalition of agents $S = \{S_1,S_2,\ldots,S_k\}$, there is a constant cost share vector $C_S=(c_{S_1},c_{S_2},\ldots,c_{S_k})$ with $c_{S_i}\ge 0$ and $\sum_{1\le i\le k} c_{S_i}=1$. $c_{S_i}$ is agent $S_i$'s cost share under coalition $S$. Agents in $S$ unanimously approve the cost share vector $C_S$ if and only if $v_{S_i}\ge c_{S_i}$ for all $i$. The mechanism picks the largest coalition $S^*$ satisfying that $C_{S^*}$ is unanimously approved. If $S^*$ does not exist, then the decision is not to build. If $S^*$ exists, then it is always unique, in which case the decision is to build. Only agents in $S^*$ are consumers of the public project and they pay according to $C_{S^*}$. If agent $i$ belongs to two coalitions $S$ and $T$ with $S\subsetneq T$, then $i$'s cost share under $S$ must be greater than or equal to her cost share under $T$. Let $N$ be the set of all agents. One way to interpret the mechanism is that the agents start with the cost share vector $C_N$. 
If some agents do not approve their cost shares, then they are forever removed. The remaining agents face new and increased cost shares. We repeat the process until all remaining agents approve their shares, or when all agents are removed. The largest unanimous mechanism is strategy-proof and individually rational. \end{definition} \begin{theorem}[Excludable mech. characterization~\cite{Ohseto2000:Characterizations}] For the excludable public project model, if a mechanism is strategy-proof, individually rational, and satisfies the following assumptions, then it is weakly Pareto dominated by a largest unanimous mechanism. Demand monotonicity: Let $S$ be the set of consumers. If for every agent $i$ in $S$, $v_i$ stays the same or increases, then all agents in $S$ are still consumers. If for every agent $i$ in $S$, $v_i$ stays the same or increases, and for every agent $i$ not in $S$, $v_i$ stays the same or decreases, then the set of consumers should still be $S$. Access independence: For all $v_{-i}$, there exist $v_i$ and $v_i'$ so that agent $i$ is a consumer under type profile $(v_i,v_{-i})$ and is not a consumer under type profile $(v_i',v_{-i})$. \end{theorem} \begin{example}[Serial cost sharing mechanism~\cite{Moulin1994:Serial}] Here is an example largest unanimous mechanism. For every nonempty subset of agents $S$ with $|S|=k$, the cost share vector is $(\frac{1}{k},\frac{1}{k},\ldots,\frac{1}{k})$. The mechanism picks the largest coalition where the agents are willing to pay equal shares. \end{example} Deb and Razzolini~\cite{Deb1999:Voluntary} proved that if we further require an \emph{equal treatment of equals} property (if two agents have the same type, then they should be treated the same), then the only strategy-proof and individually rational mechanism left is the serial cost sharing mechanism. For many distributions, we are able to outperform the serial cost sharing mechanism. 
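The iterative interpretation above, with equal shares for every coalition size, is exactly the serial cost sharing mechanism. A minimal Python sketch of that iteration (function and variable names are our own, not from the paper):

```python
def serial_cost_sharing(valuations, cost=1.0):
    """Iterative form of the serial cost sharing mechanism: offer the
    remaining agents equal shares of the cost; permanently remove anyone
    whose valuation is below her share; repeat until the remaining agents
    unanimously approve (build) or nobody is left (do not build).
    Returns (list of consumers, dict of payments)."""
    remaining = list(range(len(valuations)))
    while remaining:
        share = cost / len(remaining)
        accepting = [i for i in remaining if valuations[i] >= share]
        if accepting == remaining:        # unanimous approval: build
            return remaining, {i: share for i in remaining}
        remaining = accepting             # removed agents never return
    return [], {}                         # not built

# Equal thirds fail (0.1 < 1/3), so the third agent is removed;
# the two remaining agents then approve equal halves.
print(serial_cost_sharing([0.9, 0.6, 0.1]))   # ([0, 1], {0: 0.5, 1: 0.5})
```

Since shares only increase as agents drop out, each loop iteration strictly shrinks the coalition, so the process terminates after at most $n$ rounds.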
That is, equal treatment of equals (or requiring anonymity) may hurt performances. \subsection{Nonexcludable Public Project Analysis}\label{sub:nonexcludable} We start with an analysis on the nonexcludable public project. The results presented in this section will lay the foundation for the more complex excludable public project model coming up next. Due to the characterization results, we focus on the family of unanimous mechanisms. That is, we are solving for the optimal cost share vector $(c_1,c_2,\ldots,c_n)$, satisfying that $c_i\ge 0$ and $\sum c_i=1$. Recall that $f$ and $F$ are the PDF and CDF of the prior distribution. The \emph{reliability function} $\overline{F}$ is defined as $\overline{F}(x)=1-F(x)$. We define $w(c)$ to be the expected utility of an agent when her cost share is $c$, conditional on that she accepts this cost share. \[w(c)=\frac{\int_c^1 (x-c)f(x)dx}{\int_c^1f(x)dx}\] One condition we will use is \emph{log-concavity}: if $\log(f(x))$ is concave in $x$, then $f$ is log-concave. We also introduce another condition called \emph{welfare-concavity}, which requires $w$ to be concave. \begin{theorem}\label{thm:nonexcludable} If $f$ is log-concave, then the conservative equal costs mechanism maximizes the expected number of consumers. If $f$ is log-concave and welfare-concave, then the conservative equal costs mechanism maximizes the expected social welfare. \end{theorem} \begin{proof} Let $C=(c_1,c_2,\ldots,c_n)$ be the cost share vector. Maximizing the expected number of consumers is equivalent to maximizing the probability of $C$ getting unanimously accepted, which equals $\overline{F}(c_1) \overline{F}(c_2) \ldots \overline{F}(c_n)$. Its log equals $\sum_{1\le i\le n}\log(\overline{F}(c_i))$. When $f$ is log-concave, so is $\overline{F}$ according to~\cite{Bagnoli2005:Log}. This means that when cost shares are equal, the above probability is maximized. 
The expected social welfare under the cost share vector $C$ equals $\sum w(c_i)$, conditional on all agents accepting their shares. By welfare-concavity (concavity of $w$), this is maximized when shares are equal. Furthermore, when all shares are equal, the probability of unanimous approval is also maximized. \end{proof} $f$ being log-concave is also called the \emph{decreasing reversed failure rate} condition~\cite{Shao2016:Optimal}. Bagnoli and Bergstrom~\cite{Bagnoli2005:Log} proved log-concavity for many common distributions, including the distributions in Table~\ref{tb:logconcave} (for all distribution parameters). All distributions are restricted to $[0,1]$. We also list some limited results for welfare-concavity. We prove that the uniform distribution is welfare-concave, but for the other distributions, the results are based on simulations. Finally, we include the conditions for $f$ being nonincreasing, which will be used in the excludable public project model. \begin{table}[ht] \caption{Example Log-Concave Distributions}\label{tb:logconcave} \centering \begin{tabular}{ l c r } & Welfare-Concavity & Nonincreasing \\ \hline Uniform $U(0,1)$ & Yes & Yes \\ \hline Normal & No ($\mu=0.5,\sigma=0.1$) & $\mu\le 0$ \\ \hline Exponential & Yes ($\lambda=1$) & Yes \\ \hline Logistic & No ($\mu=0.5,\sigma=0.1$) & $\mu\le 0$ \\ \end{tabular} \end{table} Even when optimal, the conservative equal costs mechanism performs poorly. We take the uniform $U(0,1)$ distribution as an example. Every agent's cost share is $\frac{1}{n}$. The probability of acceptance for one agent is $\frac{n-1}{n}$, which approaches $1$ asymptotically. However, we need unanimous acceptance, which happens with much lower probability. For the uniform distribution, asymptotically, the probability of unanimous acceptance is only $\frac{1}{e}\approx 0.368$.
In general, we have the following bound: \begin{theorem} If $f$ is Lipschitz continuous, then as $n$ goes to infinity, the probability of unanimous acceptance under the conservative equal costs mechanism approaches $e^{-f(0)}$. \end{theorem} Without log-concavity, the conservative equal costs mechanism is not necessarily optimal. We present the following dynamic program (DP) for calculating the optimal unanimous mechanism. We only present the formulation for welfare maximization.\footnote{Maximizing the expected number of consumers can be viewed as a special case where every agent's utility is $1$ if the project is built.} We assume that there is an ordering of the agents based on their identities. We define $B(k,u,m)$ as the maximum expected social welfare under the following conditions: \begin{itemize} \item The first $n-k$ agents have already approved their cost shares, and their total cost share is $1-m$. That is, the remaining $k$ agents need to come up with $m$. \item The first $n-k$ agents' total expected utility is $u$. \end{itemize} The optimal social welfare is then $B(n,0,1)$. Recalling that $\overline{F}(c)$ is the probability that an agent accepts a cost share of $c$, we have \[ B(k,u,m)=\max_{0\le c\le m}\overline{F}(c)B(k-1,u+w(c), m-c) \] The base case is $B(1,u,m)=\overline{F}(m)(u+w(m))$. In terms of implementation of this DP, we have $0\le u\le n$ and $0\le m\le 1$. We need to discretize these two intervals. If we pick a discretization size of $\frac{1}{H}$, then the total number of DP subproblems is about $H^2n^2$. To compare the performance of the conservative equal costs mechanism and our DP solution, we focus on distributions that are not log-concave (hence, uniform and normal are not eligible).
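The DP above can be sketched for the special case of maximizing the expected number of consumers (the footnoted case $w\equiv 1$, where $u$ drops out and only the unanimous-acceptance probability matters). The grid size $H$ and the uniform sanity check are our own illustrative choices, not the paper's code:

```python
def optimal_unanimous_prob(F_bar, n, H=120):
    """Max over cost shares (c_1,...,c_n) with sum 1 of prod_i F_bar(c_i),
    each c_i restricted to the grid {0, 1/H, ..., 1}."""
    # B[j]: best acceptance probability when the agents processed so far
    # must jointly cover a share of j/H; start with a single agent
    B = [F_bar(j / H) for j in range(H + 1)]
    for _ in range(n - 1):  # add one more agent per iteration
        B = [max(F_bar(c / H) * B[j - c] for c in range(j + 1))
             for j in range(H + 1)]
    return B[H]

# Sanity check: U(0,1) is log-concave, so equal shares are optimal
# and the optimum is ((n-1)/n)^n (Theorem above).
p = optimal_unanimous_prob(lambda c: 1.0 - c, 3)
```

With $H=120$ the equal-share point $1/3$ lies on the grid, so the DP recovers $(2/3)^3$ exactly for $n=3$.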
We introduce the following non-log-concave distribution family: \begin{definition}[Two-Peak Distribution $(\mu_1,\sigma_1,\mu_2,\sigma_2,p)$] With probability $p$, the agent's valuation is drawn from the normal distribution $N(\mu_1,\sigma_1)$ (restricted to $[0,1]$). With probability $1-p$, the agent's valuation is drawn from $N(\mu_2,\sigma_2)$ (restricted to $[0,1]$). \end{definition} The motivation behind the two-peak distribution is that there may be two categories of agents. One category is directly benefiting from the public project, and the other is indirectly benefiting. For example, if the public project is to build bike lanes, then cyclists are directly benefiting, and the other road users are indirectly benefiting (\emph{e.g.}, less congestion for them). As another example, if the public project is to crowdfund a piece of security information on a specific software product (\emph{e.g.}, PostgreSQL), then agents who use PostgreSQL in production are directly benefiting and the other agents are indirectly benefiting (\emph{e.g.}, every web user is pretty much using some websites backed by PostgreSQL). Therefore, it is natural to assume the agents' valuations are drawn from two different distributions. For simplicity, we do not consider three-peak, \emph{etc.} For the two-peak distribution $(0.1,0.1,0.9,0.1,0.5)$, DP significantly outperforms the conservative equal costs (CEC) mechanism. \begin{center} \begin{tabular}{ l c r } & E(no. of consumers) & E(welfare)\\ n=3 CEC & 0.376 & 0.200 \\ \hline n=3 DP & 0.766 & 0.306 \\ \hline n=5 CEC & 0.373 & 0.199 \\ \hline n=5 DP & 1.426 & 0.591 \\ \end{tabular} \end{center} \subsection{Excludable Public Project} Due to the characterization results, we focus on the family of largest unanimous mechanisms. We start by showing that the serial cost sharing mechanism is optimal in some scenarios. 
\begin{theorem}\label{thm:excludable}\, $2$ agents case: If $f$ is log-concave, then the serial cost sharing mechanism maximizes the expected number of consumers. If $f$ is log-concave and welfare-concave, then the serial cost sharing mechanism maximizes the expected social welfare. $3$ agents case: If $f$ is log-concave and nonincreasing, then the serial cost sharing mechanism maximizes the expected number of consumers. If $f$ is log-concave, nonincreasing, and welfare-concave, then the serial cost sharing mechanism maximizes the expected social welfare. \end{theorem} For $2$ agents, the conditions are identical to the nonexcludable case. For $3$ agents, we also need $f$ to be nonincreasing. Example distributions satisfying these conditions were listed in Table~\ref{tb:logconcave}. \begin{proof} We only present the proof for welfare maximization when $n=3$, which is the most complex case. (For maximizing the number of consumers, all references to the $w$ function should be replaced by the constant $1$.) The largest unanimous mechanism specifies constant cost shares for every coalition of agents. We use $c_{1\underline{2}3}$ to denote agent $2$'s cost share when the coalition is $\{1,2,3\}$. Similarly, $c_{\underline{2}3}$ denotes agent $2$'s cost share when the coalition is $\{2,3\}$. If the largest unanimous coalition has size $3$, then the expected social welfare gained due to this case is: \[ \overline{F}(c_{\underline{1}23}) \overline{F}(c_{1\underline{2}3}) \overline{F}(c_{12\underline{3}}) ( w(c_{\underline{1}23}) +w(c_{1\underline{2}3}) +w(c_{12\underline{3}}) ) \] Given log-concavity of $\overline{F}$ (implied by the log-concavity of $f$) and welfare-concavity, and given that $c_{\underline{1}23}+c_{1\underline{2}3}+c_{12\underline{3}}=1$, the above is maximized when all agents have equal shares.
If the largest unanimous coalition has size $2$ and is $\{1,2\}$, then the expected social welfare gained due to this case is: \[ \overline{F}(c_{\underline{1}2}) \overline{F}(c_{1\underline{2}}) F(c_{12\underline{3}}) ( w(c_{\underline{1}2}) +w(c_{1\underline{2}}) ) \] $F(c_{12\underline{3}})$ is the probability that agent $3$ does not join the coalition. The above is maximized when $c_{\underline{1}2}=c_{1\underline{2}}$, so it simplifies to $2\overline{F}(\frac{1}{2})^2 w(\frac{1}{2}) F(c_{12\underline{3}})$. We then consider the welfare gain from all coalitions of size $2$: \[ 2\overline{F}(\frac{1}{2})^2 w(\frac{1}{2})( F(c_{\underline{1}23}) +F(c_{1\underline{2}3}) +F(c_{12\underline{3}}) ) \] Since $f$ is nonincreasing, $F$ is concave, so the above is again maximized when all cost shares are equal. Finally, the probability of coalition size $1$ is $0$ (a single agent would have to accept a cost share of $1$), so this case can be ignored in our analysis. Therefore, throughout the proof, all terms referenced are maximized when the cost shares are equal. \end{proof} For $4$ agents and uniform distribution, we have a similar result. \begin{theorem}\label{thm:uniform} Under the uniform distribution $U(0,1)$, when $n=4$, the serial cost sharing mechanism maximizes the expected number of consumers and the expected social welfare. \end{theorem} For $n\ge 4$ and for general distributions, we propose a numerical method for calculating the performance upper bound. A largest unanimous mechanism can be carried out by the following process: we make cost share offers to the agents one by one based on an ordering of the agents. Whenever an agent disagrees, we remove this agent and move on to a coalition with one less agent. We repeat until all agents are removed or all agents have agreed. We introduce the following mechanism based on a Markov process.
The initial state is $\{(\underbrace{0,0,\ldots,0}_n),n\}$, which represents that initially, we only know that the agents' valuations are at least $0$, and we have not made any cost share offers to any agents yet (there are $n$ agents yet to be offered). We make a cost share offer $c_1$ to agent $1$. If agent $1$ accepts, then we move on to state $\{(c_1,\underbrace{0,\ldots,0}_{n-1}),n-1\}$. If agent $1$ rejects, then we remove agent $1$ and move on to reduced-sized state $\{(\underbrace{0,\ldots,0}_{n-1}),n-1\}$. In general, let us consider a state with $t$ users $\{(l_1,l_2,\ldots,l_t),t\}$. The $i$-th agent's valuation lower bound is $l_i$. Suppose we make offers $c_1,c_2,\ldots,c_{t-k}$ to the first $t-k$ agents and they all accept, then we are in a state $\{(\underbrace{c_1,\ldots,c_{t-k}}_{t-k},\underbrace{l_{t-k+1},\ldots,l_{t}}_k),k\}$. The next offer is $c_{t-k+1}$. If the next agent accepts, then we move on to $\{(\underbrace{c_1,\ldots,c_{t-k+1}}_{t-k+1},\underbrace{l_{t-k+2},\ldots,l_{t}}_{k-1}),k-1\}$. If she disagrees (she is then the first agent to disagree), then we move on to a reduced-sized state $\{(\underbrace{c_1,\ldots,c_{t-k}}_{t-k},\underbrace{l_{t-k+2},\ldots,l_{t}}_{k-1}),t-1\}$. Notice that whenever we move to a reduced-sized state, the number of agents yet to be offered should be reset to the total number of agents in this state. Whenever we are in a state with all agents offered $\{(\underbrace{c_1,\ldots,c_t}_t),0\}$, we have gained an objective value of $t$ if the goal is to maximize the number of consumers. If the goal is to maximize welfare, then we have gained an objective value of $\sum_{1\le i\le t} w(c_i)$. Any largest unanimous mechanism can be represented via the above Markov process. So for deriving performance upper bounds, it suffices to focus on this Markov process. Starting from a state, we may end up with different objective values. A state has an expected objective value, based on all the transition probabilities. 
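Instantiated with equal shares per coalition size (the serial cost sharing mechanism), the offer-and-removal process can be simulated as below. Because cost shares never decrease when agents are removed, all current rejecters can be removed in one batch without changing the final coalition. This is an illustrative sketch, not the paper's code:

```python
def largest_unanimous(values, share_rule):
    """Run a largest unanimous mechanism.
    values: list of valuations; share_rule(coalition) -> {agent: cost share}.
    Returns the final set of consumers (indices into values)."""
    coalition = set(range(len(values)))
    while coalition:
        shares = share_rule(coalition)
        rejecters = {i for i in coalition if values[i] < shares[i]}
        if not rejecters:          # everyone remaining approves: build the project
            return coalition
        coalition -= rejecters     # removed agents never come back
    return set()                   # nobody left: the project is not built

def serial_shares(coalition):
    # serial cost sharing: equal shares 1/k for a coalition of size k
    k = len(coalition)
    return {i: 1.0 / k for i in coalition}

consumers = largest_unanimous([0.6, 0.6, 0.1], serial_shares)  # -> {0, 1}
```

In the example, agent $2$ rejects the share $1/3$ and is removed; agents $0$ and $1$ then both accept $1/2$ and become the consumers.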
We define $U(t,k,m,l)$ as the maximum expected objective value starting from a state that satisfies: \begin{itemize} \item There are $t$ agents in the state. \item There are $k$ agents yet to be offered. The first $t-k$ agents (those who accepted the offers) have a total cost share of $1-m$. That is, the remaining $k$ agents are responsible for a total cost share of $m$. \item The $k$ agents yet to be offered have a total lower bound of $l$. \end{itemize} The upper bound we are looking for is then $U(n,n,1,0)$, which can be calculated via the following DP process: \[ U(t,k,m,l) = \max_{\substack{0\le l^*\le l\\l^*\le c^*\le m}} \left( \frac{\overline{F}(c^*)}{\overline{F}(l^*)}U(t,k-1,m-c^*,l-l^*)\right. \] \[ \left.+(1-\frac{\overline{F}(c^*)}{\overline{F}(l^*)})U(t-1,t-1,1,1-m+l-l^*)\right) \] In the above, there are $k$ agents yet to be offered. We maximize over the next agent's possible lower bound $l^*$ and the cost share $c^*$. That is, we look for the best possible lower bound situation and the corresponding optimal offer. $\frac{\overline{F}(c^*)}{\overline{F}(l^*)}$ is the probability that the next agent accepts the cost share, in which case we have $k-1$ agents left. The remaining agents need to come up with $m-c^*$, and their lower bounds sum up to $l-l^*$. When the next agent does not accept the cost share, we transition to a new state with $t-1$ agents in total. All agents are yet to be offered, so $t-1$ agents need to come up with $1$. The lower bounds sum up to $1-m+l-l^*$. There are two base conditions. When there is only one agent, she has probability $0$ of accepting an offer of $1$, so $U(1,k,m,l) = 0$. The other base case is that when there is only $1$ agent yet to be offered, the only valid lower bound is $l$ and the only sensible offer is $m$.
Therefore, \[U(t,1,m,l) = \frac{\overline{F}(m)}{\overline{F}(l)}G(t)+(1-\frac{\overline{F}(m)}{\overline{F}(l)})U(t-1,t-1,1,1-m)\] Here, $G(t)$ is the maximum objective value when the largest unanimous set has size $t$. For maximizing the number of consumers, $G(t)=t$. For maximizing welfare, \[G(t)= \max_{\substack{c_1,c_2,\ldots,c_t\\c_i\ge 0\\\sum c_i=1}}\sum_i w(c_i)\] The above $G(t)$ can be calculated via a trivial DP. Now we compare the performance of the serial cost sharing mechanism against the upper bounds. All distributions used here are log-concave. In every cell, the first number is the objective value under serial cost sharing, and the second is the upper bound. We see that the serial cost sharing mechanism is close to optimality in all these experiments. We include both welfare-concave and non-welfare-concave distributions (uniform and exponential with $\lambda=1$ are welfare-concave). For the two distributions not satisfying welfare-concavity, the welfare performance is relatively worse. \begin{center} \begin{tabular}{ l c r } & E(no. of consumers) & E(welfare)\\ n=5 $U(0,1)$ & 3.559, 3.753 & 1.350, 1.417 \\ \hline n=10 $U(0,1)$ & 8.915, 8.994 & 3.938, 4.037 \\ \hline n=5 $N(0.5,0.1)$ & 4.988, 4.993 & 1.492, 2.017 \\ \hline n=10 $N(0.5,0.1)$ & 10.00, 10.00 & 3.983, 4.545 \\ \hline n=5 Exponential $\lambda=1$ & 2.799, 3.038 & 0.889, 0.928 \\ \hline n=10 Exponential $\lambda=1$ & 8.184, 8.476 & 3.081, 3.163 \\ \hline n=5 Logistic$(0.5,0.1)$ & 4.744, 4.781 & 1.451, 1.910 \\ \hline n=10 Logistic$(0.5,0.1)$ & 9.873, 9.886 & 3.957, 4.487 \\ \end{tabular} \end{center} \begin{example} Here we provide an example to show that the serial cost sharing mechanism can be far away from optimality.
We pick a simple Bernoulli distribution, where an agent's valuation is $0$ with $0.5$ probability and $1$ with $0.5$ probability.\footnote{Our paper assumes that the distribution is continuous, so technically we should be considering a smoothed version of the Bernoulli distribution. For the purpose of demonstrating an elegant example, we ignore this technicality.} Under the serial cost sharing mechanism, when there are $n$ agents, only half of the agents are consumers (those who report $1$s). So in expectation, the number of consumers is $\frac{n}{2}$. Let us consider another simple mechanism. We assume that there is an ordering of the agents based on their identities (not based on their types). The mechanism asks the first agent to accept a cost share of $1$. If this agent disagrees, she is removed from the system. The mechanism then moves on to the next agent and asks the same, until an agent agrees. If an agent agrees, then all future agents can consume the project for free. The number of removed agents follows a geometric distribution with $0.5$ success probability. So in expectation, $2$ agents are removed. That is, the expected number of consumers is $n-2$. \end{example} \section{Mech. Design vs Neural Networks} For the rest of this paper, we focus on the excludable public project model and distributions that are not log-concave. Due to the characterization results, we only need to consider the largest unanimous mechanisms. We use neural networks and deep learning to solve for well-performing largest unanimous mechanisms. Our approach involves several technical innovations as discussed in Section~\ref{sec:intro}. \subsection{Network Structure} A largest unanimous mechanism specifies constant cost shares for every coalition of agents. The mechanism can be characterized by a neural network with $n$ binary inputs and $n$ outputs. The $n$ binary inputs represent the coalition, and the $n$ outputs represent the constant cost shares.
We use $\vec{b}$ to denote the input vector (tensor) and $\vec{c}$ to denote the output vector. We use $NN$ to denote the neural network, so $NN(\vec{b})=\vec{c}$. There are several constraints on the neural network. \begin{itemize} \item All cost shares are nonnegative: $\vec{c}\ge 0$. \item For input coordinates that are $1$s, the output coordinates should sum up to $1$. For example, if $n=3$ and $\vec{b}=(1,0,1)$ (the coalition is $\{1,3\}$), then $\vec{c}_1+\vec{c}_3=1$ (agents $1$ and $3$ are to share the total cost). \item For input coordinates that are $0$s, the output coordinates are irrelevant. We set these output coordinates to $1$s, which makes it more convenient for the next constraint. \item Every output coordinate is nondecreasing in every input coordinate. This is to ensure that the agents' cost shares are nondecreasing when some other agents are removed. If an agent is removed, then her cost share offer is kept at $1$, which makes it trivially nondecreasing. \end{itemize} All constraints except the last are easy to achieve. We will simply use $OUT(\vec{b})$ as output instead of directly using $NN(\vec{b})$\footnote{This is done by appending additional calculation structures to the output layer.}: \[OUT(\vec{b})=\text{softmax}(NN(\vec{b})-1000(1-\vec{b}))+(1-\vec{b})\] Here, $1000$ is an arbitrary large constant. For example, let $\vec{b}=(1,0,1)$ and $\vec{c}=NN(\vec{b})=(x,y,z)$. We have \[OUT(\vec{b})=\text{softmax}((x,y,z)-1000(0,1,0))+(0,1,0)\] \[=\text{softmax}((x,y-1000,z))+(0,1,0)\] \[=(x',0,z')+(0,1,0)=(x',1,z')\] In the above, $\text{softmax}((x,y-1000,z))$ becomes $(x',0,z')$ with $x',z'\ge 0$ and $x'+z'=1$ because the second coordinate is very small so it (essentially) vanishes after softmax. Softmax always produces nonnegative outputs that sum up to $1$. Finally, the $0$s in the output are flipped to $1$s per our third constraint. The last constraint is enforced using a penalty function.
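The masked-softmax construction can be checked numerically. In this NumPy sketch the "network output" is just a placeholder vector rather than an actual trained network:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))          # numerically stable softmax
    return e / e.sum()

def out(nn_b, b):
    # OUT(b) = softmax(NN(b) - 1000 * (1 - b)) + (1 - b)
    nn_b, b = np.asarray(nn_b, float), np.asarray(b, float)
    return softmax(nn_b - 1000.0 * (1.0 - b)) + (1.0 - b)

c = out([0.3, 0.7, 0.5], [1, 0, 1])
# c[1] ~ 1 (removed agent), c[0] + c[2] ~ 1 (the coalition shares the cost)
```

The masked coordinate vanishes under softmax and is then flipped to $1$, exactly as in the worked example above.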
For $\vec{b}$ and $\vec{b}'$, where $\vec{b}'$ is obtained from $\vec{b}$ by changing one $1$ to $0$, we should have that $OUT(\vec{b})\le OUT(\vec{b}')$, which leads to the following penalty (times a large constant): \[\text{ReLU}(OUT(\vec{b})-OUT(\vec{b}'))\] Another way to enforce the last constraint is to adopt the idea of Sill~\cite{Sill1998:Monotonic}, who proposed a network structure called \emph{monotonic networks}. This idea has been used in~\cite{Golowich2018:Deep}, where the authors also dealt with networks that take binary inputs and must be monotone. However, we do not use this approach because it is incompatible with our design for achieving the other constraints. There are two other reasons for not using the monotonic network structure. One is that it has only two layers. Some argue that having a \emph{deep} model is important for performance in deep learning~\cite{Zhou2017:Deep}. The other is that under our approach, we only need a fully connected network with ReLU penalty, which is highly optimized in state-of-the-art deep learning toolsets. In our experiments, we use a fully connected network with four layers ($100$ nodes each layer) to represent our mechanism. \subsection{Cost Function} For presentation purposes, we focus on maximizing the expected number of consumers. Only slight adjustments are needed for welfare maximization. Previous approaches to mechanism design via neural networks used \emph{static} networks~\cite{Golowich2018:Deep, Duetting2019:Optimal,Shen2019:Automated,Manisha2018:Learning}. Given a sample, the mechanism simulation is done on the network. Our largest unanimous mechanism involves iterative decision making. We can actually model the process via a static network, but the result is not good. The initial offers are $OUT((1,1,\ldots,1))$. The remaining agents after the first round are then $S=\text{sigmoid}(v-OUT((1,1,\ldots,1)))$. Here, $v$ is the type profile sample.
The sigmoid function turns positive values to (approximately) $1$s and negative values to (approximately) $0$s. The next round of offers is then $OUT(S)$. The remaining agents afterwards are then $\text{sigmoid}(v-OUT(S))$. We repeat this $n$ times because the largest unanimous mechanism must terminate after $n$ rounds. The final coalition is a converged state, so even if the mechanism terminates before the $n$-th round, having it repeat $n$ times does not change the result (except for additional numerical errors). Once we have the final coalition $S^f$, we include $\sum_{x\in S^f}x$ (number of consumers) in the cost function.\footnote{This term is multiplied by $-1$, as we typically minimize the cost function.} However, this approach performs \emph{abysmally}, possibly due to the vanishing gradient problem and numerical errors caused by stacking $n$ sigmoid functions. We would like to avoid stacking sigmoid to model iterative decision making (or get rid of sigmoid altogether). We propose an alternative approach, where decisions are simulated off the network using a separate program (\emph{e.g.}, any Python function). The advantage of this approach is that it is now trivial to handle complex decision making. However, experienced neural network practitioners may immediately notice a pitfall. Given a type profile sample $v$ and the current network $NN$, if we simulate the mechanism off the network to obtain the number of consumers $x$, and include $x$ in the cost function, then training will fail completely. This is because $x$ is a constant that carries no gradients at all.\footnote{We use PyTorch in our experiments. An overview of Automated Differentiation in PyTorch is available in~\cite{Paszke2017:Automatic}.} One way to resolve this is to interpret the mechanisms as price-oriented rationing-free (PORF) mechanisms~\cite{Yokoo2003:Characterization}.
That is, if we single out one agent, then her options (outcomes combined with payments) are completely determined by the other agents and she simply has to choose the utility-maximizing option. Under a largest unanimous mechanism, an agent faces only two results: either she belongs to the largest unanimous coalition or not. If an agent is a consumer, then her payment is a constant due to strategy-proofness, and the constant payment is determined by the other agents. Instead of sampling over complete type profiles, we sample over $v_{-i}$ with a random $i$. To better convey our idea, we consider a specific example. Let $i=1$ and $v_{-1}=(\cdot, \frac{1}{2},\frac{1}{2},\frac{1}{4}, 0)$. We assume that the current state of the neural network is exactly the serial cost sharing mechanism. Given a sample, we use a separate program to calculate the following entries. In our experiments, we simply used Python simulation to obtain these entries. \begin{itemize} \item The objective value if $i$ is a consumer ($O_s$). Under the example, if $1$ is a consumer, then the decision must be $4$ agents each paying $\frac{1}{4}$. So the objective value is $O_s=4$. \item The objective value if $i$ is not a consumer ($O_f$). Under the example, if $1$ is not a consumer, then the decision must be $2$ agents each paying $\frac{1}{2}$. So the objective value is $O_f=2$. \item The binary vector that characterizes the coalition that decides $i$'s offer ($\vec{O_b}$). Under the example, the vector is $\vec{O_b}=(1,1,1,1,0)$. \end{itemize} $O_s$, $O_f$, and $\vec{O_b}$ are constants without gradients. We link them together using terms with gradients, which are then included in the cost function: \begin{equation}\label{eq:single1} (1-F(OUT(\vec{O_b})_i))O_s + F(OUT(\vec{O_b})_i)O_f \end{equation} $1-F(OUT(\vec{O_b})_i)$ is the probability that agent $i$ accepts her offer. $F(OUT(\vec{O_b})_i)$ is then the probability that agent $i$ rejects her offer.
$OUT(\vec{O_b})_i$ carries gradients as it is generated by the network. We use the analytical form of $F$, so the above term carries gradients.\footnote{PyTorch has built-in analytical CDFs of many common distributions.} The above approach essentially feeds the prior distribution into the cost function. We also experimented with two other approaches. One does not use the prior distribution. It uses a full profile sample and uses one layer of sigmoid to select between $O_s$ and $O_f$: \begin{equation}\label{eq:sigmoid} \text{sigmoid}(v_i-OUT(\vec{O_b})_i)O_s + \text{sigmoid}(OUT(\vec{O_b})_i-v_i)O_f \end{equation} The other approach is to feed ``even more'' distribution into the cost function. We single out two agents $i$ and $j$. Now there are $4$ options: both win, both lose, only $i$ wins, or only $j$ wins. We still use $F$ to connect these options together. In Section~\ref{sec:experiment}, in one experiment, we show that singling out one agent works the best. In another experiment, we show that even if we do not have the analytical form of $F$, using an analytical approximation also enables successful training. \subsection{Supervision as Initialization} We introduce an additional supervision step at the beginning of the training process as a systematic way of initialization. We first train the neural network to mimic an existing manual mechanism, and then leave it to gradient descent. We considered three different manual mechanisms. One is the serial cost sharing mechanism. The other two are based on two different heuristics: \begin{definition}[One Directional Dynamic Programming] We make offers to the agents one by one. Every agent faces only one offer. The offer is based on how many agents are left, the objective value accumulated so far by the previous agents, and how much money still needs to be raised. If an agent rejects an offer, then she is removed from the system. At the end of the algorithm, we check whether we have collected $1$.
If so, the project is built and all agents not removed are consumers. This mechanism belongs to the largest unanimous mechanism family. This mechanism is not optimal because we cannot go back and increase an agent's offer. \end{definition} \begin{definition}[Myopic Mechanism] For coalition size $k$, we treat it as a nonexcludable public project problem with $k$ agents. The offers are calculated based on the dynamic program proposed at the end of Subsection~\ref{sub:nonexcludable}, which computes the optimal offers for the nonexcludable model. This is called the myopic mechanism, because it does not care about the payoffs generated in future rounds. This mechanism is not necessarily feasible, because the agents' offers are not necessarily nondecreasing when some agents are removed. \end{definition} \section{Experiments}\label{sec:experiment} The experiments are conducted on a machine with an Intel i5-8300H CPU.\footnote{We experimented with both PyTorch and Tensorflow (eager mode). The PyTorch version runs significantly faster, possibly because we are dealing with dynamic graphs.} The largest experiment with $10$ agents takes about $3$ hours. Smaller scale experiments take only about $15$ minutes. In our experiments, unless otherwise specified, the distribution considered is two-peak $(0.15,0.1,0.85,0.1,0.5)$. The x-axis shows the number of training rounds. Each round involves $5$ batches of $128$ samples ($640$ samples each round). Unless otherwise specified, the y-axis shows the expected number of \textbf{non}consumers (so lower values represent better performance). Random initializations are based on Xavier normal with bias $0.1$. Figure~\ref{fig:1} (Left) shows the performance comparison of three different ways for constructing the cost function: using one layer of sigmoid (without using distribution) based on~\eqref{eq:sigmoid}, singling out one agent based on~\eqref{eq:single1}, and singling out two agents. All trials start from random initializations.
In this experiment, singling out one agent works the best. The sigmoid-based approach is capable of moving the parameters, but its result is noticeably worse. Singling out two agents has almost identical performance to singling out one agent, but it is slower in terms of time per training step. Figure~\ref{fig:1} (Right) considers the Beta $(0.1,0.1)$ distribution. We use Kumaraswamy $(0.1,0.354)$'s analytical CDF to approximate the CDF of Beta $(0.1,0.1)$. The experiments show that if we start from random initializations (Random) or start by supervision to serial cost sharing (SCS), then the cost function gets stuck. Supervision to one directional dynamic programming (DP) and the Myopic mechanism (Myopic) leads to better mechanisms. So in this example scenario, approximating the CDF is useful when the analytical CDF is not available. It also shows that supervision to manual mechanisms works better than random initialization in this case. \begin{figure} \caption{Effect of Distribution Info on Training} \label{fig:1} \end{figure} Figure~\ref{fig:2} (Top-Left $n=3$, Top-Right $n=5$, Bottom-Left $n=10$) shows the performance comparison of supervision to different manual mechanisms. For $n=3$, supervision to DP performs the best. Random initialization is able to catch up but not completely close the gap. For $n=5$, random initialization caught up and actually became the best-performing option. The Myopic curve first increases and then decreases because it needs to first fix the constraint violations. For $n=10$, supervision to DP significantly outperforms the others. Random initialization closes the gap with regard to serial cost sharing, but it then gets stuck. Even though it looks like the DP curve is flat, it is actually improving, albeit very slowly. A magnified version is shown in Figure~\ref{fig:2} (Bottom-Right).
\begin{figure} \caption{Supervision to Different Manual Mechanisms} \label{fig:2} \end{figure} Figure~\ref{fig:3} shows two experiments on maximizing expected social welfare (y-axis) under two-peak $(0.2,0.1,0.6,0.1,0.5)$. For $n=3$, supervision to DP leads to the best result. For $n=5$, SCS is actually the best mechanism we can find (the cost function barely moves). It should be noted that all manual mechanisms \emph{before training} have very similar welfares: $0.7517$ (DP), $0.7897$ (SCS), $0.7719$ (Myopic). Even random initialization before training has a welfare of $0.7648$. It could be that there is just little room for improvement here. \begin{figure} \caption{Maximizing Social Welfare} \label{fig:3} \end{figure} \end{document}
\begin{document} \preprint{CALT-68-2066, QUIC-96-001} \draft \title{Pasting quantum codes} \author{Daniel Gottesman\thanks{[email protected]}} \address{California Institute of Technology, Pasadena, CA 91125} \maketitle \begin{abstract} I describe a method for pasting together certain quantum error-correcting codes that correct one error to make a single larger one-error quantum code. I show how to construct codes encoding 7 qubits in 13 qubits using the method, as well as 15 qubits in 21 qubits and all the other ``perfect'' codes. \end{abstract} \pacs{03.65.Bz,89.80.+h} Quantum computers have a great deal of promise, but they are likely to be inherently much noisier than classical computers. One approach to dealing with noise and decoherence in quantum computers and quantum communications is to encode the data using a quantum error-correcting code. A number of such codes and classes of codes are known \cite{shor1,calderbank1,steane1,laflamme,bennett,gottesman,calderbank2,steane2}. However, the only known method of automatically generating such codes is to find a suitable classical error-correcting code and convert it into a quantum code~\cite{calderbank1,steane1}. This method is limited to producing less efficient codes (i.e., with a smaller ratio of encoded qubits to total qubits) than dedicated quantum codes, so a method of automatically producing highly efficient quantum codes is desirable. I will present here a method to create one-error quantum codes from smaller ones with almost no effort. The conditions for a set of $n$-qubit states $|\psi_1 \rangle, \ldots, |\psi_{2^k} \rangle$ to form an error-correcting code for the errors $E_a$ are \begin{equation} \langle \psi_i | E_a^\dagger E_b | \psi_j \rangle = C_{ab} \delta_{ij}, \label{conditions} \end{equation} where $C_{ab}$ is independent of $i$ and $j$~\cite{bennett,knill}. A code with $2^k$ states encodes $k$ qubits.
Typically, a code will be designed to correct all possible errors affecting less than or equal to $t$ qubits. The basis errors $E_a$ are usually tensor products of \begin{equation} I = \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right),\ X_i = \left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right),\ Y_i = \left( \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right),\ Z_i = \left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right), \end{equation} where the subscript $i$ refers to the qubit which the error acts on. If the matrix $C_{ab}$ has maximum rank, the code is called a {\em nondegenerate} code. If $C_{ab}$ has determinant 0, it is a {\em degenerate} code. Most known codes are nondegenerate codes (in fact, for most known codes, $C_{ab} = \delta_{ab}$). For a nondegenerate code, each error acting on each code word must produce a linearly independent state. In order to have enough room in the Hilbert space for all of these states, there is a maximum possible efficiency for the code, known as the quantum Hamming bound~\cite{ekert}. For codes to correct one error, the quantum Hamming bound takes the form \begin{equation} (3n+1) 2^k \leq 2^n. \end{equation} If equality holds, the code is known as a {\em perfect} code. For a perfect code, $3n+1$ must be a power of 2. Since $4^j-1$ is divisible by 3, while $2^{2j+1}-1$ is not, there are possible one-error perfect codes for $n=(4^j - 1)/3$. The perfect codes have $n-k = 2j$. The two smallest such codes are for $n=5, 21$, but there is a full infinite class of them. Multiple-error perfect codes are much rarer. There are known $n=5$ codes~\cite{laflamme,bennett}, but until now, it was unknown if the other perfect codes existed. Finding a set of states that satisfies condition~(\ref{conditions}) without guidance is difficult at best. In~\cite{gottesman} and~\cite{calderbank2}, more powerful group theoretic methods are presented that reduce the task to an admittedly still difficult combinatorial problem. 
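As a quick arithmetic check (mine, not the paper's), the perfect-code parameters $n=(4^j-1)/3$ with $n-k=2j$ do saturate the quantum Hamming bound for every $j$:

```python
# Perfect one-error codes: n = (4^j - 1)/3 qubits with n - k = 2j.
# Saturating the quantum Hamming bound means (3n + 1) * 2^k == 2^n.
params = []
for j in range(2, 8):
    n = (4**j - 1) // 3
    k = n - 2 * j
    assert (3 * n + 1) * 2**k == 2**n  # equality, not just <=
    params.append((n, k))
print(params[:2])  # the two smallest: [(5, 1), (21, 15)]
```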
Using the terminology of~\cite{gottesman}, a quantum error-correcting code is defined in terms of its {\em stabilizer} ${\cal H}$, which is the set of operators $M$ formed from products of $X_i$, $Y_i$, and $Z_i$ that fix all of the states in the coding space $T$. $T$ forms the joint $+1$-eigenspace of the operators in ${\cal H}$. In order for this to be non-empty, the elements of ${\cal H}$ must all commute with each other and square to $+1$. If ${\cal H}$ is generated by $a$~elements, the code encodes $n-a$~qubits. If an error $E$ anticommutes with $M \in {\cal H}$, when $E$ acts on a state $|\psi \rangle$ in $T$, it will take it from the $+1$-eigenspace of $M$ to the $-1$-eigenspace, where we can recognize it as an incorrect state, and hopefully correct it. Since all products of $X_i$, $Y_i$, and $Z_i$ commute or anticommute, we can define functions $f_M$ and $f$: \begin{equation} f_M (E) = \left\{ \begin{array}{ll} 0 & \mbox{if $[M,E]=0$} \\ 1 & \mbox{if $\{M,E\}=0$} \end{array} \right. \end{equation} \begin{equation} f (E) = \left(f_{M_1}(E), f_{M_2}(E), \ldots f_{M_a} (E) \right), \end{equation} where $M_1, \ldots, M_a$ are the generators of ${\cal H}$. Given two errors $E$ and $F$, if $f(E) \neq f(F)$, then $E |\psi \rangle$ and $F |\psi \rangle$ are in different eigenspaces for some element of ${\cal H}$, so they are orthogonal, and we can distinguish them and correct them. Conversely, if $f(E) = f(F)$, then we cannot properly distinguish $E$ and $F$, which will cause a problem unless $E |\psi \rangle$ is actually equal to $F |\psi \rangle$, giving us a degenerate code. In this case, $F^\dagger E |\psi \rangle = |\psi \rangle$, so $F^\dagger E \in {\cal H}$. For a nondegenerate code, all of the values $f(E)$ must therefore be distinct, allowing $f(E)$ to serve as the error syndrome. Note that $f(I)=0$, so $f(E)$ must be nonzero for nontrivial $E$. In~\cite{gottesman}, I gave a construction for one-error codes with $n=2^j$, $k=n-j-2$. 
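The functions $f_M$ and $f$ are easy to compute in the binary symplectic representation: record in bit vectors $(x,z)$ where a Pauli product has an $X$/$Y$ component and where it has a $Z$/$Y$ component; two products anticommute exactly when $x_M\cdot z_E + z_M\cdot x_E$ is odd. A minimal sketch (the helper names are my own):

```python
def to_xz(s):
    """Map a Pauli string like 'XIZY' to bit vectors (x, z)."""
    x = [1 if c in 'XY' else 0 for c in s]
    z = [1 if c in 'ZY' else 0 for c in s]
    return x, z

def f_M(m, e):
    """f_M(E): 0 if M and E commute, 1 if they anticommute."""
    xm, zm = to_xz(m)
    xe, ze = to_xz(e)
    return (sum(a * b for a, b in zip(xm, ze))
            + sum(a * b for a, b in zip(zm, xe))) % 2

def f(generators, e):
    """Error syndrome of E with respect to the stabilizer generators."""
    return tuple(f_M(m, e) for m in generators)

# Single-qubit sanity check: X and Z anticommute, X commutes with itself.
print(f_M('X', 'Z'), f_M('X', 'X'))  # 1 0
```

With $M_1 = XX$ and $M_2 = ZZ$, an $X$ error on one qubit gets syndrome $(0,1)$, matching the text's observation that syndromes of $X_i$ errors start with $01$ when $M_1$ is the product of all $X_i$ and $M_2$ the product of all $Z_i$.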
For all of these codes, the first two generators have the form $M_1 = X_1 \ldots X_n$ and $M_2 = Z_1 \ldots Z_n$. Therefore, all of the error syndromes for these codes start with $01$ for an $X_i$ error, with $10$ for a $Z_i$ error, and $11$ for a $Y_i$ error. None of the error syndromes beginning with $00$ are used. Therefore, we can add more qubits and thus more possible errors to the code, so long as all the error syndromes for the new errors begin with $00$. The new error syndromes will all have to be different, of course, which will necessitate extending most of the generators of ${\cal H}$ to have nontrivial action on the new qubits. We want the new errors to have $f_{M_1} (E) = f_{M_2} (E) = 0$, so we will leave $M_1$ and $M_2$ alone, letting them act trivially on the new qubits. If we extend the remaining generators by pasting on the generators of a nondegenerate code with two fewer generators, all of the new error syndromes are guaranteed to be distinct, since the smaller code must distinguish them to be a good code. See figure~\ref{scheme} for a schematic picture of this process. \begin{figure} \caption{Pasting together the generators for two codes} \label{scheme} \end{figure} Just distinguishing all errors is not sufficient for ${\cal H}$ to define a code. It must also be Abelian and all elements must square to 1. However, each new generator $M=NP$, where $N$ and $P$ are generators from existing codes. They must individually square to 1, so the product also squares to 1. Similarly, another generator $M' = N' P'$ commutes with $M$: $N N' = N' N$ and $P P' = P' P$. The $N$s and $P$s act on different qubits, and therefore commute. Thus, \begin{equation} M M' = (N P) (N' P') = (N' P') (N P) = M' M. \end{equation} Therefore, ${\cal H}$ formed by this method will always form a new error-correcting code. 
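The same bookkeeping shows why pasting preserves commutation: the $N$s and $P$s act on disjoint blocks of qubits, so the anticommutation count for pasted generators splits as a (mod 2) sum over the two blocks. A small self-contained check (the example strings are my own):

```python
def anticommute(s, t):
    """1 if the Pauli strings s, t anticommute, 0 if they commute."""
    x = lambda u: [c in 'XY' for c in u]
    z = lambda u: [c in 'ZY' for c in u]
    return (sum(a and b for a, b in zip(x(s), z(t)))
            + sum(a and b for a, b in zip(z(s), x(t)))) % 2

# N, N' commute on the old qubits; P, P' commute on the new qubits.
N, Np = 'XX', 'ZZ'
P, Pp = 'XZ', 'ZX'
assert anticommute(N, Np) == 0 and anticommute(P, Pp) == 0

# Pasted generators M = N P and M' = N' P' act on disjoint blocks, so
# the commutation form is the (mod 2) sum over the blocks: still 0.
M, Mp = N + P, Np + Pp
assert anticommute(M, Mp) == 0
print('pasted generators commute')
```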
The smallest code we can create this way from existing codes is given by pasting a 5-qubit code~\cite{laflamme,bennett} onto an 8-qubit code~\cite{gottesman,calderbank2,steane2}. Since the 5-qubit code has four generators, while the 8-qubit code has only five, we must first augment the 8-qubit code by adding a trivial sixth generator. The resulting stabilizer (using the stabilizer from \cite{gottesman} for the 8-qubit code and from \cite{calderbank2} for the 5-qubit code) is given in table~\ref{code13}. Since the stabilizer has six generators, this code encodes seven qubits in 13 qubits. This is the best code on 13 qubits allowed by the quantum Hamming bound. \begin{table} \begin{tabular}{|l|ccccccccccccc|} $M_1$ & $X_1$ & $X_2$ & $X_3$ & $X_4$ & $X_5$ & $X_6$ & $X_7$ & $X_8$ & $ I $ & $ I $ & $ I $ & $ I $ & $ I $\\ $M_2$ & $Z_1$ & $Z_2$ & $Z_3$ & $Z_4$ & $Z_5$ & $Z_6$ & $Z_7$ & $Z_8$ & $ I $ & $ I $ & $ I $ & $ I $ & $ I $\\ $M_3$ & $X_1$ & $ I $ & $X_3$ & $ I $ & $Z_5$ & $Y_6$ & $Z_7$ & $Y_8$ & $X_9$ & $X_{10}$ & $Z_{11}$ & $ I $ & $Z_{13}$\\ $M_4$ & $X_1$ & $ I $ & $Y_3$ & $Z_4$ & $X_5$ & $ I $ & $Y_7$ & $Z_8$ & $Z_9$ & $X_{10}$ & $X_{11}$ & $Z_{12}$ & $ I $\\ $M_5$ & $X_1$ & $Z_2$ & $ I $ & $Y_4$ & $ I $ & $Y_6$ & $X_7$ & $Z_8$ & $ I $ & $Z_{10}$ & $X_{11}$ & $X_{12}$ & $Z_{13}$\\ $M_6$ & $ I $ & $ I $ & $ I $ & $ I $ & $ I $ & $ I $ & $ I $ & $ I $ & $Z_9$ & $ I $ & $Z_{11}$ & $X_{12}$ & $X_{13}$\\ \end{tabular} \caption{The stabilizer for $n=13$ formed by pasting an $n=5$ code to an $n=8$ code.} \label{code13} \end{table} We can also paste a 5-qubit code to the 16-qubit code of the class given in \cite{gottesman}. Since the 16-qubit code already has six generators, no augmentation is needed. This produces a 21-qubit code encoding 15 qubits. This is the second perfect code. 
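The commutativity requirement for table~\ref{code13} can be verified mechanically: two Pauli strings commute iff they differ in an even number of positions where both are non-identity. A brute-force check on the transcribed generators (transcription and helper names my own):

```python
from itertools import combinations

# The six generators of the 13-qubit stabilizer, transcribed from the table.
gens = [
    'XXXXXXXXIIIII',  # M1
    'ZZZZZZZZIIIII',  # M2
    'XIXIZYZYXXZIZ',  # M3
    'XIYZXIYZZXXZI',  # M4
    'XZIYIYXZIZXXZ',  # M5
    'IIIIIIIIZIZXX',  # M6
]

def commute(s, t):
    """Pauli strings commute iff they hold different non-identity
    Paulis in an even number of positions."""
    clashes = sum(1 for a, b in zip(s, t)
                  if a != 'I' and b != 'I' and a != b)
    return clashes % 2 == 0

assert all(commute(s, t) for s, t in combinations(gens, 2))
print('all 6 generators commute pairwise')
```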
In general, if we paste the $(j-1)$th perfect code (with $n=(4^j-1)/3$ and $2j$ generators) to an $n=2^{2j}$ code, we get a code with $2j+2$ generators on $4^j + (4^j-1)/3 = (4^{j+1}-1)/3$ qubits. This is therefore the $j$th perfect code, and we can produce all the perfect codes using this construction. Pasting other combinations of codes is also possible, but not all combinations can be used. The larger code must always have a generator formed from the product of all $X_i$s and a generator equal to the product of all $Z_i$s, or some equivalent set of generators that can be used to distinguish errors on the original set of qubits from those on the new qubits added after the pasting operation. Both codes must be nondegenerate, because when $F^\dagger E$ is in ${\cal H}$ before the pasting, it is unlikely to remain in ${\cal H}$ after the pasting operation, which lengthens most of the generators. Also, the smaller code must have exactly two fewer generators than the larger code. This requirement can be largely circumvented, however, by adding on identity generators to either the larger or smaller code, as seen in the above construction of a 13-qubit code. In addition, this method does not work at all on codes to correct two or more errors. Suppose we used a similar method to distinguish one- or two-qubit errors on the original qubits from those on the new qubits. A new two-qubit error formed of one error on the original qubits and one on the new qubits would look like an error on the original qubits, since it does actually affect them, and would not typically be distinguishable from errors on the original qubits. Of course, in some special cases, the new code might distinguish such errors, but we cannot be sure that it will based purely on the pasting method described here. \end{document}
\begin{definition}[Definition:Die/Historical Note] Dice have been around for thousands of years. An early reference to the design of a die can be found in W.R. Paton's $1918$ translation of \textit{The Greek Anthology Book XIV}: ``The numbers on a die run so: six one, five two, three four.'' \end{definition}
\begin{document} \begin{center} \textbf{Metric and Edge Metric Dimension of Zigzag Edge Coronoid Fused with Starphene} \end{center} \begin{center} Sunny Kumar Sharma$^{1,a}$, Vijay Kumar Bhat$^{1,}$$^{\ast}$, Hassan Raza$^{2,b}$, and Karnika Sharma$^{1,c}$ \end{center} $^{1}$School of Mathematics, Shri Mata Vaishno Devi University, Katra-$182320$, J \& K, India.\\ $^{2}$Business School, University of Shanghai for Science and Technology, Shanghai 200093, China.\\ $^{a}$[email protected], $^{\ast}$[email protected], $^{b}$hassan\[email protected],\\ $^{c}$[email protected]\\\\ \textbf{Abstract} Let $\Gamma=(V,E)$ be a simple connected graph. $d(\alpha,\epsilon)=\min\{d(\alpha, w), d(\alpha, d)\}$ computes the distance between a vertex $\alpha \in V(\Gamma)$ and an edge $\epsilon=wd\in E(\Gamma)$. A single vertex $\alpha$ is said to recognize (resolve) two different edges $\epsilon_{1}$ and $\epsilon_{2}$ from $E(\Gamma)$ if $d(\alpha, \epsilon_{2})\neq d(\alpha, \epsilon_{1})$. A subset of distinct ordered vertices $U_{E}\subseteq V(\Gamma)$ is said to be an edge metric generator for $\Gamma$ if every pair of distinct edges from $\Gamma$ is recognized by some element of $U_{E}$. An edge metric generator with a minimum number of elements is called an edge metric basis for $\Gamma$, and the cardinality of this edge metric basis is called the edge metric dimension of $\Gamma$, denoted by $edim(\Gamma)$. The concept of studying chemical structures using graph theory terminologies is both appealing and practical. It enables chemical researchers to more precisely and easily examine various chemical topologies and networks. In this article, we investigate a fascinating cluster of organic chemistry as a result of this motivation. 
We consider a zigzag edge coronoid fused with starphene and find its minimum vertex and edge metric generators.\\\\ \textbf{MSC(2020)}: 05C12, 05C90.\\\\ \textbf{Keywords:} Resolving set, starphene, hollow coronoid structure, metric dimension, independent set \section{Introduction} The theory of chemical graphs is the part of graph theory that deals with chemistry and mathematics. The study of various structures related to chemicals from the perspective of graphs is the subject of chemical graph theory. Chemical structures that are complicated and large in size are difficult to examine in their natural state. Chemical graph theory is then employed to make these complex chemical structures understandable. The molecular graph is a graph of a chemical structure in which the atoms are the vertices and the edges reflect the bonds between the atoms.\\ The physical attributes of a chemical structure are studied using a unique mathematical representation in which every atom (vertex) has its own identification or position within the given chemical structure. A few atoms (vertices) are chosen for this unique identification of the whole vertex set so that the set of atoms has a unique location with respect to the selected vertices. This idea is known as a metric basis in graph theory \cite{ps} and a resolving set (metric generator) in applied graph theory \cite{fr}. If any element of a metric generator fails (crashes), the entire system can be shut down; to address such problems, the concept of fault tolerance in metric generators was introduced by Hernando et al. \cite{hn}.\\ Next, one can think that, rather than obtaining a unique atomic position, bonds could be utilized to shape the given structure; to address this, Kelenc et al. \cite{emd} proposed and initiated the study of a new variant of metric dimension in non-trivial connected graphs that focuses on uniquely identifying the graph\textquotesingle s edges, called the edge metric dimension (EMD). 
Similar to the concept of fault tolerance in resolving sets, the idea of fault tolerance in edge resolving sets has also been introduced by Liu et al. \cite{lft}.\\ The researchers are motivated by the fact that the metric dimension has a variety of practical applications in everyday life and so it has been extensively investigated. Metric dimension is utilized in a wide range of fields of sciences, including robot navigation \cite{krr}, geographical routing protocols \cite{pil}, connected joints in network and chemistry \cite{cee}, telecommunications \cite{zf}, combinatorial optimization \cite{co}, network discovery and verification \cite{zf} etc. NP-hardness and computational complexity for the resolvability parameters are addressed in \cite{1,2}.\\ An organic compound with the chemical formula $C_{6}H_{6}$ is known as benzene. Many commercial, research and industrial operations use it as a solvent. Benzene is a key component of gasoline and can be found in crude oil. Dyes, detergents, resins, plastics, rubber lubricants, medicines, pesticides, and synthetic fibers are all made from it. When benzene rings are linked together, they form larger polycyclic aromatic compounds known as polyacenes.\\ The word coronoid was coined by Brunvoll et al. \cite{edd}, due to its possible relationship with benzenoid. A coronoid is a benzenoid that has a hole in the middle. Coronoid is a polyhex system that has its origin in organic chemistry. The zigzag-edge coronoids, denoted by $HC_{a,b,c}$, as shown in Fig. 1(i), can be considered as a structure obtained by fusing six linear polyacenes segments into a closed loop. This structure is also known as a hollow coronoid \cite{alikom}. Next, starphenes, denoted by $SP_{a,b,c}$, are the two-dimensional polyaromatic hydrocarbons with three polyacene arms joined by a single benzene ring, as shown in Fig. 1(ii). They can be utilized as logic gates in single-molecule electronics. 
Furthermore, as a type of 2D polyaromatic hydrocarbon, starphenes could be a promising material for organic electronics, such as organic light-emitting diodes (OLEDs) or organic field-effect transistors \cite{111}. A composite benzenoid obtained by fusing a zigzag-edge coronoid $HC_{a,b,c}$ with a starphene $SP_{a,b,c}$ is depicted in Fig. 2. We denote this system by $FCS_{a,b,c}$.\\ The metric dimension has been investigated for numerous chemical structures because of the several applications of this parameter in the chemical sciences. \cite{17n} discusses the vertex resolvability of $VC_{5}C_{7}$ and H-Naphthalenic nanotubes, \cite{41n} determines the minimum resolving sets for silicate star networks, \cite{40n} sets upper bounds for the minimum resolving sets of the cellulose network, and \cite{16n} discusses the metric dimension of the 2D lattice of Boron nanotubes (Alpha). Similarly, other variants of the metric dimension, such as the EMD, the fault-tolerant metric dimension (FTMD), the fault-tolerant edge metric dimension (FTEMD), etc., have been studied for different graph families and chemical structures.\\ Azeem and Nadeem \cite{an} studied the metric dimension, EMD, FTMD, and FTEMD for polycyclic aromatic hydrocarbons. Sharma and Bhat \cite{sv, ssv} studied the metric dimension and EMD for some convex polytope graphs. Koam et al. \cite{alikom} studied the metric dimension and FTMD of hollow coronoid structures. The metric dimension and these recently introduced concepts have been studied by many authors for different graph families, for instance, path graphs, cycle graphs, prism graphs, wheel-related graphs, tadpole graphs, cycles with a chord, kayak paddle graphs, etc. But there are still several chemical graphs, such as $FCS_{a,b,c}$, for which the metric dimension and the EMD have not been found yet. Thus, this paper aims to compute the metric dimension and the EMD of $FCS_{a,b,c}$.\\ The present paper is organized as follows. In Sect. 
2, theory and concepts related to the metric dimension, EMD, and independence in their respective metric generators are discussed. In Sect. 3 we study the metric dimension and independence in the vertex metric generator of $FCS_{a,b,c}$. Sect. 4 gives the edge metric dimension of $FCS_{a,b,c}$. Finally, the conclusion and future work of this paper are presented in Section 5. \section{Preliminaries} In this section, we discuss some basic concepts, definitions, and existing results related to the metric dimension, edge metric dimension, and independent (vertex and edge) metric generators of graphs.\\\\ Suppose $\Gamma=(V,E)$ is a non-trivial, connected, simple, and finite graph with the edge set $E(\Gamma)$ and the vertex set $V(\Gamma)$. We write $E$ instead of $E(\Gamma)$ and $V$ instead of $V(\Gamma)$ throughout the manuscript when there is no scope for ambiguity. The topological distance (geodesic) between two vertices $a$ and $w$ in $\Gamma$, denoted by $d(a,w)$, is the length of a shortest $a-w$ path between the vertices $a$ and $w$ in $\Gamma$.\\\\ \textbf{Degree of a vertex:} The number of edges that are incident to a vertex of a graph $\Gamma$ is known as its degree (or valency) and is denoted by $d_{\alpha}$. The minimum degree and the maximum degree of $\Gamma$ are denoted by $\delta(\Gamma)$ and $\Delta(\Gamma)$, respectively.\\\\ \textbf{Independent set:} \cite{sv} An independent set is a set of vertices in $\Gamma$, in which no two vertices are adjacent.\\\\ \textbf{Metric Dimension:} \cite{ps} If, for three vertices $\alpha$, $\beta$, $\gamma$ $\in V(\Gamma)$, we have $d(\alpha,\beta)\neq d(\alpha,\gamma)$, then the vertex $\alpha$ is said to recognize (resolve or distinguish) the pair of vertices $\beta$, $\gamma$ $(\beta\neq \gamma)$ in $V(\Gamma)$. 
If this condition of resolvability is fulfilled by some vertices comprising a subset $U \subseteq V(\Gamma)$, i.e., every pair of distinct vertices in the given undirected graph $\Gamma$ is resolved by at least one element of $U$, then $U$ is said to be a $metric$ $generator$ ($resolving$ $set$) for $\Gamma$. The $metric$ $dimension$ of the given graph $\Gamma$ is the minimum cardinality of a metric generator $U$, and is usually denoted by $dim(\Gamma)$. The metric generator $U$ with minimum cardinality is the metric basis for $\Gamma$. For an ordered subset of vertices $U=\{a_{1}, a_{2}, a_{3},...,a_{k}\}$, the $k$-code (representation or coordinate) of a vertex $j$ in $V(\Gamma)$ is; \begin{center} \begin{eqnarray*} \gamma(j|U)&=&(d(a_{1},j),d(a_{2},j),d(a_{3},j),...,d(a_{k},j)) \end{eqnarray*} \end{center} Then we say that the set $U$ is a metric generator for $\Gamma$ if $\gamma(a|U)\neq \gamma(w|U)$, for any pair of vertices $a,w \in V(\Gamma)$ with $a\neq w$.\\\\ \textbf{Independent metric generator (IMG):} \cite{ssv} A set of distinct ordered vertices $U$ in $\Gamma$ is said to be an IMG for $\Gamma$ if $U$ is both independent and a metric generator.\\\\ \textbf{Edge Metric Dimension:} \cite{emd} The topological distance between a vertex $a$ and an edge $\epsilon=bw$ is given as $d(a,\epsilon)=\min\{d(a,w), d(a,b)\}$. The vertex $\alpha$ is said to recognize (resolve or distinguish) the pair of edges $\epsilon_{1}$, $\epsilon_{2}$ (with $\epsilon_{1}\neq \epsilon_{2}$) in $E(\Gamma)$ if $d(\alpha,\epsilon_{1})\neq d(\alpha,\epsilon_{2})$. If this condition of edge resolvability is fulfilled by some vertices comprising a subset $U_{E} \subseteq V(\Gamma)$, i.e., every pair of distinct edges in the given undirected graph $\Gamma$ is resolved by at least one element of $U_{E}$, then $U_{E}$ is said to be an $edge$ $metric$ $generator$ (EMG) for $\Gamma$. The $edge$ $metric$ $dimension$ of the graph $\Gamma$ is the minimum cardinality of an EMG $U_{E}$, and is usually denoted by $edim(\Gamma)$. 
The edge metric generator (EMG) $U_{E}$ with minimum cardinality is the edge metric basis (EMB) for $\Gamma$. For an ordered subset of vertices $U_{E}=\{b_{1}, b_{2}, b_{3},...,b_{k}\}$, the $k$-edge code (coordinate) of an edge $\epsilon$ in $E(\Gamma)$ is; \begin{center} \begin{eqnarray*} \gamma_{E}(\epsilon|U_{E})&=&(d(b_{1},\epsilon),d(b_{2},\epsilon),d(b_{3},\epsilon),...,d(b_{k},\epsilon)) \end{eqnarray*} \end{center} Then we say that the set $U_{E}$ is an EMG for $\Gamma$ if $\gamma_{E}(\epsilon_{1}|U_{E})\neq \gamma_{E}(\epsilon_{2}|U_{E})$, for any pair of edges $\epsilon_{1}, \epsilon_{2} \in E(\Gamma)$ with $\epsilon_{1} \neq \epsilon_{2}$.\\\\ \textbf{Independent edge metric generator (IEMG):} \cite{ssv} A set of distinct ordered vertices $U^{i}_{E}$ in $\Gamma$ is said to be an IEMG for $\Gamma$ if $U^{i}_{E}$ is both independent and an edge metric generator.\\\\ \begin{center} \begin{figure} \caption{$HC_{a,b,c}$ and $SP_{a,b,c}$} \label{p2} \end{figure} \end{center} $P_{n}$, $C_{n}$, and $K_{n}$ denote the path graph, the cycle graph, and the complete graph on $n$ vertices, respectively. The following results are helpful in obtaining the metric and the edge metric dimension of a graph. \begin{prop} \cite{emd} For $n \geq3$, we have $dim(P_{n})= edim(P_{n})=1$, $dim(C_{n})= edim(C_{n})=2$, and $dim(K_{n})= edim(K_{n})=n-1$. \end{prop} \section{Metric Dimension of $FCS_{a,b,c}$} In this section, we obtain the metric dimension and an IMG for $FCS_{a,b,c}$.\\\\ The fused hollow coronoid with starphene structure $FCS_{a,b,c}$ comprises six sides, of which three sides $(a,b,c)$ are symmetric to the other three sides $(a,b,c)$, as shown in Fig. 2. This means that $FCS_{a,b,c}$ has three linear polyacene segments consisting of $a$, $b$, and $c$ benzene rings. It consists of $3a+3b+3c-11$ faces having six sides, two faces having $4a+2b+2c-18$ sides, a face having $4b+4c-18$ sides, and a face having $4a+4b+4c-6$ sides. 
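The values in the proposition can be confirmed by exhaustive search on small instances; the following brute-force sketch (adjacency lists, BFS distances, and helper names are my own, not from the paper) computes both $dim$ and $edim$:

```python
from itertools import combinations
from collections import deque

def bfs_dist(adj, s):
    """Single-source shortest-path distances in an unweighted graph."""
    d = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def dim_and_edim(adj):
    """Smallest sizes of a metric generator and an edge metric generator."""
    V = sorted(adj)
    E = [(u, v) for u in V for v in adj[u] if u < v]
    dist = {u: bfs_dist(adj, u) for u in V}
    d_ve = lambda a, e: min(dist[a][e[0]], dist[a][e[1]])
    dim = edim = None
    for r in range(1, len(V) + 1):
        for U in combinations(V, r):
            codes = [tuple(dist[a][v] for a in U) for v in V]
            if dim is None and len(set(codes)) == len(V):
                dim = r
            ecodes = [tuple(d_ve(a, e) for a in U) for e in E]
            if edim is None and len(set(ecodes)) == len(E):
                edim = r
        if dim is not None and edim is not None:
            return dim, edim

path5 = {i: [j for j in (i - 1, i + 1) if 0 <= j < 5] for i in range(5)}
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
k4 = {i: [j for j in range(4) if j != i] for i in range(4)}
print(dim_and_edim(path5), dim_and_edim(cycle6), dim_and_edim(k4))
# expected: (1, 1) (2, 2) (3, 3)
```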
$FCS_{a,b,c}$ has $6(a+b+c-6)$ vertices of degree two and $6(a+b+c-3)$ vertices of degree three. From this, we find that $\delta(FCS_{a,b,c})=2$ and $\Delta(FCS_{a,b,c})=3$. The vertex set and the edge set of $FCS_{a,b,c}$ are denoted by $V(FCS_{a,b,c})$ and $E(FCS_{a,b,c})$, respectively. Moreover, the cardinalities of the edge and vertex sets of $FCS_{a,b,c}$ are given by $|E(FCS_{a,b,c})|=3(5a+5b+5c-21)$ and $|V(FCS_{a,b,c})|=6(2a+2b+2c-9)$, respectively. The edge and vertex sets of $FCS_{a,b,c}$ are described as follows: $V(FCS_{a,b,c})=\{p_{1,d}, p_{2,d}|1\leq d\leq 2a-1\}\cup\{q_{1,d}, q_{2,d}|1\leq d\leq 2c-1\}\cup\{r_{1,d}, r_{2,d}|1\leq d\leq 2b-1\}\cup \{s_{1,d}, s_{2,d}|1\leq d\leq 2a-3\}\cup \{u_{1,d}, u_{2,d}|1\leq d\leq 2b-3\}\cup\{t_{1,d}, t_{2,d}|1\leq d\leq 2c-3\}\cup \{p_{3,d}, s_{3,d}|1\leq d\leq 2a-5\}\cup \{q_{3,d}, t_{3,d}|1\leq d\leq 2c-5\}\cup \{r_{3,d}, u_{3,d}|1\leq d\leq 2b-5\}$ \\\\ and\\\\ $E(FCS_{a,b,c})=\{p_{1,d}p_{1,d+1}, p_{2,d}p_{2,d+1}|1\leq d\leq 2a-2\}\cup\{q_{1,d}q_{1,d+1}, q_{2,d}q_{2,d+1}|1\leq d\leq 2c-2\}\cup\{r_{1,d}r_{1,d+1}, r_{2,d}r_{2,d+1}|1\leq d\leq 2b-2\}\cup \{s_{1,d}s_{1,d+1}, s_{2,d}s_{2,d+1}|1\leq d\leq 2a-4\}\cup \{u_{1,d}u_{1,d+1}, t_{2,d}t_{2,d+1}|1\leq d\leq 2b-4\}\cup\{t_{1,d}t_{1,d+1}, u_{2,d}u_{2,d+1}|1\leq d\leq 2c-4\}\cup\{p_{1,2d}s_{1,2d-1}, p_{2,2d}s_{2,2d-1}|1 \leq d \leq a-1\}\cup\{q_{1,2d}t_{1,2d-1}, q_{2,2d}u_{2,2d-1}|1 \leq d \leq c-1\}\cup\{r_{1,2d}u_{1,2d-1}, r_{2,2d}t_{2,2d-1}|1 \leq d \leq b-1\}\cup\{p_{3,d}p_{3,d+1}, s_{3,d}s_{3,d+1} \\|1\leq d\leq 2a-6\}\cup\{q_{3,d}q_{3,d+1}, t_{3,d}t_{3,d+1}|1\leq d\leq 2c-6\}\cup\{r_{3,d}r_{3,d+1}, u_{3,d}u_{3,d+1}|1\leq d\leq 2b-6\}\cup\{p_{3,2d-1}s_{3,2d-1}|1 \leq d \leq a-2\}\cup\{q_{3,2d-1}t_{3,2d-1}|1 \leq d \leq c-2\}\cup\{r_{3,2d-1}u_{3,2d-1}|1 \leq d \leq b-2\}\cup\{p_{3,1}r_{3,1}, q_{3,1}u_{3,1}, s_{3,1}t_{3,1}\}\cup\{p_{3,2a-5}t_{2,j-3}, s_{3,2a-5}u_{2,2}, q_{3,2c-5}u_{1,2c-4}, t_{3,2c-5}s_{2,2a-4}, r_{3,2b-5}s_{1,2a-4}, 
\\u_{3,2b-5}t_{1,2}\}\cup \{p_{1,1}q_{2,1}, s_{1,1}u_{2,1}, p_{1,i}q_{1,1}, s_{1,i-2}t_{1,1}, q_{1,j}r_{1,1}, t_{1,j-2}u_{1,1}, r_{1,k}p_{2,i}, u_{1,k-2}s_{2,l}, p_{2,1}r_{2,k}, \\s_{2,1}t_{2,k-2}, r_{2,1}q_{2,j}, t_{2,1}u_{2,j-2}\}$.\\\\ We name the vertices on the cycle $p_{1,1},...,p_{1,i},q_{1,1},...,q_{1,j},r_{1,1},...,r_{1,k},p_{2,1},...,p_{2,i},q_{2,1},...,q_{2,j},r_{2,1},...,\\r_{2,k}$ as the outer $pqr$-cycle vertices, the vertices on the cycle $s_{1,1},...,s_{1,i-2}, r_{3,k-4},...,r_{3,1}, p_{3,1},...,\\p_{3,i-4}, t_{2,j-2},...,t_{2,1}$ as the vertices of the first interior cycle, the vertices on the cycle $t_{1,1},...,t_{1,j-2}, u_{1,1},\\...,u_{1,k-2}, q_{3,j-4},...,q_{3,1}, u_{3,1},...,u_{3,k-4}$ as the vertices of the second interior cycle, and the vertices on the cycle $u_{2,1},...,u_{2,k-2}, s_{2,1},...,s_{2,i-2}, t_{3,j-4},...,t_{3,1}, s_{3,1},...,s_{3,i-4}$ as the vertices of the third interior cycle in $FCS_{a,b,c}$. In the vertices $p_{1,i}$, $p_{2,i}$, $q_{1,j}$, $q_{2,j}$, $r_{1,k}$, and $r_{2,k}$, the indices are $i=2a-1$, $j=2c-1$, and $k=2b-1$. In the next result, we determine the metric dimension of $FCS_{a,b,c}$. \begin{thm} For positive integers $a,b,c\geq 4$, we have $dim(FCS_{a,b,c})=3$. \end{thm} \begin{proof} In order to show that $dim(FCS_{a,b,c})\leq 3$, we construct a metric generator for $FCS_{a,b,c}$. Let $U=\{p_{1,1}, r_{1,1}, r_{2,k}\}$ be a set of distinct vertices of $FCS_{a,b,c}$. We claim that $U$ is a vertex metric generator for $FCS_{a,b,c}$. Now, to obtain $dim(FCS_{a,b,c}) \leq 3$, we give the metric coordinates of every vertex of $FCS_{a,b,c}$ with respect to the set $U$. 
For the vertices $\{\upsilon=p_{1,d}|1 \leq d \leq 2a-1\}$, the set of vertex metric coordinates is as follows:\\ $P_{1}=\{\gamma(\upsilon|U)=(d-1, 2a+2c-d-1, 2b+2c-2)| d=1\}\cup\{\gamma(\upsilon|U)=(d-1, 2a+2c-d-1, 2b+2c+d-5)|2\leq d \leq 2a-3\}\cup\{\gamma(\upsilon|U)=(d-1, 2a+2c-d-1, 2a+2b+2c-9)|d=2a-2\}\cup\{\gamma(\upsilon|U)=(d-1, 2a+2c-d-1, 2a+2b+2c-8)|d=2a-1\}$.\\\\ For the vertices $\{\upsilon=q_{1,d}|1 \leq d \leq 2c-1\}$, the set of vertex metric coordinates is as follows:\\ $Q_{1}=\{\gamma(\upsilon|U)=(2a+d-2, 2c-d, 2a+2b+2c-7)|d=1\}\cup\{\gamma(\upsilon|U)=(2a+d-2, 2c-d, 2a+2b+2c-8)|d=2\}\cup\{\gamma(\upsilon|U)=(2a+d-2, 2c-d, 2a+2b+2c-d-4)|3\leq d \leq 2c-2\}\cup\{\gamma(\upsilon|U)=(2a+d-2, 2c-d, 2a+2b-1)|d=2c-1\}$.\\\\ For the vertices $\{\upsilon=r_{1,d}|1 \leq d \leq 2b-1\}$, the set of vertex metric coordinates is as follows:\\ $R_{1}=\{\gamma(\upsilon|U)=(2a+2c-2, d-1, 2a+2b-d-1)|d=1\}\cup\{\gamma(\upsilon|U)=(2a+2c+d-5, d-1, 2a+2b-d-1)|2\leq d \leq 2c-3\}\cup\{\gamma(\upsilon|U)=(2a+2b+2c-9, d-1, 2a+2b-d-1)|d=2b-2\}\cup\{\gamma(\upsilon|U)=(2a+2b+2c-8, d-1, 2a+2b-d-1)|d=2b-1\}$.\\\\ For the vertices $\{\upsilon=p_{2,d}|1 \leq d \leq 2a-1\}$, the set of vertex metric coordinates is as follows:\\ $P_{2}=\{\gamma(\upsilon|U)=(d, 2a+2b-d-2, 2b+2c-1)|d=1\}\cup\{\gamma(\upsilon|U)=(d, 2a+2b-d-2, 2b+2c+d-4)|2\leq d \leq 2a-3\}\cup\{\gamma(\upsilon|U)=(d, 2a+2b-d-2, 2a+2b+2c-8)|d=2a-2\}\cup\{\gamma(\upsilon|U)=(d, 2a+2b-d-2, 2a+2b+2c-7)|d=2a-1\}$.\\\\ For the vertices $\{\upsilon=q_{2,d}|1 \leq d \leq 2c-1\}$, the set of vertex metric coordinates is as follows:\\ $Q_{2}=\{\gamma(\upsilon|U)=(d, 2a+2c-1, 2b+2c-d-2)|d=1\}\cup\{\gamma(\upsilon|U)=(d, 2a+2c+d-4, 2b+2c-d-2)|2\leq d \leq 2c-3\}\cup\{\gamma(\upsilon|U)=(d, 2a+2b+2c-8, 2b+2c-d-2)|d=2c-2\}\cup\{\gamma(\upsilon|U)=(d, 2a+2b+2c-7, 2b+2c-d-2)|d=2c-1\}$.\\\\ For the vertices $\{\upsilon=r_{2,d}|1 \leq d \leq 2b-1\}$, the set of vertex metric coordinates is as follows:\\ 
$R_{2}=\{\gamma(\upsilon|U)=(2c+d-1, 2a+2b+2c-8, 2b-d-1)|d=1\}\cup\{\gamma(\upsilon|U)=(2c+d-1, 2a+2b+2c-9, 2b-d-1)|d=2\}\cup\{\gamma(\upsilon|U)=(2c+d-1, 2a+2b+2c-d-5, 2b-d-1)|3\leq d \leq 2b-2\}\cup\{\gamma(\upsilon|U)=(2c+d-1, 2a+2b-2, 2b-d-1)|d=2b-1\}$.\\\\ For the vertices $\{\upsilon=s_{1,d}|1 \leq d \leq 2a-3\}$, the set of vertex metric coordinates is as follows:\\ $S_{1}=\{\gamma(\upsilon|U)=(d+1, 2a+2c-d-3, 2b+2c+d-5)|1\leq d \leq 2a-5\}\cup\{\gamma(\upsilon|U)=(d+1, 2a+2c-d-3, 2a+2b+2c-11)|d=2a-4\}\cup\{\gamma(\upsilon|U)=(d+1, 2a+2c-d-3, 2a+2b+2c-10)|d=2a-3\}$.\\\\ For the vertices $\{\upsilon=t_{1,d}|1 \leq d \leq 2c-3\}$, the set of vertex metric coordinates is as follows:\\ $T_{1}=\{\gamma(\upsilon|U)=(2a+d-2, 2c-d, 2a+2b+2c-9)|d=1\}\cup\{\gamma(\upsilon|U)=(2a+d-2, 2c-d, 2a+2b+2c-10)|d=2\}\cup\{\gamma(\upsilon|U)=(2a+d-2, 2c-d, 2a+2b+2c-d-6)|3\leq d \leq 2c-3\}$.\\\\ For the vertices $\{\upsilon=u_{1,d}|1 \leq d \leq 2b-3\}$, the set of vertex metric coordinates is as follows:\\ $U_{1}=\{\gamma(\upsilon|U)=(2a+2c+d-5, d+1, 2a+2b-d-3)|1 \leq d \leq 2b-5\}\cup\{\gamma(\upsilon|U)=(2a+2b+2c-11, d+1, 2a+2b-d-3)|d=2b-4\}\cup \{\gamma(\upsilon|U)=(2a+2b+2c-10, d+1, 2a+2b-d-3)|d=2b-3\}$.\\\\ For the vertices $\{\upsilon=s_{2,d}|1 \leq d \leq 2a-3\}$, the set of vertex metric coordinates is as follows:\\ $S_{2}=\{\gamma(\upsilon|U)=(2b+2c+d-4, 2a+2b-d-4, d+2)|1\leq d \leq 2a-5\}\cup\{\gamma(\upsilon|U)=(2a+2b+2c-10, 2a+2b-d-4, d+2)|d=2a-4\}\cup\{\gamma(\upsilon|U)=(2a+2b+2c-9, 2a+2b-d-4, d+2)|d=2a-3\}$.\\\\ For the vertices $\{\upsilon=t_{2,d}|1 \leq d \leq 2c-3\}$, the set of vertex metric coordinates is as follows:\\ $T_{2}=\{\gamma(\upsilon|U)=(d+2, 2a+2c+d-4, 2b+2c-d-4)|1\leq d \leq 2c-5\}\cup\{\gamma(\upsilon|U)=(d+2, 2a+2b+2c-10, 2b+2c-d-4)|d=2c-4\}\cup\{\gamma(\upsilon|U)=(d+2, 2a+2b+2c-9, 2b+2c-d-4)|d=2c-3\}$.\\\\ For the vertices $\{\upsilon=u_{2,d}|1 \leq d \leq 2b-3\}$, the set of vertex metric coordinates is as follows:\\ 
$U_{2}=\{\gamma(\upsilon|U)=(2c+d-1, 2a+2b+2c-10, 2b-d-1)|d=1\}\cup\{\gamma(\upsilon|U)=(2c+d-1, 2a+2b+2c-11, 2b-d-1)|d=2\}\cup \{\gamma(\upsilon|U)=(2c+d-1, 2a+2b+2c-d-7, 2b-d-1)|3 \leq d \leq 2b-3\}$.\\\\ For the vertices $\{\upsilon=p_{3,d}|1 \leq d \leq 2a-5\}$, the set of vertex metric coordinates is as follows:\\ $P_{3}=\{\gamma(\upsilon|U)=(2a+2c-d-6, 2a+2c+d-6, 2a+2b-d-4)|1\leq d \leq 2a-5\}$.\\\\ For the vertices $\{\upsilon=q_{3,d}|1 \leq d \leq 2c-5\}$, the set of vertex metric coordinates is as follows:\\ $Q_{3}=\{\gamma(\upsilon|U)=(2a+2b+d-7, 2b+2c-d-7, 2a+2c-d-5)|1\leq d \leq 2c-5\}$.\\\\ For the vertices $\{\upsilon=r_{3,d}|1 \leq d \leq 2b-5\}$, the set of vertex metric coordinates is as follows:\\ $R_{3}=\{\gamma(\upsilon|U)=(2a+2b-d-7, 2b+2c-d-5, 2a+2b+d-7)|1\leq d \leq 2b-5\}$.\\\\ For the vertices $\{\upsilon=s_{3,d}|1 \leq d \leq 2a-5\}$, the set of vertex metric coordinates is as follows:\\ $S_{3}=\{\gamma(\upsilon|U)=(2a+2c-d-5, 2a+2c+d-7, 2a+2b-d-5)|1\leq d \leq 2a-5\}$.\\\\ For the vertices $\{\upsilon=t_{3,d}|1 \leq d \leq 2c-5\}$, the set of vertex metric coordinates is as follows:\\ $T_{3}=\{\gamma(\upsilon|U)=(2a+2c+d-6, 2b+2c-d-6, 2a+2c-d-6)|1\leq d \leq 2c-5\}$.\\\\ For the vertices $\{\upsilon=u_{3,d}|1 \leq d \leq 2b-5\}$, the set of vertex metric coordinates is as follows:\\ $U_{3}=\{\gamma(\upsilon|U)=(2a+2b-d-6, 2b+2c-d-6, 2a+2b+d-6)|1\leq d \leq 2b-5\}$.\\\\ Now, from these sets of vertex metric codes for the graph $FCS_{a,b,c}$, we find that $|P_{1}|=|P_{2}|=2a-1$, $|Q_{1}|=|Q_{2}|=2c-1$, $|R_{1}|=|R_{2}|=2b-1$, $|S_{1}|=|S_{2}|=2a-3$, $|T_{1}|=|T_{2}|=2c-3$, $|U_{1}|=|U_{2}|=2b-3$, $|P_{3}|=|S_{3}|=2a-5$, $|Q_{3}|=|T_{3}|=2c-5$, and $|R_{3}|=|U_{3}|=2b-5$. We see that the sum of all of these cardinalities is equal to $|V(FCS_{a,b,c})|$, which is $6(2a+2b+2c-9)$. Moreover, all of these sets are pairwise disjoint, which implies that $dim(FCS_{a,b,c})\leq 3$. To complete the proof, we have to show that $dim(FCS_{a,b,c})\geq 3$. 
To show this, we have to prove that there exists no vertex metric generator $U$ for $FCS_{a,b,c}$ such that $|U|\leq2$. Since the graph $FCS_{a,b,c}$ is not a path graph, the possibility of a singleton vertex metric generator for $FCS_{a,b,c}$ is ruled out \cite{cee}. Next, suppose on the contrary that there exists a vertex metric generator $U$ with $|U|=2$. Therefore, we have the following ten cases to be discussed (for the contradictions, the naturals $a$, $b$, and $c$ are $\geq5$): \begin{center} \begin{figure} \caption{$FCS_{a,b,c}$} \label{p2} \end{figure} \end{center} \textbf{Case(\rom{1})} When $U=\{a, b\}$, where $a$ and $b$ are the vertices from the outer $pqr$-cycle of $FCS_{a,b,c}$. \begin{itemize} \item Suppose $U=\{p_{1,1}, p_{1,d}\}$, $p_{1,d}$ ($2\leq d \leq i$). Then $\gamma(q_{1,2}|U)=\gamma(t_{1,2}|U)$, for $2\leq d\leq i-1$; $\gamma(u_{1,2}|U)=\gamma(r_{1,2}|U)$ when $d=i$, a contradiction. \item Suppose $U=\{p_{1,1}, q_{1,d}\}$, $q_{1,d}$ ($1\leq d \leq j$). Then $\gamma(r_{1,2}|U)=\gamma(u_{1,2}|U)$, for $d=1$; $\gamma(p_{1,i}|U)=\gamma(s_{1,i-2}|U)$ when $2\leq d\leq j$, a contradiction. \item Suppose $U=\{p_{1,1}, r_{1,d}\}$, $r_{1,d}$ ($1\leq d \leq k$). Then $\gamma(p_{1,i}|U)=\gamma(s_{1,i-2}|U)$ for $d=1$; $\gamma(p_{1,3}|U)=\gamma(s_{1,1}|U)$, for $2\leq d\leq k$, a contradiction. \item Suppose $U=\{p_{1,1}, p_{2,d}\}$, $p_{2,d}$ ($1\leq d \leq i$). Then $\gamma(s_{1,1}|U)=\gamma(q_{2,2}|U)$ for $1\leq d\leq i-3$; $\gamma(p_{1,3}|U)=\gamma(s_{1,1}|U)$, for $i-2\leq d\leq i$, a contradiction. \item Suppose $U=\{p_{1,1}, r_{2,d}\}$, $r_{2,d}$ ($1\leq d \leq k$). Then $\gamma(s_{2,2}|U)=\gamma(p_{2,2}|U)$ for $d=1$; $\gamma(q_{2,j}|U)=\gamma(t_{2,j-2}|U)$ for $2\leq d\leq k$, a contradiction. \item Suppose $U=\{p_{1,1}, q_{2,d}\}$, $q_{2,d}$ ($1\leq d \leq j$). Then $\gamma(s_{2,2}|U)=\gamma(p_{2,2}|U)$ for $1\leq d\leq j$, a contradiction. 
\end{itemize} \textbf{Case(\rom{2})} When $U=\{a, b\}$, where $a$ and $b$ are the vertices from the first interior cycle of $FCS_{a,b,c}$. \begin{itemize} \item Suppose $U=\{s_{1,1}, s_{1,d}\}$, $s_{1,d}$ ($2\leq d \leq i-2$). Then $\gamma(q_{2,2}|U)=\gamma(t_{2,2}|U)$, for $2\leq d\leq i-1$, a contradiction. \item Suppose $U=\{s_{1,1}, r_{3,d}\}$, $r_{3,d}$ ($1\leq d \leq k-5$). Then $\gamma(p_{1,2}|U)=\gamma(t_{2,1}|U)$, for $1\leq d\leq k-5$, a contradiction. \item Suppose $U=\{s_{1,1}, t_{2,d}\}$, $t_{2,d}$ ($1\leq d \leq j-2$). Then $\gamma(p_{1,2}|U)=\gamma(s_{1,2}|U)$, for $1\leq d\leq j-2$, a contradiction. \item Suppose $U=\{s_{1,1}, p_{3,d}\}$, $p_{3,d}$ ($1\leq d \leq i-5$). Then $\gamma(s_{2,2}|U)=\gamma(p_{2,2}|U)$, for $1\leq d\leq 2a-3$, a contradiction. \end{itemize} \textbf{Case(\rom{3})} When $U=\{a, b\}$, where $a$ and $b$ are the vertices from the second interior cycle of $FCS_{a,b,c}$. \begin{itemize} \item Suppose $U=\{u_{1,1}, u_{1,d}\}$, $u_{1,d}$ ($1\leq d \leq k-2$). Then $\gamma(p_{2,i-1}|U)=\gamma(s_{2,i-3}|U)$, for $1\leq d\leq k-2$, a contradiction. \item Suppose $U=\{u_{1,1}, t_{1,d}\}$, $t_{1,d}$ ($1\leq d \leq j-2$). Then $\gamma(r_{1,1}|U)=\gamma(r_{1,3}|U)$, for $1\leq d\leq j-2$, a contradiction. \item Suppose $U=\{u_{1,1}, u_{3,d}\}$, $u_{3,d}$ ($1\leq d \leq k-5$). Then $\gamma(p_{1,i}|U)=\gamma(p_{1,i-2}|U)$, for $1\leq d\leq k-5$, a contradiction. \item Suppose $U=\{u_{1,1}, q_{3,d}\}$, $q_{3,d}$ ($1\leq d \leq j-5$). Then $\gamma(p_{1,i}|U)=\gamma(p_{1,i-2}|U)$, for $1\leq d\leq j-5$, a contradiction. \end{itemize} \textbf{Case(\rom{4})} When $U=\{a, b\}$, where $a$ and $b$ are the vertices from the third interior cycle of $FCS_{a,b,c}$. \begin{itemize} \item Suppose $U=\{s_{2,1}, s_{2,d}\}$, $s_{2,d}$ ($2\leq d \leq i-2$). Then $\gamma(r_{2,k}|U)=\gamma(r_{2,k-2}|U)$, for $2\leq d\leq i-2$, a contradiction. \item Suppose $U=\{s_{2,1}, u_{2,d}\}$, $u_{2,d}$ ($1\leq d \leq k-2$). 
Then $\gamma(p_{2,1}|U)=\gamma(p_{2,3}|U)$, for $1\leq d\leq k-2$, a contradiction. \item Suppose $U=\{s_{2,1}, s_{3,d}\}$, $s_{3,d}$ ($1\leq d \leq i-5$). Then $\gamma(q_{2,j}|U)=\gamma(q_{2,j-2}|U)$, for $1\leq d\leq i-5$, a contradiction. \item Suppose $U=\{s_{2,1}, t_{3,d}\}$, $t_{3,d}$ ($1\leq d \leq j-5$). Then $\gamma(q_{2,j}|U)=\gamma(q_{2,j-2}|U)$, for $1\leq d\leq j-5$, a contradiction. \end{itemize} \textbf{Case(\rom{5})} When $U=\{a, b\}$, where $a$ is in outer $pqr$-cycle and $b$ is in the first interior cycle of $FCS_{a,b,c}$. \begin{itemize} \item Suppose $U=\{p_{1,1}, s_{1,d}\}$, $s_{1,d}$ ($1\leq d \leq i-2$). Then $\gamma(r_{2,2}|U)=\gamma(u_{2,2}|U)$, for $1\leq d\leq i-2$, a contradiction. \item Suppose $U=\{p_{1,1}, t_{2,d}\}$, $t_{2,d}$ ($1\leq d \leq j-2$). Then $\gamma(r_{2,2}|U)=\gamma(u_{2,2}|U)$, for $1\leq d\leq j-2$, a contradiction. \item Suppose $U=\{p_{1,1}, r_{3,d}\}$, $r_{3,d}$ ($1\leq d \leq k-5$). Then $\gamma(q_{1,4}|U)=\gamma(t_{1,4}|U)$, for $1\leq d\leq k-5$, a contradiction. \item Suppose $U=\{p_{1,1}, p_{3,d}\}$, $p_{3,d}$ ($1\leq d \leq i-5$). Then $\gamma(q_{1,4}|U)=\gamma(t_{1,4}|U)$, for $1\leq d\leq i-5$, a contradiction. \end{itemize} \textbf{Case(\rom{6})} When $U=\{a, b\}$, where $a$ is in outer $pqr$-cycle and $b$ is in the second interior cycle of $FCS_{a,b,c}$. \begin{itemize} \item Suppose $U=\{p_{1,1}, t_{1,d}\}$, $t_{1,d}$ ($1\leq d \leq j-2$). Then $\gamma(u_{1,2}|U)=\gamma(r_{1,2}|U)$, for $1\leq d\leq j-2$, a contradiction. \item Suppose $U=\{p_{1,1}, u_{1,d}\}$, $u_{1,d}$ ($1\leq d \leq k-2$). Then $\gamma(t_{1,1}|U)=\gamma(u_{3,k-5}|U)$, for $1\leq d\leq k-2$, a contradiction. \item Suppose $U=\{p_{1,1}, u_{3,d}\}$, $u_{3,d}$ ($1\leq d \leq k-5$). Then $\gamma(q_{1,4}|U)=\gamma(t_{1,4}|U)$, for $1\leq d\leq k-5$, a contradiction. \item Suppose $U=\{p_{1,1}, q_{3,d}\}$, $q_{3,d}$ ($1\leq d \leq j-5$). Then $\gamma(u_{3,2}|U)=\gamma(r_{3,1}|U)$, for $1\leq d\leq j-5$, a contradiction. 
\end{itemize} \textbf{Case(\rom{7})} When $U=\{a, b\}$, where $a$ is in outer $pqr$-cycle and $b$ is in the third interior cycle of $FCS_{a,b,c}$. \begin{itemize} \item Suppose $U=\{p_{1,1}, u_{2,d}\}$, $u_{2,d}$ ($1\leq d \leq k-2$). Then $\gamma(s_{2,2}|U)=\gamma(p_{2,2}|U)$, for $1\leq d\leq k-2$, a contradiction. \item Suppose $U=\{p_{1,1}, s_{3,d}\}$, $s_{3,d}$ ($1\leq d \leq i-5$). Then $\gamma(u_{3,4}|U)=\gamma(r_{2,4}|U)$, for $1\leq d\leq i-5$, a contradiction. \item Suppose $U=\{p_{1,1}, s_{2,d}\}$, $s_{2,d}$ ($1\leq d \leq i-2$). Then $\gamma(s_{1,1}|U)=\gamma(q_{2,2}|U)$, for $1\leq d\leq i-5$; $\gamma(s_{3,2}|U)=\gamma(p_{3,1}|U)$, for $i-4\leq d\leq i-2$, a contradiction. \item Suppose $U=\{p_{1,1}, t_{3,d}\}$, $t_{3,d}$ ($1\leq d \leq j-5$). Then $\gamma(s_{3,2}|U)=\gamma(p_{3,1}|U)$, for $1\leq d\leq j-5$, a contradiction. \end{itemize} \textbf{Case(\rom{8})} When $U=\{a, b\}$, where $a$ is in first interior cycle and $b$ is in the second interior cycle of $FCS_{a,b,c}$. \begin{itemize} \item Suppose $U=\{s_{1,1}, t_{1,d}\}$, $t_{1,d}$ ($1\leq d \leq j-2$). Then $\gamma(t_{2,2}|U)=\gamma(q_{2,2}|U)$, for $1\leq d\leq j-2$, a contradiction. \item Suppose $U=\{s_{1,1}, u_{3,d}\}$, $u_{3,d}$ ($1\leq d \leq k-5$). Then $\gamma(p_{1,2}|U)=\gamma(t_{2,1}|U)$, for $1\leq d\leq k-5$, a contradiction. \item Suppose $U=\{s_{1,1}, u_{1,d}\}$, $u_{1,d}$ ($1\leq d \leq k-2$). Then $\gamma(s_{3,2}|U)=\gamma(p_{3,1}|U)$, for $1\leq d\leq k-2$, a contradiction. \item Suppose $U=\{s_{1,1}, q_{3,d}\}$, $q_{3,d}$ ($1\leq d \leq j-5$). Then $\gamma(s_{3,2}|U)=\gamma(p_{3,1}|U)$, for $1\leq d\leq j-5$, a contradiction. \end{itemize} \textbf{Case(\rom{9})} When $U=\{a, b\}$, where $a$ is in first interior cycle and $b$ is in the third interior cycle of $FCS_{a,b,c}$. \begin{itemize} \item Suppose $U=\{s_{1,1}, u_{2,d}\}$, $u_{2,d}$ ($1\leq d \leq k-2$). Then $\gamma(p_{1,1}|U)=\gamma(p_{1,3}|U)$, for $1\leq d\leq k-2$, a contradiction. 
\item Suppose $U=\{s_{1,1}, s_{2,d}\}$, $s_{2,d}$ ($1\leq d \leq i-2$). Then $\gamma(u_{3,2}|U)=\gamma(r_{3,1}|U)$, for $1\leq d\leq i-2$, a contradiction. \item Suppose $U=\{s_{1,1}, t_{3,d}\}$, $t_{3,d}$ ($1\leq d \leq j-5$). Then $\gamma(u_{3,2}|U)=\gamma(r_{3,1}|U)$, for $1\leq d\leq j-5$, a contradiction. \item Suppose $U=\{s_{1,1}, s_{3,d}\}$, $s_{3,d}$ ($1\leq d \leq i-5$). Then $\gamma(r_{2,4}|U)=\gamma(u_{2,4}|U)$, for $1\leq d\leq i-5$, a contradiction. \end{itemize} \textbf{Case(\rom{10})} When $U=\{a, b\}$, where $a$ is in second interior cycle and $b$ is in the third interior cycle of $FCS_{a,b,c}$. \begin{itemize} \item Suppose $U=\{u_{1,1}, u_{2,d}\}$, $u_{2,d}$ ($1\leq d \leq k-2$). Then $\gamma(r_{3,3}|U)=\gamma(u_{3,2}|U)$, for $1\leq d\leq k-2$, a contradiction. \item Suppose $U=\{u_{1,1}, s_{2,d}\}$, $s_{2,d}$ ($1\leq d \leq i-2$). Then $\gamma(q_{1,j-1}|U)=\gamma(t_{1,j-3}|U)$, for $1\leq d\leq i-2$, a contradiction. \item Suppose $U=\{u_{1,1}, t_{3,d}\}$, $t_{3,d}$ ($1\leq d \leq j-5$). Then $\gamma(s_{3,2}|U)=\gamma(p_{3,1}|U)$, for $1\leq d\leq j-5$, a contradiction. \item Suppose $U=\{u_{1,1}, s_{3,d}\}$, $s_{3,d}$ ($1\leq d \leq i-5$). Then $\gamma(t_{3,2}|U)=\gamma(q_{3,1}|U)$, for $1\leq d\leq i-5$, a contradiction. \end{itemize} As a result, we infer that for $FCS_{a,b,c}$, there is no vertex metric generator $U$ such that $|U|=2$. Therefore, we must have $|U| \geq 3$, i.e., $dim(FCS_{a,b,c})\geq 3$. Hence, $dim(FCS_{a,b,c})=3$, which concludes the theorem. \end{proof} In terms of a minimum IVMG, we have the following result. \begin{thm} For $a,b,c\geq4$, the graph $FCS_{a,b,c}$ has an IVMG with cardinality three. \end{thm} \begin{proof} To show that, for the zigzag edge coronoid fused with starphene $FCS_{a,b,c}$, there exists an IVMG $U^{i}$ with $|U^{i}|=3$, we follow the same technique as used in Theorem $1$.\\\\ Suppose $U^{i} = \{p_{1,1}, r_{1,1}, r_{2, k}\} \subset V(FCS_{a,b,c})$. 
Now, by using the definition of an independent set and following the same pattern as used in Theorem $1$, it is simple to show that the set of vertices $U^{i}= \{p_{1,1}, r_{1,1}, r_{2, k}\}$ forms an IVMG for $FCS_{a,b,c}$ with $|U^{i}|=3$, which concludes the theorem. \\ \end{proof} \section{Edge Metric Dimension of $FCS_{a,b,c}$} In this section, we obtain the edge metric dimension and an IEMG for $FCS_{a,b,c}$.\\\\ \begin{thm} For positive integers $a,b,c\geq 4$, we have $edim(FCS_{a,b,c})=3$. \end{thm} \begin{proof} In order to show that $edim(FCS_{a,b,c}) \leq 3$, we construct an edge metric generator for $FCS_{a,b,c}$. Let $U_{E}=\{p_{1,1}, r_{1,1}, r_{2, k}\}$ be a set of distinct vertices from $FCS_{a,b,c}$. We claim that $U_{E}$ is an edge metric generator for $FCS_{a,b,c}$. Now, to obtain $edim(FCS_{a,b,c}) \leq 3$, we give the edge metric coordinates of every edge of $FCS_{a,b,c}$ with respect to $U_{E}$. For the edges $\{\eta=p_{1,d}p_{1,d+1}|1 \leq d \leq 2a-2\}$, the set of edge metric coordinates is as follows:\\ $P_{1}=\{\gamma_{E}(\eta|U_{E})=(d-1, 2a+2c-d-2, 2b+2c-3)|d=1\}\cup \{\gamma_{E}(\eta|U_{E})=(d-1, 2a+2c-d-2, 2b+2c+d-5)|2\leq d \leq 2a-4\}\cup \{\gamma_{E}(\eta|U_{E})=(d-1, 2a+2c-d-2, 2a+4b-8)|2a-3 \leq d \leq 2a-2\}$.\\\\ For the edges $\{\eta=q_{1,d}q_{1,d+1}|1 \leq d \leq 2c-2\}$, the set of edge metric coordinates is as follows:\\ $Q_{1}=\{\gamma_{E}(\eta|U_{E})=(2a+d-2, 2c-d-1, 2a+2b+2c-8)|1 \leq d \leq 2\}\cup\{\gamma_{E}(\eta|U_{E})=(2a+d-2, 2c-d-1, 2a+2b+2c-d-5)|3 \leq d \leq 2c-3\}\cup\{\gamma_{E}(\eta|U_{E})=(2a+d-2, 2c-d-1, 2a+2b-2)|d=2c-2\}$.\\\\ For the edges $\{\eta=r_{1,d}r_{1,d+1}|1 \leq d \leq 2b-2\}$, the set of edge metric coordinates is as follows:\\ $R_{1}=\{\gamma_{E}(\eta|U_{E})=(2a+2c-3, d-1, 2a+2b-d-1)|d=1\}\cup\{\gamma_{E}(\eta|U_{E})=(2a+2c+d-5, d-1, 2a+2b-d-1)|2 \leq d \leq 2b-4\}\cup\{\gamma_{E}(\eta|U_{E})=(2a+2b+2c-9, d-1, 2a+2b-d-1)|2b-3 \leq d \leq 2b-2\}$.\\\\ For the edges $\{\eta=p_{2,d}p_{2,d+1}|1 \leq d \leq 
2a-2\}$, the set of edge metric coordinates is as follows:\\ $P_{2}=\{\gamma_{E}(\eta|U_{E})=(2b+2c-2, 2a+2b-d-1, d)|d=1\}\cup \{\gamma_{E}(\eta|U_{E})=(2b+2c+d-4, 2a+2b-d-1, d)|2 \leq d \leq 2a-4\}\cup \{\gamma_{E}(\eta|U_{E})=(2a+2b+2c-8, 2a+2b-d-1, d)|2a-3 \leq d \leq 2a-2\}$.\\\\ For the edges $\{\eta=q_{2,d}q_{2,d+1}|1 \leq d \leq 2c-2\}$, the set of edge metric coordinates is as follows:\\ $Q_{2}=\{\gamma_{E}(\eta|U_{E})=(d, 2a+2c-2, 2b+2c-d-3)|d=1\}\cup \{\gamma_{E}(\eta|U_{E})=(d, 2a+2c+d-4, 2b+2c-d-3)|2 \leq d \leq 2c-4\}\cup \{\gamma_{E}(\eta|U_{E})=(d, 2a+2b+2c-8, 2b+2c-d-3)|2c-3 \leq d \leq 2c-2\}$.\\\\ For the edges $\{\eta=r_{2,d}r_{2,d+1}|1 \leq d \leq 2b-2\}$, the set of edge metric coordinates is as follows:\\ $R_{2}=\{\gamma_{E}(\eta|U_{E})=(2c+d-1, 2a+2b+2c-9, 2b-d-2)|1\leq d \leq 2\}\cup\{\gamma_{E}(\eta|U_{E})=(2c+d-1, 2a+4b-d-6, 2b-d-2)|3 \leq d \leq 2b-3\}\cup \{\gamma_{E}(\eta|U_{E})=(2c+d-1, 2a+2b-3, 2b-d-2)|d=2b-2\}$.\\\\ For the edges $\{\eta=s_{1,d}s_{1,d+1}|1 \leq d \leq 2a-4\}$, the set of edge metric coordinates is as follows:\\ $S_{1}=\{\gamma_{E}(\eta|U_{E})=(d+1, 2a+2c-d-4, 2b+2c+d-5)|1 \leq d \leq 2a-6\}\cup \{\gamma_{E}(\eta|U_{E})=(d+1, 2a+2c-d-4, 2a+4b-11)|2a-5 \leq d \leq 2a-4\}$.\\\\ For the edges $\{\eta=t_{1,d}t_{1,d+1}|1 \leq d \leq 2c-4\}$, the set of edge metric coordinates is as follows:\\ $T_{1}=\{\gamma_{E}(\eta|U_{E})=(2a+d, 2c-d-1, 2a+2b+2c-10)|1 \leq d \leq 2\}\cup\{\gamma_{E}(\eta|U_{E})=(2a+d, 2c-d-1, 2a+2b+2c-d-7)|3 \leq d \leq 2c-4\}$.\\\\ For the edges $\{\eta=u_{1,d}u_{1,d+1}|1 \leq d \leq 2b-4\}$, the set of edge metric coordinates is as follows:\\ $U_{1}=\{\gamma_{E}(\eta|U_{E})=(2a+2c+d-5, d+1, 2a+2b-d-4)|1 \leq d \leq 2b-6\}\cup\{\gamma_{E}(\eta|U_{E})=(2a+2b+2c-11, d+1, 2a+2b-d-4)|2b-5 \leq d \leq 2b-4\}$.\\\\ For the edges $\{\eta=s_{2,d}s_{2,d+1}|1 \leq d \leq 2a-4\}$, the set of edge metric coordinates is as follows:\\ $S_{2}=\{\gamma_{E}(\eta|U_{E})=(2b+2c+d-4, 2a+2b-d-5, d+2)|1 \leq d \leq 
2a-6\}\cup\{\gamma_{E}(\eta|U_{E})=(2a+2b+2c-10, 2a+2b-d-5, d+2)|2a-5 \leq d \leq 2a-4\}$.\\\\ For the edges $\{\eta=t_{2,d}t_{2,d+1}|1 \leq d \leq 2c-4\}$, the set of edge metric coordinates is as follows:\\ $T_{2}=\{\gamma_{E}(\eta|U_{E})=(d+2, 2a+2c+d-4, 2b+2c-d-7)|1 \leq d \leq 2c-6\}\cup\{\gamma_{E}(\eta|U_{E})=(d+2, 2a+2b+2c-10, 2b+2c-d-7)|2c-5 \leq d \leq 2c-4\}$.\\\\ For the edges $\{\eta=u_{2,d}u_{2,d+1}|1 \leq d \leq 2b-4\}$, the set of edge metric coordinates is as follows:\\ $U_{2}=\{\gamma_{E}(\eta|U_{E})=(2c+d-1, 2a+2b+2c-11, 2b-d-2)|1 \leq d \leq 2\}\cup\{\gamma_{E}(\eta|U_{E})=(2c+d-1, 2a+4b-d-8, 2b-d-2)|3 \leq d \leq 2b-4\}$.\\\\ For the edges $\{\eta=p_{3,d}p_{3,d+1}|1 \leq d \leq 2a-6\}$, the set of edge metric coordinates is as follows:\\ $P_{3}=\{\gamma_{E}(\eta|U_{E})=(2a+2c-d-7, 2b+2c+d-6, 2a+2b-d-7)|1 \leq d \leq 2a-6\}$.\\\\ For the edges $\{\eta=q_{3,d}q_{3,d+1}|1 \leq d \leq 2c-6\}$, the set of edge metric coordinates is as follows:\\ $Q_{3}=\{\gamma_{E}(\eta|U_{E})=(2a+2b+d-7, 2b+2c-d-8, 2a+2c-d-6)|1 \leq d \leq 2c-6\}$.\\\\ For the edges $\{\eta=r_{3,d}r_{3,d+1}|1 \leq d \leq 2b-6\}$, the set of edge metric coordinates is as follows:\\ $R_{3}=\{\gamma_{E}(\eta|U_{E})=(2a+2b-d-8, 2b+2c-d-6, 2a+2b+d-7)|1 \leq d \leq 2b-6\}$.\\\\ For the edges $\{\eta=s_{3,d}s_{3,d+1}|1 \leq d \leq 2a-6\}$, the set of edge metric coordinates is as follows:\\ $S_{3}=\{\gamma_{E}(\eta|U_{E})=(2a+2c-d-6, 2b+2c+d-7, 2a+2b-d-8)|1 \leq d \leq 2a-6\}$.\\\\ For the edges $\{\eta=t_{3,d}t_{3,d+1}|1 \leq d \leq 2c-6\}$, the set of edge metric coordinates is as follows:\\ $T_{3}=\{\gamma_{E}(\eta|U_{E})=(2a+2b+d-6, 2b+2c-d-7, 2a+2c-d-7)|1 \leq d \leq 2c-6\}$.\\\\ For the edges $\{\eta=u_{3,d}u_{3,d+1}|1 \leq d \leq 2b-6\}$, the set of edge metric coordinates is as follows:\\ $U_{3}=\{\gamma_{E}(\eta|U_{E})=(2a+2b-d-7, 2b+2c-d-7, 2a+2b+d-6)|1 \leq d \leq 2b-6\}$.\\\\ For the edges $\{\eta_{1}=p_{1,i}q_{1,1}, \eta_{2}=s_{1,i-2}t_{1,1}, \eta_{3}=r_{1,1}q_{1,j}, 
\eta_{4}=t_{1,j}u_{1,1}, \eta_{5}=r_{1,k}p_{2,i}, \eta_{6}=u_{1,k-2}s_{2,i-2}, \eta_{7}=p_{2,1}r_{2,k}, \eta_{8}=s_{2,1}u_{2,k-2}, \eta_{9}=r_{2,1}q_{2,j}, \eta_{10}=u_{2,1}t_{2,j-2}, \eta_{11}=p_{1,1}q_{2,1}, \eta_{12}=u_{1,1}t_{2,1}\}$, the set of edge metric coordinates is as follows:\\ $V_{1}=\{\gamma_{E}(\eta_{1}|U_{E})=(2a-2, 2c-1, 2a+4b-8), \gamma_{E}(\eta_{2}|U_{E})=(2a-2, 2c-1, 2a+4b-10), \gamma_{E}(\eta_{3}|U_{E})=(2a+2c-3, 0, 2a+2b-2), \gamma_{E}(\eta_{4}|U_{E})=(2a+2c-5, 2, 2a+2b-4), \gamma_{E}(\eta_{5}|U_{E})=(2a+2b+2c-10, 2b-2, 2a-1), \gamma_{E}(\eta_{6}|U_{E})=(2a+2b+2c-8, 2b-2, 2a-1), \gamma_{E}(\eta_{7}|U_{E})=(2b+2c-2, 2a+2b-3, 0), \gamma_{E}(\eta_{8}|U_{E})=(2b+2c-4, 2a+2b-5, 2), \gamma_{E}(\eta_{9}|U_{E})=(2c-1, 2a+2b+2c-10, 2b-2), \gamma_{E}(\eta_{10}|U_{E})=(2c-1, 2a+2b+2c-8, 2b-4), \gamma_{E}(\eta_{11}|U_{E})=(0, 2a+2c-2, 2a+2c-3), \gamma_{E}(\eta_{12}|U_{E})=(2, 2a+2c-4, 2b+2c-5)\}$.\\\\ For the edges $\{\eta=p_{1,2d}s_{1,2d-1}|1 \leq d \leq a-1\}$, the set of edge metric coordinates is as follows:\\ $PS_{1}=\{\gamma_{E}(\eta|U_{E})=(2d-1, 4a+2c-2d-12, 4a+2c+2d-18)|1 \leq d \leq a-2\}\cup\{\gamma_{E}(\eta|U_{E})=(2d-1, 4a+2c-2d-12, 2a+4b-10)|d=a-1\}$.\\\\ For the edges $\{\eta=q_{1,2d}t_{1,2d-1}|1 \leq d \leq c-1\}$, the set of edge metric coordinates is as follows:\\ $QT_{1}=\{\gamma_{E}(\eta|U_{E})=(4a+2d-13, 4c-2d-8, 2a+2b+2c-9)|d=1\}\cup\{\gamma_{E}(\eta|U_{E})=(4a+2d-13, 4c-2d-8, 4a+2b+2c-2d-15)|2 \leq d \leq c-1\}$.\\\\ For the edges $\{\eta=r_{1,2d}u_{1,2d-1}|1 \leq d \leq b-1\}$, the set of edge metric coordinates is as follows:\\ $RU_{1}=\{\gamma_{E}(\eta|U_{E})=(4a+2c+2d-16, 2d-1, 4a+2b-2d-12)|1 \leq d \leq b-2\}\cup\{\gamma_{E}(\eta|U_{E})=(4a+2c+2d-16, 2d-1, 2a+2b+2c-10)|d=b-1\}$.\\\\ For the edges $\{\eta=p_{2,2d}s_{2,2d-1}|1 \leq d \leq a-1\}$, the set of edge metric coordinates is as follows:\\ $PS_{2}=\{\gamma_{E}(\eta|U_{E})=(4b+2c+2d-13, 4a+2b-2d-13, 2d-1)|1 \leq d \leq a-2\}\cup\{\gamma_{E}(\eta|U_{E})=(2a+2b+2c-9, 
4a+2b-2d-13, 2d-1)|d=a-1\}$.\\\\ For the edges $\{\eta=r_{2,2d}u_{2,2d-1}|1 \leq d \leq b-1\}$, the set of edge metric coordinates is as follows:\\ $RU_{2}=\{\gamma_{E}(\eta|U_{E})=(4c+2d-10, 2a+2b+2c-10, 4b-2d-8)|d=1\}\cup\{\gamma_{E}(\eta|U_{E})=(4c+2d-10, 4a+2b-2d-16, 4b-2d-8)|2 \leq d \leq b-1\}$.\\\\ For the edges $\{\eta=q_{2,2d}t_{2,2d-1}|1 \leq d \leq c-1\}$, the set of edge metric coordinates is as follows:\\ $QT_{2}=\{\gamma_{E}(\eta|U_{E})=(2d, 4a+2c+2d-15, 4c+2b-2d-11)|1\leq d \leq c-2\}\cup\{\gamma_{E}(\eta|U_{E})=(2d, 2a+2b+2c-8, 4c+2b-2d-11)|d=c-1\}$.\\\\ For the edges $\{\eta=p_{3,2d-1}s_{3,2d-1}|1 \leq d \leq a-2\}$, the set of edge metric coordinates is as follows:\\ $PS_{3}=\{\gamma_{E}(\eta|U_{E})=(4a+2c-2d-15, 4b+2c+2d-16, 4a+2b-2d-16)|1 \leq d \leq a-2\}$.\\\\ For the edges $\{\eta=q_{3,2d-1}t_{3,2d-1}|1 \leq d \leq c-2\}$, the set of edge metric coordinates is as follows:\\ $QT_{3}=\{\gamma_{E}(\eta|U_{E})=(4a+2b+2d-18, 4b+2c-2d-14, 4a+2c-2d-15)|1\leq d \leq c-2\}$.\\\\ For the edges $\{\eta=r_{3,2d-1}u_{3,2d-1}|1 \leq d \leq b-2\}$, the set of edge metric coordinates is as follows:\\ $RU_{3}=\{\gamma_{E}(\eta|U_{E})=(4a+2b-2d-16, 2b+4c-2d-13, 4a+2b+2d-18)|1 \leq d \leq b-2\}$.\\\\ For the edges $\{\eta_{1}=t_{2,j-3}p_{3,i-5}, \eta_{2}=u_{2,2}s_{3,i-5}, \eta_{3}=s_{2,i-3}t_{3,j-5}, \eta_{4}=u_{1,k-3}q_{3,j-5}, \eta_{5}=t_{1,2}u_{3,k-5}, \eta_{6}=s_{1,i-3}r_{3,k-5}, \eta_{7}=p_{3,1}r_{3,1}, \eta_{8}=u_{3,1}q_{3,1}, \eta_{9}=s_{3,1}t_{3,1}\}$, the set of edge metric coordinates is as follows:\\ $V_{2}=\{\gamma_{E}(\eta_{1}|U_{E})=(2c-2, 2a+2b+2c-11, 2b-1), \gamma_{E}(\eta_{2}|U_{E})=(2c, 2a+2b+2c-12, 2b-3), \gamma_{E}(\eta_{3}|U_{E})=(2a+2b+2c-11, 2b-1, 2a-2), \gamma_{E}(\eta_{4}|U_{E})=(2a+2b+2c-12, 2b-3, 2a), \gamma_{E}(\eta_{5}|U_{E})=(2a-1, 2c-2, 2a+2b+2c-11), \gamma_{E}(\eta_{6}|U_{E})=(2a-3, 2c, 2a+2b+2c-12), \gamma_{E}(\eta_{7}|U_{E})=(2b+2b-8, 2b+2c-8, 2a+2b-7), \gamma_{E}(\eta_{8}|U_{E})=(2a+2b-7, 2b+2c-8, 2a+2b-6), 
\gamma_{E}(\eta_{9}|U_{E})=(2a+2c-6, 2b+2c-7, 2a+2b-8)\}$.\\\\ Now, from these sets of edge metric codes for the graph $FCS_{a,b,c}$, we find that $|P_{1}|=|P_{2}|=2a-2$, $|Q_{1}|=|Q_{2}|=2c-2$, $|R_{1}|=|R_{2}|=2b-2$, $|S_{1}|=|S_{2}|=2a-4$, $|T_{1}|=|T_{2}|=2c-4$, $|U_{1}|=|U_{2}|=2b-4$, $|P_{3}|=|S_{3}|=2a-6$, $|Q_{3}|=|T_{3}|=2c-6$, $|R_{3}|=|U_{3}|=2b-6$, $|PS_{1}|=|PS_{2}|=a-1$, $|RU_{1}|=|RU_{2}|=b-1$, $|QT_{1}|=|QT_{2}|=c-1$, $|PS_{3}|=a-2$, $|QT_{3}|=c-2$, $|RU_{3}|=b-2$, $|V_{1}|=12$, and $|V_{2}|=9$. We see that the sum of all of these cardinalities is equal to $|E(FCS_{a,b,c})|$, which is $3(5a+5b+5c-21)$. Moreover, all of these sets are pairwise disjoint, which implies that $edim(FCS_{a,b,c})\leq 3$. To complete the proof, we have to show that $edim(FCS_{a,b,c})\geq 3$. To show this, we have to prove that there exists no edge metric generator $U_{E}$ for $FCS_{a,b,c}$ such that $|U_{E}|\leq2$. Since the graph $FCS_{a,b,c}$ is not a path graph, the possibility of a singleton edge metric generator for $FCS_{a,b,c}$ is ruled out \cite{emd}. Next, suppose on the contrary that there exists an edge metric generator $U_{E}$ with $|U_{E}|=2$. Therefore, we have the following cases to be discussed (for the contradictions, the naturals $a$, $b$, and $c$ are $\geq5$):\\\\ \textbf{Case(\rom{1})} When $U_{E}=\{a, b\}$, where $a$ and $b$ are the vertices from the outer $pqr$-cycle of $FCS_{a,b,c}$. \begin{itemize} \item Suppose $U_{E}=\{p_{1,1}, p_{1,d}\}$, $p_{1,d}$ ($2\leq d \leq i$). Then $\gamma(u_{3,k-5}r_{3,k-5}|U_{E})=\gamma(r_{3,k-5}r_{3,k-6}|U_{E})$, for $2\leq d\leq i$, a contradiction. \item Suppose $U_{E}=\{p_{1,1}, q_{1,d}\}$, $q_{1,d}$ ($1\leq d \leq j$). Then $\gamma(r_{1,2}u_{1,1}|U_{E})=\gamma(u_{1,1}u_{1,2}|U_{E})$, for $1\leq d\leq j-1$; $\gamma(p_{1,i}s_{1,i-2}|U_{E})=\gamma(s_{1,i-2}s_{1,i-3}|U_{E})$ when $d=j$, a contradiction. \item Suppose $U_{E}=\{p_{1,1}, r_{1,d}\}$, $r_{1,d}$ ($1\leq d \leq k$). 
Then $\gamma(p_{3,1}s_{3,1}|U_{E})=\gamma(p_{3,1}r_{3,1}|U_{E})$, for $1\leq d\leq k$, a contradiction. \item Suppose $U_{E}=\{p_{1,1}, p_{2,d}\}$, $p_{2,d}$ ($1\leq d \leq i$). Then $\gamma(t_{1,1}q_{3,1}|U_{E})=\gamma(q_{3,1}q_{3,2}|U_{E})$, for $1\leq d\leq i$, a contradiction. \item Suppose $U_{E}=\{p_{1,1}, r_{2,d}\}$, $r_{2,d}$ ($1\leq d \leq k$). Then $\gamma(s_{2,1}s_{2,2}|U_{E})=\gamma(s_{2,1}p_{2,2}|U_{E})$, for $1\leq d \leq k-1$; $\gamma(s_{1,1}t_{2,1}|U_{E})=\gamma(q_{2,2}t_{2,1}|U_{E})$, for $d=k$, a contradiction. \item Suppose $U_{E}=\{p_{1,1}, q_{2,d}\}$, $q_{2,d}$ ($1\leq d \leq j$). Then $\gamma(s_{2,1}s_{2,2}|U_{E})=\gamma(s_{2,1}p_{2,2}|U_{E})$, for $1\leq d\leq j$, a contradiction. \end{itemize} \textbf{Case(\rom{2})} When $U_{E}=\{a, b\}$, where $a$ and $b$ are the vertices from the first interior cycle of $FCS_{a,b,c}$. \begin{itemize} \item Suppose $U_{E}=\{s_{1,1}, s_{1,d}\}$, $s_{1,d}$ ($2\leq d \leq i-2$). Then $\gamma(t_{2,1}t_{2,2}|U_{E})=\gamma(t_{2,1}q_{2,2}|U_{E})$, for $2\leq d\leq i-1$, a contradiction. \item Suppose $U_{E}=\{s_{1,1}, r_{3,d}\}$, $r_{3,d}$ ($1\leq d \leq k-5$). Then $\gamma(q_{1,4}t_{1,3}|U_{E})=\gamma(t_{1,3}t_{1,4}|U_{E})$, for $1\leq d\leq k-5$, a contradiction. \item Suppose $U_{E}=\{s_{1,1}, t_{2,d}\}$, $t_{2,d}$ ($1\leq d \leq j-2$). Then $\gamma(p_{1,2}s_{1,1}|U_{E})=\gamma(s_{1,1}s_{1,2}|U_{E})$, for $1\leq d\leq j-2$, a contradiction. \item Suppose $U_{E}=\{s_{1,1}, p_{3,d}\}$, $p_{3,d}$ ($1\leq d \leq i-5$). Then $\gamma(q_{3,1}t_{3,1}|U_{E})=\gamma(t_{3,1}t_{3,2}|U_{E})$, for $1\leq d\leq 2a-3$, a contradiction. \end{itemize} \textbf{Case(\rom{3})} When $U_{E}=\{a, b\}$, where $a$ and $b$ are the vertices from the second interior cycle of $FCS_{a,b,c}$. \begin{itemize} \item Suppose $U_{E}=\{u_{1,1}, u_{1,d}\}$, $u_{1,d}$ ($1\leq d \leq k-2$). Then $\gamma(u_{1,1}r_{1,2}|U_{E})=\gamma(u_{1,1}t_{1,j-2}|U_{E})$, for $1\leq d\leq k-2$, a contradiction. 
\item Suppose $U_{E}=\{u_{1,1}, t_{1,d}\}$, $t_{1,d}$ ($1\leq d \leq j-2$). Then $\gamma(u_{1,1}u_{1,2}|U_{E})=\gamma(u_{1,1}r_{1,2}|U_{E})$, for $1\leq d\leq j-2$, a contradiction. \item Suppose $U_{E}=\{u_{1,1}, u_{3,d}\}$, $u_{3,d}$ ($1\leq d \leq k-5$). Then $\gamma(s_{1,i-4}p_{1,i-3}|U_{E})=\gamma(s_{1,i-4}s_{1,i-5}|U_{E})$, for $1\leq d\leq k-5$, a contradiction. \item Suppose $U_{E}=\{u_{1,1}, q_{3,d}\}$, $q_{3,d}$ ($1\leq d \leq j-5$). Then $\gamma(p_{3,1}s_{3,1}|U_{E})=\gamma(s_{3,1}s_{3,2}|U_{E})$, for $1\leq d\leq j-5$, a contradiction. \end{itemize} \textbf{Case(\rom{4})} When $U_{E}=\{a, b\}$, where $a$ and $b$ are the vertices from the third interior cycle of $FCS_{a,b,c}$. \begin{itemize} \item Suppose $U_{E}=\{s_{2,1}, s_{2,d}\}$, $s_{2,d}$ ($2\leq d \leq i-2$). Then $\gamma(u_{2,k-2}u_{2,k-3}|U_{E})=\gamma(u_{2,k-2}r_{2,k-1}|U_{E})$, for $2\leq d\leq i-2$, a contradiction. \item Suppose $U_{E}=\{s_{2,1}, u_{2,d}\}$, $u_{2,d}$ ($1\leq d \leq k-2$). Then $\gamma(s_{2,1}s_{2,2}|U_{E})=\gamma(s_{2,1}p_{2,2}|U_{E})$, for $1\leq d\leq k-2$, a contradiction. \item Suppose $U_{E}=\{s_{2,1}, s_{3,d}\}$, $s_{3,d}$ ($1\leq d \leq i-5$). Then $\gamma(r_{1,k-3}u_{1,k-4}|U_{E})=\gamma(u_{1,k-4}u_{1,k-5}|U_{E})$, for $1\leq d\leq i-5$, a contradiction. \item Suppose $U_{E}=\{s_{2,1}, t_{3,d}\}$, $t_{3,d}$ ($1\leq d \leq j-5$). Then $\gamma(r_{1,k-3}u_{1,k-4}|U_{E})=\gamma(u_{1,k-4}u_{1,k-5}|U_{E})$, for $1\leq d\leq j-5$, a contradiction. \end{itemize} \textbf{Case(\rom{5})} When $U_{E}=\{a, b\}$, where $a$ is in outer $pqr$-cycle and $b$ is in the first interior cycle of $FCS_{a,b,c}$. \begin{itemize} \item Suppose $U_{E}=\{p_{1,1}, s_{1,d}\}$, $s_{1,d}$ ($1\leq d \leq i-2$). Then $\gamma(t_{1,1}q_{1,2}|U_{E})=\gamma(t_{1,1}t_{1,2}|U_{E})$, for $1\leq d\leq i-2$, a contradiction. \item Suppose $U_{E}=\{p_{1,1}, t_{2,d}\}$, $t_{2,d}$ ($1\leq d \leq j-2$). Then $\gamma(u_{2,1}u_{2,2}|U_{E})=\gamma(u_{2,1}r_{2,2}|U_{E})$, for $1\leq d\leq j-2$, a contradiction. 
\item Suppose $U_{E}=\{p_{1,1}, r_{3,d}\}$, $r_{3,d}$ ($1\leq d \leq k-5$). Then $\gamma(q_{1,4}t_{1,3}|U_{E})=\gamma(t_{1,3}t_{1,4}|U_{E})$, for $1\leq d\leq k-5$, a contradiction. \item Suppose $U_{E}=\{p_{1,1}, p_{3,d}\}$, $p_{3,d}$ ($1\leq d \leq i-5$). Then $\gamma(q_{3,1}t_{3,1}|U_{E})=\gamma(t_{3,1}t_{3,2}|U_{E})$, for $1\leq d\leq i-5$, a contradiction. \end{itemize} \textbf{Case(\rom{6})} When $U_{E}=\{a, b\}$, where $a$ is in outer $pqr$-cycle and $b$ is in the second interior cycle of $FCS_{a,b,c}$. \begin{itemize} \item Suppose $U_{E}=\{p_{1,1}, t_{1,d}\}$, $t_{1,d}$ ($1\leq d \leq j-2$). Then $\gamma(u_{1,1}u_{1,2}|U_{E})=\gamma(u_{1,1}r_{1,2}|U_{E})$, for $1\leq d\leq j-2$, a contradiction. \item Suppose $U_{E}=\{p_{1,1}, u_{1,d}\}$, $u_{1,d}$ ($1\leq d \leq k-2$). Then $\gamma(u_{1,k-2}s_{2,i-2}|U_{E})=\gamma(r_{1,k-2}u_{1,k-2}|U_{E})$, for $1\leq d\leq k-2$, a contradiction. \item Suppose $U_{E}=\{p_{1,1}, u_{3,d}\}$, $u_{3,d}$ ($1\leq d \leq k-5$). Then $\gamma(q_{1,4}t_{1,3}|U_{E})=\gamma(t_{1,3}t_{1,4}|U_{E})$, for $1\leq d\leq k-5$, a contradiction. \item Suppose $U_{E}=\{p_{1,1}, q_{3,d}\}$, $q_{3,d}$ ($1\leq d \leq j-5$). Then $\gamma(u_{1,k-2}u_{1,k-3}|U_{E})=\gamma(u_{1,k-3}u_{1,k-4}|U_{E})$, for $1\leq d\leq j-5$, a contradiction. \end{itemize} \textbf{Case(\rom{7})} When $U_{E}=\{a, b\}$, where $a$ is in outer $pqr$-cycle and $b$ is in the third interior cycle of $FCS_{a,b,c}$. \begin{itemize} \item Suppose $U_{E}=\{p_{1,1}, u_{2,d}\}$, $u_{2,d}$ ($1\leq d \leq k-2$). Then $\gamma(s_{2,1}s_{2,2}|U_{E})=\gamma(s_{2,1}p_{2,2}|U_{E})$, for $1\leq d\leq k-2$, a contradiction. \item Suppose $U_{E}=\{p_{1,1}, s_{3,d}\}$, $s_{3,d}$ ($1\leq d \leq i-5$). Then $\gamma(q_{3,1}t_{3,1}|U_{E})=\gamma(t_{3,1}t_{3,2}|U_{E})$, for $1\leq d\leq i-5$, a contradiction. \item Suppose $U_{E}=\{p_{1,1}, s_{2,d}\}$, $s_{2,d}$ ($1\leq d \leq i-2$). Then $\gamma(r_{1,k}r_{1,k-1}|U_{E})=\gamma(r_{1,k-1}r_{1,k-2} |U_{E})$, for $1\leq d\leq i-2$, a contradiction. 
\item Suppose $U_{E}=\{p_{1,1}, t_{3,d}\}$, $t_{3,d}$ ($1\leq d \leq j-5$). Then $\gamma(r_{1,k}r_{1,k-1}|U_{E})=\gamma(r_{1,k-1}r_{1,k-2}|U_{E})$, for $1\leq d\leq j-5$, a contradiction. \end{itemize} \textbf{Case(\rom{8})} When $U_{E}=\{a, b\}$, where $a$ is in first interior cycle and $b$ is in the second interior cycle of $FCS_{a,b,c}$. \begin{itemize} \item Suppose $U_{E}=\{s_{1,1}, t_{1,d}\}$, $t_{1,d}$ ($1\leq d \leq j-2$). Then $\gamma(q_{2,2}t_{2,1}|U_{E})=\gamma(t_{2,1}t_{2,2}|U_{E})$, for $1\leq d\leq j-2$, a contradiction. \item Suppose $U_{E}=\{s_{1,1}, u_{3,d}\}$, $u_{3,d}$ ($1\leq d \leq k-5$). Then $\gamma(q_{3,1}q_{3,2}|U_{E})=\gamma(q_{3,1}t_{3,1}|U_{E})$, for $1\leq d\leq k-5$, a contradiction. \item Suppose $U_{E}=\{s_{1,1}, u_{1,d}\}$, $u_{1,d}$ ($1\leq d \leq k-2$). Then $\gamma(r_{1,k-1}u_{1,k-2}|U_{E})=\gamma(u_{1,k-2}s_{2,i-2}|U_{E})$, for $1\leq d\leq k-2$, a contradiction. \item Suppose $U_{E}=\{s_{1,1}, q_{3,d}\}$, $q_{3,d}$ ($1\leq d \leq j-5$). Then $\gamma(u_{1,k-2}u_{1,k-3}|U_{E})=\gamma(u_{1,k-3}u_{1,k-4}|U_{E})$, for $1\leq d\leq j-5$, a contradiction. \end{itemize} \textbf{Case(\rom{9})} When $U_{E}=\{a, b\}$, where $a$ is in first interior cycle and $b$ is in the third interior cycle of $FCS_{a,b,c}$. \begin{itemize} \item Suppose $U_{E}=\{s_{1,1}, u_{2,d}\}$, $u_{2,d}$ ($1\leq d \leq k-2$). Then $\gamma(p_{1,2}s_{1,1}|U_{E})=\gamma(s_{1,1}s_{1,3}|U_{E})$, for $1\leq d\leq k-2$, a contradiction. \item Suppose $U_{E}=\{s_{1,1}, s_{2,d}\}$, $s_{2,d}$ ($1\leq d \leq i-2$). Then $\gamma(r_{1,k}r_{1,k-1}|U_{E})=\gamma(r_{1,k-1}r_{1,k-2}|U_{E})$, for $1\leq d\leq i-2$, a contradiction. \item Suppose $U_{E}=\{s_{1,1}, t_{3,d}\}$, $t_{3,d}$ ($1\leq d \leq j-5$). Then $\gamma(r_{1,k}r_{1,k-1}|U_{E})=\gamma(r_{1,k-1}r_{1,k-2}|U_{E})$, for $1\leq d\leq j-5$, a contradiction. \item Suppose $U_{E}=\{s_{1,1}, s_{3,d}\}$, $s_{3,d}$ ($1\leq d \leq i-5$). 
Then $\gamma(q_{3,1}t_{3,1}|U_{E})=\gamma(t_{3,1}t_{3,2}|U_{E})$, for $1\leq d\leq i-5$, a contradiction. \end{itemize} \textbf{Case(\rom{10})} When $U_{E}=\{a, b\}$, where $a$ is in second interior cycle and $b$ is in the third interior cycle of $FCS_{a,b,c}$. \begin{itemize} \item Suppose $U_{E}=\{u_{1,1}, u_{2,d}\}$, $u_{2,d}$ ($1\leq d \leq k-2$). Then $\gamma(q_{2,j}q_{2,j-1}|U_{E})=\gamma(q_{2,j-1}q_{2,j-2}|U_{E})$, for $1\leq d\leq k-2$, a contradiction. \item Suppose $U_{E}=\{u_{1,1}, s_{2,d}\}$, $s_{2,d}$ ($1\leq d \leq i-2$). Then $\gamma(r_{1,2}u_{1,1}|U_{E})=\gamma(u_{1,1}t_{1,j-2}|U_{E})$, for $1\leq d\leq i-2$, a contradiction. \item Suppose $U_{E}=\{u_{1,1}, t_{3,d}\}$, $t_{3,d}$ ($1\leq d \leq j-5$). Then $\gamma(s_{3,1}s_{3,2}|U_{E})=\gamma(p_{3,1}s_{3,1}|U_{E})$, for $1\leq d\leq j-5$, a contradiction. \item Suppose $U_{E}=\{u_{1,1}, s_{3,d}\}$, $s_{3,d}$ ($1\leq d \leq i-5$). Then $\gamma(u_{2,1}u_{2,2}|U_{E})=\gamma(u_{2,2}u_{2,3}|U_{E})$, for $1\leq d\leq i-5$, a contradiction. \end{itemize} As a result, we infer that for $FCS_{a,b,c}$, there is no edge metric generator $U_{E}$ such that $|U_{E}|=2$. Therefore, we must have $|U_{E}| \geq 3$, i.e., $edim(FCS_{a,b,c})\geq 3$. Hence, $edim(FCS_{a,b,c})=3$, which concludes the theorem. \end{proof} In terms of a minimum IEMG, we have the following result. \begin{thm} For $a,b,c\geq4$, the graph $FCS_{a,b,c}$ has an IEMG with cardinality three. \end{thm} \begin{proof} To show that, for the zigzag edge coronoid fused with starphene $FCS_{a,b,c}$, there exists an IEMG $U_{E}^{i}$ with $|U_{E}^{i}|=3$, we follow the same technique as used in Theorem $3$.\\\\ Suppose $U_{E}^{i} = \{p_{1,1}, r_{1,1}, r_{2, k}\} \subset V(FCS_{a,b,c})$. Now, by using the definition of an independent set and following the same pattern as used in Theorem $1$, it is simple to show that the set of vertices $U_{E}^{i}= \{p_{1,1}, r_{1,1}, r_{2, k}\}$ forms an IEMG for $FCS_{a,b,c}$ with $|U_{E}^{i}|=3$, which concludes the theorem. 
\\ \end{proof} \section{Conclusions} In this paper, we have studied the minimum vertex and edge metric generators for the zigzag edge coronoid fused with starphene $FCS_{a,b,c}$ structure. For positive integers $a,b,c\geq4$, we have proved that $dim(FCS_{a,b,c})=edim(FCS_{a,b,c})=3$ (a partial response to the question raised recently in \cite{emd}). We also observed that the vertex and edge metric generators for $FCS_{a,b,c}$ are independent. In the future, we will try to obtain other variants of the metric dimension (for instance, the fault-tolerant (vertex and edge) metric dimension, the mixed metric dimension, etc.) for the graph $FCS_{a,b,c}$. \end{document}
Protecting the grid topology and user consumption patterns during state estimation in smart grids based on data obfuscation

Lakshminarayanan Nandakumar, Gamze Tillem, Zekeriya Erkin & Tamas Keviczky

Energy Informatics volume 2, Article number: 25 (2019)

Smart grids promise a more reliable, efficient, economically viable, and environment-friendly electricity infrastructure for the future. State estimation in smart grids plays a pivotal role in system monitoring, reliable operation, automation, and grid stabilization. However, the power consumption data collected from the users during state estimation can be privacy-sensitive. Furthermore, the topology of the grid can be exploited by malicious entities during state estimation to launch attacks without getting detected. Motivated by the essence of a secure state estimation process, we consider a weighted-least-squares estimation carried out batch-wise at repeated intervals, where the resource-constrained clients utilize a malicious cloud for computation services. We propose a secure masking protocol based on data obfuscation that is computationally efficient and successfully verifiable in the presence of a malicious adversary. Simulation results show that the state estimates calculated from the original and obfuscated dataset are exactly the same, while demonstrating a high level of obscurity between the original and the obfuscated dataset both in the time and the frequency domain. Smart grids are widely regarded as a key ingredient to reduce the effects of growing energy consumption and emission levels (Commission 2014b). By 2020, the European Union (EU) aims to replace 80% of the existing electricity meters in households with smart meters (Commission 2014b). Currently, there are close to 200 million smart meters, accounting for 72% of the total European consumers (Commission 2014b).
This smart metering and smart grid rollout can reduce emissions in the EU by up to 9% and annual household energy consumption by similar amounts (Commission 2014b). Despite the environment-friendly and the cost-cutting nature of the smart grid, deployment of smart meters at households actually raises serious data privacy and security concerns for the users. For example, with the advent of machine learning and data mining techniques, occupant activity patterns can be deduced from the power consumption measurement data (Molina-Markham et al. 2010; Lisovich et al. 2010; Kursawe et al. 2011; Zeifman and Roth 2011). Additionally, the configuration of the power network/grid topology can be used by attackers to launch stealth attacks (Liu et al. 2011). Thus, despite the apparent benefits, without convincing privacy and security guarantees, users are likely to be reluctant to take risks and might prefer conventional meters to smart meters. State estimation in smart grids enables the utility providers and Energy Management Systems (EMS) to perform various control and planning tasks such as optimizing power flow, establishing network models, and bad measurement detection analysis. State estimation is a process of estimating the unmeasured quantities of the grid such as the phase angle from the measurement data. The operating range of the state variables determines the current status of the network which enables the operator to perform any necessary action if required. The state of the system, the network topology, and impedance parameters of the grid can be used to characterize the entire power system (Huang et al. 2012). Traditionally, the centralized state estimation technique with the weighted-least-squares method yielded a very accurate result (Rahman and Venayagamoorthy 2017). 
However, now due to the increased complexity and the scale of the grid size, state estimation in a wide area grid network requires multiple smart meters from different localities to share data, some of which could be hosted by a third-party cloud infrastructure (Kim et al. 2011) due to coupling constraints, superior computational resources, greater flexibility, and cost-effectiveness. The problem with the current cloud computation practice is that it operates mostly over plaintexts (Ren et al. 2012; Deng 2017); hence users reveal data and computation results to the commercial cloud (Ren et al. 2012). It becomes a huge problem when the user data contains sensitive information such as the power consumption patterns in smart meters. Moreover, there are strong financial incentives for the cloud service provider to return false results especially if the clients cannot verify or validate the results (Wang et al. 2011). For example, the cloud service provider could simply store the previously computed result and use it as the result for future computation problems to save computational costs. A recent breakthrough in fully homomorphic encryption (FHE) (Gentry and Boneh 2009) has shown that secure computation outsourcing is viable in theory. However, applying this mechanism to compute arbitrary operations and functions on encrypted data is still far from practice due to its high complexity and overhead (Wang et al. 2011). This problem leads researchers to alternative mechanisms for the design of efficient and verifiable secure cloud computation schemes. Existing work and our contributions Numerous privacy challenges related to smart grids are pointed out in the literature in different contexts. Amongst them, the most popular and widely studied is the privacy-preserving billing and data aggregation problem in smart grids (Molina-Markham et al. 2010; Kursawe et al. 2011; Erkin 2015; Ge et al. 2018; Knirsch et al. 2017; Emura 2017; Danezis et al. 2013). 
Our main objective is different from these works since we focus on the privacy concerns of state estimation in smart grids. The existing literature on the smart grid state estimation problem focuses either on the problem of protecting the grid topology (Liu et al. 2011; Rahman and Venayagamoorthy 2017; Deng et al. 2017) or on preserving the power consumption data of the users separately (Kim et al. 2011; Beussink et al. 2014; Tonyali et al. 2016). In Liu et al. (2011), the authors present a new class of attacks called false data injection (FDI) attacks against state estimation in smart grids and show that an attacker can exploit the configuration of a power network to successfully introduce arbitrary errors into the state variables while bypassing existing techniques for bad measurement detection. The authors in Deng et al. (2017) propose a design for a least-budget defense strategy to protect the power system from such FDI attacks. The authors in Rahman and Venayagamoorthy (2017) extend this problem to non-linear state estimation and examine the possibilities of FDI attacks in an AC power network. To preserve the privacy of the user's daily activities, (Kim et al. 2011) exploits the kernel of the electric grid configuration matrix. In Beussink et al. (2014), a data obfuscation approach for an 802.11s-based mesh network is proposed to securely distribute obfuscated values along the routes available via 802.11s. The obfuscation approach in Tonyali et al. (2016) tackles this problem through the advanced encryption standard (AES) scheme for hiding the power consumption data and uses elliptic-curve cryptography (ECC) for authenticating the obfuscation values that are distributed within the advanced metering infrastructure (AMI) network. Contrary to the above work in smart grid state estimation, we focus on protecting both the power consumption data of the users and the grid topology. An open problem pointed out in Efthymiou and Kalogridis (2010); Li et al. (2010); Kim et al.
(2011) is to provide a light-weight implementation of state estimation that can run in a smart meter platform. In this paper, we attempt to solve this problem by proposing Obfuscate(.), an efficient secure masking scheme based on randomization. Our scheme obfuscates the measurement data of a collection of smart meters installed in a particular locality and sends it to the lead smart meter of the respective locality. These lead smart meters, in turn, gather the randomized data and send them to the cloud service provider to perform the required computations. The major contributions of our paper are as follows: We propose Obfuscate(.), the first batch-wise state estimation scheme in smart grids with the goal of protecting both the power consumption data of the consumers and the grid topology. Our scheme is based on secure masking through obfuscated transformation and is proven to be efficient with no major computational overhead to the users. We evaluate the performance of Obfuscate(.) with a real-time hourly power consumption dataset of different smart meters. We use the dataset under the assumption that these meters are connected to an IEEE-14 bus test grid system and a fully measured 5 bus power system. Furthermore, we evaluate the illegibility of the obfuscated dataset with respect to the original dataset. In the rest of the paper, we first discuss the necessary prerequisites on state estimation in smart grids and the adversarial models in "Background information" section. In "Secure state estimation with Obfuscate(.)" section, we explain Obfuscate(.) in detail. In "Analyses of Obfuscate(.)" section, we present the correctness, privacy, verification and complexity analyses of our scheme. In "Simulation results" section, we present the simulation results and we conclude the paper in "Conclusions and future work" section.
Static state estimation in electric grids

The static state estimation (SSE) in smart grids is a well-established problem with well-known techniques that rely on a set of measurement data to estimate the states at regular time intervals (Schweppe and Wildes 1970; Schweppe and Rom 1970; Schweppe 1970). The state vector \(x = [x_{1}, x_{2}, \cdots x_{n}]^{T} \in \mathbb {R}^{n}\) represents the phase angles at each electric branch or system node, and the measurement data \(z \in \mathbb {R}^{m}\) denotes the power readings of the smart meters. The state vector x and the measurement data z are related by a nonlinear mapping function h such that \(z = h(x) + e\), where the sensor measurement noise e is a zero-mean Gaussian noise vector. Typically, for state estimation a linear approximation of this equation is used (Kim et al. 2011; Liu et al. 2011; Gera et al. 2017) as \(z = \mathbf{H} x + e\), where \(\mathbf {H} \in \mathbb {R}^{m \times n}\) is the full column rank \((m > n)\) measurement Jacobian matrix determined by the grid structure and line parameters (Liang et al. 2017). The matrix H is known as the grid configuration or the power network topology matrix (Kim et al. 2011; Liang et al. 2017; Gera et al. 2017). In an electric grid \(m \gg n\) (Zimmerman et al. 2009) and the best unbiased linear estimate of the state (Wood and Wollenberg 1996) is given by $$ \hat{x} = \left(\mathbf{H}^{T} W \mathbf{H}\right)^{-1} \mathbf{H}^{T} W z, $$ where \(W^{-1} \in \mathbb {R}^{m \times m}\) represents the covariance matrix of the measurement noise. \(W^{-1}\) is taken to be a diagonal matrix \(W^{-1} = \sigma^{2} I\) (Wood and Wollenberg 1996), so Eq. 1 reduces to $$ \hat{x} = \left(\mathbf{H}^{T} \mathbf{H}\right)^{-1} \mathbf{H}^{T} z. $$ The SSE technique reduces the computational complexity of performing state estimation in smart grids, where the estimates are usually updated on a periodic basis (Huang et al. 2012).
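As a quick numerical illustration of the reduced estimator above, here is a minimal sketch; the matrix H, the noise level, and the sizes are random illustrative assumptions, not the paper's grid data:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 5                                 # m >> n, as in a real grid
H = rng.normal(size=(m, n))                  # measurement Jacobian (topology)
x_true = rng.normal(size=n)                  # true state (phase angles)
z = H @ x_true + 0.01 * rng.normal(size=m)   # noisy meter readings

# With W^{-1} = sigma^2 I, the WLS estimate reduces to ordinary least
# squares: x_hat = (H^T H)^{-1} H^T z.
x_hat = np.linalg.lstsq(H, z, rcond=None)[0]
x_hat_normal_eq = np.linalg.inv(H.T @ H) @ H.T @ z
```

`np.linalg.lstsq` is numerically preferable to forming the normal equations explicitly; both yield the same estimate here.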
Measurement devices in current transmission systems are installed specifically catering to the needs of SSE (Krause and Lehnhoff 2012). Recently evolved phasor measurement units (PMUs) are able to measure voltage and line current phasors with high accuracy and sampling rates. However, the deployment of a large number of PMUs across the system requires significant investments since the average overall cost per PMU ranges from $40k to $180k (Department of Energy 2014). Hence SSE will remain an important technique to estimate the state variables at medium and low voltage levels (Cosovic and Vukobratovic 2017). Practically, state estimation is run only every few minutes or only when a significant change occurs in the network (Cosovic and Vukobratovic 2017; Monticelli 2000).

Bad measurement detection (BMD)

Bad measurements may be introduced due to meter failures or malicious attacks. They may affect the outcome of state estimation and can mislead the grid control algorithms, possibly causing catastrophic consequences such as blackouts in large geographical areas. For example, a large portion of the Midwest and Northeast United States and Ontario, Canada, experienced an electric power blackout affecting a population of about 50 million (n.a. 2003). The power outage cost was about $80bn in the USA, and the utility operators usually amortize it by increasing the energy tariff, which is unfortunately transferred to consumer expenses (Salinas and Li 2016). Thus, BMD is vital to ensure smooth and reliable operations in the grid. The most common technique to detect bad measurements is to calculate the L2-norm \( \left \Vert z - \mathbf {H}\>\hat {x} \right \Vert \), and if \( \left \Vert z - \mathbf {H} \> \hat {x} \right \Vert > \tau \), where τ is the threshold limit, then the measurement z is considered to be bad.
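The residual check just described is a one-liner in practice; a minimal sketch, where the sizes, the injected error, and the threshold tau are illustrative assumptions rather than values from the paper:

```python
import numpy as np

def bad_measurement(z, H, x_hat, tau):
    """Flag the measurement vector as bad if ||z - H x_hat|| exceeds tau."""
    return np.linalg.norm(z - H @ x_hat) > tau

rng = np.random.default_rng(1)
H = rng.normal(size=(12, 4))                 # illustrative topology matrix
x = rng.normal(size=4)
z_good = H @ x + 0.01 * rng.normal(size=12)  # consistent noisy readings
z_bad = z_good.copy()
z_bad[3] += 5.0                              # a single injected bad reading

x_hat = np.linalg.lstsq(H, z_good, rcond=None)[0]
tau = 1.0                                    # assumed threshold, not from the paper
```

Normal measurements produce a small residual and pass; the tampered vector deviates far from the estimated hyperplane and is flagged.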
The reason is that, intuitively, normal sensor measurements yield estimates closer to their actual values, while abnormal ones deviate the estimated values away from their true values. This inconsistency check is used to differentiate the good and the bad measurements (Liu et al. 2011). However, this is not always the case, as exposing H could make the grid vulnerable to stealth attacks (Liu et al. 2011). Liu, Reiter and Ning proved that a malicious entity can exploit the row and column properties of H, when exposed, and launch false data injection attacks without getting detected (Liu et al. 2011). The H matrix includes the arrangement of loads or generators, transmission lines, transformers, and the status of system devices, and is an integral part of state estimation, security, and power market design (Gera et al. 2017). Thus, there is a strong need to protect not just the power consumption data but also the power network topology during state estimation.

Cryptographic preambles

To understand the privacy goals of our problem, we state the following definitions: Obfuscation (Shoukry et al. 2016) is the procedure of transforming the data into masked data through randomization and performing the necessary operations on this masked obfuscated data. The obfuscated data can be unmasked by inverting the randomized transformation using the respective private keys. Semi-honest Adversary (Lindell and Pinkas 2009) is an adversary who correctly follows the protocol specification but keeps track of all the information exchanged to possibly analyze it together with any other public information to leak sensitive data. It is also known as an honest-but-curious or passive adversary. Malicious Adversary (Lindell and Pinkas 2009) is an adversary who can arbitrarily deviate from the protocol specification. Here the attacks are no longer restricted to eavesdropping since the adversary might actually inject or tamper with the data provided. It is also known as an active adversary.
Secure state estimation with Obfuscate(.)

In this section, we explain our secure state estimation protocol Obfuscate(.) along with the setup and the threat model. Let an area \(\mathcal {A}\) consist of two localities, denoted by L1 and L2, as shown in Fig. 1 (proposed solution framework). The symbol Sij refers to the smart meter installed at the household j situated in locality Li and \(X_{i} \in \mathbb {R}^{n_{i} \times T}\) denotes the state sequences of all the smart meters installed in Li for a given batch of time duration T. The electric grid configuration matrix of Li is represented as Hi and the coupling matrices between Li and Lj are denoted as Hij and Hji respectively. The symbol [·] denotes the obfuscation of a vector or matrix. For example, [Zi] represents the obfuscated value of the matrix \(Z_{i} \in \mathbb {R}^{m_{i} \times T}\) where mi is the number of smart meters in Li. The participating entities in our design are as follows: Utility Provider \(\mathcal {U}\): provides utility services to \(\mathcal {A}\) and has access to the grid configuration matrix H. \(\mathcal {U}\) generates all the keys to initiate Obfuscate(.) and distributes a selected portion of these keys to the smart meters at each locality through a private channel to carry out obfuscation. \(\mathcal {U}\) is a decision-making entity performing any necessary action after receiving the state variables at regular intervals. Lead Smart Meter Si1: receives the randomized masked data from the other meters connected to it and obfuscates the dynamics of the power consumption pattern of all the meters in its locality. It then sends the data to the cloud for state estimation. The lead meter at every locality is assumed to be a trusted node in the local network. A similar entity was proposed in Kim et al. (2011) where the lead meter is connected to all the meters based on the mesh topology network.
The lead meter, for instance, could be the local distributed system operator (DSO) of a particular locality. Other Smart Meters Sij (∀j≠1) are all the other meters in Li. They obfuscate their measurement data and send it to the lead meter Si1 to avoid leaking information about their respective consumptions to any potential eavesdropper. Cloud \(\mathcal {C}\): is computationally highly efficient and hence provides computation services for \(\mathcal {A}\), performing state estimation. As pointed out before, since most of the current cloud computations are performed in plaintext, modeling the cloud as a malicious entity is crucial in practice.

Threat model

The smart meters in Li and Lj, where j≠i, are considered to be semi-honest to each other, i.e., clients living in different localities are curious about each other's consumption data. This means that people who are situated geographically apart may try to learn information about people in other localities such as energy usage consumption patterns, pricing, etc. Also, households living in the same locality are modeled to be honest-but-curious. Although it is natural for people living in the same locality, next to each other, to have at least some prior knowledge about each other's activity patterns, it is not acceptable if the neighbors can deduce the usage of a particular appliance at a given time-stamp by applying techniques such as non-intrusive load monitoring (Zeifman and Roth 2011) to the original power consumption data. Thus, all the smart meters in a particular locality securely mask their consumption data before sending it to their respective lead meter. Unlike the problem of protecting the user power consumption data from the utility provider for billing, data aggregation and other statistical purposes (Kursawe et al. 2011; Erkin 2015; Ge et al. 2018; Knirsch et al. 2017; Emura 2017; Danezis et al.
2013), here we study the problem of carrying out secure state estimation by outsourcing the data to an untrusted third party. These state variables, with high accuracy, are essential to the utility provider for effective decision-making and providing good quality services such as demand forecasting, optimal power flow, and contingency analysis. Hence \(\mathcal {U}\) here is not considered to be an adversarial entity and is non-colluding in nature. The utility provider's main objective is to earn the consumer trust by protecting their privacy and encouraging more user participation to install smart meters for business and commercial purposes. Investment in smart metering technology is directly impacted by customer trust in the utility operators (Commission 2014a). To protect the privacy of consumers, utility providers make use of secure communication channels and databases with access control (Kim et al. 2011). In addition, with the EU's newly devised General Data Protection Regulation (GDPR), energy companies are liable to pay large penalties of up to €20m if customer data are misused (Hunt 2017). One might argue about the need to apply a similar compliance factor to the cloud service provider. However, the major problem specific to cloud computation services is that, with the current technology, most of the computations in the cloud are performed in plaintext (Ren et al. 2012; Deng 2017). Arbitrary computations on encrypted data using FHE schemes are still under active research for effective implementation (Tebaa and Hajji 2014). Providing data in the clear makes the cloud vulnerable to both active and passive attacks. According to the latest Microsoft security intelligence report (Simos 2017), the number of attacks in the cloud environment has increased by 300%, which further justifies considering the cloud as a malicious entity in our problem setup.

Obfuscate(.)
The aim of our scheme is to protect the privacy of the power consumption data of the consumers Z and the grid configuration matrix H during state estimation, while outsourcing these pieces of information to an untrusted malicious third party cloud. Our design goals are as follows: Input/Output Privacy: Neither the input data sent nor the output data computed by the cloud should be inferred by the cloud. Correctness: Any cloud server faithfully following the protocol must be able to compute an output that can be verified successfully. Verification: If the cloud server acts maliciously, then it should not be able to pass the utility-side verification test with a high probability. Efficiency: The computational overhead for the clients (\(\mathcal {U}\) and Sij) should be minimal. Nevertheless, it is important to note that local smart meters in the localities cannot estimate the states on their own due to the coupling constraints (see Eq. 3). The efficiency criterion is mainly considered to exploit the nearly unlimited computational resources of the cloud. Furthermore, since the smart meters in different neighborhoods are semi-honest to each other, the designed protocol should also guarantee a very low probability of a particular neighbor inferring any sensitive information through eavesdropping and combining any other publicly available information of the localities.

Proposed scheme

Consider the proposed scheme depicted in Fig. 1. The equation \(z = \mathbf{H}x + e\) can be rewritten as: $$ \left[\begin{array}{l} Z_{1} \\ Z_{2} \\ \end{array}\right] \> = \>\underbrace{ \left[\begin{array}{ll} H_{1} & H_{12} \\ H_{21} & H_{2} \\ \end{array}\right]}_{\mathbf{H}} \>\left[\begin{array}{l} X_{1} \\ X_{2} \\ \end{array}\right] \> + \>\left[\begin{array}{l} e_{1} \\ e_{2} \\ \end{array}\right], $$ where \(H_{1} \in \mathbb {R}^{m_{1} \times n_{1}}\) and \(H_{2} \in \mathbb {R}^{m_{2} \times n_{2}}\) are the grid configuration matrices of L1 and L2.
The matrix \(H_{12} \in \mathbb {R}^{m_{1} \times n_{2}}\) and \(H_{21} \in \mathbb {R}^{m_{2} \times n_{1}}\) denote the coupling matrices. The measurement data and the states of Locality Li are represented by \(Z_{i} \in \mathbb {R}^{m_{i} \times T}\) and \(X_{i} \in \mathbb {R}^{n_{i} \times T}\) respectively. The solution to Eq. 3 is given by Eq. 2. In general, the configuration of the power network H is not time-varying during the state estimation process (Schweppe and Wildes 1970; Schweppe and Rom 1970; Schweppe 1970; Wood and Wollenberg 1996), and hence the matrix H+=(HTH)−1HT can be pre-computed during the offline stage. Typically, this information is computed during the creation of the power network by the utility provider using a trusted party. Hence, the state estimation can be recast and reduced into \(\hat {X} = \mathbf {H}^{+} Z\), where \(\hat {X} \in \mathbb {R}^{n \times T}\), \(Z \in \mathbb {R}^{m \times T}\) and \(\textbf {H}^{+} \in \mathbb {R}^{n \times m}\) with m=m1+m2 and n=n1+n2. Thus, our privacy-aware state estimation problem can be recast into solving a matrix multiplication securely. The matrix H+ can be rewritten block-wise as follows: $$ \begin{aligned} \mathbf{H}^{+} \> &= \>\left(\left[\begin{array}{ll} H_{1} & H_{12} \\ H_{21} & H_{2} \end{array}\right]^{T} \left[\begin{array}{ll} H_{1} & H_{12} \\ H_{21} & H_{2} \end{array}\right] \right)^{-1} \>\left[\begin{array}{ll} H_{1} & H_{12} \\ H_{21} & H_{2} \end{array}\right]^{T} \> = \>\left[\begin{array}{ll} F_{1} & F_{12} \\ F_{21} & F_{2} \\ \end{array}\right], \end{aligned} $$ where \(F_{1} \in \mathbb {R}^{n_{1} \times m_{1}}, F_{2}\in \mathbb {R}^{n_{2} \times m_{2}}, F_{12} \in \mathbb {R}^{n_{1} \times m_{2}}\) and \(F_{21} \in \mathbb {R}^{n_{2} \times m_{1}}\). The exact expression of the F matrix is omitted here due to space constraints. 
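Although the closed form of F is omitted, the four blocks can simply be read off a numerically computed \(\mathbf{H}^{+}\); a minimal sketch with illustrative dimensions (not taken from the paper's test systems):

```python
import numpy as np

rng = np.random.default_rng(2)
m1, m2, n1, n2, T = 8, 10, 3, 4, 6        # illustrative sizes only
H = rng.normal(size=(m1 + m2, n1 + n2))   # stacked topology with couplings
Z = rng.normal(size=(m1 + m2, T))         # batch of T measurement samples

Hp = np.linalg.inv(H.T @ H) @ H.T         # H^+ = (H^T H)^{-1} H^T
F1, F12 = Hp[:n1, :m1], Hp[:n1, m1:]      # read the four blocks off H^+
F21, F2 = Hp[n1:, :m1], Hp[n1:, m1:]
Z1, Z2 = Z[:m1], Z[m1:]

# Coupled block-wise estimates: each locality's states need both Z1 and Z2
X1_hat = F1 @ Z1 + F12 @ Z2
X2_hat = F21 @ Z1 + F2 @ Z2
```

Stacking the two block estimates reproduces the full estimate \(\hat{X} = \mathbf{H}^{+} Z\), which is exactly why neither locality can estimate its states alone.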
Notice from \(\hat {X} = \mathbf {H}^{+} Z\) that it is not possible for the lead meter in each locality to carry out the estimation process locally due to the coupling constraints generated by the matrices H12 and H21. Namely, the state estimate \(\hat {X}_{1}\) also depends on the consumption data of the other locality Z2 and vice versa. Thus, the lead meter collects all the obfuscated measurement data from the other meters in its locality and sends it to the cloud. The matrix H+ is obfuscated by the utility provider and sent to the cloud. However, it is important that the matrix H+ is not completely randomized using a single key but is randomized block-wise with different keys for different blocks (see Eq. 4). The estimation problem can be further broken down into $$ \left[\begin{array}{l} \hat{X_{1}} \\ \hat{X_{2}} \\ \end{array}\right] \> = \>\left[\begin{array}{l} F_{1}\, Z_{1} + F_{12} \, Z_{2} \\ F_{21}\, Z_{1} + F_{2} \, Z_{2} \\ \end{array}\right]. $$ Let us denote the matrix $$ Y \> = \>\left[\begin{array}{llll} F_{1} Z_{1} & F_{12} Z_{2} \\ F_{21} Z_{1} & F_{2} Z_{2} \\ \end{array}\right] \> = \>\left[\begin{array}{llll} Y_{1} & Y_{12} \\ Y_{21} & Y_{2} \\ \end{array}\right]. $$ Using Eq. 5 for estimating the states, we solve the matrix multiplication of each blocks in Eq. 6 privately and then perform matrix addition. The matrix multiplication is a fundamental problem in cryptography and several solutions have been proposed to solve it (Atallah and Frikken 2010; Atallah et al. 2012; Fiore and Gennaro 2012; Zhang and Blanton 2014). However, these protocols are not designed for the cloud environment and hence do not consider the computational asymmetry of the cloud server and the client. Another drawback is that these protocols use advanced cryptography to encrypt the input and output dataset, which makes them unsuitable for the computation on the cloud with large datasets due to high overhead. 
Furthermore, the verification of the result, which is an essential requirement in a malicious cloud setting, is not considered in these protocols (Kumar et al. 2017). A secure multiparty computation (MPC) approach was considered in Dreier and Kerschbaum (2011); López-Alt et al. (2012), where the computation is divided among multiple parties without allowing any participating entity to access another individual's private information. However, this approach is not feasible for our problem setup since all the parties are required to have a comparable computing capability. Also, in the MPC approach, the result verification is often troublesome since it requires expensive zero-knowledge proofs (Saia and Zamani 2015; Goldwasser et al. 2015). Recently, a privacy-preserving, verifiable and efficient outsourcing algorithm for matrix multiplication to a malicious cloud was proposed in Kumar et al. (2017) utilizing linear transformation techniques. In our paper, we adopt a similar approach to the one prescribed in Kumar et al. (2017) to outsource the multiplication of the block matrices in Eq. 6 securely to the cloud. However, Obfuscate(.) is not a straightforward application of the protocol in Kumar et al. (2017). Kumar et al. (2017) considers only a single client and a cloud setup, where the client performs the key generation, problem transformation, re-transformation and verification on his/her own. In our scheme, there are multiple smart meters installed in different neighborhoods. The keys cannot be generated locally by the individual households because the smart meters have access only to their respective consumption data, which forms only a part of the information required for state estimation. Hence, besides the key generation, we also propose KeyDist, a key distribution scheme (shown in Fig. 2) used by \(\mathcal {U}\) to distribute keys to the smart meters. Obfuscate(.) comprises eight subalgorithms, which are explained in the rest of this section.
(Fig. 2: a triangular key distribution scheme for a locality Li.) The KeyGen(\(1^{\lambda}\), m1, n1) algorithm (Algorithm 1) takes as input the security parameter λ and generates a total of n1+m1 non-zero random numbers, each of bit size λ. These numbers are used to generate the key matrices of size \(\mathbb {R}^{m_{1}}\) and \(\mathbb {R}^{n_{1}}\). Table 1 shows all the keys that are generated per batch by \(\mathcal {U}\). After KeyDist() (Algorithm 2), the matrix transformation ψK() is carried out by the respective entities using their respective keys K. For every new input matrix, ψK() is invoked to securely mask the input through a linear transformation in order to preserve privacy. This operation dominates the client-side computation cost, but is not significant compared to the computations performed by the cloud. The matrix transformations for given input matrices F1 and Z1 are given by Algorithms 3 and 4, respectively. Table 2 summarizes the complete matrix transformation protocol run per batch. Next, the obfuscated matrix H+ and the masked measurement matrix Zi are sent by \(\mathcal {U}\) and Si1, respectively, to \(\mathcal {C}\) to perform the Computeψ([F1],[Z1]) algorithm given in Algorithm 5. This algorithm performs the computation on the cloud server. It computes the masked matrix product as \(\psi (\left [F_{1}\right ], \left [Z_{1}\right ]) \> = \> (D_{1} F_{1} A_{1}) \cdot \left (A_{1}^{-1} Z_{1} D_{2}\right)\). Table 3 shows the Computeψ() protocol run by the cloud server per batch for estimating the state samples. Upon computing the Y matrix, the cloud sends the computed result to the utility provider \(\mathcal {U}\) to execute the verification step. The Verify([Y], γ) algorithm computes \(Q = ([F]\cdot([Z]\cdot\gamma)) - ([Y]\cdot\gamma)\), where γ is a binary key vector of size T, i.e. \(\gamma \in \{0,1\}^{T}\).
The algorithm introduces the binary column key γ to minimize the complexity of the computation, since matrix-vector multiplication costs only quadratic time. The verification protocol for Li is given in Algorithm 6. It is important to note that the verification step serves as the BMD test in our setup and is run for all the four block matrices given by Eq. 6. Table 4 presents the verification protocol run by \(\mathcal {U}\) per batch. The results are accepted only if the cloud server passes all the four verification tests. If the verification is positive, then it means that no false data has been injected into the measurements by the cloud, which indicates the absence of bad measurements in the network. After positive verification, the Unmask(Y, K) algorithm (Algorithm 7) is run by \(\mathcal {U}\). This algorithm returns the original values of the states \(\hat {X}\) by de-randomizing Y using the respective keys K. Table 5 summarizes the Unmask() protocol run by \(\mathcal {U}\) for all the four block matrices. Once all the four blocks of Y are unmasked, \(\mathcal {U}\) carries out the protocol given in Algorithm 8 to reach the final state estimates.

Analyses of Obfuscate(.)

In this section, we show that Obfuscate(.) complies with the design goals stated in the "Secure state estimation with Obfuscate(.)" section, which are correctness, privacy, verifiability, and efficiency.

Correctness analysis

If the smart meters, utility provider, and the cloud correctly follow Obfuscate(.) as per the protocol, then Obfuscate(.) produces correct results for all the four matrix multiplications. This follows from a simple proof: \(\mathcal {U}\) first transforms the matrix F1 into [F1]=D1F1A1, and the lead smart meter in L1 transforms the matrix Z1 via \(Z^{\prime }_{1} = A_{1}^{-1} Z_{1}\) into \(\left [Z_{1}\right ] = Z^{\prime }_{1} D_{2} = A_{1}^{-1} Z_{1} D_{2}\).
The cloud server computes \(\left [Y_{1}\right ] = \left [F_{1}\right ] \cdot \left [Z_{1}\right ] = (D_{1} F_{1} A_{1}) \cdot \left (A_{1}^{-1} Z_{1} D_{2}\right) = D_{1} Y_{1} D_{2}.\) Then, in the de-randomization step, \(\mathcal {U}\) computes Y1, where \(Y_{1} = D_{1}^{-1} \left [Y_{1}\right ] D_{2}^{-1} = F_{1} \cdot Z_{1}\). □ The above analysis holds for all the Computeψ(.) steps presented in Table 3, thereby proving the correctness of Obfuscate(.).
Privacy analysis
Input privacy: Since \(\mathcal {C}\) has access only to the masked input matrices [F] and [Z], it cannot retrieve the original input matrices F and Z. Furthermore, the keys generated as in Table 1 do not leak any information about the original input, since they are completely random and independent of the topology and the power consumption data. This can be seen from the following proof: the key matrices A1 and A2 are diagonal matrices with each element being a random real number λ bits long. There are \(2^{m_{i} \lambda }\) possibilities for the matrix Ai, where i∈{1,2}. For the diagonal matrices D1 and D2, there are in total \(2^{n_{1} \lambda + T \lambda }\) possibilities. Thus, for a single block F1 in Y, there are a total of \(2^{(m_{1} + n_{1} + T) \lambda }\) possible choices of key matrices, which is exponential in (m1,n1,T). □
For example, consider a practical scenario where a locality has m1=1000, n1=600, T=400, for which there are 2^{2000λ} possibilities. Thus, as m1, n1, and T increase, the cloud cannot recover any meaningful information.
Output privacy: As in the input privacy analysis, the output result is also protected. The resulting obfuscated matrix does not leak any information to \(\mathcal {C}\), even if it records all the computed results.
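The masking and unmasking round trip described above can be checked numerically. The following sketch uses pure Python with exact rationals and hypothetical tiny dimensions (the paper uses 96-bit keys and much larger matrices); it masks F1 and Z1 with random diagonal keys, lets the "cloud" multiply the masked matrices, and unmasks the result:

```python
from fractions import Fraction as Fr
import random

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def diag_times(d, M):   # diag(d) @ M: scale row i by d[i]
    return [[d[i] * x for x in row] for i, row in enumerate(M)]

def times_diag(M, d):   # M @ diag(d): scale column j by d[j]
    return [[x * d[j] for j, x in enumerate(row)] for row in M]

LAMBDA = 16  # tiny key size for the demo; the paper uses 96 bits
def rand_diag(n):
    # non-zero random diagonal key entries, LAMBDA bits each
    return [Fr(random.getrandbits(LAMBDA) | 1) for _ in range(n)]

m1, n1, T = 3, 2, 4  # hypothetical sizes
F1 = [[Fr(random.randint(-5, 5)) for _ in range(n1)] for _ in range(m1)]
Z1 = [[Fr(random.randint(-5, 5)) for _ in range(T)] for _ in range(n1)]

d1, a1, d2 = rand_diag(m1), rand_diag(n1), rand_diag(T)
a1_inv = [1 / a for a in a1]

F1_masked = times_diag(diag_times(d1, F1), a1)      # [F1] = D1 F1 A1   (utility)
Z1_masked = times_diag(diag_times(a1_inv, Z1), d2)  # [Z1] = A1^-1 Z1 D2 (lead meter)

Y_masked = matmul(F1_masked, Z1_masked)             # cloud: D1 (F1 Z1) D2
# unmask: Y1 = D1^-1 [Y1] D2^-1
Y1 = [[Y_masked[i][j] / (d1[i] * d2[j]) for j in range(T)] for i in range(m1)]

assert Y1 == matmul(F1, Z1)
```

The inner key A1 cancels in the product, so the cloud only ever sees row- and column-scaled data, while the utility recovers F1·Z1 exactly.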
Besides, for every batch, \(\mathcal {U}\) generates new keys as given in Table 1, which makes our protocol resistant to known-plaintext attacks (KPA) and chosen-plaintext attacks (CPA) (Kumar et al. 2017).
Verification analysis
Since, in a malicious threat model, the cloud server may deviate from the actual instructions of the given protocol, we equip Obfuscate(.) with a result verification algorithm to validate the correctness of the result. The proof that a wrong or invalid result never passes the verification step follows from the total probability theorem, as in Kumar et al. (2017); Lei et al. (2013). If the cloud produces the correct result Y1, then Q1γ=([F1]·[Z1]−[Y1])γ=[0,0,⋯,0]^T. If the cloud produces a wrong result, then Q1=[F1][Z1]−[Y1]≠0, i.e. there exists at least one row of Q1 that is not identically zero. Write \(Q_{1} \gamma = [q_{1},\cdots, q_{m_{1}}]^{T}\) and let row i of Q1 be nonzero, where $$q_{i} = \sum_{j=1}^{T} Q_{1i,j} \cdot \gamma_{j} = Q_{1i,1} \cdot \gamma_{1} + \cdots + Q_{1i,k} \cdot \gamma_{k} + \cdots + Q_{1i,T} \cdot \gamma_{T}.$$ At least one element of this row is nonzero; let Q1i,k≠0 and write qi=Q1i,k·γk+Γ, where \(\Gamma = \sum _{j=1}^{T} Q_{1i,j} \gamma _{j} - Q_{1i,k} \gamma _{k}\). Applying the total probability theorem yields $$\begin{array}{*{20}l} &\Pr (q_{i} = 0) = \Pr [(q_{i} = 0) | (\Gamma = 0)] \Pr [\Gamma = 0] + \Pr [(q_{i} = 0) | (\Gamma \neq 0)] \Pr [\Gamma \neq 0], \\ &\Pr [(q_{i} = 0) | (\Gamma = 0)] = \Pr [\gamma_{k} = 0] = 1/2, \\ &\Pr [(q_{i} = 0) | (\Gamma \neq 0)] \leq \Pr [\gamma_{k} = 1] = 1/2. \end{array} $$ Substituting the conditional probabilities into the total-probability expansion, we derive $$ \begin{aligned} \Pr [(q_{i} = 0) ] &\leq 1/2 \Pr [\Gamma = 0] + 1/2 \Pr [\Gamma \neq 0],\\ \Pr [(q_{i} = 0) ] &\leq 1/2 (1 - \Pr [\Gamma \neq 0]) + 1/2 \Pr [\Gamma \neq 0], \\ \Pr [(q_{i} = 0) ] &\leq 1/2. \end{aligned} $$ If the verification process is run p times, then Pr[(qi=0)]≤1/2^p.
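This check is essentially a Freivalds-style probabilistic verification. A minimal sketch with hypothetical small matrices; each round compares F·(Z·γ) with Y·γ for a fresh random binary γ, so a wrong Y survives each round with probability at most 1/2:

```python
import random

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def verify(F, Z, Y, p=20):
    """Accept Y as F @ Z only if it passes p randomized checks."""
    T = len(Y[0]) if Y else 0
    for _ in range(p):
        gamma = [random.randint(0, 1) for _ in range(T)]
        # never form F @ Z: two matrix-vector products per round suffice
        if matvec(F, matvec(Z, gamma)) != matvec(Y, gamma):
            return False
    return True

F = [[1, 2], [3, 4]]
Z = [[5, 6, 7], [8, 9, 10]]
Y_good = [[21, 24, 27], [47, 54, 61]]   # the true product F @ Z
Y_bad = [[21, 24, 27], [47, 54, 62]]    # one corrupted entry

assert verify(F, Z, Y_good)
# the corrupted result survives all 40 rounds only with probability 2^-40
assert not verify(F, Z, Y_bad, p=40)
```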
□ The value of p reveals the trade-off between computational efficiency and verifiability. Theoretically, p≥80 is sufficient to ensure a negligible probability for the cloud to pass the verification test despite producing a wrong result. However, in practice, p=20 is acceptable, since 1/2^20 corresponds to roughly one in a million (Kumar et al. 2017; Lei et al. 2013): the verification process fails to detect a wrong result about once in a million runs.
Efficiency analysis
In this section, we carry out a computational complexity analysis to prove the efficiency of Obfuscate(.). The computational cost of each step in Obfuscate(.) is analyzed in Table 6. The KeyDist() protocol introduces an additional communication cost of O(m), since \(\mathcal {U}\) distributes the keys aij to all the smart meters through a private channel for obfuscating their measurement data. From Table 6, it is clear that the computations performed on the client side are substantially lower than those of the cloud server. Due to the diagonal structure of the key matrices, the problem transformation step given by Algorithms 3 and 4 costs only O(nm+mT). The asymptotic complexity of the client-side computation is only O(nm+mT+nT) (Kumar et al. 2017). Thus, outsourcing the computation yields a performance gain of \(O\left (\frac {1}{n} + \frac {1}{m} + \frac {1}{T}\right)\). Clearly, as n, m, and T increase, the clients achieve a higher performance gain. In particular, with the increase in the number of smart meters m targeted by the EU for 2020 (Commission 2014b), Obfuscate(.) will significantly reduce the computational overhead of its clients in the long run.
Table 6 Computation complexity analysis of the protocol
Simulation results
In this section, we evaluate the degree of obscurity of Obfuscate(.) using two case studies: a fully measured 5-bus system and the IEEE 14-bus system with real-time power consumption data. We start with a fully measured 5-bus system; the structure of the H matrix for this system can be found in the Appendix.
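Before turning to the simulations, the client/cloud cost comparison above can be made concrete with the hypothetical locality sizes m=1000, n=600, T=400 used earlier in the privacy example:

```python
# Hypothetical locality sizes: m meters, n states, T samples per batch
m, n, T = 1000, 600, 400

client_ops = n*m + m*T + n*T   # masking [F], [Z] and unmasking Y (diagonal keys)
cloud_ops = n*m*T              # the outsourced dense matrix multiplication

gain = client_ops / cloud_ops
assert client_ops == 1_240_000
assert cloud_ops == 240_000_000
# the ratio equals 1/n + 1/m + 1/T exactly, matching O(1/n + 1/m + 1/T)
assert abs(gain - (1/n + 1/m + 1/T)) < 1e-12
```

At these sizes the client performs roughly 0.5% of the work of the cloud, and the fraction shrinks further as m, n, and T grow.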
In this case, the total number of meters is m=10 and the number of state variables is n=4. We consider m1=4, m2=6, n1=n2=2, and a batch duration of 13 hours. Note that, in practice, smart meters can sample at much higher frequencies (Chen et al. 2011). Research on disaggregating electricity load has been conducted on smart meter readings with a fine frequency granularity between 1 Hz and 1 MHz (Chen et al. 2011). The authors in Kim et al. (2011) collected real-time power consumption data of both residential and office spaces with a sampling rate of 1 Hz. Hence, in practice, the number of data points collected per batch T could be on the order of tens of thousands. However, due to the unavailability of such high-frequency measurement data, we restrict the size of T: since we had access only to hourly power consumption data, we set T=13. Although the size of the matrix \(Z \in \mathbb {R}^{m \times T}\) is smaller than in practice, the state estimation still cannot be performed locally due to the coupling constraints between the two localities. Upon inspecting the power consumption values of all the meters, we found that these values are mostly 4 to 5 decimal digits long. To mask this data securely, we use a key length of λ = log2(10^5) + 80 ≈ 16 + 80 = 96 bits. The additional 80 bits ensure that Obfuscate() follows the National Institute of Standards and Technology (NIST) recommendations (Footnote 2) to securely mask the data. Based on present computational capabilities, it is not feasible to break our scheme, which demonstrates its robustness against a malicious adversary. Figure 4 shows the illegibility of Obfuscate(.) for a fully measured 5-bus power system. Illegibility measures how difficult it is for the malicious cloud server to interpret and mine the data (Kim et al. 2011). In Fig.
4a, we can see that the original power consumption data of a household (blue) is always positive, whereas the obfuscated data (red) shows negative power readings and behaves more like a random signal. The degree of obscurity becomes clearer when these datasets are transformed into the frequency domain. Figure 4b plots the Fast Fourier Transform (FFT) coefficients against frequency and shows that the original data consists mostly of low-frequency components, whereas the obfuscated data exhibits high-frequency components. This can also be inferred from the power spectral density plot shown in Fig. 4c. Clearly, the original data (top) has higher power in the low-frequency regions, whereas the obfuscated dataset (bottom) behaves in exactly the opposite way, with higher power in the high-frequency regions. Nevertheless, as can be seen from Fig. 4d, the states estimated from the obfuscated dataset are exactly the same as those estimated from the original measurement data. Thus, Obfuscate(.) does not degrade the quality of the estimate of the state variables. Furthermore, to evaluate the resilience of Obfuscate(.), we estimate Pearson's correlation coefficient, which measures the degree of similarity between two signals. The correlation coefficient between two identical signals in phase is always 1, while that between two identical signals out of phase (phase difference 180°) is −1. Figure 3 shows the Pearson correlation coefficients of all the metering points of the 5-bus system. It can be seen that the correlation between the original and the obfuscated datasets is mostly smaller than 0.2 for almost all the metering points. This implies that it is very hard for any pattern recognition or data mining algorithm to infer information about the original power consumption pattern of the smart meters from the obfuscated datasets (Kim et al. 2011).
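Pearson's coefficient used here is the standard sample correlation; a small sketch with a hypothetical sinusoidal signal confirms the in-phase and out-of-phase extremes mentioned above:

```python
import math

def pearson(x, y):
    # sample Pearson correlation: covariance over the product of standard deviations
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

t = [i / 100 for i in range(1000)]
s = [math.sin(2 * math.pi * ti) for ti in t]
s_out_of_phase = [-v for v in s]   # 180 degree phase shift

assert abs(pearson(s, s) - 1.0) < 1e-9              # identical, in phase
assert abs(pearson(s, s_out_of_phase) + 1.0) < 1e-9  # identical, out of phase
```

An effective obfuscation should push this coefficient toward 0, which is what the values below 0.2 reported for the metering points indicate.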
Fig. 3 Pearson correlation coefficients for all the metering points in a 5-bus power network
Fig. 4 Illustration of data obfuscation in a 5-bus power network. a Original and obfuscated time-domain data from meter #1. b Original and obfuscated frequency-domain data from meter #1. c Power spectral density of true and obfuscated measurement data. d Estimated state value at branch #1. Estimation error between true and obfuscated dataset = 0
Next, we evaluate the degree of obscurity for the IEEE 14-bus system. The H matrix for the 14-bus system is extracted from MATPOWER (Zimmerman et al. 2011), an open-source tool for solving steady-state power system simulation and optimization problems. In this case, the number of metering points is m=31 and the number of state variables is n=13. We further partition the meters and state variables for L1 and L2 as m1=15, m2=16 and n1=6, n2=7. Figure 5 depicts the time-domain data, the frequency-domain data, and the states estimated from the original and obfuscated measurement data. Comparing Figs. 4 and 5, we arrive at similar conclusions for the 14-bus system as for the 5-bus system. Figure 6a shows the correlation coefficients of all 31 metering points for T=13; the values are less than 0.3. Note from Fig. 6b that, as expected, when the number of measurement data samples is increased, i.e., when T is increased from 13 to 360, the correlation coefficient drops below 0.2, which makes this scheme practically secure for estimation with fine-granular high-frequency meter readings. Also, in this case, since each key is 96 bits, a semi-honest neighbor trying to infer the power consumption of a household in the same locality faces about 2^96 = 7.92×10^28 possibilities for every batch.
Naturally, when the time duration per batch drops to every few minutes with high-frequency datasets, it becomes almost impossible for a semi-honest adversary to deduce the appliance usage patterns of a neighbor living in the same locality.
Fig. 5 Illustration of data obfuscation in the IEEE 14-bus power network. a Original and obfuscated time-domain data from meter #27. b Original and obfuscated frequency-domain data from meter #27. c Power spectral density of true and obfuscated measurement data. d Estimated state value at branch #7. Estimation error between true and obfuscated dataset = 0
Fig. 6 Pearson correlation coefficients of all the metering points in the IEEE 14-bus system. a T=13. b T=360
However, Obfuscate(.) still has a shortcoming: it cannot preserve the privacy of zero elements. The power grid topology matrix H is, in general, a sparse matrix with full column rank. However, H+ is less sparse than H and is likely to be dense. Upon inspecting the sparsity pattern of H+ for both the 5-bus and 14-bus power systems, we found that H+ for the 14-bus system was about 20% sparse, whereas H+ for the 5-bus power system was completely dense. Exposing the sparsity pattern of H+ to the cloud may, in turn, reveal some information about the structure of H, which is undesirable. Thus, to confront such sparse attacks, we introduce the matrix \(\mathbf {H}^{+}_{\Delta } = \mathbf {H}^{+} + \Delta \), where the matrix Δ is 100% dense. The utility provider \(\mathcal {U}\) sends \(\mathbf {H}^{+}_{\Delta }\) instead of H+ to the cloud, which computes XΔ=(H++Δ)Z. Then, \(\mathcal {U}\) computes the product ΔZ by invoking Obfuscate(.) again. Later, the original state estimates can be retrieved by \(\mathcal {U}\) as \(\hat {X} = X_{\Delta } - \Delta Z\). Note that this step does not incur any major computational overhead on the utility provider, since it requires only another simple invocation of Obfuscate(.).
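The Δ-masking countermeasure can be sketched as follows (pure Python; the tiny matrices standing in for H+ and Z are hypothetical):

```python
import random

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

n, m, T = 2, 3, 4                   # hypothetical tiny sizes
H_plus = [[1, 0, 2], [0, 3, 0]]     # sparse pseudo-inverse: zeros leak structure
Z = [[random.randint(-5, 5) for _ in range(T)] for _ in range(m)]

# the utility adds a fully dense random mask before outsourcing
Delta = [[random.randint(1, 9) for _ in range(m)] for _ in range(n)]
X_delta = matmul(add(H_plus, Delta), Z)   # computed by the cloud: (H+ + Delta) Z
X_hat = sub(X_delta, matmul(Delta, Z))    # utility removes Delta Z (itself outsourceable)

assert X_hat == matmul(H_plus, Z)
```

The cloud only ever sees the dense matrix H+ + Δ, so no zero pattern is exposed, yet the recovered estimate equals H+·Z exactly.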
Conclusions and future work
In this paper, we considered a privacy-aware batch-wise state estimation problem in power networks with the objective of protecting both the grid configuration and the power consumption data of the smart meters. We formulated a weighted least-squares problem and reduced the state estimation problem of a power grid to a matrix multiplication problem over four block matrices. Our proposal, Obfuscate(.), exploits highly efficient and verifiable obfuscation-based cryptographic solutions. It supports error-free estimation between the original and obfuscated datasets without compromising the accuracy of the state variables essential to the utility provider, and it is proven to be correct and privacy-preserving. Complexity analysis shows the efficiency and practical applicability of Obfuscate(.). We further evaluated the performance of Obfuscate(.) in terms of its illegibility and resilience with real-time hourly power consumption data. Simulation results demonstrate a high level of obscurity, making it hard for the malicious cloud server to infer any behavioral pattern from the obfuscated datasets. We also discussed the problem of revealing the sparsity structure of the pseudo-inverse of the network topology matrix and proposed a solution to resist such sparse attacks. Currently, our scheme does not account for the fact that the grid configuration matrix H, although time-invariant during the state estimation process, may still change over time. For example, a person living in a particular locality may decide to install a smart meter at home, or a person may move from one locality to another. Such situations result in row additions to or deletions from the existing H matrix, and assuming that H+ can be pre-computed at every stage is not reasonable.
Hence, to deal with such instances, we require a protocol that computes the matrix A=(HTH)−1, i.e., a protocol for secure outsourcing of large matrix inversion to the cloud that preserves the privacy of the sparsity pattern of the matrix. It is also important to point out that the proposed solution applies only to those classes of state estimation that essentially boil down to solving a matrix multiplication problem batch-wise or recursively. Although the behavioral patterns and the power dynamics of the other smart meters in every locality are hidden from the malicious cloud, the respective lead meter has access to this information. The lead meter has access to the scaled measurements z′=aij·z (Pearson coefficient = 1), whose dynamics are exactly the same as those of z. Hence, it was essential in our problem setup to consider a single non-collusive trusted node in every local network, termed the lead meter, to initiate the obfuscation of the measurement data dynamics. Future work may involve developing privacy-aware protocols without such assumptions. Another possible direction is developing a statistical measure to quantify the degree of obscurity introduced by these obfuscation schemes, to understand how indistinguishable the obfuscated datasets are from the original measurement datasets.
Fig. 7 A fully measured 5-bus power system. Taken from (Deng et al. 2017)
A fully measured 5-bus power system is shown in Fig. 7. The total number of meters m is 10 and the meter measurements are z=[F12,F23,F24,F35,F45,P1,P2,P3,P4,P5]T, where Fij denotes the branch (i,j) active power flow and Pj denotes the bus j active power injection. The structure of the measurement matrix H is given in Eq. 10, where bij denotes the susceptance of the transmission line (i,j) (Deng et al. 2017). The susceptance is the imaginary part of the admittance, and the admittance matrix is obtained from (McCalley 2018). H+ is pre-computed from H, and the F blocks are partitioned according to their respective dimensions.
$$ {H =\left(\begin{array}{cccc} b_{12} & 0 & 0 & 0 \\-b_{23} & b_{23} & 0 & 0 \\-b_{24} & 0 & b_{24} & 0 \\0 &-b_{35} & 0 & b_{35} \\0& 0 & -b_{45} & b_{45} \\b_{12} & 0 & 0 & 0\\ -b_{12}-b_{23}-b_{24} & b_{23} & b_{24} & 0 \\b_{23} & - b_{23}-b_{35} & 0 & b_{35}\\ b_{24} & 0 & -b_{24}-b_{45} & b_{45} \\0 & -b_{35} & -b_{45} & b_{35} + b_{45} \end{array}\right)} $$ For brevity, here we assume that the area consists of only two localities. The protocol presented in this paper can easily be extended to an area consisting of more than two localities.
Footnote 2: https://www.keylength.com/en/
References
Atallah, MJ, Frikken KB (2010) Securely outsourcing linear algebra computations In: Proceedings of the 5th ACM Symposium on Information, Computer and Communications Security, ASIACCS 2010, Beijing, China, April 13-16, 2010, 48–59.. ACM, New York. Atallah, MJ, Frikken KB, Wang S (2012) Private outsourcing of matrix multiplication over closed semi-rings In: SECRYPT 2012 - Proceedings of the International Conference on Security and Cryptography, Rome, Italy, 24-27 July, 2012, SECRYPT Is Part of ICETE - The International Joint Conference on e-Business and Telecommunications, 136–144.. SciTePress, Setúbal. Beussink, A, Akkaya K, Senturk IF, Mahmoud M. M. E. A. (2014) Preserving consumer privacy on IEEE 802.11s-based smart grid AMI networks using data obfuscation In: 2014 Proceedings IEEE INFOCOM Workshops, Toronto, ON, Canada, April 27 - May 2, 2014, 658–663.. IEEE, New York. Chen, F, Dai J, Wang B, Sahu S, Naphade MR, Lu C (2011) Activity analysis based on low sample rate smart meters In: Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Diego, CA, USA, August 21-24, 2011, 240–248.. ACM, New York. Commission, E (2014a) Benchmarking Smart Metering Deployment in the EU-27 with a Focus on Electricity. https://ses.jrc.ec.europa.eu/publications/reports/benchmarking-smart-metering-deployment-eu-27-focus-electricity. Commission, E (2014b) Energy.
Smart Grids and Meters. https://ec.europa.eu/energy/en/topics/market-and-consumers/smart-grids-and-meters. Cosovic, M, Vukobratovic D (2017) Fast real-time DC state estimation in electric power systems using belief propagation In: 2017 IEEE International Conference on Smart Grid Communications, SmartGridComm 2017, Dresden, Germany, October 23-27, 2017, 207–212.. IEEE, New York. Danezis, G, Fournet C, Kohlweiss M, Béguelin SZ (2013) Smart meter aggregation via secret-sharing In: SEGS'13, Proceedings of the 2013 ACM Workshop on Smart Energy Grid Security, Co-located with CCS 2013, November 8, 2013, Berlin, Germany, 75–80.. ACM, New York. Deng, R (2017) Why We Need to Improve Cloud Computing's Security? https://phys.org/news/2017-10-cloud.html. Deng, R, Xiao G, Lu R (2017) Defending against false data injection attacks on power system state estimation. IEEE Trans Ind Inform 13(1):198–207. Department of Energy, US (2014) Factors Affecting PMU Installation Costs. https://www.smartgrid.gov/files/PMU-cost-study-final-10162014_1.pdf. Dreier, J, Kerschbaum F (2011) Practical privacy-preserving multiparty linear programming based on problem transformation In: PASSAT/SocialCom 2011, Privacy, Security, Risk and Trust (PASSAT), 2011 IEEE Third International Conference on and 2011 IEEE Third International Conference on Social Computing (SocialCom), Boston, MA, USA, 9-11 Oct., 2011, 916–924.. IEEE, New York. Efthymiou, C, Kalogridis G (2010) Smart grid privacy via anonymization of smart metering data In: 2010 First IEEE International Conference on Smart Grid Communications, 238–243.. IEEE, New York. Emura, K (2017) Privacy-preserving aggregation of time-series data with public verifiability from simple assumptions In: Information Security and Privacy - 22nd Australasian Conference, ACISP 2017, Auckland, New Zealand, July 3-5, 2017, Proceedings, Part II, 193–213.. Springer, Cham. 
Erkin, Z (2015) Private data aggregation with groups for smart grids in a dynamic setting using CRT In: 2015 IEEE International Workshop on Information Forensics and Security, WIFS 2015, Roma, Italy, November 16-19, 2015, 1–6.. IEEE, New York. Fiore, D, Gennaro R (2012) Publicly verifiable delegation of large polynomials and matrix computations, with applications In: the ACM Conference on Computer and Communications Security, CCS'12, Raleigh, NC, USA, October 16-18, 2012, 501–512.. ACM, New York. Ge, S, Zeng P, Lu R, Choo KR (2018) FGDA: fine-grained data analysis in privacy-preserving smart grid communications. Peer-to-Peer Netw Appl 11(5):966–978. Gentry, C, Boneh D (2009) A fully homomorphic encryption scheme. PhD thesis, Stanford University, Stanford vol. 20, no. 09. Gera, I, Yakoby Y, Routtenberg T (2017) Blind estimation of states and topology (BEST) in power systems In: 2017 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2017, Montreal, QC, Canada, November 14-16, 2017, 1080–1084.. IEEE, New York. Goldwasser, S, Kalai YT, Rothblum GN (2015) Delegating computation: Interactive proofs for muggles. J ACM 62(4):27:1–27:64. Huang, Y, Werner S, Huang J, Kashyap N, Gupta V (2012) State estimation in electric power grids: Meeting new challenges presented by the requirements of the future grid. IEEE Signal Process Mag 29(5):33–43. Hunt, G (2017) What Does GDPR Mean for Your Energy Business? https://www.siliconrepublic.com/enterprise/gdpr-energy-sector. Krause, O, Lehnhoff S (2012) Generalized static-state estimation In: 2012 22nd Australasian Universities Power Engineering Conference (AUPEC), 1–6.. IEEE, New York. Kumar, M, Meena J, Vardhan M (2017) Privacy preserving, verifiable and efficient outsourcing algorithm for matrix multiplication to a malicious cloud server. Cogent Eng 4(1). Lei, X, Liao X, Huang T, Li H, Hu C (2013) Outsourcing large matrix inversion computation to a public cloud. IEEE Trans Cloud Comput 1(1).
Li, F, Luo B, Liu P (2010) Secure information aggregation for smart grids using homomorphic encryption In: 2010 First IEEE International Conference on Smart Grid Communications, 327–332.. IEEE, New York. Liang, G, Zhao J, Luo F, Weller SR, Dong ZY (2017) A review of false data injection attacks against modern power systems. IEEE Trans Smart Grid 8(4):1630–1638. Lindell, Y, Pinkas B (2009) Secure multi-party computation for privacy-preserving data mining. J Privacy Confidentiality 1(1):59–98. Lisovich, MA, Mulligan DK, Wicker SB (2010) Inferring personal information from demand-response systems. IEEE Secur Priv 8(1):11–20. Liu, Y, Ning P, Reiter MK (2011) False data injection attacks against state estimation in electric power grids. ACM Trans Inf Syst Secur 14(1):13:1–13:33. López-Alt, A, Tromer E, Vaikuntanathan V (2012) On-the-fly multiparty computation on the cloud via multikey fully homomorphic encryption In: Proceedings of the 44th Symposium on Theory of Computing Conference, STOC 2012, New York, NY, USA, May 19 - 22, 2012, 1219–1234.. ACM, New York. Kim, Y, Ngai ECH, Srivastava MB (2011) Cooperative state estimation for preserving privacy of user behaviors in smart grid In: IEEE Second International Conference on Smart Grid Communications, SmartGridComm 2011, Brussels, Belgium, October 17-20, 2011, 178–183.. IEEE, New York. Knirsch, F, Engel D, Erkin Z (2017) A fault-tolerant and efficient scheme for data aggregation over groups in the smart grid In: 2017 IEEE Workshop on Information Forensics and Security, WIFS 2017, Rennes, France, December 4-7, 2017, 1–6.. IEEE, New York. Kursawe, K, Danezis G, Kohlweiss M (2011) Privacy-friendly aggregation for the smart-grid In: Privacy Enhancing Technologies - 11th International Symposium, PETS 2011, Waterloo, ON, Canada, July 27-29, 2011. Proceedings, 175–191.. Springer, Heidelberg. McCalley, JD (2018) The Power Flow Problem. Technical report, Iowa State University.
https://home.engineering.iastate.edu/~jdm/ee553/PowerFlow.doc. Molina-Markham, A, Shenoy PJ, Fu K, Cecchet E, Irwin DE (2010) Private memoirs of a smart meter In: BuildSys'10, Proceedings of the 2nd ACM Workshop on Embedded Sensing Systems for Energy-Efficiency in Buildings, Zurich, Switzerland, November 3-5, 2010, 61–66.. ACM, New York. Monticelli, A (2000) Electric power system state estimation. Proc IEEE 88(2):262–282. n.a. (2003) U.S.-Canada Power System Outage Task Force. https://digital.library.unt.edu/ark:/67531/metadc26005/. Rahman, MA, Venayagamoorthy GK (2017) Distributed dynamic state estimation for smart grid transmission system. IFAC-PapersOnLine 50(2):98–103. Ren, K, Wang C, Wang Q (2012) Security challenges for the public cloud. IEEE Internet Comput 16(1):69–73. Salinas, SA, Li P (2016) Privacy-preserving energy theft detection in microgrids: A state estimation approach. IEEE Trans Power Syst 31(2):883–894. Saia, J, Zamani M (2015) Recent results in scalable multi-party computation In: SOFSEM 2015: Theory and Practice of Computer Science - 41st International Conference on Current Trends in Theory and Practice of Computer Science, Pec Pod Sněžkou, Czech Republic, January 24-29, 2015. Proceedings, 24–44.. Springer, Heidelberg. Schweppe, FC (1970) Power system static-state estimation, part III: Implementation. IEEE Trans Power Appar Syst PAS-89(1):130–135. Schweppe, FC, Rom DB (1970) Power system static-state estimation, part II: Approximate model. IEEE Trans Power Appar Syst PAS-89(1):125–130. Schweppe, FC, Wildes J (1970) Power system static-state estimation, part I: Exact model. IEEE Trans Power Appar Syst PAS-89(1):120–125. Shoukry, Y, Gatsis K, Al-Anwar A, Pappas GJ, Seshia SA, Srivastava MB, Tabuada P (2016) Privacy-aware quadratic optimization using partially homomorphic encryption In: 55th IEEE Conference on Decision and Control, CDC 2016, Las Vegas, NV, USA, December 12-14, 2016, 5053–5058.. IEEE, New York.
Simos, M (2017) Microsoft Security Intelligence Report. https://www.microsoft.com/en-us/security/Intelligence-report. Tebaa, M, Hajji SE (2014) Secure cloud computing through homomorphic encryption. CoRR abs/1409.0829. Tonyali, S, Cakmak O, Akkaya K, Mahmoud MMEA, Güvenç I (2016) Secure data obfuscation scheme to enable privacy-preserving state estimation in smart grid AMI networks. IEEE Internet Things J 3(5):709–719. Wang, C, Ren K, Wang J (2011) Secure and practical outsourcing of linear programming in cloud computing In: INFOCOM 2011. 30th IEEE International Conference on Computer Communications, Joint Conference of the IEEE Computer and Communications Societies, 10-15 April 2011, Shanghai, China, 820–828.. IEEE, New York. Wood, AJ, Wollenberg BF (1996) Power Generation, Operation, and Control. Wiley, Hoboken. Zhang, Y, Blanton M (2014) Efficient secure and verifiable outsourcing of matrix multiplications In: Information Security - 17th International Conference, ISC 2014, Hong Kong, China, October 12-14, 2014. Proceedings, 158–178.. Springer, Cham Heidelberg New York. Zeifman, M, Roth K (2011) Nonintrusive appliance load monitoring: Review and outlook. IEEE Trans Consum Electron 57(1):76–84. Zimmerman, RD, Murillo-Sánchez CE, Thomas RJ (2009) Matpower's extensible optimal power flow architecture In: 2009 IEEE Power Energy Society General Meeting, 1–7.. IEEE, New York. Zimmerman, RD, Murillo-Sánchez CE, Thomas RJ (2011) Matpower: Steady-state operations, planning, and analysis tools for power systems research and education. IEEE Trans Power Syst 26(1):12–19.
We would also like to thank Antans Sauhatas from Riga Technical University for sharing the real-time power consumption data of the smart meters.
About this supplement
This article has been published as part of Energy Informatics Volume 2 Supplement 1, 2019: Proceedings of the 8th DACH+ Conference on Energy Informatics.
The full contents of the supplement are available online at https://energyinformatics.springeropen.com/articles/supplements/volume-2-supplement-1.
This work was supported by the TU Delft Safety and Security Institute under the DSyS Grant. Publication of this supplement was funded by the Austrian Federal Ministry for Transport, Innovation and Technology.
CGI Nederland B.V., Rotterdam, The Netherlands
Cyber Security Group, Delft University of Technology, Delft, The Netherlands: Gamze Tillem & Zekeriya Erkin
Delft Center for Systems and Control, Delft University of Technology, Delft, The Netherlands: Tamas Keviczky
Authors 1, 2, and 4 conceived and conceptualized the presented framework. Author 1 developed the theory, performed the simulations and analyses, and took the lead in writing the manuscript. Author 2 helped in drafting the manuscript and in its critical revision. Authors 3 and 4 supervised the findings of this work and provided valuable feedback for the final version of the manuscript. All authors read and approved the final manuscript. Correspondence to Lakshminarayanan Nandakumar.
Nandakumar, L., Tillem, G., Erkin, Z. et al. Protecting the grid topology and user consumption patterns during state estimation in smart grids based on data obfuscation. Energy Inform 2, 25 (2019). doi:10.1186/s42162-019-0078-y
Keywords: Data obfuscation
Function (mathematics)
In mathematics, a function from a set X to a set Y assigns to each element of X exactly one element of Y.[1] The set X is called the domain of the function[2] and the set Y is called the codomain of the function.[3]
Functions were originally the idealization of how a varying quantity depends on another quantity. For example, the position of a planet is a function of time. Historically, the concept was elaborated with the infinitesimal calculus at the end of the 17th century, and, until the 19th century, the functions that were considered were differentiable (that is, they had a high degree of regularity). The concept of a function was formalized at the end of the 19th century in terms of set theory, and this greatly enlarged the domains of application of the concept.
A function is most often denoted by letters such as f, g and h, and the value of a function f at an element x of its domain is denoted by f(x); the numerical value resulting from the function evaluation at a particular input value is denoted by replacing x with this value; for example, the value of f at x = 4 is denoted by f(4). When the function is not named and is represented by an expression E, the value of the function at, say, x = 4 may be denoted by E|x=4.
For example, the value at 4 of the function that maps x to $(x+1)^{2}$ may be denoted by $\left.(x+1)^{2}\right\vert _{x=4}$ (which results in 25). A function is uniquely represented by the set of all pairs (x, f (x)), called the graph of the function, a popular means of illustrating the function.[note 1][4] When the domain and the codomain are sets of real numbers, each such pair may be thought of as the Cartesian coordinates of a point in the plane. Functions are widely used in science, engineering, and in most fields of mathematics. It has been said that functions are "the central objects of investigation" in most fields of mathematics.[5]

Definition

Diagram of a function, with domain X = {1, 2, 3} and codomain Y = {A, B, C, D}, which is defined by the set of ordered pairs {(1, D), (2, C), (3, C)}. The image/range is the set {C, D}.

This diagram, representing the set of pairs {(1, D), (2, B), (2, C)}, does not define a function. One reason is that 2 is the first element in more than one ordered pair, (2, B) and (2, C), of this set. Two other reasons, each sufficient by itself, are that neither 3 nor 4 is the first element (input) of any ordered pair therein.

A function from a set X to a set Y is an assignment of an element of Y to each element of X. The set X is called the domain of the function and the set Y is called the codomain of the function. A function, its domain, and its codomain are declared by the notation f: X → Y, and the value of a function f at an element x of X, denoted by f(x), is called the image of x under f, or the value of f applied to the argument x. Functions are also called maps or mappings, though some authors make some distinction between "maps" and "functions" (see § Other terms). Two functions f and g are equal if their domain and codomain sets are the same and their output values agree on the whole domain.
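As an illustrative sketch (not part of the article), the diagram's example can be modeled in Python: a function on a finite domain is just a dict, its graph is the set of pairs (x, f(x)), and a set of pairs fails to be a function exactly when some first element repeats.

```python
# A function with domain X = {1, 2, 3} and codomain Y = {"A", "B", "C", "D"},
# modeled as a dict: each element of the domain maps to exactly one output.
f = {1: "D", 2: "C", 3: "C"}

domain = set(f)           # {1, 2, 3}
image = set(f.values())   # {"C", "D"} -- the image/range
graph = set(f.items())    # the graph: the set of all pairs (x, f(x))

# The diagram's set of pairs with 2 as the first element of two pairs
# does NOT define a function:
not_a_function = {(1, "D"), (2, "B"), (2, "C")}
firsts = [x for (x, _) in not_a_function]
is_function = len(firsts) == len(set(firsts))   # False: 2 repeats
```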
More formally, given f: X → Y and g: X → Y, we have f = g if and only if f(x) = g(x) for all x ∈ X.[6][note 2] The domain and codomain are not always explicitly given when a function is defined, and, without some (possibly difficult) computation, one might only know that the domain is contained in a larger set. Typically, this occurs in mathematical analysis, where "a function from X to Y " often refers to a function that may have a proper subset[note 3] of X as domain. For example, a "function from the reals to the reals" may refer to a real-valued function of a real variable. However, a "function from the reals to the reals" does not mean that the domain of the function is the whole set of the real numbers, but only that the domain is a set of real numbers that contains a non-empty open interval. Such a function is then called a partial function. For example, if f is a function that has the real numbers as domain and codomain, then a function mapping the value x to the value g(x) = 1/f(x) is a function g from the reals to the reals, whose domain is the set of the reals x, such that f(x) ≠ 0. The range or image of a function is the set of the images of all elements in the domain.[7][8][9][10] One also says a function on $S$ and means by that a function from the domain $S$. Total, univalent relation Any subset of the Cartesian product of two sets X and Y defines a binary relation R ⊆ X × Y between these two sets. It is immediate that an arbitrary relation may contain pairs that violate the necessary conditions for a function given above. A binary relation is univalent (also called right-unique) if $\forall x\in X,\forall y\in Y,\forall z\in Y,\quad ((x,y)\in R\land (x,z)\in R)\implies y=z.$ A binary relation is total if $\forall x\in X,\exists y\in Y,\quad (x,y)\in R.$ A partial function is a binary relation that is univalent, and a function is a binary relation that is univalent and total. 
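The univalent/total characterization above can be checked mechanically on finite sets. The following is a minimal sketch; the helper names `is_univalent`, `is_total`, and `is_function` are my own, not the article's.

```python
# A relation R is a set of (x, y) pairs with x in X, y in Y.
def is_univalent(R):
    """Right-unique: no x is related to two different y's."""
    seen = {}
    for x, y in R:
        if x in seen and seen[x] != y:
            return False
        seen[x] = y
    return True

def is_total(R, X):
    """Every element of X appears as a first component of some pair."""
    return X <= {x for x, _ in R}

def is_function(R, X):
    """A function is a relation that is both univalent and total."""
    return is_univalent(R) and is_total(R, X)

X = {1, 2, 3}
R1 = {(1, "a"), (2, "b"), (3, "b")}   # univalent and total: a function
R2 = {(1, "a"), (1, "b"), (2, "a")}   # not univalent: a mere relation
R3 = {(1, "a"), (2, "b")}             # univalent but not total: partial function
```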
Various properties of functions and function composition may be reformulated in the language of relations.[11] For example, a function is injective if the converse relation RT ⊆ Y × X is univalent, where the converse relation is defined as RT = {(y, x) | (x, y) ∈ R}. Set exponentiation See also: Exponentiation § Sets as exponents The set of all functions from a set $X$ to a set $Y$ is commonly denoted as $Y^{X},$ which is read as $Y$ to the power $X$. This notation is the same as the notation for the Cartesian product of a family of copies of $Y$ indexed by $X$: $Y^{X}=\prod _{x\in X}Y.$ The identity of these two notations is motivated by the fact that a function $f$ can be identified with the element of the Cartesian product such that the component of index $x$ is $f(x)$. When $Y$ has two elements, $Y^{X}$ is commonly denoted $2^{X}$ and called the powerset of X. It can be identified with the set of all subsets of $X$, through the one-to-one correspondence that associates to each subset $S\subseteq X$ the function $f$ such that $f(x)=1$ if $x\in S$ and $f(x)=0$ otherwise. Notation There are various standard ways for denoting functions. The most commonly used notation is functional notation, which is the first notation described below. Functional notation In functional notation, the function is immediately given a name, such as $f$, and its definition is given by what $f$ does to the explicit argument $x$, using a formula in terms of $x$. For example, the function which takes a real number as input and outputs that number plus 1 is denoted by $f(x)=x+1$. If a function is defined in this notation, its domain and codomain are implicitly taken to both be $\mathbb {R} $, the set of real numbers. If the formula cannot be evaluated at all real numbers, then the domain is implicitly taken to be the maximal subset of $\mathbb {R} $ on which the formula can be evaluated; see Domain of a function. A more complicated example is the function $f(x)=\sin(x^{2}+1)$. 
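To make the set-exponentiation count concrete, one can enumerate all of $Y^{X}$ for small finite sets; there are $|Y|^{|X|}$ functions, and for a two-element $Y$ each function is the indicator of a subset of $X$. This enumeration sketch (the helper `all_functions` is my own name) assumes nothing beyond the definitions above.

```python
from itertools import product

def all_functions(X, Y):
    """Enumerate all functions X -> Y as dicts; there are |Y| ** |X| of them."""
    xs, ys = sorted(X), sorted(Y)
    return [dict(zip(xs, values)) for values in product(ys, repeat=len(xs))]

X, Y = {1, 2, 3}, {0, 1}
fs = all_functions(X, Y)          # 2 ** 3 = 8 functions

# With Y = {0, 1}, each function is the indicator of a subset of X,
# giving the one-to-one correspondence with the powerset 2^X:
subsets = [{x for x in X if f[x] == 1} for f in fs]
```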
In this example, the function f takes a real number as input, squares it, then adds 1 to the result, then takes the sine of the result, and returns the final result as the output. When the symbol denoting the function consists of several characters and no ambiguity may arise, the parentheses of functional notation might be omitted. For example, it is common to write sin x instead of sin(x). Functional notation was first used by Leonhard Euler in 1734.[12] Some widely used functions are represented by a symbol consisting of several letters (usually two or three, generally an abbreviation of their name). In this case, a roman type is customarily used instead, such as "sin" for the sine function, in contrast to italic font for single-letter symbols. When using this notation, one often encounters the abuse of notation whereby the notation f(x) can refer to the value of f at x, or to the function itself. If the variable x was previously declared, then the notation f(x) unambiguously means the value of f at x. Otherwise, it is useful to understand the notation as being both simultaneously; this allows one to denote composition of two functions f and g in a succinct manner by the notation f(g(x)). However, distinguishing f and f(x) can become important in cases where functions themselves serve as inputs for other functions. (A function taking another function as an input is termed a functional.) Other approaches of notating functions, detailed below, avoid this problem but are less commonly used. Arrow notation Arrow notation defines the rule of a function inline, without requiring a name to be given to the function. For example, $x\mapsto x+1$ is the function which takes a real number as input and outputs that number plus 1. Again, a domain and codomain of $\mathbb {R} $ is implied. 
The domain and codomain can also be explicitly stated, for example: ${\begin{aligned}\operatorname {sqr} \colon \mathbb {Z} &\to \mathbb {Z} \\x&\mapsto x^{2}.\end{aligned}}$ This defines a function sqr from the integers to the integers that returns the square of its input. As a common application of the arrow notation, suppose $f\colon X\times X\to Y;\;(x,t)\mapsto f(x,t)$ is a function in two variables, and we want to refer to a partially applied function $X\to Y$ produced by fixing the second argument to the value t0 without introducing a new function name. The map in question could be denoted $x\mapsto f(x,t_{0})$ using the arrow notation. The expression $x\mapsto f(x,t_{0})$ (read: "the map taking x to f(x, t0)") represents this new function with just one argument, whereas the expression f(x0, t0) refers to the value of the function f at the point (x0, t0). Index notation Index notation is often used instead of functional notation. That is, instead of writing f (x), one writes $f_{x}.$ This is typically the case for functions whose domain is the set of the natural numbers. Such a function is called a sequence, and, in this case the element $f_{n}$ is called the nth element of the sequence. The index notation is also often used for distinguishing some variables called parameters from the "true variables". In fact, parameters are specific variables that are considered as being fixed during the study of a problem. For example, the map $x\mapsto f(x,t)$ (see above) would be denoted $f_{t}$ using index notation, if we define the collection of maps $f_{t}$ by the formula $f_{t}(x)=f(x,t)$ for all $x,t\in X$. Dot notation In the notation $x\mapsto f(x),$ the symbol x does not represent any value, it is simply a placeholder meaning that, if x is replaced by any value on the left of the arrow, it should be replaced by the same value on the right of the arrow. Therefore, x may be replaced by any symbol, often an interpunct " ⋅ ". 
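The partially applied map $x\mapsto f(x,t_{0})$, written $f_{t}$ in index notation, has a direct analogue in code. A small sketch, with an arbitrary illustrative formula for f and the fixed parameter value 2 chosen for the example:

```python
from functools import partial

# A function of two variables f(x, t); fixing t yields the one-argument
# map x ↦ f(x, t0), written f_t in index notation.
def f(x, t):
    return x + 10 * t   # illustrative formula, not from the article

f_t0 = partial(f, t=2)      # f with the second argument fixed to t0 = 2
g = lambda x: f(x, 2)       # equivalent arrow-notation style: x ↦ f(x, 2)
```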
This may be useful for distinguishing the function f (⋅) from its value f (x) at x. For example, $a(\cdot )^{2}$ may stand for the function $x\mapsto ax^{2}$, and $ \int _{a}^{\,(\cdot )}f(u)\,du$ may stand for a function defined by an integral with variable upper bound: $ x\mapsto \int _{a}^{x}f(u)\,du$. Specialized notations There are other, specialized notations for functions in sub-disciplines of mathematics. For example, in linear algebra and functional analysis, linear forms and the vectors they act upon are denoted using a dual pair to show the underlying duality. This is similar to the use of bra–ket notation in quantum mechanics. In logic and the theory of computation, the function notation of lambda calculus is used to explicitly express the basic notions of function abstraction and application. In category theory and homological algebra, networks of functions are described in terms of how they and their compositions commute with each other using commutative diagrams that extend and generalize the arrow notation for functions described above. Other terms For broader coverage of this topic, see Map (mathematics). Each term below is listed with its distinction from "function":

Map/Mapping: None; the terms are synonymous.[13] A map can have any set as its codomain, while, in some contexts, typically in older books, the codomain of a function is specifically the set of real or complex numbers.[14] Alternatively, a map is associated with a special structure (e.g. by explicitly specifying a structured codomain in its definition). For example, a linear map.[15]

Homomorphism: A function between two structures of the same type that preserves the operations of the structure (e.g. a group homomorphism).[16]

Morphism: A generalisation of homomorphisms to any category, even when the objects of the category are not sets (for example, a group defines a category with only one object, which has the elements of the group as morphisms; see Category (mathematics) § Examples for this example and other similar ones).[17]

A function is often also called a map or a mapping, but some authors make a distinction between the term "map" and "function". For example, the term "map" is often reserved for a "function" with some sort of special structure (e.g. maps of manifolds). In particular map is often used in place of homomorphism for the sake of succinctness (e.g., linear map or map from G to H instead of group homomorphism from G to H). Some authors[15] reserve the word mapping for the case where the structure of the codomain belongs explicitly to the definition of the function. Some authors, such as Serge Lang,[14] use "function" only to refer to maps for which the codomain is a subset of the real or complex numbers, and use the term mapping for more general functions. In the theory of dynamical systems, a map denotes an evolution function used to create discrete dynamical systems. See also Poincaré map. Whichever definition of map is used, related terms like domain, codomain, injective, continuous have the same meaning as for a function. Specifying a function Given a function $f$, by definition, to each element $x$ of the domain of the function $f$, there is a unique element associated to it, the value $f(x)$ of $f$ at $x$. There are several ways to specify or describe how $x$ is related to $f(x)$, both explicitly and implicitly. Sometimes, a theorem or an axiom asserts the existence of a function having some properties, without describing it more precisely. Often, the specification or description is referred to as the definition of the function $f$.
By listing function values On a finite set, a function may be defined by listing the elements of the codomain that are associated to the elements of the domain. For example, if $A=\{1,2,3\}$, then one can define a function $f\colon A\to \mathbb {R} $ by $f(1)=2,f(2)=3,f(3)=4.$ By a formula Functions are often defined by a formula that describes a combination of arithmetic operations and previously defined functions; such a formula allows computing the value of the function from the value of any element of the domain. For example, in the above example, $f$ can be defined by the formula $f(n)=n+1$, for $n\in \{1,2,3\}$. When a function is defined this way, the determination of its domain is sometimes difficult. If the formula that defines the function contains divisions, the values of the variable for which a denominator is zero must be excluded from the domain; thus, for a complicated function, the determination of the domain passes through the computation of the zeros of auxiliary functions. Similarly, if square roots occur in the definition of a function from $\mathbb {R} $ to $\mathbb {R} ,$ the domain is included in the set of the values of the variable for which the arguments of the square roots are nonnegative. For example, $f(x)={\sqrt {1+x^{2}}}$ defines a function $f\colon \mathbb {R} \to \mathbb {R} $ whose domain is $\mathbb {R} ,$ because $1+x^{2}$ is always positive if x is a real number. On the other hand, $f(x)={\sqrt {1-x^{2}}}$ defines a function from the reals to the reals whose domain is reduced to the interval [−1, 1]. (In old texts, such a domain was called the domain of definition of the function.) Functions are often classified by the nature of formulas that define them: • A quadratic function is a function that may be written $f(x)=ax^{2}+bx+c,$ where a, b, c are constants. 
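The two ways of specifying a function described above (listing values versus giving a formula) and the domain restriction forced by a square root can be sketched as follows; the explicit domain checks are my own way of making the implicit domain visible.

```python
import math

# By listing values: f(1) = 2, f(2) = 3, f(3) = 4 on A = {1, 2, 3}.
f_listed = {1: 2, 2: 3, 3: 4}

# The same function by the formula f(n) = n + 1, for n in {1, 2, 3}:
def f_formula(n):
    if n not in {1, 2, 3}:
        raise ValueError("n is outside the domain {1, 2, 3}")
    return n + 1

# sqrt(1 - x**2) is only defined where 1 - x**2 >= 0, i.e. on [-1, 1]:
def g(x):
    if not -1 <= x <= 1:
        raise ValueError("x is outside the domain of definition [-1, 1]")
    return math.sqrt(1 - x * x)
```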
• More generally, a polynomial function is a function that can be defined by a formula involving only additions, subtractions, multiplications, and exponentiation to nonnegative integers. For example, $f(x)=x^{3}-3x-1,$ and $f(x)=(x-1)(x^{3}+1)+2x^{2}-1.$ • A rational function is the same, with divisions also allowed, such as $f(x)={\frac {x-1}{x+1}},$ and $f(x)={\frac {1}{x+1}}+{\frac {3}{x}}-{\frac {2}{x-1}}.$ • An algebraic function is the same, with nth roots and roots of polynomials also allowed. • An elementary function[note 4] is the same, with logarithms and exponential functions allowed. Inverse and implicit functions A function $f\colon X\to Y,$ with domain X and codomain Y, is bijective, if for every y in Y, there is one and only one element x in X such that y = f(x). In this case, the inverse function of f is the function $f^{-1}\colon Y\to X$ that maps $y\in Y$ to the element $x\in X$ such that y = f(x). For example, the natural logarithm is a bijective function from the positive real numbers to the real numbers. It thus has an inverse, called the exponential function, that maps the real numbers onto the positive numbers. If a function $f\colon X\to Y$ is not bijective, it may occur that one can select subsets $E\subseteq X$ and $F\subseteq Y$ such that the restriction of f to E is a bijection from E to F, and has thus an inverse. The inverse trigonometric functions are defined this way. For example, the cosine function induces, by restriction, a bijection from the interval [0, π] onto the interval [−1, 1], and its inverse function, called arccosine, maps [−1, 1] onto [0, π]. The other inverse trigonometric functions are defined similarly. More generally, given a binary relation R between two sets X and Y, let E be a subset of X such that, for every $x\in E,$ there is some $y\in Y$ such that x R y. 
If one has a criterion allowing one to select such a y for every $x\in E,$ this defines a function $f\colon E\to Y,$ called an implicit function, because it is implicitly defined by the relation R. For example, the equation of the unit circle $x^{2}+y^{2}=1$ defines a relation on real numbers. If −1 < x < 1 there are two possible values of y, one positive and one negative. For x = ±1, both of these values become equal to 0. Otherwise, there is no possible value of y. This means that the equation defines two implicit functions with domain [−1, 1] and respective codomains [0, +∞) and (−∞, 0]. In this example, the equation can be solved for y, giving $y=\pm {\sqrt {1-x^{2}}},$ but, in more complicated examples, this is impossible. For example, the relation $y^{5}+y+x=0$ defines y as an implicit function of x, called the Bring radical, which has $\mathbb {R} $ as domain and range. The Bring radical cannot be expressed in terms of the four arithmetic operations and nth roots. The implicit function theorem provides mild differentiability conditions for existence and uniqueness of an implicit function in the neighborhood of a point. Using differential calculus Many functions can be defined as the antiderivative of another function. This is the case of the natural logarithm, which is the antiderivative of 1/x that is 0 for x = 1. Another common example is the error function. More generally, many functions, including most special functions, can be defined as solutions of differential equations. The simplest example is probably the exponential function, which can be defined as the unique function that is equal to its derivative and takes the value 1 for x = 0. Power series can be used to define functions on the domain in which they converge. For example, the exponential function is given by $e^{x}=\sum _{n=0}^{\infty }{x^{n} \over n!}$.
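The power-series definition of the exponential above lends itself to a short numerical sketch: summing the first few terms of $\sum x^{n}/n!$ already approximates $e^{x}$ to high accuracy (30 terms is an arbitrary cutoff chosen here, not a value from the article).

```python
import math

def exp_series(x, terms=30):
    """Partial sum of the power series e^x = sum over n of x**n / n!."""
    total, term = 0.0, 1.0        # term starts at x**0 / 0! = 1
    for n in range(terms):
        total += term
        term *= x / (n + 1)       # x**(n+1) / (n+1)! from x**n / n!
    return total
```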
However, as the coefficients of a series are quite arbitrary, a function that is the sum of a convergent series is generally defined otherwise, and the sequence of the coefficients is the result of some computation based on another definition. Then, the power series can be used to enlarge the domain of the function. Typically, if a function of a real variable is the sum of its Taylor series in some interval, this power series allows immediately enlarging the domain to a subset of the complex numbers, the disc of convergence of the series. Then analytic continuation allows further enlarging the domain to include almost the whole complex plane. This process is the method that is generally used for defining the logarithm, the exponential and the trigonometric functions of a complex number. By recurrence Main article: Recurrence relation Functions whose domain is the nonnegative integers, known as sequences, are often defined by recurrence relations. The factorial function on the nonnegative integers ($n\mapsto n!$) is a basic example, as it can be defined by the recurrence relation $n!=n(n-1)!\quad {\text{for}}\quad n>0,$ and the initial condition $0!=1.$ Representing a function A graph is commonly used to give an intuitive picture of a function. As an example of how a graph helps to understand a function, it is easy to see from its graph whether a function is increasing or decreasing. Some functions may also be represented by bar charts. Graphs and plots Main article: Graph of a function Given a function $f\colon X\to Y,$ its graph is, formally, the set $G=\{(x,f(x))\mid x\in X\}.$ In the frequent case where X and Y are subsets of the real numbers (or may be identified with such subsets, e.g. intervals), an element $(x,y)\in G$ may be identified with a point having coordinates x, y in a 2-dimensional coordinate system, e.g. the Cartesian plane. Parts of this may create a plot that represents (parts of) the function.
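The factorial recurrence described above translates directly into a recursive definition; this sketch mirrors the recurrence $n!=n(n-1)!$ with base case $0!=1$ exactly.

```python
def factorial(n):
    """n! defined by the recurrence n! = n * (n-1)! with initial condition 0! = 1."""
    if n == 0:
        return 1          # initial condition 0! = 1
    return n * factorial(n - 1)
```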
The use of plots is so ubiquitous that they too are called the graph of the function. Graphic representations of functions are also possible in other coordinate systems. For example, the graph of the square function $x\mapsto x^{2},$ consisting of all points with coordinates $(x,x^{2})$ for $x\in \mathbb {R} ,$ yields, when depicted in Cartesian coordinates, the well known parabola. If the same quadratic function $x\mapsto x^{2},$ with the same formal graph, consisting of pairs of numbers, is plotted instead in polar coordinates $(r,\theta )=(x,x^{2}),$ the plot obtained is Fermat's spiral.

Tables

Main article: Mathematical table

A function can be represented as a table of values. If the domain of a function is finite, then the function can be completely specified in this way. For example, the multiplication function $f\colon \{1,\ldots ,5\}^{2}\to \mathbb {R} $ defined as $f(x,y)=xy$ can be represented by the familiar multiplication table

x \ y   1    2    3    4    5
1       1    2    3    4    5
2       2    4    6    8   10
3       3    6    9   12   15
4       4    8   12   16   20
5       5   10   15   20   25

On the other hand, if a function's domain is continuous, a table can give the values of the function at specific values of the domain. If an intermediate value is needed, interpolation can be used to estimate the value of the function. For example, a portion of a table for the sine function might be given as follows, with values rounded to 6 decimal places:

x        sin x
1.289    0.960557
1.290    0.960835
1.291    0.961112
1.292    0.961387
1.293    0.961662

Before the advent of handheld calculators and personal computers, such tables were often compiled and published for functions such as logarithms and trigonometric functions.

Bar chart

Bar charts are often used for representing functions whose domain is a finite set, the natural numbers, or the integers.
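The two tabular representations described above (complete specification on a finite domain, and sampled values with interpolation in between) can be sketched in a few lines; the `interpolate` helper is my own illustrative linear interpolator, not a function from the article.

```python
# A function on a finite domain, completely specified by its table of values:
mult = {(x, y): x * y for x in range(1, 6) for y in range(1, 6)}

# Sampled sine values from the text, rounded to 6 decimal places:
sine_table = [(1.289, 0.960557), (1.290, 0.960835), (1.291, 0.961112),
              (1.292, 0.961387), (1.293, 0.961662)]

def interpolate(table, x):
    """Estimate f(x) by linear interpolation between bracketing table entries."""
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x is outside the tabulated range")
```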
In this case, an element x of the domain is represented by an interval of the x-axis, and the corresponding value of the function, f(x), is represented by a rectangle whose base is the interval corresponding to x and whose height is f(x) (possibly negative, in which case the bar extends below the x-axis). General properties This section describes general properties of functions that are independent of specific properties of the domain and the codomain. Standard functions There are a number of standard functions that occur frequently:

• For every set X, there is a unique function, called the empty function, or empty map, from the empty set to X. The graph of an empty function is the empty set.[note 5] The existence of empty functions is needed both for the coherency of the theory and for avoiding exceptions concerning the empty set in many statements. Under the usual set-theoretic definition of a function as an ordered triplet (or equivalent ones), there is exactly one empty function for each set, thus the empty function $\varnothing \mapsto X$ is not equal to $\varnothing \mapsto Y$ if and only if $X\neq Y$, although their graphs are both the empty set.
• For every set X and every singleton set {s}, there is a unique function from X to {s}, which maps every element of X to s. This is a surjection (see below) unless X is the empty set.
• Given a function $f\colon X\to Y,$ the canonical surjection of f onto its image $f(X)=\{f(x)\mid x\in X\}$ is the function from X to f(X) that maps x to f(x).
• For every subset A of a set X, the inclusion map of A into X is the injective (see below) function that maps every element of A to itself.
• The identity function on a set X, often denoted by idX, is the inclusion of X into itself.
Function composition Main article: Function composition Given two functions $f\colon X\to Y$ and $g\colon Y\to Z$ such that the domain of g is the codomain of f, their composition is the function $g\circ f\colon X\rightarrow Z$ defined by $(g\circ f)(x)=g(f(x)).$ That is, the value of $g\circ f$ is obtained by first applying f to x to obtain y = f(x) and then applying g to the result y to obtain g(y) = g(f(x)). In the notation the function that is applied first is always written on the right. The composition $g\circ f$ is an operation on functions that is defined only if the codomain of the first function is the domain of the second one. Even when both $g\circ f$ and $f\circ g$ satisfy these conditions, the composition is not necessarily commutative, that is, the functions $g\circ f$ and $f\circ g$ need not be equal, but may deliver different values for the same argument. For example, let f(x) = x2 and g(x) = x + 1, then $g(f(x))=x^{2}+1$ and $f(g(x))=(x+1)^{2}$ agree just for $x=0.$ The function composition is associative in the sense that, if one of $(h\circ g)\circ f$ and $h\circ (g\circ f)$ is defined, then the other is also defined, and they are equal. Thus, one writes $h\circ g\circ f=(h\circ g)\circ f=h\circ (g\circ f).$ The identity functions $\operatorname {id} _{X}$ and $\operatorname {id} _{Y}$ are respectively a right identity and a left identity for functions from X to Y. That is, if f is a function with domain X, and codomain Y, one has $f\circ \operatorname {id} _{X}=\operatorname {id} _{Y}\circ f=f.$ • A composite function g(f(x)) can be visualized as the combination of two "machines". • A simple example of a function composition • Another composition. In this example, (g ∘ f )(c) = #. 
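The worked example above (f(x) = x² and g(x) = x + 1, which agree only at x = 0) can be reproduced directly; the `compose` helper is an obvious one-liner, named here for illustration.

```python
def compose(g, f):
    """Composition g ∘ f: apply f first, then g."""
    return lambda x: g(f(x))

square = lambda x: x * x      # f(x) = x^2
plus_one = lambda x: x + 1    # g(x) = x + 1

gf = compose(plus_one, square)   # x ↦ x^2 + 1
fg = compose(square, plus_one)   # x ↦ (x + 1)^2
```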
Image and preimage Main article: Image (mathematics) Let $f\colon X\to Y.$ The image under f of an element x of the domain X is f(x).[7] If A is any subset of X, then the image of A under f, denoted f(A), is the subset of the codomain Y consisting of all images of elements of A,[7] that is, $f(A)=\{f(x)\mid x\in A\}.$ The image of f is the image of the whole domain, that is, f(X).[18] It is also called the range of f,[7][8][9][10] although the term range may also refer to the codomain.[10][18][19] On the other hand, the inverse image or preimage under f of an element y of the codomain Y is the set of all elements of the domain X whose images under f equal y.[7] In symbols, the preimage of y is denoted by $f^{-1}(y)$ and is given by the equation $f^{-1}(y)=\{x\in X\mid f(x)=y\}.$ Likewise, the preimage of a subset B of the codomain Y is the set of the preimages of the elements of B, that is, it is the subset of the domain X consisting of all elements of X whose images belong to B.[7] It is denoted by $f^{-1}(B)$ and is given by the equation $f^{-1}(B)=\{x\in X\mid f(x)\in B\}.$ For example, the preimage of $\{4,9\}$ under the square function is the set $\{-3,-2,2,3\}$. By definition of a function, the image of an element x of the domain is always a single element of the codomain. However, the preimage $f^{-1}(y)$ of an element y of the codomain may be empty or contain any number of elements. For example, if f is the function from the integers to themselves that maps every integer to 0, then $f^{-1}(0)=\mathbb {Z} $. 
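On a finite domain the image and preimage formulas above are direct set comprehensions; this sketch reproduces the article's example that the preimage of {4, 9} under squaring is {−3, −2, 2, 3} (restricting the integers to a finite window so the search terminates).

```python
def image(f, A):
    """f(A) = { f(x) | x in A }."""
    return {f(x) for x in A}

def preimage(f, domain, B):
    """f^{-1}(B) = { x in domain | f(x) in B }."""
    return {x for x in domain if f(x) in B}

square = lambda x: x * x
Z = set(range(-10, 11))   # finite window of integers, enough for the example
```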
If $f\colon X\to Y$ is a function, A and B are subsets of X, and C and D are subsets of Y, then one has the following properties: • $A\subseteq B\Longrightarrow f(A)\subseteq f(B)$ • $C\subseteq D\Longrightarrow f^{-1}(C)\subseteq f^{-1}(D)$ • $A\subseteq f^{-1}(f(A))$ • $C\supseteq f(f^{-1}(C))$ • $f(f^{-1}(f(A)))=f(A)$ • $f^{-1}(f(f^{-1}(C)))=f^{-1}(C)$ The preimage by f of an element y of the codomain is sometimes called, in some contexts, the fiber of y under f. If a function f has an inverse (see below), this inverse is denoted $f^{-1}.$ In this case $f^{-1}(C)$ may denote either the image by $f^{-1}$ or the preimage by f of C. This is not a problem, as these sets are equal. The notation $f(A)$ and $f^{-1}(C)$ may be ambiguous in the case of sets that contain some subsets as elements, such as $\{x,\{x\}\}.$ In this case, some care may be needed, for example, by using square brackets $f[A],f^{-1}[C]$ for images and preimages of subsets and ordinary parentheses for images and preimages of elements. Injective, surjective and bijective functions Let $f\colon X\to Y$ be a function. The function f is injective (or one-to-one, or is an injection) if f(a) ≠ f(b) for any two different elements a and b of X.[18][20] Equivalently, f is injective if and only if, for any $y\in Y,$ the preimage $f^{-1}(y)$ contains at most one element. An empty function is always injective. 
If X is not the empty set, then f is injective if and only if there exists a function $g\colon Y\to X$ such that $g\circ f=\operatorname {id} _{X},$ that is, if f has a left inverse.[20] Proof: If f is injective, to define g, one chooses an element $x_{0}$ in X (which exists as X is supposed to be nonempty),[note 6] and one defines g by $g(y)=x$ if $y=f(x)$ and $g(y)=x_{0}$ if $y\not \in f(X).$ Conversely, if $g\circ f=\operatorname {id} _{X},$ and $y=f(x),$ then $x=g(y),$ and thus $f^{-1}(y)=\{x\}.$ The function f is surjective (or onto, or is a surjection) if its range $f(X)$ equals its codomain $Y$, that is, if, for each element $y$ of the codomain, there exists some element $x$ of the domain such that $f(x)=y$ (in other words, the preimage $f^{-1}(y)$ of every $y\in Y$ is nonempty).[18][21] If, as usual in modern mathematics, the axiom of choice is assumed, then f is surjective if and only if there exists a function $g\colon Y\to X$ such that $f\circ g=\operatorname {id} _{Y},$ that is, if f has a right inverse.[21] The axiom of choice is needed, because, if f is surjective, one defines g by $g(y)=x,$ where $x$ is an arbitrarily chosen element of $f^{-1}(y).$ The function f is bijective (or is a bijection or a one-to-one correspondence) if it is both injective and surjective.[18][22] That is, f is bijective if, for any $y\in Y,$ the preimage $f^{-1}(y)$ contains exactly one element. The function f is bijective if and only if it admits an inverse function, that is, a function $g\colon Y\to X$ such that $g\circ f=\operatorname {id} _{X}$ and $f\circ g=\operatorname {id} _{Y}.$[22] (Contrary to the case of surjections, this does not require the axiom of choice; the proof is straightforward). Every function $f\colon X\to Y$ may be factorized as the composition $i\circ s$ of a surjection followed by an injection, where s is the canonical surjection of X onto f(X) and i is the canonical injection of f(X) into Y. This is the canonical factorization of f.
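For finite sets the three properties above reduce to simple cardinality and equality checks; a minimal sketch (helper names are my own), using the square function for the examples:

```python
def is_injective(f, X):
    """f is injective iff distinct inputs give distinct outputs."""
    return len({f(x) for x in X}) == len(X)

def is_surjective(f, X, Y):
    """f is surjective iff its range f(X) equals the codomain Y."""
    return {f(x) for x in X} == set(Y)

def is_bijective(f, X, Y):
    return is_injective(f, X) and is_surjective(f, X, Y)

square = lambda x: x * x
```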
"One-to-one" and "onto" are terms that were more common in the older English language literature; "injective", "surjective", and "bijective" were originally coined as French words in the second quarter of the 20th century by the Bourbaki group and imported into English. As a word of caution, "a one-to-one function" is one that is injective, while a "one-to-one correspondence" refers to a bijective function. Also, the statement "f maps X onto Y" differs from "f maps X into B", in that the former implies that f is surjective, while the latter makes no assertion about the nature of f. In a complicated reasoning, the one letter difference can easily be missed. Due to the confusing nature of this older terminology, these terms have declined in popularity relative to the Bourbakian terms, which have also the advantage of being more symmetrical. Restriction and extension Main article: Restriction (mathematics) If $f\colon X\to Y$ is a function and S is a subset of X, then the restriction of $f$ to S, denoted $f|_{S}$, is the function from S to Y defined by $f|_{S}(x)=f(x)$ for all x in S. Restrictions can be used to define partial inverse functions: if there is a subset S of the domain of a function $f$ such that $f|_{S}$ is injective, then the canonical surjection of $f|_{S}$ onto its image $f|_{S}(S)=f(S)$ is a bijection, and thus has an inverse function from $f(S)$ to S. One application is the definition of inverse trigonometric functions. For example, the cosine function is injective when restricted to the interval [0, π]. The image of this restriction is the interval [−1, 1], and thus the restriction has an inverse function from [−1, 1] to [0, π], which is called arccosine and is denoted arccos. Function restriction may also be used for "gluing" functions together. 
Let $ X=\bigcup _{i\in I}U_{i}$ be the decomposition of X as a union of subsets, and suppose that a function $f_{i}\colon U_{i}\to Y$ is defined on each $U_{i}$ such that for each pair $i,j$ of indices, the restrictions of $f_{i}$ and $f_{j}$ to $U_{i}\cap U_{j}$ are equal. Then this defines a unique function $f\colon X\to Y$ such that $f|_{U_{i}}=f_{i}$ for all i. This is the way that functions on manifolds are defined. An extension of a function f is a function g such that f is a restriction of g. A typical use of this concept is the process of analytic continuation, that allows extending functions whose domain is a small part of the complex plane to functions whose domain is almost the whole complex plane. Here is another classical example of a function extension that is encountered when studying homographies of the real line. A homography is a function $h(x)={\frac {ax+b}{cx+d}}$ such that ad − bc ≠ 0. Its domain is the set of all real numbers different from $-d/c,$ and its image is the set of all real numbers different from $a/c.$ If one extends the real line to the projectively extended real line by including ∞, one may extend h to a bijection from the extended real line to itself by setting $h(\infty )=a/c$ and $h(-d/c)=\infty $. Multivariate function Further information: Real multivariate function Not to be confused with Multivalued function. A multivariate function, or function of several variables is a function that depends on several arguments. Such functions are commonly encountered. For example, the position of a car on a road is a function of the time travelled and its average speed. More formally, a function of n variables is a function whose domain is a set of n-tuples. For example, multiplication of integers is a function of two variables, or bivariate function, whose domain is the set of all pairs (2-tuples) of integers, and whose codomain is the set of integers. The same is true for every binary operation. 
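The homography extension described above can be sketched numerically; the coefficients a, b, c, d below are hypothetical values chosen for illustration, and `float('inf')` stands in for the point ∞ of the projectively extended real line.

```python
from fractions import Fraction

# h(x) = (a x + b)/(c x + d) with ad - bc != 0, extended so that
# h(-d/c) = inf and h(inf) = a/c.
a, b, c, d = 2, 1, 1, 3                   # example coefficients, ad - bc = 5
INF = float('inf')

def h(x):
    if x == INF:
        return Fraction(a, c)             # h(inf) = a/c
    if c * x + d == 0:
        return INF                        # h(-d/c) = inf
    return Fraction(a * x + b, c * x + d)

assert h(INF) == Fraction(2, 1)           # a/c
assert h(Fraction(-3, 1)) == INF          # x = -d/c = -3
assert h(0) == Fraction(1, 3)             # b/d
```

With this extension, h is a bijection of the extended real line onto itself, as the text states.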
More generally, every mathematical operation is defined as a multivariate function. The Cartesian product $X_{1}\times \cdots \times X_{n}$ of n sets $X_{1},\ldots ,X_{n}$ is the set of all n-tuples $(x_{1},\ldots ,x_{n})$ such that $x_{i}\in X_{i}$ for every i with $1\leq i\leq n$. Therefore, a function of n variables is a function $f\colon U\to Y,$ where the domain U has the form $U\subseteq X_{1}\times \cdots \times X_{n}.$ When using function notation, one usually omits the parentheses surrounding tuples, writing $f(x_{1},x_{2})$ instead of $f((x_{1},x_{2})).$ In the case where all the $X_{i}$ are equal to the set $\mathbb {R} $ of real numbers, one has a function of several real variables. If the $X_{i}$ are equal to the set $\mathbb {C} $ of complex numbers, one has a function of several complex variables. It is common to also consider functions whose codomain is a product of sets. For example, Euclidean division maps every pair (a, b) of integers with b ≠ 0 to a pair of integers called the quotient and the remainder: ${\begin{aligned}{\text{Euclidean division}}\colon \quad \mathbb {Z} \times (\mathbb {Z} \setminus \{0\})&\to \mathbb {Z} \times \mathbb {Z} \\(a,b)&\mapsto (\operatorname {quotient} (a,b),\operatorname {remainder} (a,b)).\end{aligned}}$ The codomain may also be a vector space. In this case, one talks of a vector-valued function. If the domain is contained in a Euclidean space, or more generally a manifold, a vector-valued function is often called a vector field. In calculus Further information: History of the function concept The idea of function, starting in the 17th century, was fundamental to the new infinitesimal calculus. At that time, only real-valued functions of a real variable were considered, and all functions were assumed to be smooth. But the definition was soon extended to functions of several variables and to functions of a complex variable. 
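The pair-valued Euclidean division above can be sketched with Python's built-in `divmod`; note that Python's quotient/remainder convention agrees with the usual one for positive operands (conventions for negative divisors differ between languages).

```python
# Euclidean division as a function Z x (Z \ {0}) -> Z x Z, as in the text.
def euclidean_division(a, b):
    q, r = divmod(a, b)                   # quotient and remainder
    return q, r

q, r = euclidean_division(17, 5)
assert (q, r) == (3, 2)
assert 17 == 5 * q + r                    # defining property a = b*q + r
```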
In the second half of the 19th century, the mathematically rigorous definition of a function was introduced, and functions with arbitrary domains and codomains were defined. Functions are now used throughout all areas of mathematics. In introductory calculus, when the word function is used without qualification, it means a real-valued function of a single real variable. The more general definition of a function is usually introduced to second or third year college students with STEM majors, and in their senior year they are introduced to calculus in a larger, more rigorous setting in courses such as real analysis and complex analysis. Real function See also: Real analysis A real function is a real-valued function of a real variable, that is, a function whose codomain is the field of real numbers and whose domain is a set of real numbers that contains an interval. In this section, these functions are simply called functions. The functions that are most commonly considered in mathematics and its applications have some regularity, that is, they are continuous, differentiable, and even analytic. This regularity ensures that these functions can be visualized by their graphs. In this section, all functions are differentiable in some interval. Functions enjoy pointwise operations, that is, if f and g are functions, their sum, difference and product are functions defined by ${\begin{aligned}(f+g)(x)&=f(x)+g(x)\\(f-g)(x)&=f(x)-g(x)\\(f\cdot g)(x)&=f(x)\cdot g(x)\\\end{aligned}}.$ The domains of the resulting functions are the intersection of the domains of f and g. The quotient of two functions is defined similarly by ${\frac {f}{g}}(x)={\frac {f(x)}{g(x)}},$ but the domain of the resulting function is obtained by removing the zeros of g from the intersection of the domains of f and g. The polynomial functions are defined by polynomials, and their domain is the whole set of real numbers. They include constant functions, linear functions and quadratic functions. 
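The pointwise operations and the restricted domain of the quotient can be sketched as follows; f and g below are hypothetical example functions.

```python
# Pointwise operations on real functions, as defined above.
def f(x): return x * x                    # example: f(x) = x^2
def g(x): return x - 1                    # example: g(x) = x - 1

add = lambda x: f(x) + g(x)               # (f + g)(x)
mul = lambda x: f(x) * g(x)               # (f . g)(x)

def quot(x):
    # The zeros of g (here x = 1) are removed from the quotient's domain.
    if g(x) == 0:
        raise ValueError("x is a zero of g, outside the domain of f/g")
    return f(x) / g(x)

assert add(3) == 11                       # 9 + 2
assert mul(3) == 18                       # 9 * 2
assert quot(3) == 4.5                     # 9 / 2
```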
Rational functions are quotients of two polynomial functions, and their domain is the real numbers with a finite number of them removed to avoid division by zero. The simplest rational function is the function $x\mapsto {\frac {1}{x}},$ whose graph is a hyperbola, and whose domain is the whole real line except for 0. The derivative of a real differentiable function is a real function. An antiderivative of a continuous real function is a real function that has the original function as a derivative. For example, the function $x\mapsto {\frac {1}{x}}$ is continuous, and even differentiable, on the positive real numbers. Thus one antiderivative, which takes the value zero for x = 1, is a differentiable function called the natural logarithm. A real function f is monotonic in an interval if the sign of ${\frac {f(x)-f(y)}{x-y}}$ does not depend on the choice of x and y in the interval. If the function is differentiable in the interval, it is monotonic if the sign of the derivative is constant in the interval. If a real function f is monotonic in an interval I, it has an inverse function, which is a real function with domain f(I) and image I. This is how inverse trigonometric functions are defined in terms of trigonometric functions, on intervals where the trigonometric functions are monotonic. Another example: the natural logarithm is monotonic on the positive real numbers, and its image is the whole real line; therefore it has an inverse function that is a bijection between the real numbers and the positive real numbers. This inverse is the exponential function. Many other real functions are defined either by the implicit function theorem (the inverse function is a particular instance) or as solutions of differential equations. 
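The logarithm/exponential pair above gives a concrete inverse-function check; this is a minimal numerical sketch using Python's `math` module.

```python
import math

# log is monotonic on (0, inf) with image R; exp is its inverse.
for x in (0.5, 1.0, 7.25):                # points in the domain of log
    assert math.isclose(math.exp(math.log(x)), x)

for y in (-3.0, 0.0, 2.5):                # points in the domain of exp
    assert math.isclose(math.log(math.exp(y)), y)
```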
For example, the sine and the cosine functions are the solutions of the linear differential equation $y''+y=0$ such that $\sin 0=0,\quad \cos 0=1,\quad {\frac {\partial \sin x}{\partial x}}(0)=1,\quad {\frac {\partial \cos x}{\partial x}}(0)=0.$ Vector-valued function Main articles: Vector-valued function and Vector field When the elements of the codomain of a function are vectors, the function is said to be a vector-valued function. These functions are particularly useful in applications, for example modeling physical properties. For example, the function that associates to each point of a fluid its velocity vector is a vector-valued function. Some vector-valued functions are defined on a subset of $\mathbb {R} ^{n}$ or other spaces that share geometric or topological properties of $\mathbb {R} ^{n}$, such as manifolds. These vector-valued functions are given the name vector fields. Function space Main articles: Function space and Functional analysis In mathematical analysis, and more specifically in functional analysis, a function space is a set of scalar-valued or vector-valued functions, which share a specific property and form a topological vector space. For example, the real smooth functions with a compact support (that is, they are zero outside some compact set) form a function space that is at the basis of the theory of distributions. Function spaces play a fundamental role in advanced mathematical analysis, by allowing the use of their algebraic and topological properties for studying properties of functions. For example, all theorems of existence and uniqueness of solutions of ordinary or partial differential equations result of the study of function spaces. Multi-valued functions Main article: Multi-valued function Several methods for specifying functions of real or complex variables start from a local definition of the function at a point or on a neighbourhood of a point, and then extend by continuity the function to a much larger domain. 
Frequently, for a starting point $x_{0},$ there are several possible starting values for the function. For example, in defining the square root as the inverse function of the square function, for any positive real number $x_{0},$ there are two choices for the value of the square root, one of which is positive and denoted ${\sqrt {x_{0}}},$ and another which is negative and denoted $-{\sqrt {x_{0}}}.$ These choices define two continuous functions, both having the nonnegative real numbers as a domain, and having either the nonnegative or the nonpositive real numbers as images. When looking at the graphs of these functions, one can see that, together, they form a single smooth curve. It is therefore often useful to consider these two square root functions as a single function that has two values for positive x, one value for 0 and no value for negative x. In the preceding example, one choice, the positive square root, is more natural than the other. This is not the case in general. For example, let us consider the implicit function that maps y to a root x of $x^{3}-3x-y=0$ (see the figure on the right). For y = 0 one may choose either $0,{\sqrt {3}},{\text{ or }}-{\sqrt {3}}$ for x. By the implicit function theorem, each choice defines a function; for the first one, the (maximal) domain is the interval [−2, 2] and the image is [−1, 1]; for the second one, the domain is [−2, ∞) and the image is [1, ∞); for the last one, the domain is (−∞, 2] and the image is (−∞, −1]. As the three graphs together form a smooth curve, and there is no reason for preferring one choice, these three functions are often considered as a single multi-valued function of y that has three values for −2 < y < 2, and only one value for y ≤ −2 or y ≥ 2. The usefulness of the concept of multi-valued functions is clearer when considering complex functions, typically analytic functions. 
The domain to which a complex function may be extended by analytic continuation generally consists of almost the whole complex plane. However, when extending the domain through two different paths, one often gets different values. For example, when extending the domain of the square root function, along a path of complex numbers with positive imaginary parts, one gets i for the square root of −1; while, when extending through complex numbers with negative imaginary parts, one gets −i. There are generally two ways of solving the problem. One may define a function that is not continuous along some curve, called a branch cut. Such a function is called the principal value of the function. The other way is to consider that one has a multi-valued function, which is analytic everywhere except for isolated singularities, but whose value may "jump" if one follows a closed loop around a singularity. This jump is called the monodromy. In the foundations of mathematics and set theory The definition of a function that is given in this article requires the concept of set, since the domain and the codomain of a function must be a set. This is not a problem in usual mathematics, as it is generally not difficult to consider only functions whose domain and codomain are sets, which are well defined, even if the domain is not explicitly defined. However, it is sometimes useful to consider more general functions. For example, the singleton set may be considered as a function $x\mapsto \{x\}.$ Its domain would include all sets, and therefore would not be a set. In usual mathematics, one avoids this kind of problem by specifying a domain, which means that one has many singleton functions. 
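The square-root branch-cut behaviour described above can be observed numerically with `cmath.sqrt`, which returns the principal value; approaching −1 from just above and just below the negative real axis gives values near i and −i respectively.

```python
import cmath

# The principal square root has its branch cut on the negative real axis:
# the two one-sided limits at -1 disagree.
above = cmath.sqrt(complex(-1.0, 1e-12))  # approached with Im > 0
below = cmath.sqrt(complex(-1.0, -1e-12)) # approached with Im < 0

assert abs(above - 1j) < 1e-6             # close to  i
assert abs(below + 1j) < 1e-6             # close to -i
assert abs(above - below) > 1.0           # the value "jumps" across the cut
```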
However, when establishing foundations of mathematics, one may have to use functions whose domain, codomain or both are not specified, and some authors, often logicians, give precise definition for these weakly specified functions.[23] These generalized functions may be critical in the development of a formalization of the foundations of mathematics. For example, Von Neumann–Bernays–Gödel set theory, is an extension of the set theory in which the collection of all sets is a class. This theory includes the replacement axiom, which may be stated as: If X is a set and F is a function, then F[X] is a set. In computer science Main articles: Function (computer programming) and Lambda calculus In computer programming, a function is, in general, a piece of a computer program, which implements the abstract concept of function. That is, it is a program unit that produces an output for each input. However, in many programming languages every subroutine is called a function, even when there is no output, and when the functionality consists simply of modifying some data in the computer memory. Functional programming is the programming paradigm consisting of building programs by using only subroutines that behave like mathematical functions. For example, if_then_else is a function that takes three functions as arguments, and, depending on the result of the first function (true or false), returns the result of either the second or the third function. An important advantage of functional programming is that it makes easier program proofs, as being based on a well founded theory, the lambda calculus (see below). Except for computer-language terminology, "function" has the usual mathematical meaning in computer science. In this area, a property of major interest is the computability of a function. 
For giving a precise meaning to this concept, and to the related concept of algorithm, several models of computation have been introduced, the old ones being general recursive functions, lambda calculus and Turing machine. The fundamental theorem of computability theory is that these three models of computation define the same set of computable functions, and that all the other models of computation that have ever been proposed define the same set of computable functions or a smaller one. The Church–Turing thesis is the claim that every philosophically acceptable definition of a computable function defines also the same functions. General recursive functions are partial functions from integers to integers that can be defined from • constant functions, • successor, and • projection functions via the operators • composition, • primitive recursion, and • minimization. Although defined only for functions from integers to integers, they can model any computable function as a consequence of the following properties: • a computation is the manipulation of finite sequences of symbols (digits of numbers, formulas, ...), • every sequence of symbols may be coded as a sequence of bits, • a bit sequence can be interpreted as the binary representation of an integer. Lambda calculus is a theory that defines computable functions without using set theory, and is the theoretical background of functional programming. It consists of terms that are either variables, function definitions (𝜆-terms), or applications of functions to terms. Terms are manipulated through some rules, (the α-equivalence, the β-reduction, and the η-conversion), which are the axioms of the theory and may be interpreted as rules of computation. In its original form, lambda calculus does not include the concepts of domain and codomain of a function. Roughly speaking, they have been introduced in the theory under the name of type in typed lambda calculus. 
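The primitive-recursion operator mentioned above can be sketched in a few lines; building addition from zero, successor, and projection is the classic first example. The encoding below (argument order, use of a loop instead of literal recursion) is one illustrative choice, not the unique definition.

```python
# A sketch of primitive recursion over the natural numbers.
def succ(n):
    return n + 1

def primitive_recursion(base, step):
    """Return h with h(0, x) = base(x) and h(n+1, x) = step(h(n, x), n, x)."""
    def h(n, x):
        acc = base(x)
        for k in range(n):                # unfold the recursion iteratively
            acc = step(acc, k, x)
        return acc
    return h

# Addition: add(0, x) = x (projection), add(n+1, x) = succ(add(n, x)).
add = primitive_recursion(lambda x: x, lambda acc, k, x: succ(acc))

assert add(3, 4) == 7
assert add(0, 9) == 9
```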
Most kinds of typed lambda calculi can define fewer functions than untyped lambda calculus. See also Subpages • List of types of functions • List of functions • Function fitting • Implicit function Generalizations • Higher-order function • Homomorphism • Morphism • Microfunction • Distribution • Functor Related topics • Associative array • Closed-form expression • Elementary function • Functional • Functional decomposition • Functional predicate • Functional programming • Parametric equation • Set function • Simple function Notes 1. This definition of "graph" refers to a set of pairs of objects. Graphs, in the sense of diagrams, are most applicable to functions from the real numbers to themselves. All functions can be described by sets of pairs but it may not be practical to construct a diagram for functions between other sets (such as sets of matrices). 2. This follows from the axiom of extensionality, which says two sets are the same if and only if they have the same members. Some authors drop codomain from a definition of a function, and in that definition, the notion of equality has to be handled with care; see, for example, "When do two functions become equal?". Stack Exchange. August 19, 2015. 3. called the domain of definition by some authors, notably computer science 4. Here "elementary" has not exactly its common sense: although most functions that are encountered in elementary courses of mathematics are elementary in this sense, some elementary functions are not elementary for the common sense, for example, those that involve roots of polynomials of high degree. 5. By definition, the graph of the empty function to X is a subset of the Cartesian product ∅ × X, and this product is empty. 6. The axiom of choice is not needed here, as the choice is done in a single set. References 1. Halmos 1970, p. 30; the words map, mapping, transformation, correspondence, and operator are often used synonymously. 2. Halmos 1970 3. 
"Mapping", Encyclopedia of Mathematics, EMS Press, 2001 [1994] 4. "function | Definition, Types, Examples, & Facts". Encyclopedia Britannica. Retrieved 2020-08-17. 5. Spivak 2008, p. 39. 6. "Functions - Composition and Inverse | Duke University - KeepNotes". keepnotes.com. Retrieved 2023-08-09. 7. Kudryavtsev, L.D. (2001) [1994], "Function", Encyclopedia of Mathematics, EMS Press 8. Taalman, Laura; Kohn, Peter (2014). Calculus. New York City: W. H. Freeman and Company. p. 3. ISBN 978-1-4292-4186-1. LCCN 2012947365. OCLC 856545590. OL 27544563M. 9. Trench, William F. (2013) [2003]. Introduction to Real Analysis (2.04th ed.). Pearson Education (originally; self-republished by the author). pp. 30–32. ISBN 0-13-045786-8. LCCN 2002032369. OCLC 953799815. Zbl 1204.00023. 10. Thomson, Brian S.; Bruckner, Judith B.; Bruckner, Andrew M. (2008) [2001]. Elementary Real Analysis (PDF) (2nd ed.). Prentice Hall (originally; 2nd ed. self-republished by the authors). pp. A-4–A-5. ISBN 978-1-4348-4367-8. OCLC 1105855173. OL 31844948M. Zbl 0872.26001. 11. Schmidt, Gunther (2011). "§5.1 Functions". Relational Mathematics. Encyclopedia of Mathematics and its Applications. Vol. 132. Cambridge University Press. pp. 49–60. ISBN 978-0-521-76268-7. 12. Ron Larson, Bruce H. Edwards (2010), Calculus of a Single Variable, Cengage Learning, p. 19, ISBN 978-0-538-73552-0 13. Weisstein, Eric W. "Map". mathworld.wolfram.com. Retrieved 2019-06-12. 14. Lang, Serge (1987). "III §1. Mappings". Linear Algebra (3rd ed.). Springer. p. 43. ISBN 978-0-387-96412-6. A function is a special type of mapping, namely it is a mapping from a set into the set of numbers, i.e. into, R, or C or into a field K. 15. Apostol, T.M. (1981). Mathematical Analysis (2nd ed.). Addison-Wesley. p. 35. ISBN 978-0-201-00288-1. OCLC 928947543. 16. James, Robert C.; James, Glenn (1992). Mathematics dictionary (5th ed.). Van Nostrand Reinhold. p. 202. ISBN 0-442-00741-8. OCLC 25409557. 17. James & James 1992, p. 48 18. 
Gowers, Timothy; Barrow-Green, June; Leader, Imre, eds. (2008). The Princeton Companion to Mathematics. Princeton, New Jersey: Princeton University Press. p. 11. doi:10.1515/9781400830398. ISBN 978-0-691-11880-2. JSTOR j.ctt7sd01. LCCN 2008020450. MR 2467561. OCLC 227205932. OL 19327100M. Zbl 1242.00016. 19. Quantities and Units - Part 2: Mathematical signs and symbols to be used in the natural sciences and technology, p. 15. ISO 80000-2 (ISO/IEC 2009-12-01) 20. Ivanova, O.A. (2001) [1994], "Injection", Encyclopedia of Mathematics, EMS Press 21. Ivanova, O.A. (2001) [1994], "Surjection", Encyclopedia of Mathematics, EMS Press 22. Ivanova, O.A. (2001) [1994], "Bijection", Encyclopedia of Mathematics, EMS Press 23. Gödel 1940, p. 16; Jech 2003, p. 11; Cunningham 2016, p. 57 Sources • Bartle, Robert (1976). The Elements of Real Analysis (2nd ed.). Wiley. ISBN 978-0-471-05465-8. OCLC 465115030. • Bloch, Ethan D. (2011). Proofs and Fundamentals: A First Course in Abstract Mathematics. Springer. ISBN 978-1-4419-7126-5. • Cunningham, Daniel W. (2016). Set theory: A First Course. Cambridge University Press. ISBN 978-1-107-12032-7. • Gödel, Kurt (1940). The Consistency of the Continuum Hypothesis. Princeton University Press. ISBN 978-0-691-07927-1. • Halmos, Paul R. (1970). Naive Set Theory. Springer-Verlag. ISBN 978-0-387-90092-6. • Jech, Thomas (2003). Set theory (3rd ed.). Springer-Verlag. ISBN 978-3-540-44085-7. • Spivak, Michael (2008). Calculus (4th ed.). Publish or Perish. ISBN 978-0-914098-91-1.
External links • "Function", Encyclopedia of Mathematics, EMS Press, 2001 [1994] • The Wolfram Functions Site gives formulae and visualizations of many mathematical functions.
• NIST Digital Library of Mathematical Functions
\begin{document} \newcommand\ud{\mathrm{d}} \newcommand\dist{\buildrel\rm d\over\sim} \newcommand\ind{\stackrel{\rm indep.}{\sim}} \newcommand\iid{\stackrel{\rm i.i.d.}{\sim}} \newcommand\logit{{\rm logit}} \renewcommand\r{\right} \renewcommand\l{\left} \newcommand\pre{{(t-1)}} \newcommand\cur{{(t)}} \newcommand\cA{\mathcal{A}} \newcommand\cB{\mathcal{B}} \newcommand\bone{\mathbf{1}} \newcommand\E{\mathbb{E}} \newcommand\Var{{\rm Var}} \newcommand\cD{\mathcal{D}} \newcommand\cK{\mathcal{K}} \newcommand\cP{\mathcal{P}} \newcommand\cT{\mathcal{T}} \newcommand\cX{\mathcal{X}} \newcommand\cXR{\mathcal{X,R}} \newcommand\wX{\widetilde{X}} \newcommand\wT{\widetilde{T}} \newcommand\wY{\widetilde{Y}} \newcommand\wZ{\widetilde{Z}} \newcommand\bX{\mathbf{X}} \newcommand\bx{\mathbf{x}} \newcommand\bT{\mathbf{T}} \newcommand\bt{\mathbf{t}} \newcommand\bwT{\widetilde{\mathbf{T}}} \newcommand\bwt{\tilde{\mathbf{t}}} \newcommand\bbT{\overline{\mathbf{T}}} \newcommand\bbt{\overline{\mathbf{t}}} \newcommand\ubT{\underline{\mathbf{T}}} \newcommand\ubt{\underline{\mathbf{t}}} \newcommand\bhT{\widehat{\mathbf{T}}} \newcommand\bht{\hat{\mathbf{t}}} \newcommand\cF{\mathcal{F}} \newcommand\cC{\mathcal{C}} \newcommand\cS{\mathcal{S}} \newcommand\cN{\mathcal{N}} \newcommand\bZ{\mathbf{Z}} \newcommand\bz{\mathbf{z}} \newcommand\bW{\mathbf{W}} \newcommand\bY{\mathbf{Y}} \newcommand\bC{\mathbf{C}} \newcommand\bc{\mathbf{c}} \newcommand\bV{\mathbf{V}} \newcommand\bv{\mathbf{v}} \newcommand\bbv{\mathbf{\bar{v}}} \newcommand\bbx{\mathbf{\bar{x}}} \newcommand\bd{\mathbf{d}} \newcommand\bP{\mathbf{P}} \newcommand\bu{\mathbf{u}} \newcommand\cw{\mathcal{w}} \newcommand\cW{\mathcal{W}} \newcommand\bg{\bar{g}} \newcommand\pg{g^\prime} \newcommand\bw{\mathbf{w}} \newcommand\cG{\mathcal{G}} \newcommand\cH{\mathcal{H}} \newcommand\cU{\mathcal{U}} \newcommand\pS{\mathscr{S}} \newcommand\pP{\mathscr{P}} \newcommand\cM{\mathcal{M}} \newcommand\cO{\mathcal{O}} \newcommand\cJ{\mathcal{J}} \newcommand\CDE{{\rm 
{\bf CDE}}} \newcommand\ADE{{\rm {\bf ADE}}} \newcommand\cADE{{\rm {\bf cADE}}} \newcommand\DE{{\rm {\bf DE}}} \newcommand\cANSE{{\rm {\bf cANSE}}} \newcommand\ANSE{{\rm {\bf ANSE}}} \newcommand\ASE{{\rm {\bf ASE}}} \newcommand\MSE{{\rm {\bf MSE}}} \newcommand\INT{{\rm {\bf INT}}} \newcommand\ATSE{{\rm {\bf ATSE}}} \newcommand\MDE{{\rm {\bf MDE}}} \newcommand\CSE{{\rm {\bf CSE}}} \newcommand\TSE{{\rm {\bf TSE}}} \newcommand\Prz{{\rm Pr}_{\mathbf{z}}} \newcommand\MR{{\rm MR}} \newcommand\RR{{\rm RR}} \newcommand\Nor{{\rm Normal}} \newcommand\cATIE{{\sc cATIE}} \newcommand\ATIE{{\sc ATIE}} \newcommand\cg{\cellcolor[gray]{0.7}} \newcommand\mo{\mathbf{1}} \newcommand\PA{\mbox{PA}} \newcommand\PAo{\mbox{PA}^\ast} \newcommand\sign{\texttt{sign}} \newcommand{\operatornamewithlimits{argmax}}{\operatornamewithlimits{argmax}} \newcommand{\operatornamewithlimits{argmin}}{\operatornamewithlimits{argmin}} \newcommand{\operatornamewithlimits{min}}{\operatornamewithlimits{min}} \newcommand{\operatornamewithlimits{max}}{\operatornamewithlimits{max}} \newcommand\spacingset[1]{\renewcommand{\baselinestretch} {#1}\small\normalsize} \newcommand{\nec}[1]{\textbf{\textcolor{magenta}{(NE: #1)}}} \spacingset{1.25} \newcommand{\tit}{ \mbox{ {Covariate Selection for Generalizing Experimental Results:}} \\ Application to Large-Scale Development Program in Uganda} \title{\tit \thanks{We would like to thank Christopher Blattman, Nathan Fiala, and Sebastian Martinez for sharing their data. We are also grateful to Alexander Coppock, Don Green, Chad Hazlett, Zhichao Jiang, and Soichiro Yamauchi as well as the participants of the MPSA, the UCLA IDSS workshop and the Yale Quantitative Methods Workshop, for their helpful comments on an earlier version of the paper. We also thank the editor and two anonymous reviewers for providing us with valuable comments. }} \spacingset{1.0} \author{Naoki Egami\thanks{Assistant Professor, Department of Political Science, Columbia University, New York NY 10027. 
Email: \href{mailto:[email protected]}{[email protected]}, URL: \url{https://naokiegami.com} } \hspace{1in} Erin Hartman\thanks{Assistant Professor, Department of Statistics and Department of Political Science, University of California, Los Angeles, Los Angeles, CA 90095. Email: \href{mailto:[email protected]}{[email protected]}, URL: \url{www.erinhartman.com}} } \date{This Draft: December 19, 2020} \maketitle \spacingset{1.25} \pdfbookmark[1]{Title Page}{Title Page} \thispagestyle{empty} \setcounter{page}{0} \begin{abstract} Generalizing estimates of causal effects from an experiment to a target population is of interest to scientists. However, researchers are usually constrained by available covariate information. Analysts can often collect far fewer variables from population samples than from experimental samples, which has limited the applicability of existing approaches that assume rich covariate data from both experimental and population samples. In this article, we examine how to select covariates necessary for generalizing experimental results under such data constraints. In our concrete context of a large-scale development program in Uganda, although more than 40 pre-treatment covariates are available in the experiment, only 8 of them were also measured in a target population. We propose a method to estimate a {\it separating} set -- a set of variables affecting both the sampling mechanism and treatment effect heterogeneity -- and show that the population average treatment effect (PATE) can be identified by adjusting for estimated separating sets. Our algorithm only requires a rich set of covariates in the experimental data, not in the target population, by incorporating researcher-specific constraints on what variables are measured in the population data. Analyzing the development experiment in Uganda, we show that the proposed algorithm can allow for estimation of the PATE in situations where conventional methods fail due to data requirements. 
\end{abstract} \noindent \small{{\it Keywords:} Causal inference, External validity, Generalization, Randomized experiments} \newcommand\origspace{1.5} \spacingset{\origspace} \section{Introduction} Over the last few decades, social and biomedical scientists have developed and applied an array of statistical tools to make valid causal inferences \citep{imbens2015causal}. In particular, randomized experiments have become the mainstay for estimating causal effects. Although many scholars agree upon the high internal validity of experimental results, there is a debate about how scientists should infer the impact of policies and interventions on broader populations \citep{imai2008misunderstandings, Angrist:2010jv, Imbens:2010hu, bare2016causal, Deaton:2017kg}. This issue of generalizability \citep{Stuart:2011hr} is pervasive in practice because randomized controlled trials are often conducted on non-representative samples \citep{shadish2002experiment, druckman2011cambridge, Allcott:2015eh, Stuart:2015id}. In this paper, we examine how to generalize the experimental results of the Youth Opportunities Program (YOP) in Uganda, which aims to help the poor and unemployed become self-employed artisans and increase their incomes. This large-scale development program, involving more than 10,000 individuals, was implemented by the government of Uganda and the authors of \citet{blattman2013generating} from 2008 to 2012. Young adults in Northern Uganda were invited to form groups and submit grant proposals for vocational training and to start independent trades. To evaluate the causal impact of the program, funding was randomly assigned and a host of economic variables (e.g., employment and income) were measured. The question of generalizability is especially important in this application. 
The aim of such development programs is elegantly noted in \citet{duflo2005use}, ``the benefits of knowing which programs work and which do not extend far beyond any program or agency, and credible impact evaluations are global public goods in the sense that they can offer reliable guidance to international organizations, governments, donors, and nongovernmental organizations (NGOs) beyond national borders.'' Researchers and policy makers are not concerned solely with the individuals who participated in the trial. The ultimate goal is to learn whether and how much the program can improve economic conditions in a larger target population --- about 10 million people in Northern Uganda \citep{uganda2007}. Despite its importance, estimating population average treatment effects is not straightforward because we have to adjust for differences between experimental samples and the target population. One pervasive question is: which covariates should, and can, we adjust for? Although previous research shows that adjusting for a set of variables explaining the sampling mechanism or treatment effect heterogeneity is sufficient for generalization \citep{Stuart:2011hr, bare2014recover}, researchers are often constrained by available covariate information in applied settings. In this paper, we address this problem of covariate selection for estimating population average treatment effects. In particular, we develop a data-driven method to estimate {\it a separating set} -- a set of variables affecting both the sampling mechanism and treatment effect heterogeneity. Recent papers show that the population average treatment effect can be identified by adjusting for this separating set \citep{Cole:2010bf, Tipton:2013ew, Pearl:2014hb, Kern:2016ez}. 
In Section~\ref{sec:sep}, we extend this result and show that the separating set relaxes the data requirements of conventional methods by generalizing two widely-used covariate selection approaches: (1) {\it a sampling set} -- a set of variables explaining how units are sampled into a given experiment \citep{pressler2013use, Hartman:2015hq, Buchanan:2018dd} and (2) {\it a heterogeneity set} -- a set of variables explaining treatment effect heterogeneity \citep{Kern:2016ez, Nguyen:2017jw}. In Section~\ref{sec:dis}, we demonstrate that such separating sets are estimable from the experimental data and provide a new estimation algorithm based on Markov random fields. This algorithm only requires that a sampling set be observed in the experimental sample, not in the target population. We estimate a separating set as a set that makes a sampling set conditionally independent of observed outcomes in the experimental data. Therefore, in contrast to conventional methods, we can exploit all covariates in the experiment to find necessary separating sets, even when there are few variables measured in both the experimental and population data. Importantly, our proposed approach maintains a widely used assumption that a sampling set is observed in the experimental data. However, unlike many existing methods, we do not assume that a sampling set is also observed in the population data. This distinction in data requirements is subtle and yet practically essential because in many applied contexts, a larger number of covariates are measured in the experimental data than in the population data. For example, the experimental data of \citet{blattman2013generating} contains about 40 pre-treatment covariates, even though only 8 of them are also measured in the target population. To estimate separating sets, our proposed method incorporates such user constraints on what variables can feasibly be collected in the target population. 
For instance, suppose people selected into the YOP due to social connections, which were unmeasured in the target population. Even in this scenario, where conventional methods fail, the proposed method can estimate separating sets accounting for this data constraint, if any exist. Our article builds on a growing literature on the population average treatment effect, which has two general directions. First, many previous studies have focused on articulating identification assumptions and proposing consistent estimators of the population average treatment effect \citep[e.g.,][]{Stuart:2011hr, Hartman:2015hq, Buchanan:2018dd}. In particular, recent papers explicitly show that researchers have to jointly consider treatment effect heterogeneity and the sampling mechanism \citep{Cole:2010bf, Tipton:2013ew, Pearl:2014hb, Kern:2016ez}. These existing approaches often assume researchers have access to a large number of covariates in both the experimental sample and the non-experimental target population. In contrast, we provide a new data-driven covariate selection algorithm to find separating sets in situations where researchers face data constraints in the target population. Our focus on covariate selection is similar to recent influential work on causal directed acyclic graphs (causal DAGs) \citep{bare2014recover, bare2016causal}. We differ from the DAG-based approaches in that we empirically estimate separating sets under assumptions about sampling and heterogeneity sets rather than analytically selecting separating sets from fully specified causal DAGs. Although assumptions about the entire causal DAG are sufficient for covariate selection, the proposed algorithm can estimate separating sets under weaker assumptions about sampling and heterogeneity sets at the expense of statistical uncertainty. Research in the second direction argues that the necessary assumptions for existing methods are often too strong in practice. 
Recent papers have explored methods for sensitivity analyses \citep[e.g.,][]{andrews2019} and bounds \citep[e.g.,][]{Chan:2017de} to achieve partial identification under weaker assumptions. In particular, \citet{Nguyen:2017jw, nguyen2018} also consider data scenarios where the experimental data has a richer set of covariates than the target population data, and propose a sensitivity analysis by specifying sensitivity parameters that capture the distribution of covariates unmeasured in the target population data. Our paper is complementary to these approaches. We instead focus on the point identification of the population average treatment effect and alleviate strong assumptions about data requirements by adding an additional step of estimating a separating set. Our approach can also be used in conjunction with sensitivity analysis; the proposed method can estimate a smaller separating set, thereby reducing the number of unobserved covariates that sensitivity analyses have to consider. We provide further discussion of the relationship between our proposed approach and sensitivity analysis in Section~\ref{subsec:sa}. \section{Youth Opportunities Program in Uganda} \label{sec:app} As well documented by the World Bank, a large number of young adults in developing countries are unemployed or underemployed \citep{worldbank2012}. In addition to its direct implications for poverty, a concern for policy makers is that such large, young, unemployed populations can increase the risk of crime and social unrest \citep{blattman2013generating}. Uganda, especially conflict-affected Northern Uganda, is not an exception. According to estimates from the government, two-thirds of northern Ugandans could not meet basic needs, about 50\% were illiterate, and most were underemployed in subsistence agriculture in 2006 \citep{uganda2007}. 
In this paper, we study the Youth Opportunities Program (YOP) in Uganda, designed to help the poor and unemployed become self-employed artisans and increase incomes. This intervention is one example of widely used cash transfer programs in which participants are offered a certain amount of cash in the hope that they invest in training and start new, profitable enterprises. In 2008, the government invited young adults in Northern Uganda to form groups and submit grant proposals for how they would use a grant for vocational training and business start-up. Then, funding was randomly assigned among 535 screened, eligible applicant groups --- 265 and 270 groups to treatment and control, respectively. Treatment groups received a one-time unsupervised grant worth \$7,500 on average --- about \$382 per group member, roughly their average annual income. Following the original analysis, we focus on a binary treatment, whether they receive any grants or not through the YOP. To evaluate the impact of this intervention, \citet{blattman2013generating} surveyed 5 people per group three times over four years, resulting in a panel of 2,598 individuals after removing 79 observations due to missing data. They measured 17 outcome variables across five dimensions --- employment (7), income (2), investments (3), business formality (3), and urbanization (2). They find that the effects of the YOP are large across all dimensions. Notably, after two years, the treatment groups were 4.5 times more likely to have vocational training, 2.6 times more likely to engage with a skilled trade and had 16\% more hours of employment and 42\% higher earnings. 
\newcommand{\PreserveBackslash}[1]{\let\temp=\\#1\let\\=\temp} \newcolumntype{C}[1]{>{\PreserveBackslash\centering}p{#1}} \newcolumntype{R}[1]{>{\PreserveBackslash\raggedleft}p{#1}} \newcolumntype{L}[1]{>{\PreserveBackslash\raggedright}p{#1}} Although it is unambiguous that the YOP had large, persistent positive effects on experimental subjects, it is of great policy interest to empirically investigate how much these experimental estimates are generalizable to a larger population. Estimating population average treatment effects (PATE) can inform which specific development policies governments should scale up. While the focus of the program was on Northern Uganda as a whole, participants of the YOP were inevitably not representative, as in many other development programs. To take into account differences between the experimental samples and Northern Uganda's population, \citet{blattman2013generating} merged their experimental samples with a 2008 population-based household survey, the Northern Uganda Survey (NUS). They adjusted for eight variables shared by the experimental and population data: gender, age, urban status, marital status, school attainment, household size, durable assets, and district indicators. In Table~\ref{tab:app}, we report estimates based on an inverse probability weighting (IPW) estimator \citep{Stuart:2011hr} that adjusts for the original eight variables.\footnote{\spacingset{1}{\footnotesize Although the original authors rely on weighted linear regression models in their paper, we focus on the IPW estimator widely studied in the literature of generalization \citep[e.g.,][]{Buchanan:2018dd}.}} As a reference, we also report estimates of the average treatment effect within the experimental sample, called the sample average treatment effect (SATE). Estimates of the SATEs and PATEs have roughly the same sign, suggesting that the program will have a positive impact on a variety of outcomes even in the target population. 
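To make the weighting logic behind the IPW estimator concrete, the following is a minimal, self-contained numpy sketch on simulated data. The covariate, sampling model, outcome model, and sample sizes are all invented for illustration (they are not the YOP data), and for simplicity the weights use the true sampling probabilities rather than probabilities estimated by a logistic regression, as is done in Table~\ref{tab:app}:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated target population: a single covariate w drives both selection
# into the experiment and treatment effect heterogeneity (all invented).
N = 200_000
w = rng.normal(size=N)
p_sample = 1.0 / (1.0 + np.exp(-(-2.0 + 1.5 * w)))   # true P(S=1 | w)
S = rng.random(N) < p_sample                          # sampling indicator

# Potential outcomes with heterogeneous effect tau(w) = 1 + w.
y0 = w + rng.normal(size=N)
y1 = y0 + 1.0 + w
pate_true = np.mean((y1 - y0)[~S])                    # estimand: S = 0 units

# Treatment is randomized only inside the experiment.
T = rng.random(S.sum()) < 0.5
y_obs = np.where(T, y1[S], y0[S])

# Naive in-sample contrast (SATE estimate): biased for the PATE because
# the experiment oversamples high-w units.
sate_hat = y_obs[T].mean() - y_obs[~T].mean()

# IPW: reweight experimental units by the odds P(S=0|w) / P(S=1|w) so the
# weighted experimental sample mimics the target population.
odds = (1.0 - p_sample[S]) / p_sample[S]
pate_ipw = (np.sum(odds * T * y_obs) / np.sum(odds * T)
            - np.sum(odds * (1 - T) * y_obs) / np.sum(odds * (1 - T)))
```

Because this simulated experiment oversamples high-$w$ units and the treatment effect increases in $w$, the unweighted in-sample contrast overstates the PATE, while the odds-weighted contrast recovers it.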
This finding, however, rests on the assumption that the original eight variables adjust for all relevant differences between the experimental sample and the target population. Given that the magnitude of the PATE estimates has strong implications for a cost-benefit analysis of these large-scale, expensive interventions, it is critical to examine this common methodological challenge of covariate selection for generalizing experimental results. \begin{table}[!t] \centering \small \resizebox{\textwidth}{!}{
\begin{tabular}{@{\extracolsep{5pt}} L{2.1in}C{0.5in}C{0.55in}|L{2in}C{0.5in}C{0.55in}}
\hline \hline
& & Original & & & Original \\[-5pt]
& SATE & PATE & & SATE & PATE \\[-8pt]
& estimate & estimate & & estimate & estimate \\ \hline
\normalsize \bf \underline {Employment} & & & \normalsize \bf \underline {Investments} & & \\
Average employment hours & 5.13 & 5.87 & Vocational training & 0.55 & 0.51 \\[-3pt]
& (1.22) & (3.25) & & (0.03) & (0.07) \\
Agricultural & -0.01 & 0.75 & Hours of vocational training & 349.65 & 277.18 \\[-3pt]
& (1.02) & (2.01) & & (23.61) & (51.21) \\
Nonagricultural & 5.14 & 5.12 & Business assets & 426.17 & 400.31 \\[-3pt]
& (0.92) & (2.62) & & (82.79) & (125.19) \\
Skilled trades only & 4.75 & 2.73 & & & \\[-3pt]
& (0.64) & (1.56) & \normalsize \bf \underline {Business Formality} & & \\
No employment hours & -0.02 & 0.00 & Maintain records & 0.12 & 0.19 \\[-3pt]
& (0.01) & (0.02) & & (0.03) & (0.07) \\
Any skilled trade & 0.27 & 0.24 & Registered & 0.06 & 0.07 \\[-3pt]
& (0.03) & (0.07) & & (0.02) & (0.04) \\
Works mostly in a skilled trade & 0.06 & -0.01 & Pays taxes & 0.08 & -0.01 \\[-3pt]
& (0.01) & (0.03) & & (0.02) & (0.06) \\
\normalsize \bf \underline {Income} & & & \normalsize \bf \underline {Urbanization} & & \\
Cash earnings & 13.41 & 12.15 & Changed parish & 0.01 & -0.03 \\[-3pt]
& (3.87) & (5.3) & & (0.02) & (0.07) \\
Durable assets & 0.11 & 0.09 & Lives in Urban area & -0.01 & -0.06 \\[-3pt]
& (0.05) & (0.16) & & (0.03) & (0.07) \\ \hline
\end{tabular}}
\spacingset{1.2}{ \caption{Estimates of Sample Average Treatment Effects and Population Average Treatment Effects based on the Original Eight Variables. {\it Note:} We estimated population average treatment effects (PATE) of the above 17 outcomes using an inverse probability weighting estimator with standard errors clustered by group. Weights are estimated by a logistic regression including the eight variables additively. See details of the estimation in Section~\ref{sec:app_ana}. As a reference, we also report estimates of the sample average treatment effect (SATE).}\label{tab:app}} \end{table} In practice, there are several pervasive concerns about covariate selection. First, although it is common to adjust for all observed covariates shared by experimental and population data, it is unclear whether such sets of covariates include all necessary covariates for generalization. In fact, the authors carefully pay attention to this point in the original paper: ``young adults are selected into our sample because of unobserved initiative, connections or affinity for entrepreneurship'' \citep{blattman2013generating}. If there are unobserved differences between the experimental and population samples, the original PATE estimate would be biased. Second, it is also possible that the original analysis adjusted for unnecessary variables, resulting in inefficient estimators of the PATE. \citet{miratrix_sekhon_theodoridis_campos_2018} show that weighting on many variables, particularly those not highly correlated with treatment effect heterogeneity, can lead to inefficient estimation of the PATE. In this paper, we investigate necessary and sufficient sets of covariates for generalizing experimental estimates, called separating sets, and then provide a new algorithm to empirically estimate such sets. We select the separating sets under several different assumptions and assess how estimates of the PATE vary. 
Our reanalysis of this experiment appears in Section~\ref{sec:app_ana}. \section{Separating Sets for Generalization} \label{sec:sep} This section sets up the potential outcomes framework \citep{neyman1923, rubin1974causal} for studying population average treatment effects. We review a definition of {\it a separating set} --- a set of variables affecting both the sampling mechanism and treatment effect heterogeneity --- and then show that a sampling set and a heterogeneity set, the main focus of existing approaches, are special cases of the separating sets. \subsection{The Setup} \label{subsec:setup} We consider a scenario in which we have two data sets. Following \citet{Buchanan:2018dd}, we define the first sample of $n$ individuals to be participants in a randomized experiment (``Experimental Data'') and the second data set to be a random sample of $m$ individuals from the target population (``Population Data''). In our application, the experimental data has 2,598 individuals and the population data contains 21,348 individuals. We define a sampling indicator $S_i$ taking $1$ if unit $i$ is in the experiment and $0$ if unit $i$ is in the target population. We assume that every unit has non-zero probability of being in the experiment.\footnote{This assumption of non-zero sampling probability is known as the ``Positivity of trial participation'' \citep{colnet2020causal}, and is commonly made in the generalization literature. This assumption is untestable and researchers have to evaluate it with domain knowledge \citep{dahabreh2020extending}. When this assumption is violated, researchers have to rely on modeling assumptions and model-based extrapolation \citep{mealli2019overlap}. 
Alternatively, researchers can restrict their attention to a subset of the target population that has non-zero sampling probability \citep[e.g.,][]{tipton2016site}, which has been the most common approach in the causal inference literature to deal with non-overlap between the treatment and control groups.} Although experimental units can be randomly sampled from the target population in ideal settings, units often non-randomly select into the experiment, as in the YOP, making the experimental sample non-representative. Note that we consider cases in which units are either in the experimental data or in the target population data, but similar results hold for cases in which the experimental sample is a subset of the target population. Let $T_i$ be a binary treatment assignment variable for unit $i$ with $T_i= 1$ for treatment and $0$ for control. We define $Y_i(t)$ to be the potential outcome variable of unit $i$ if the unit were to receive the treatment $t$ for $t \in \{0, 1\}$. In this paper, we make a stability assumption, which states that there is neither interference between units nor different versions of the treatment, either across units or settings \citep{Rubin:1990tm, Tipton:2013ew, Hartman:2015hq}. We define pre-treatment covariates $\bX_i$ to be any variables not affected by the treatment variable. We are interested in estimating the average treatment effect in the target population. We call this causal estimand the population average treatment effect (PATE). \begin{definition}[Population Average Treatment Effect] \ \spacingset{1.2}{ $$ \tau \equiv \E[Y_i(1) - Y_i(0) \mid S_i = 0], $$ where $S_i = 0$ represents the target population data. } \end{definition} The treatment assignment mechanism is controlled by researchers within the experiment ($S_i=1$), but it is unknown for units in the target population ($S_i=0$; observational data). Formally, we assume that the treatment assignment is randomized within the experiment. 
\begin{assumption}[Randomization in Experiment] \label{random} \ \spacingset{1}{ $$ \{Y_i(1), Y_i(0), \bX_i\} \ \mbox{$\perp\!\!\!\perp$} \ T_i \mid S_i=1$$} \end{assumption} This assumption holds by design in randomized experiments. Here, we consider unconditional randomization, but results in the paper can be naturally extended to settings with randomization conditional on some pre-treatment covariates. Finally, for each unit in the experimental condition, only one of the potential outcome variables can be observed, and the realized outcome variable for unit $i$ is denoted by $Y_i = T_i Y_i(1) + (1-T_i) Y_i(0)$ \citep{rubin1974causal}. \subsection{Definition of Separating Sets and Identification} \label{subsec:sep_set_def} Recent papers show that the PATE can be identified by a set of variables affecting both treatment effect heterogeneity and the sampling mechanism \citep{Cole:2010bf, Tipton:2013ew, Pearl:2014hb, Kern:2016ez}. In this paper, we refer to this set as a {\it separating set} and investigate its statistical properties. Formally, a separating set is any set that makes the sampling indicator and treatment effect heterogeneity conditionally independent. \begin{definition}[Separating Set] \label{sep} A separating set is a set $\bW$ that makes the sampling indicator and treatment effect heterogeneity conditionally independent. \begin{equation} Y_i(1) - Y_i(0) \ \mbox{$\perp\!\!\!\perp$} \ S_i \mid \bW_i.\label{eq:sepeq} \end{equation} \end{definition} This definition of a separating set contains two simple cases: (1) when no treatment effect heterogeneity exists and (2) when the experimental sample is randomly drawn from the target population. In both of these cases, $\bW_i = \{\varnothing\}$. This separating set also encompasses two common approaches in the literature as special cases. 
First, researchers often employ statistical methods based on a \emph{sampling set} -- a set of all variables affecting the sampling mechanism \citep[e.g.,][]{Stuart:2011hr}. Second, researchers might adjust for a \emph{heterogeneity set} -- a set of all variables governing treatment effect heterogeneity \citep[e.g.,][]{Kern:2016ez}. Below, we formalize these sets based on the potential outcomes framework. We define a {\it sampling set} as a set of variables that determines the sampling mechanism by which individuals come to be in the experimental sample. For example, when a researcher implements stratified sampling based on gender and age, the sampling set consists of those two variables. When researchers control the sampling mechanism, a sampling set is known by design. However, when samples are selected without such an explicit sampling design, a sampling set is unknown and in practice, researchers must posit a sampling mechanism. For example, \citet{blattman2013generating} assume that a sampling set consists of eight variables: gender, age, urban status, marital status, school attainment, household size, durable assets, and district indicators. Formally, we can define a sampling set $\bX^S$ as follows. \begin{definition}[Sampling Set] \label{sample} \spacingset{1.2}{ $ $ \\ \begin{equation} \{Y_i(1), Y_i(0), \bX_i^{-S}\} \ \mbox{$\perp\!\!\!\perp$} \ S_i \mid \bX_i^S\label{eq:par} \end{equation}} where $\bX^{-S}$ is a set of pre-treatment variables that are not in $\bX^S$. \end{definition} This conditional independence means that the sampling set is a set that sufficiently explains the sampling mechanism. Given the sampling set, the sampling indicator is independent of the joint distribution of potential outcomes and all other pre-treatment covariates. We refer to variables in the sampling set as sampling variables. 
The other popular approach is to adjust for a set of all variables explaining treatment effect heterogeneity, which we call a {\it heterogeneity set}. Formally, we can define a heterogeneity set $\bX^H$ as follows. \begin{definition}[Heterogeneity Set] \label{hetero} \spacingset{1.2}{ $ $ \\ \begin{equation} Y_i(1) - Y_i(0) \ \mbox{$\perp\!\!\!\perp$} \ \{S_i, \bX_i^{-H}\} \mid \bX_i^H, \end{equation}} where $\bX^{-H}$ is a set of pre-treatment variables that are not in $\bX^H$. \end{definition} In this case, because a heterogeneity set fully accounts for treatment heterogeneity, $Y_i(1) - Y_i(0)$ is independent of all other variables. We refer to variables in the heterogeneity set as heterogeneity variables. In our application, \citet{blattman2013generating} discuss at least two heterogeneity variables, gender and initial credit constraints. We want to emphasize that a sampling set and a heterogeneity set are special cases of a separating set in the sense that both sets satisfy equation~\eqref{eq:sepeq}. Yet, there may exist many other separating sets, which we explore in Section~\ref{sec:dis}. Finally, the PATE is nonparametrically identified by adjusting for a separating set \citep{Cole:2010bf, Tipton:2013ew, Pearl:2014hb, Kern:2016ez}. \begin{result}[Identification of the PATE] \label{pate} The PATE is identified with separating set $\bW_i$ under Assumption~\ref{random}. \begin{eqnarray*} \tau & = & \int \biggl\{\E[Y_i \mid T_i =1, S_i =1, \bW_i=\bw] - \E[Y_i \mid T_i =0, S_i =1, \bW_i=\bw] \biggr\} d F_{\bW_i \mid S_i=0}(\bw), \end{eqnarray*} where $F_{\bW_i \mid S_i = 0}(\bw)$ is the cumulative distribution function of $\bW$ conditional on $S_i=0$. \end{result} As sampling and heterogeneity sets are special cases of a separating set, the PATE is identified with the same formula. 
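When the separating set $\bW$ is discrete, the identification formula in Result~\ref{pate} suggests a direct plug-in estimator: compute the experimental treatment-control contrast within each stratum of $\bW$ and average the contrasts over the population distribution of $\bW$. A toy Python sketch with invented numbers:

```python
import numpy as np

# Experimental data: stratum label w, randomized treatment t, outcome y.
w_exp = np.array([0, 0, 0, 0, 1, 1, 1, 1])
t_exp = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_exp = np.array([3., 5., 1., 1., 9., 7., 2., 4.])

# Population data: only the separating-set values are needed.
w_pop = np.array([0, 0, 0, 1])            # stratum 0 has weight 3/4

def pate_plugin(w_exp, t_exp, y_exp, w_pop):
    """Plug-in version of Result 1: sum over w of CATE_hat(w) * Pr(W=w | S=0)."""
    tau = 0.0
    for val in np.unique(w_pop):
        in_w = w_exp == val
        cate = (y_exp[in_w & (t_exp == 1)].mean()
                - y_exp[in_w & (t_exp == 0)].mean())
        tau += cate * np.mean(w_pop == val)
    return tau

# Stratum 0: 4 - 1 = 3; stratum 1: 8 - 3 = 5; PATE = 3*0.75 + 5*0.25 = 3.5.
print(pate_plugin(w_exp, t_exp, y_exp, w_pop))  # -> 3.5
```

Note that the within-stratum contrasts use only experimental units, while the stratum weights use only the population distribution of $\bW$, mirroring the two ingredients of the identification formula.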
\subsubsection{Illustration with a Causal DAG.} We consider a causal DAG in Figure~\ref{fig:dag} as a concrete illustration based on a selection diagram approach \citep{bare2016causal}. In this causal DAG, three variables $\{X_2, X_4, X_5\}$ serve as a sampling set, whose members have direct arrows to the sampling indicator $S$. Three variables $\{X_1, X_2, X_3\}$ serve as a heterogeneity set, whose members have direct arrows to the outcome variable $Y$ and can moderate the causal effect of $T$ on $Y$. Finally, from Definition~\ref{sep}, there are many valid separating sets, including the sampling and heterogeneity sets, but the smallest separating set is $\{X_2, X_3\}$, which makes the sampling indicator $S$ and the outcome variable $Y$ conditionally independent. The key is that there are potentially many valid separating sets, and researchers can use any of them for identification given their data constraints. \begin{remark} We only use the causal DAG in Figure~\ref{fig:dag} as an illustrative example, and our proposed approach relies only on assumptions we clarify in Section~\ref{sec:dis} using the potential outcomes framework. We do not use any particular causal DAG structure in Figure~\ref{fig:dag}, and we do not assume knowledge of the underlying causal DAG structure. When knowledge of the underlying causal DAG is available, results in the causal diagram literature are of great importance, and we refer readers to \citet{bare2016causal}. \end{remark} \begin{figure} \caption{Example of a Causal DAG based on a selection diagram approach \citep{bare2016causal}. \textit{Note:} Three variables $\{X_2, X_4, X_5\}$ serve as a sampling set, three variables $\{X_1, X_2, X_3\}$ as a heterogeneity set, and two variables as the smallest separating set $\{X_2, X_3\}$. 
} \label{fig:dag} \end{figure} \section{Identification and Estimation of Separating Sets} \label{sec:dis} In this section, we first show that a variant of {\it separating} sets, which is sufficient for the identification of the PATE, is estimable even when a {\it sampling} set is unobserved in the population data, as long as it is observed in the experimental data (Section~\ref{subsec:iden}). This is in contrast to existing methodologies, which assume that a sampling set is observed in {\it both} the experimental and population data. This distinction is subtle and yet practically important because in many applied contexts, including the YOP \citep{blattman2013generating}, a larger number of covariates are measured in the experimental data than in the population data. This is because, while analysts of experiments can often control what variables should be measured within the experiment, population data is usually more expensive, is collected by other organizations, such as a national-level survey (the NUS in \citet{blattman2013generating}), or is otherwise impractical to collect. Thus, our focus is on this type of common research setting where analysts are able to measure more covariates in the experimental data than in the population data. After demonstrating the identification of separating sets, we propose an algorithm to estimate separating sets using Markov random fields (Section~\ref{subsec:est}). \subsection{Identification of Separating Sets} \label{subsec:iden} We begin with the identification of an exact separating set and then turn to the identification of a modified separating set under a weaker assumption. First, we can estimate an exact separating set in settings where both a sampling set and a heterogeneity set are observed in the experimental data. A key feature of this result is that we only require rich covariate information about the experimental units, not the target population units, to discover separating sets, should they exist. 
In many applied research contexts, however, the heterogeneity set is not readily available even in the experimental data. The fundamental problem of causal inference states that only one of the two potential outcomes is observable, which implies that the causal effect is unobserved at the unit level, and thus so is the heterogeneity set. For example, in our application, although \citet{blattman2013generating} discuss two specific heterogeneity variables (gender and initial credit constraints), it might be unreasonable to assume away the existence of other potential heterogeneity variables. We therefore develop an additional method to find a variant of a separating set, which we call a {\it marginal} separating set, using only knowledge of a sampling set, a commonly employed assumption in the extant literature. We show that a marginal separating set can be discovered when a sampling set is measured in the experimental data, but not in the target population. Although this data requirement might still be stringent in some contexts, it is much weaker than the one necessary for widely-used existing approaches based on sampling sets, which require that the sampling set be measured in the population data as well as in the experimental data. \subsubsection{Identification of Exact Separating Sets}\label{subsubsec:disc_sep_set} We begin with settings in which a sampling set and a heterogeneity set are observed in the experimental sample. In this setting, we can use the experimental data to identify exact separating sets. Although this data requirement is still restrictive, we emphasize that it does not require rich data on the target population. 
\begin{setting}[Sampling and Heterogeneity Sets are Observed in Experiment] \label{obsSamHet} \spacingset{1}{ Sampling set $\bX^S$ and heterogeneity set $\bX^H$ are observed in the experiment ($S_i=1$).} \end{setting} In this setting, a separating set is estimable as a set that makes the sampling set and the heterogeneity set conditionally independent within the experimental data. \begin{theorem}[Identification of Separating Sets in Experiment] \label{thm_id_Exsep} \spacingset{1}{ In Setting~\ref{obsSamHet}, consider a set of covariates $\bW$, which is a subset of pre-treatment variables. Under Assumption~\ref{random}, \begin{eqnarray} \widetilde{\bX_i}^H \ \mbox{$\perp\!\!\!\perp$} \ \widetilde{\bX_i}^S \mid \bW_i, T_i, S_i=1 & \Longrightarrow & Y_i (1) - Y_i(0) \ \mbox{$\perp\!\!\!\perp$} \ S_i \mid \bW_i, \label{eq:Exsep} \end{eqnarray} \\ where $ \widetilde{\bX}^H$ and $ \widetilde{\bX}^S$ are the set difference $\bX^H \setminus \bW$ and $\bX^S \setminus \bW$, respectively.} \end{theorem} We provide the proof in the supplementary material. Theorem~\ref{thm_id_Exsep} states that as long as we can find a set that satisfies the testable conditional independence on the left hand side, the discovered set is guaranteed to be a separating set. That is, we can identify an exact separating set from the experimental data alone. Note that when $\bX^H$ and $\bX^S$ share some variables, those variables should always be in $\bW$. Using the selected separating set, researchers can identify the PATE based on Result~\ref{pate}. Intuitively, this theorem can be explained through two conceptual steps. First, because a heterogeneity set $\bX^H$ fully explains treatment effect heterogeneity $Y(1) - Y(0)$, the sampling indicator $S$ and $Y(1) - Y(0)$ are conditionally dependent only when $S$ and $\bX^H$ are conditionally dependent. 
Second, because a sampling set $\bX^S$ fully explains the sampling indicator $S$, $S$ and $\bX^H$ are conditionally dependent only when $\bX^S$ and $\bX^H$ are conditionally dependent. Taken together, $S$ and $Y(1)- Y(0)$ are conditionally dependent only when $\bX^S$ and $\bX^H$ are conditionally dependent. \paragraph{Example.} Here, we illustrate the general result from Theorem~\ref{thm_id_Exsep} using the causal DAG in Figure~\ref{fig:dag} as a concrete example. Suppose we are interested in a separating set $\bW = \{X_2, X_3\}$, which has the smallest size. Given that heterogeneity set $\bX^H = \{X_1, X_2, X_3\}$ and $\bX^S = \{X_2, X_4, X_5\}$, we have $\widetilde{\bX}^H = X_1$ and $\widetilde{\bX}^S = \{X_4, X_5\}.$ Then, as equation~\eqref{eq:Exsep} suggests, we have $X_1 \ \mbox{$\perp\!\!\!\perp$} \ \{X_4, X_5\} \mid X_2, X_3, T, S = 1$ in the causal DAG in Figure~\ref{fig:dag}. More generally, Theorem~\ref{thm_id_Exsep} guarantees that any set that makes the sampling set and the heterogeneity set conditionally independent within the experimental data is a separating set under Assumption~\ref{random}. \subsubsection{Identification of Marginal Separating Sets} \label{subsubsec:marginal_sep_sets} While Theorem~\ref{thm_id_Exsep} allows us to discover separating sets using the experimental data, a key challenge would be to measure both a sampling set and a heterogeneity set in the experimental data. In particular, it is often difficult to measure the heterogeneity set in practice. We show that a modified version of a separating set -- a {\it marginal} separating set -- is estimable from the experimental data under a weaker assumption. We define a marginal separating set as follows. \begin{definition}[Marginal Separating Set] \label{marginal_sep} \spacingset{1}{ A marginal separating set is a set $\bW$ that makes the sampling indicator and the marginal distributions of potential outcomes conditionally independent. 
\begin{equation} Y_i(t) \ \mbox{$\perp\!\!\!\perp$} \ S_i \mid \bW_i \hspace{0.2in} \mbox{for } t \in \{0, 1\}.\label{eq:marginal_sep} \end{equation}} \end{definition} We refer to this as a {\it marginal} separating set since it renders the marginal, not the joint, distribution of potential outcomes conditionally independent of the sampling process. Now we turn to the final setting researchers may find themselves in -- one in which the sampling set is observed only in the experimental data. Previous work using the sampling set assumes it is measured in both the experimental sample and the target population \citep[e.g.,][]{Cole:2010bf, Tipton:2013ew, Hartman:2015hq, Buchanan:2018dd}. Since researchers often have much more control over what data is collected in the experiment, this final setting greatly relaxes the data requirements of the previous literature. \begin{setting}[Sampling Set is Observed in Experiment] \label{obsSamEx} \label{obsSam} \spacingset{1}{ Sampling set $\bX^S$ is observed in the experimental data ($S_i=1$).} \end{setting} \begin{theorem}[Identification of Marginal Separating Sets in Experiment] \label{thm_id_sep} \spacingset{1}{ In Setting~\ref{obsSamEx}, consider a set of covariates $\bW$, which is a subset of pre-treatment variables. Under Assumption~\ref{random}, \begin{eqnarray} Y_i \ \mbox{$\perp\!\!\!\perp$} \ \bX_i^S \mid \bW_i, T_i, S_i=1 & \Longrightarrow & Y_i(t)\ \mbox{$\perp\!\!\!\perp$} \ S_i \mid \bW_i. \end{eqnarray}} \end{theorem} We provide the proof in the supplementary material. Theorem~\ref{thm_id_sep} states that as long as we can find a set that makes the observed outcome $Y$ conditionally independent of the sampling set within the experimental data, the discovered set is guaranteed to be a marginal separating set. With a large enough sample size, we can find a marginal separating set from the experimental data alone. The intuition behind this theorem is similar to that behind Theorem~\ref{thm_id_Exsep}.
Because the sampling set $\bX^S$ fully explains the sampling indicator $S$, if the sampling indicator $S$ and the potential outcome $Y(t)$ are conditionally dependent, the sampling set $\bX^S$ and the observed outcome $Y$ are also conditionally dependent. The marginal separating set may be larger than an exact separating set, as it may include covariates that explain the marginal potential outcomes but not treatment effect heterogeneity. Once we have discovered a marginal separating set using the experimental data, we can identify the PATE with this discovered set. \begin{result}[Identification of the PATE with Marginal Separating Sets] \label{thm_id_Marginal} \spacingset{1}{ $ $\\ When a marginal separating set $\bW$ is observed both in the experimental sample and the target population, the PATE is identified with the marginal separating set $\bW$ under Assumption~\ref{random}. \begin{eqnarray*} \tau & = & \int \biggl\{\E[Y_i \mid T_i =1, S_i =1, \bW_i=\bw] - \E[Y_i \mid T_i =0, S_i =1, \bW_i=\bw] \biggr\} d F_{\bW_i \mid S_i=0} (\bw). \end{eqnarray*}} \end{result} We omit the proof because it is straightforward from the one of Result~\ref{pate}. 
\spacingset{1.1}{ \begin{table}[t] \label{tab:settings} \centering \renewcommand{\arraystretch}{1.25} \resizebox{\textwidth}{!}{ \begin{tabular}{|l|c : c|} \hline \multirow{2}{*}{\ \ \textbf{Set to Adjust For}} & \multicolumn{2}{c|}{\textbf{Data Requirements}} \\ & Experiment & Target Population \\ \hline \hline \ \ Sampling set & Sampling set & Sampling set \\ \hline \ \ Heterogeneity set & Heterogeneity set & Heterogeneity set \\ \hline \begin{tabular}{l} Estimated separating set \\ (Theorem~\ref{thm_id_Exsep} under Setting~\ref{obsSamHet}) \end{tabular} & $\left\{\vbox to 12pt{} \begin{tabular}{c} Sampling Set \\ Heterogeneity Set \end{tabular} \right.$ & User Specified Constraints \\ \hline \begin{tabular}{l} Estimated marginal separating set\\ (Theorem~\ref{thm_id_sep} under Setting~\ref{obsSam}) \end{tabular} & Sampling set & User Specified Constraints \\ \hline \end{tabular}} \caption{Identifying the PATE under different data requirements. {\it Note:} Many previous approaches assume that a sampling set or a heterogeneity set is measured in both the experimental sample and the target population (the first two rows). Our proposed approaches relax data requirements for the target population by introducing an additional step of estimating separating sets.} \label{tab:comparing_assumptions} \end{table}} \spacingset{\origspace} Finally, in Table~\ref{tab:comparing_assumptions}, we compare existing methods with two proposed approaches. The first two rows show two common existing approaches based on sampling and heterogeneity sets, respectively. Although the identification of the PATE in those settings is straightforward, it requires rich covariate information from the target population data as well as from the experimental sample. Our approach relaxes data requirements for the target population by introducing an additional step of estimating separating sets.
In Setting~\ref{obsSamHet}, where we observe both a sampling set and a heterogeneity set in the experimental sample, we can identify exact separating sets from the experimental data alone (Theorem~\ref{thm_id_Exsep}). Setting~\ref{obsSam} only requires observing a sampling set in the experimental sample, and we can identify marginal separating sets (Theorem~\ref{thm_id_sep}). In the next subsection, we introduce an algorithm that can estimate separating sets subject to user-specified data constraints in the target population. \subsection{Estimation of Separating Sets} \label{subsec:est} Here, we propose an estimation algorithm to find a marginal separating set. As shown in Theorem~\ref{thm_id_sep}, the goal is to find a set that makes a sampling set and observed outcomes conditionally independent within the experimental data. We show how to apply Markov random fields (MRFs) to encode conditional independence relationships among observed covariates and then select a separating set. A similar algorithm can be used for finding an exact separating set. Our estimation algorithm consists of four simple steps. We provide a brief summary here and then describe each step in order. \begin{itemize} \item[Step 1:] Specify all variables in sampling set $\bX^S$ based on domain knowledge, some of which might not be measured in the population data. \item[Step 2:] Using the experimental data alone, estimate a Markov random field over an outcome, a treatment, the sampling set, and observed pre-treatment covariates. \item[Step 3:] Enumerate all simple paths\footnote{A simple path is a path in a Markov graph that does not have repeating nodes.} from $Y$ to $\bX^S$ in the estimated Markov graph. \item[Step 4:] Find sets that block all the simple paths from $Y$ to $\bX^S$ in the estimated Markov graph. \end{itemize}
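Steps 3 and 4 operate purely on the estimated Markov graph. As a minimal illustration of Step 3 (our own sketch, not the authors' implementation), a depth-first search can enumerate the simple paths from $Y$ to the sampling set; each path is truncated at the first sampling-set node it reaches, since blocking that prefix also blocks any extension of it.

```python
def simple_paths(adj, start, targets):
    """Enumerate simple paths (no repeated nodes) from `start` until
    they first hit a node in `targets`, via depth-first search.

    adj: undirected graph as a dict mapping node -> list of neighbors.
    """
    paths, stack = [], [(start, [start])]
    while stack:
        node, path = stack.pop()
        for nxt in adj[node]:
            if nxt in path:          # keep the path simple
                continue
            if nxt in targets:       # reached the sampling set: record
                paths.append(path + [nxt])
            else:
                stack.append((nxt, path + [nxt]))
    return paths
```

Step 4 then amounts to choosing covariates that intersect every enumerated path, which the paper formalizes as an optimization problem later in this subsection.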
\paragraph{Estimating Markov Random Fields.} Theorem~\ref{thm_id_sep} implies that we can find a marginal separating set by estimating a set of variables $\bW$ that satisfies the conditional independence, $Y_i \ \mbox{$\perp\!\!\!\perp$} \ \bX_i^S \mid \bW_i, T_i, S=1.$ To estimate this set, we employ a Markov random field (MRF). MRFs are statistical models that encode the conditional independence structure over random variables via graph separation rules. For example, suppose there are three random variables $A, B$ and $C$. Then, $A \mbox{$\perp\!\!\!\perp$} B \mid C$ if there is no path connecting $A$ and $B$ when node $C$ is removed from the graph (i.e., node $C$ {\it separates} nodes $A$ and $B$), the so-called global Markov property \citep{lauritzen1996graphical}. We review basic properties of the MRF in the supplementary material (\ref{sec:mrf-si}), and we refer readers to \citet{lauritzen1996graphical} for a comprehensive discussion. Using the general theory of MRFs, the estimation of a separating set can be recast as the problem of finding a set of covariates separating outcome variable $Y$ and a sampling set $\bX^S$ in an estimated Markov graph. Therefore, we can find a separating set that satisfies the desired conditional independence as long as we can estimate the MRF over $\{Y, T, \bX^S, \bX_0\}$ within the experimental data, where we define $\bX_0$ to be all pre-treatment variables measured both in the experimental and population data. We define $\bZ \equiv \{\bX^S, \bX_0\}$ to be pre-treatment covariates from which we select a separating set. \begin{remark} Note that MRFs are used here to estimate conditional independence relationships of observed covariates as an intermediate step of estimating separating sets. Importantly, they are not used to estimate the underlying causal directed acyclic graphs (causal DAGs).
As emphasized earlier in the paper, while we used a causal diagram (Figure~\ref{fig:dag}) as an illustration, we only rely on domain knowledge about sampling sets, and we are not taking the causal graphical approach. Such an approach is powerful when knowledge of the full structure of the underlying causal DAG is available, which we do not assume in this paper. \end{remark} To estimate an MRF, we use a mixed graphical model \citep{yang2015graphical, haslbeck2020mgm}, which allows for both continuous and categorical variables. More concretely, we assume that each node can be modeled with an exponential family distribution conditional on the remaining variables. \begin{equation} \Pr (G_r \mid G_{-r}) = \mbox{exp} \biggl\{ \alpha_r G_r + \sum_{h \neq r} \theta_{r,h} G_r G_h + \varphi(G_r) - \Phi(G_{-r}) \biggr\}, \end{equation} where $G_{-r}$ is the set of all random variables in a Markov graph except for variable $G_r$, the base measure $\varphi(G_r)$ is given by the chosen exponential family, and $\Phi(G_{-r})$ is the normalization constant. For example, for a Bernoulli distribution, the conditional distribution can be seen as a logistic regression model. \begin{equation} \Pr (G_r \mid G_{-r}) = \cfrac{\mbox{exp}(\alpha_r + \sum_{h \neq r} \theta_{r,h} G_h)}{\mbox{exp}(\alpha_r + \sum_{h \neq r} \theta_{r,h} G_h) + 1}. \end{equation} In general, we model each node using a generalized linear model conditional on the remaining variables. Using this setup, we can estimate the structure of the MRF by estimating the parameters $\{\theta_{r,h}\}_{h \neq r}$: $\theta_{r,h} \neq 0$ when variable $G_h$ is a neighbor of variable $G_r$ and $\theta_{r,h} = 0$ otherwise. We estimate each generalized linear model with an $\ell_1$ penalty to encourage sparsity \citep{meinshausen2006high}.
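To make the node-wise idea concrete, the following is a minimal sketch of neighborhood selection in the spirit of \citet{meinshausen2006high}, restricted to the special case where all nodes are continuous and using a hand-rolled coordinate-descent lasso. It is an illustration only, not the mixed graphical model estimator used in the paper.

```python
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso(X, y, lam, n_sweeps=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_sweeps):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]      # partial residual
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / (X[:, j] @ X[:, j] / n)
    return b

def neighborhood_select(Z, lam=0.2):
    """Estimate a Markov graph: regress each (standardized) node on all
    others with the lasso, then keep an edge under the AND rule."""
    Z = (Z - Z.mean(0)) / Z.std(0)
    p = Z.shape[1]
    theta = np.zeros((p, p))                    # theta[r, h]: effect of G_h on G_r
    for r in range(p):
        others = [j for j in range(p) if j != r]
        theta[r, others] = lasso(Z[:, others], Z[:, r], lam)
    return (theta != 0) & (theta.T != 0)        # AND rule adjacency matrix
```

For instance, on data generated along a chain $G_1 \to G_2 \to G_3$, the recovered graph typically contains the edges $(1,2)$ and $(2,3)$ but not $(1,3)$, since $G_1$ and $G_3$ are conditionally independent given $G_2$.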
Finally, using the AND rule, an edge is estimated to exist between variables $G_r$ and $G_h$ when $\theta_{r,h} \neq 0$ {\it and} $\theta_{h,r} \neq 0.$ Researchers can also use an alternative OR rule (an edge exists when $\theta_{r,h} \neq 0$ {\it or} $\theta_{h,r} \neq 0$) and obtain the same theoretical guarantee of graph recovery. \paragraph{Estimating Separating Sets.} Given the estimated graphical model, we can enumerate many different separating sets. First, we focus on the estimation of a separating set of the smallest size because it often produces more stable weights and thus improves estimation accuracy. It is important to note that this separating set might not be the smallest with respect to the underlying unknown DAG because MRFs do not encode all conditional independence relationships between variables. It is, however, the smallest among all separating sets estimable from MRFs. We estimate this separating set from pre-treatment covariates $\bZ$ as an optimization problem. A separating set should block all simple paths between outcome $Y$ and variables in the sampling set $\bX^S$. Therefore, we first enumerate all simple paths between $Y$ and $\bX^S$ and then find a minimum set of variables that intersect all paths. Define $q$ to denote the number of variables in $\bZ$. We then define $\bd$ to be a $q$-dimensional decision vector with $d_j$ taking $1$ if we include the $j$th variable of $\bZ$ into a separating set and taking $0$ otherwise. We use $\bP$ to store all simple paths from $Y$ to each variable in $\bX^S$, where each row is a $q$-dimensional vector and its $j$th element takes $1$ if the path contains the $j$th variable. With this setup, the estimation of the separating set of the smallest size is equivalent to the following linear programming problem given the estimated graphical model. \begin{eqnarray*} && \min_{\bd} \ \sum_{j=1}^q d_j \ \ \ \ \ \mbox{s.t. } \bP \bd \geq \mathbf{1}, \end{eqnarray*} where $\mathbf{1}$ is a vector of ones.
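For intuition, this covering problem can be solved exactly by brute force when $q$ is small. The sketch below is a stand-in for an off-the-shelf linear or integer programming solver; the optional `exclude` argument is our addition, anticipating the user-specified constraints discussed in the next paragraph.

```python
from itertools import combinations

import numpy as np

def min_blocking_set(P, exclude=()):
    """Smallest covariate set hitting every simple path.

    P: 0/1 matrix with one row per simple path from Y to the sampling
       set; P[k, j] = 1 if path k passes through covariate j.
    exclude: indices that may not enter the set (e.g., covariates known
       to be unmeasured in the target population).
    Solves min_d sum_j d_j s.t. P d >= 1 (with d_j = 0 for excluded j)
    by exhaustive search; returns None if no feasible set exists.
    """
    q = P.shape[1]
    allowed = [j for j in range(q) if j not in set(exclude)]
    for size in range(len(allowed) + 1):
        for subset in combinations(allowed, size):
            d = np.zeros(q)
            d[list(subset)] = 1
            if np.all(P @ d >= 1):      # every path is blocked
                return list(subset)
    return None                         # infeasible under the constraints
```

Searching subsets in order of increasing size guarantees that the first feasible subset found is of minimum cardinality.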
The constraints above ensure that all simple paths intersect with at least one variable in a selected separating set, and the objective function just counts the total number of variables included in a separating set. Therefore, by optimizing this problem, we can find a set of variables with the smallest size that is guaranteed to block all simple paths. It is important to emphasize that the estimation of the Markov graph is subject to uncertainty, as is any other statistical method. In our application, we incorporate uncertainties about set estimation through the bootstrap. We also investigate the accuracy of the proposed algorithm through simulation studies in the supplementary material. We find that estimators based on estimated separating sets often have similar standard errors to the ones based on the true sampling set. Although our approach introduces an additional estimation step of finding separating sets to relax data requirements, it does not suffer from substantial efficiency loss. \paragraph{Incorporating Users' Constraints.} One advantage of our approach is that it allows researchers the flexibility to explicitly specify variables that they cannot measure in the target population. This is important in practice because it is often the case that researchers can measure a large number of covariates in the experimental data but they can collect relatively few variables in the target population. We can easily adjust the previous optimization problem to account for this restriction. Define $\bu$ to be a $q$-dimensional vector with $u_j$ taking $1$ if we want to exclude the $j$th variable of $\bZ$ from a separating set and taking $0$ otherwise. As we define $\bX_0$ to be those variables observed in both the experimental sample and the target population, $\bu$ will place constraints on those covariates in $\bX^S$ that are unobservable. Then, the optimization problem above changes as follows.
\begin{eqnarray*} && \min_{\bd} \ \sum_{j=1}^q d_j \ \ \ \ \ \mbox{s.t. } \bP \bd \geq \mathbf{1} \ \mbox{ and } \ \bu^\top \bd = 0. \end{eqnarray*} In practice, it is possible that no separating set exists subject to the user constraints. In our example, a true separating set could include social connections, which are not measured in the Northern Uganda Survey (the population data). In this case, our algorithm correctly reports that no feasible separating set exists. \subsection{Estimation of Population Average Treatment Effect} \label{subsec:est_pate} To estimate the PATE with estimated separating sets, we use an inverse probability weighting estimator. First, we estimate the probability of being in the experiment $\Pr(S_i = 1 \mid \bW_i),$ for example, using a logistic regression \citep{Stuart:2011hr, stuart2017trans} with adjustment for the actual target population size \citep{Buchanan:2018dd}.\footnote{To account for the fact that the target population data is a random sample from the actual target population, we follow \citet{scott1986fitting, Buchanan:2018dd} to estimate a weighted logistic regression. We use weights 1 for the experimental data, and use weights $m/(N-n)$ for the target population data where the size of the actual target population $N = 10,000,000$ (population size in Northern Uganda), the size of the experimental data $n = 2,598$, and the size of the target population data $m = 21,348$.} Following \citet{Buchanan:2018dd}, we stack the experimental data and the population data, and $S_i = 1$ ($S_i = 0$) indicates that unit $i$ belongs to the experimental data (the population data). We can then estimate generalization weights as \begin{eqnarray} \pi_i = \frac{1}{\Pr(S_i = 1 \mid \bW_i)} \times \frac{\Pr(S_i=0 \mid \bW_i)}{\Pr(S_i=0)}, \end{eqnarray} where the usual inverse probability weight is adjusted by $\Pr(S_i=0|\bW_i)/\Pr(S_i=0)$ because the PATE is defined only with the population data, i.e., $\E[Y_i(1) - Y_i(0)|S_i = 0]$.
Finally, we compute the inverse probability weighting estimator \citep{Stuart:2011hr}. \begin{eqnarray} \hat{\tau} \equiv \cfrac{\sum_{i; S_i= 1} \pi_i p_i T_iY_i}{\sum_{i; S_i= 1} \pi_i p_i T_i} - \cfrac{\sum_{i; S_i= 1} \pi_i (1-p_i) (1-T_i)Y_i}{\sum_{i; S_i= 1} \pi_i (1-p_i) (1-T_i)}, \label{eq:ipw} \end{eqnarray} where $p_i \equiv \Pr(T_i = 1 \mid S_i= 1, \bW_i)$ is known by the experimental design. We prove its consistency in the supplementary material. Researchers can also employ an outcome-model-based estimator and a doubly robust estimator for the PATE \citep{hernan2019, dahabreh2020extending}. To maintain the clear comparison with the original analysis that uses a weighting approach, we will focus on the inverse probability weighting estimator (equation~\eqref{eq:ipw}) in Section~\ref{sec:app_ana}, while we also report results from the other two estimators in the supplementary material (\ref{sec:add-result}). \subsection{Relationship with Sensitivity Analysis} \label{subsec:sa} Here, we want to clarify relationships between our proposed approach and sensitivity analyses for generalization \citep[e.g.,][]{Nguyen:2017jw, nguyen2018, andrews2019}. In particular, our proposed approach can also be used to simplify sensitivity analysis. Sensitivity analysis in general uses some sensitivity parameters to quantify certain aspects of unobserved covariates. For example, \citet{Nguyen:2017jw, nguyen2018} propose a sensitivity parameter that captures the distribution of covariates unmeasured in the target population, and \citet{andrews2019} introduce a sensitivity parameter to quantify the predictive power of unobserved covariates relative to that of observed covariates. 
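For concreteness, the weighting calculation can be sketched as follows. The function names are ours, and we take the fitted sampling probabilities $\Pr(S_i = 1 \mid \bW_i)$ as given (in practice they come from the weighted logistic regression described above).

```python
import numpy as np

def pate_ipw(Y, T, S, pr_s1, p):
    """Hajek-style IPW estimate of the PATE from stacked data.

    Y, T   : outcomes and treatment indicators (used where S == 1)
    S      : 1 for experimental units, 0 for target-population units
    pr_s1  : fitted Pr(S = 1 | W) for every stacked unit
    p      : design probability Pr(T = 1 | S = 1, W), known by design
    """
    # Generalization weights: 1/Pr(S=1|W) adjusted by Pr(S=0|W)/Pr(S=0)
    pi = (1.0 / pr_s1) * ((1.0 - pr_s1) / np.mean(S == 0))
    e = S == 1                               # experimental units only
    w1 = pi[e] * p[e] * T[e]                 # treated-arm weights
    w0 = pi[e] * (1 - p[e]) * (1 - T[e])     # control-arm weights
    return (np.sum(w1 * Y[e]) / np.sum(w1)
            - np.sum(w0 * Y[e]) / np.sum(w0))
```

Because both arms are normalized by their summed weights, the estimator is invariant to the overall scale of the weights; for example, if the treatment effect is exactly constant across units, the estimate equals that constant regardless of the fitted sampling probabilities.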
When there is a much richer set of covariates in the experimental data than the target population data --- the common scenario and the main focus of our paper, analysts naturally need to specify a large number of sensitivity parameters to account for such potentially many covariates that are not measured in the target population data. In such scenarios, it is well known that sensitivity parameters become more difficult to interpret, and analysts need to add more parametric assumptions (e.g., additivity) to handle many unobserved variables. Our proposed approach can be used to first find the smallest separating set within the experimental data. Then, researchers do not need to consider all covariates that are unmeasured in the target population data, and they can focus on a potentially much smaller set of covariates estimated as a separating set. Researchers can then use their preferred sensitivity analysis technique to deal with a much smaller set of covariates that are unmeasured in the target population data. It is also important to clarify the scope of our approach. While we relax a stringent assumption that a sampling set is measured in both experimental and target population data, Theorem~\ref{thm_id_sep} still assumes that a sampling set is observed at least in the experimental data. If researchers are worried that a sampling set is not observed even in the experimental data (i.e., not observed in either the experimental or target population data), sensitivity analysis for completely unobserved covariates might be of greater importance. \section{Empirical Analysis} \label{sec:app_ana} Applying the proposed method, we examine the YOP described in Section~\ref{sec:app}. Our focus is on a central methodological challenge of covariate selection. In the original analysis, the authors adjusted for all eight variables shared by the experimental and population data. 
However, as noted in the original paper, it is unknown whether the original eight variables constitute a separating set necessary for estimating PATEs. To tackle this pervasive concern, we employ the proposed approach and select a separating set under two different assumptions about a sampling set and a heterogeneity set. First, we incorporate domain knowledge about a heterogeneity set, while we maintain the original assumption about a sampling set. As explained in Section~\ref{sec:sep}, by combining substantive information about a sampling set and a heterogeneity set, we can find a separating set, which can be much smaller than either of the two. Relying on this smaller separating set, we find that point estimates are similar to estimates based on the original sampling set, but standard errors of the proposed approach are smaller for 14 out of 17 outcomes that the original analysis studied. Incorporating domain knowledge about a heterogeneity set can help us find a smaller set of variables sufficient for the PATE estimation, thereby improving efficiency. Second, we relax the original assumption about the sampling set --- that the shared eight variables contain all relevant variables --- and allow for two additional unobserved variables. In the conventional approach based on a sampling set, researchers cannot estimate PATEs under this assumption. In contrast, the proposed approach estimated appropriate separating sets for 12 out of 17 outcomes, and thus, we can estimate the PATE for those 12 outcomes even with the two additional unobserved sampling variables. At the same time, we reveal that estimated PATEs are sensitive to the original assumption about the sampling mechanism for the other 5 outcomes. The original experiment used clustered randomization, and therefore, we compute clustered standard errors at the group level, which is the level at which treatments were randomly assigned.
We maintain the no-interference assumption \citep{Rubin:1990tm} made in Section~\ref{subsec:setup}, which is standard in the generalization literature; researchers who wish to account for interference within groups would additionally need to account for differences in group structure between the experimental and target population data. Future work is necessary to consider this intersection of the interference and generalization literatures. \subsection{Incorporating Domain Knowledge on Heterogeneity Set} \label{subsec:exact-app} To begin with, we maintain the original analysis's assumption about the sampling set, i.e., $\bX^S =$ \{\texttt{Gender, Age, Urban, Marital status, School attainment, \\ Household size, Durable assets, District}\}. Although the original analysis relies only on this knowledge of the sampling set for the PATE estimation, the authors also carefully discuss a heterogeneity set in their paper. In particular, they discuss two variables: gender and initial credit constraints. There are two natural covariates in the experimental data that capture these concepts, \texttt{Gender} and \texttt{Initial Saving}, respectively. Importantly, however, \texttt{Initial Saving} is measured only in the experimental sample and not in the target population data. Thus, $\bX^H =$ \{\texttt{Gender, \fbox{Initial Saving}}\}, where the square box represents a variable unmeasured in the target population. In existing approaches, when a subset of heterogeneity variables is unmeasured in the target population as in this case, it is difficult to incorporate such domain knowledge into the analysis, and researchers often ignore the heterogeneity set altogether. In contrast, our proposed method uses knowledge about the heterogeneity set to estimate an exact separating set, which is potentially smaller than the observed sampling set and thus can increase the accuracy of the PATE estimation.
We first estimate a Markov random field over the union of sampling and heterogeneity sets within the experimental data. Then, we select an exact separating set that makes sampling set $\bX^S$ and heterogeneity set $\bX^H$ conditionally independent under a constraint that \texttt{Initial Saving} is unmeasured in the population data and cannot be selected. To take into account uncertainties, we estimate Markov random fields and select exact separating sets in each of 1000 bootstrap samples. Figure~\ref{fig:cov_XH} reports the results. The left panel (a) shows the proportion of each variable being estimated to be in an exact separating set over 1000 bootstrap samples. As the definition of separating sets (Definition~\ref{sep}) implies, the intersection of sampling and heterogeneity set, \texttt{Gender}, is always selected. In addition, \texttt{Durable assets} and \texttt{District} are selected almost always. Importantly, when we look at the size of estimated exact separating sets (the right panel (b)), it is often much smaller than the original size of eight (the mean size is $4.11$). This means that even when a sampling set is sufficient for estimating the PATE, researchers can find a smaller separating set by incorporating domain knowledge of heterogeneity sets with the proposed approach. \begin{figure} \caption{Estimated Exact Separating Sets. {\it Note:} Panel (a) shows the proportion of each variable being estimated to be in an exact separating set over 1000 bootstrap samples. Panel (b) reports the size of estimated exact separating sets over 1000 bootstrap samples.} \label{fig:cov_XH} \end{figure} If assumptions about the sampling set and the heterogeneity set hold, estimators based on the original sampling set $\bX^S$ and on the estimated separating sets $\bW$ are both consistent. However, standard errors of the latter might be smaller because corresponding estimated weights might be more stable. 
To estimate the PATEs, we use the inverse probability weighting estimator proposed in Section~\ref{subsec:est_pate}. First, we estimate weights using the following logistic regression. \begin{eqnarray} \mbox{logit}\{\Pr(S_i = 1 \mid \bC_i)\} = \alpha_0 + \bC_i^\top \beta, \end{eqnarray} where $\bC = \bX^S$ for the estimator based on the original sampling set and $\bC = \bW$ for our proposed estimator. We stack the experimental data (sample size = $2,598$) and the population data (sample size = $21,348$) and $S_i = 1$ ($S_i = 0$) indicates that unit $i$ belongs to the experimental data (the population data). We can then estimate weights as $\hat\pi_i = 1/\widehat{\Pr}(S_i = 1 \mid \bC_i) \times \widehat{\Pr}(S_i=0 \mid \bC_i)/\widehat{\Pr}(S_i=0),$ as proposed in Section~\ref{subsec:est_pate}. Note that treatment assignment probability in the experiment $\Pr(T_i =1 \mid S_i = 1, \bW_i)$ is equal to $\Pr(T_i =1 \mid S_i = 1, \mathbf{D}_i)$ where $\mathbf{D}_i$ is a vector indicating 14 districts, because the treatment randomization was stratified by districts \citep{blattman2013generating}. We use the block bootstrap to compute standard errors clustered at the group level as done in the original analysis. To take into account uncertainties of both steps --- the estimation of separating sets and that of the PATE, in each of 1000 bootstrap samples, we estimate separating sets, estimate weights, and then estimate the PATE. Note that the difference between the estimator based on the original sampling set and our proposed estimator comes only from the selection of covariates $\bC$ in the estimation of weights. \begin{table}[!t] \centering \small \resizebox{\textwidth}{!}{ \begin{tabular}{@{\extracolsep{5pt}} L{2.1in}C{0.6in}C{0.6in}|L{2in}C{0.6in}C{0.6in}} \hline \hline & Original & Sep. Set & & Original & Sep. 
Set \\[-5pt] & estimate & estimate & & estimate & estimate \\ \hline \normalsize \bf \underline {Employment} & & & \normalsize \bf \underline {Investments} & & \\ Average employment hours & 5.87 & 4.79 & Vocational training & 0.51 & 0.53 \\[-3pt] & (3.25) & (2.39) & & (0.07) & (0.05) \\ Agricultural & 0.75 & 0.30 & Hours of vocational training & 277.18 & 337.59 \\[-3pt] & (2.01) & (1.69) & & (51.21) & (40.77) \\ Nonagricultural & 5.12 & 4.49 & Business assets & 400.31 & 425.02 \\[-3pt] & (2.62) & (1.79) & & (125.19) & (135.65) \\ Skilled trades only & 2.73 & 4.36 & & & \\[-3pt] & (1.56) & (0.99) & \normalsize \bf \underline {Business Formality} & & \\ No employment hours & 0.00 & -0.03 & Maintain records & 0.19 & 0.20 \\[-3pt] & (0.02) & (0.03) & & (0.07) & (0.07) \\ Any skilled trade & 0.24 & 0.27 & Registered & 0.07 & 0.09 \\[-3pt] & (0.07) & (0.06) & & (0.04) & (0.05) \\ Works mostly in a skilled trade & -0.01 & 0.04 & Pays taxes & -0.01 & 0.05 \\[-3pt] & (0.03) & (0.03) & & (0.06) & (0.05) \\ \normalsize \bf \underline {Income} & & & \normalsize \bf \underline {Urbanization} & & \\ Cash earnings & 12.15 & 12.54 & Changed parish & -0.03 & -0.01 \\[-3pt] & (5.3) & (5.11) & & (0.07) & (0.04) \\ Durable assets & 0.09 & 0.18 & Lives in Urban area & -0.06 & -0.01 \\[-3pt] & (0.16) & (0.13) & & (0.07) & (0.04) \\ \hline \end{tabular}} \spacingset{1.2}{ \caption{Estimates of Population Average Treatment Effects based on the Original Set and the Estimated Separating Set. {\it Note:} We estimated population average treatment effects of 17 outcomes using weights based on the original eight variables (``Original estimate'') and the estimated exact separating set (``Sep. Set estimate''). Standard errors of the proposed estimators are smaller for 14 out of 17 outcomes.}\label{tab:estimate_XH}} \end{table} We report results in Table~\ref{tab:estimate_XH}. Effects of the YOP are large and positive across many outcomes even among the broader target population. 
For example, the average employment hours would increase by $4.79$ hours (19\% increase compared to the control group), monthly cash earnings would increase by $12,540$ Uganda shillings (36\% increase), and the proportion of people enrolled in vocational training would increase by $53$ percentage points (349\% increase). Comparing estimates based on the original sampling set and those based on the proposed separating set, we reveal that point estimates are similar to estimates with the original eight variables, and differences between them are not statistically significant at the conventional 0.05 level. This is expected because both estimators are consistent under the assumption that both specified sampling and heterogeneity sets are correct. More interestingly, we find that, for 14 out of 17 outcomes, standard errors of estimators based on the estimated separating sets are smaller than those based on the original sampling set. On average, standard errors of the proposed approach are about 16\% smaller. For the outcome ``Lives in Urban area,'' the standard error is reduced by more than 45\%. This shows that by incorporating domain knowledge about heterogeneity sets, we can estimate smaller separating sets, which often improve efficiency. \subsection{Accounting for Unobserved Sampling Set} \label{subsec:mar-app} In the previous analysis, we maintained the original authors' assumption about the sampling set and additionally took into account the assumption about the heterogeneity set. Here, we focus on estimating PATEs under weaker assumptions and directly address a concern noted in the original paper that the shared eight variables might not contain all relevant variables. In particular, \citet{blattman2013generating} discuss two potentially problematic variables. First, the authors are concerned that when the government screened applications at the village level, people with more social connections may have received some privilege.
Second, people with ``affinity for entrepreneurship'' \citep{blattman2013generating} might have been more likely to apply for the program in the first place. To account for these two sources of sample selection, we assume that a true sampling set contains two additional variables: (1) \texttt{Connection}, the number of community groups that a respondent belongs to, as a measure of social connections, and (2) \texttt{Business Advice}, total hours spent getting business advice in the last 7 days, as a measure of initial motivation and affinity for entrepreneurship. Importantly, neither of these two variables is measured in the population data. Therefore, $\bX^S = $ \{\texttt{Gender, Age, Urban, Marital status, School attainment, Household size, Durable assets, District, \fbox{Connection}, \fbox{Business Advice}}\}, where the last two variables are measured only in the experiment and not in the population data. Moreover, we do not make any assumptions about heterogeneity sets. Under this assumption, the current practice based on sampling sets or heterogeneity sets cannot estimate any PATEs; weights can be estimated only when sampling sets or heterogeneity sets are measured in both the experimental and population data. In contrast, the proposed method can select appropriate separating sets, should they exist, under such data constraints. \begin{figure} \caption{Proportions of Infeasible Solutions. {\it Note:} For 17 outcomes, we estimated marginal separating sets under a constraint that two sampling variables are unobserved in the population data. The figure shows the proportion of infeasible solutions for each outcome.} \label{fig:feasible_XS} \end{figure} There are two questions of interest for each outcome: (1) Can we find a separating set and estimate the PATE? (2) If we can estimate the PATE, is an estimate different from the one based on the original eight variables? We estimate marginal separating sets using the proposed algorithm.
For each outcome $Y$, we first estimate a Markov random field and then select a separating set that makes outcome $Y$ and sampling set $\bX^S$ conditionally independent, under the constraint that the two unobserved variables (\texttt{Connection, Business Advice}) cannot be selected. When the algorithm can find no separating set under the constraint, we call it an ``infeasible solution.'' We use the block bootstrap to compute standard errors clustered at the group level, as done in the original analysis. To take into account uncertainty over the covariate selection, we estimate Markov random fields and select separating sets in each of 1000 bootstrap samples. We begin by computing proportions of infeasible solutions among the 1000 bootstraps (Figure~\ref{fig:feasible_XS}). Proportions vary across outcomes, ranging from 0.0\% (``Lives in Urban area'') to 26.0\% (``Registered''), and on average, 4.70\%. Given that the current practice based solely on sampling or heterogeneity sets cannot estimate PATEs for any outcome, it is notable that the proportions of infeasible solutions are smaller than 5\% for 12 out of 17 outcomes. For the remaining five outcomes, the average proportion of infeasible solutions is 11.5\%, suggesting that the PATE estimates for these outcomes are sensitive to the unobserved sampling variables, \texttt{Connection} and \texttt{Business Advice}. \begin{figure} \caption{Estimates of Population Average Treatment Effects based on the Original Sets and Estimated Marginal Separating Sets. {\it Note:} We estimated population average treatment effects for 3 outcomes that have estimated proportions of infeasible solutions below 1\%. Weights are based on the original eight variables (``Original'') and estimated marginal separating sets (``Estimated Separating Set'').
} \label{fig:estimate_XS} \end{figure} For three outcomes that have less than 1\% of infeasible solutions,\footnote{\spacingset{1}{\footnotesize ``Agricultural''[0.6\%], ``Changed parish'' [0.9\%], ``Lives in Urban area''[0.0\%].}} we also report estimates with 95\% confidence intervals in Figure~\ref{fig:estimate_XS}. We use the block bootstrap to compute standard errors clustered at the group level. To take into account uncertainty from both steps (the estimation of separating sets and that of the PATE), in each of 1000 bootstrap samples we estimate separating sets, estimate weights, and then estimate the PATE. We find that point estimates are similar to the original estimates for two outcomes (``Agricultural'' and ``Lives in Urban area''); for ``Changed parish,'' the point estimate differs slightly due to a large standard error, but the difference between the original estimate and the estimate based on an estimated separating set is not statistically significant at the conventional level of $0.05.$ For all three outcomes, standard errors are smaller than the original ones. That is, estimates of the PATEs are robust to alternative separating sets: even if the sampling set includes additional unobserved variables, substantive conclusions are similar. This result demonstrates that the proposed algorithm for selecting separating sets allows researchers to estimate PATEs in situations where previous methods could not. \section{Concluding Remarks} The increased emphasis on well-identified causal effects in the social and biomedical sciences can sometimes lead researchers to narrow the focus of their research question and limit their findings to the experimental sample. However, primary research questions are often driven by the need to discover the impact of an intervention on a broader population. The extant literature has focused on the mathematical underpinnings concerning the generalizability of experimental evidence.
The aim of this paper is to provide applied researchers with a means of uncovering a separating set using the experimental data alone. Building on previous approaches, we clarify the role of the separating set -- and its relationship to the sampling mechanism and treatment effect heterogeneity -- in the identification of population average treatment effects. This clarification makes clear that there are many possible covariate sets researchers can use to recover population effects, and it allows us to develop a new algorithm that can incorporate researchers' data constraints on the target population. As a concrete context, we focus on the YOP in Uganda. For these types of large-scale development programs, the potential benefits and the necessity of generalization are well known among researchers and policy makers. However, analysts are often constrained by available covariate information, which limits the applicability of existing approaches that assume rich covariate data from both the experimental and population samples. Our proposed algorithm can help researchers to estimate appropriate separating sets, if any should exist, even under such data constraints. We find that by incorporating domain knowledge about heterogeneity sets, which is often overlooked in PATE estimation, we can substantially improve efficiency. We also show that the proposed algorithm can find separating sets for 12 out of 17 outcomes, even if we allow for two additional sampling variables that are not measured in the population. Identifying population effects remains a challenging task for experimental researchers. The results here suggest researchers can increase the chances of generalization by collecting rich covariate information on their experimental subjects, even when their capacity for population data collection is limited.
\spacingset{1.4} {\small \pdfbookmark[1]{References}{References} \printbibliography} \appendix \begin{refsection} \begin{center} {\bf \LARGE Supplementary Material} \\ {\large Covariate Selection for Generalizing Experimental Results} \end{center} \renewcommand{\thefigure}{SM-\arabic{figure}} \renewcommand{\thetable}{SM-\arabic{table}} \renewcommand\thesection{SM-\arabic{section}} \renewcommand\thesubsection{\thesection.\arabic{subsection}} \section{Proof of Theorems}\label{app:proofs} Here, we provide proofs for the theorems presented in the paper. \subsection{Proof of Theorem~\ref{thm_id_Exsep}} \label{subsec:Exsep} In this proof, we assume that the separating set $\bW$ is disjoint from the sampling set $\bX^S$ and the heterogeneity set $\bX^H$ to simplify notation. The same proof applies to the case in which some variables of the sampling set or the heterogeneity set are in the separating set. First, by the assumption of the theorem, we have \begin{eqnarray} \bX^H \ \mbox{$\perp\!\!\!\perp$} \ \bX^S \mid \bW, T, S=1. \label{eq:est1} \end{eqnarray} From Random Treatment Assignment (Assumption~\ref{random}), we have \begin{eqnarray} T \ \mbox{$\perp\!\!\!\perp$} \ \bX^S \mid \bW, S=1. \label{eq:est2} \end{eqnarray} Combining equations~\eqref{eq:est1} and~\eqref{eq:est2} (Contraction in \citet{pearl2000causality}), \begin{eqnarray} && \{\bX^H, T\} \ \mbox{$\perp\!\!\!\perp$} \ \bX^S \mid \bW, S=1, \notag \end{eqnarray} which implies $\bX^H \ \mbox{$\perp\!\!\!\perp$} \ \bX^S \mid \bW, S=1.$ Given that the conditional independence structure of $(\bX^H, \bX^S, \bW)$ is the same under $S=1$ and $S=0$ (because $S$ only changes the treatment assignment), we have \begin{equation} \bX^H \ \mbox{$\perp\!\!\!\perp$} \ \bX^S \mid \bW, S. \label{eq:est3} \end{equation} From the definition of the sampling variable, \begin{eqnarray} \bX^H \ \mbox{$\perp\!\!\!\perp$} \ S \mid \bW, \bX^S.
\label{eq:est4} \end{eqnarray} Combining equations~\eqref{eq:est3} and~\eqref{eq:est4} (Intersection \citep{pearl2000causality}), we have \begin{eqnarray} && \bX^H \ \mbox{$\perp\!\!\!\perp$} \ \{S, \bX^S\} \mid \bW, \notag \end{eqnarray} which implies \begin{eqnarray} & & \bX^H \ \mbox{$\perp\!\!\!\perp$} \ S \mid \bW. \label{eq:42} \end{eqnarray} Additionally, based on the definition of the heterogeneity set, \begin{eqnarray} && Y(1) - Y(0) \ \mbox{$\perp\!\!\!\perp$} \ S \mid \bW, \bX^H. \label{eq:43} \end{eqnarray} Therefore, by combining equations~\eqref{eq:42} and~\eqref{eq:43} based on Contraction in \citet{pearl2000causality}, \begin{eqnarray*} && \{Y(1) - Y(0), \bX^H\} \ \mbox{$\perp\!\!\!\perp$} \ S \mid \bW, \end{eqnarray*} which implies $Y(1) - Y(0) \ \mbox{$\perp\!\!\!\perp$} \ S \mid \bW.$ \ensuremath{\Box} \subsection{Proof of Theorem~\ref{thm_id_sep}} \label{subsec:sep} First, by the assumption of the theorem, we have \begin{eqnarray} Y \ \mbox{$\perp\!\!\!\perp$} \ \bX^S \mid \bW, T, S=1. \label{eq:est5} \end{eqnarray} From Random Treatment Assignment (Assumption~\ref{random}), we have \begin{eqnarray} T \ \mbox{$\perp\!\!\!\perp$} \ \bX^S \mid \bW, S=1. \label{eq:est6} \end{eqnarray} Combining equations~\eqref{eq:est5} and~\eqref{eq:est6} (Contraction in \citet{pearl2000causality}), \begin{eqnarray} && \{Y, T\} \ \mbox{$\perp\!\!\!\perp$} \ \bX^S \mid \bW, S=1, \notag \end{eqnarray} which implies \begin{eqnarray} & & Y(t) \ \mbox{$\perp\!\!\!\perp$} \ \bX^S \mid \bW, S=1.
\end{eqnarray} Given that the conditional independence structure of $(Y(1), Y(0), \bX^S, \bW)$ is the same under $S=1$ and $S=0$ (because $S$ only changes the treatment assignment, the relationship between potential outcomes and pre-treatment variables does not change), we have \begin{equation} Y(t) \ \mbox{$\perp\!\!\!\perp$} \ \bX^S \mid \bW, S, \label{eq:est7} \end{equation} for $t = \{0, 1\}.$ From the definition of the sampling variable, for $t = \{0, 1\},$ \begin{eqnarray} Y(t) \ \mbox{$\perp\!\!\!\perp$} \ S \mid \bW, \bX^S. \label{eq:est8} \end{eqnarray} Combining equations~\eqref{eq:est7} and~\eqref{eq:est8} (Intersection in \citet{pearl2000causality}), we have \begin{eqnarray*} && Y(t) \ \mbox{$\perp\!\!\!\perp$} \ \{S, \bX^S\} \mid \bW, \end{eqnarray*} which implies \begin{eqnarray*} & & Y(t) \ \mbox{$\perp\!\!\!\perp$} \ S \mid \bW \end{eqnarray*} for $t = \{0, 1\}.$ This completes the proof. \ensuremath{\Box} \section{IPW Estimator} \label{sec:ipw} Here, we show that $\hat{\tau} \xrightarrow{p} \E[Y_i (1) - Y_i(0) \mid S_i = 0].$ \paragraph{Proof} First, we rewrite the IPW estimator as follows. \begin{equation} \hat{\tau} = \cfrac{\frac{1}{n+m}\sum_{i} S_i \pi_i p_i T_iY_i}{\frac{1}{n+m}\sum_{i} S_i\pi_i p_i T_i} - \cfrac{\frac{1}{n+m}\sum_{i} S_i\pi_i (1-p_i) (1-T_i)Y_i}{\frac{1}{n+m}\sum_{i} S_i\pi_i (1-p_i) (1-T_i)}, \end{equation} where $n$ ($m$) is the sample size of the experimental data (the population data). By the law of large numbers, \begin{align*} \frac{1}{n+m}\sum_{i} S_i \pi_i p_i T_i \xrightarrow{p} \E[S_i \pi_i p_i T_i] & = \E_{\bW} \{\pi_i \Pr(S_i=1 \mid \bW_i) p_i \Pr(T_i = 1\mid S_i= 1, \bW_i)\}\\ & = \E_{\bW} \l\{\frac{\Pr(S_i=0 \mid \bW_i)}{\Pr(S_i=0)}\r\} = 1.
\end{align*} Similarly, $ \frac{1}{n+m}\sum_{i} S_i \pi_i (1-p_i) (1-T_i) \xrightarrow{p} 1.$ Again, by the law of large numbers, \begin{align*} & \frac{1}{n+m}\sum_{i} S_i \pi_i p_i T_iY_i \xrightarrow{p} \E[S_i \pi_i p_i T_i Y_i], \hspace{0.1in} \frac{1}{n+m}\sum_{i} S_i \pi_i (1-p_i) (1-T_i) Y_i \xrightarrow{p} \E[S_i \pi_i (1-p_i) (1-T_i) Y_i]. \end{align*} Hence, $\hat{\tau} \xrightarrow{p} \E[S_i \pi_i p_i T_i Y_i - S_i \pi_i (1-p_i) (1-T_i) Y_i].$ We now evaluate this limiting expectation. {\small \begin{eqnarray*} \hspace{-0.3in}&& \E \biggl\{\pi_i \bigg(S_i p_i T_i Y_i - S_i (1-p_i) (1 - T_i) Y_i\bigg)\biggr\} = \E_{\bW} \Biggl\{\pi_i\E \biggl\{S_i p_i T_i Y_i - S_i (1-p_i) (1 - T_i) Y_i \mid \bW_i\biggr\} \Biggr\}\\ \hspace{-0.3in}& = & \E \Biggl\{\pi_i \Pr (S_i = 1 \mid \bW_i) \E \biggl\{ p_i T_i Y_i - (1-p_i)(1 - T_i) Y_i \mid S_i = 1, \bW_i \biggr\}\Biggr\}\\ \hspace{-0.3in}& = & \E \Biggl\{\pi_i \Pr (S_i = 1 \mid \bW_i) \{p_i \E[T_i Y_i \mid S_i = 1, \bW_i] - (1-p_i) \E[(1 - T_i) Y_i \mid S_i = 1, \bW_i]\}\Biggr\}\\ \hspace{-0.3in}& = & \E \Biggl\{\pi_i \Pr (S_i = 1 \mid \bW_i) \bigg(\E[Y_i(1) \mid S_i = 1, \bW_i] - \E[Y_i(0) \mid S_i = 1, \bW_i]\bigg) \Biggr\}\\ \hspace{-0.3in}& = & \E \Biggl\{\pi_i \Pr (S_i = 1 \mid \bW_i)\E[Y_i(1) - Y_i(0) \mid S_i = 1, \bW_i]\Biggr\} = \E \Biggl\{\pi_i \Pr (S_i = 1 \mid \bW_i)\E[Y_i(1) - Y_i(0) \mid S_i = 0, \bW_i]\Biggr\}\\ \hspace{-0.3in}& = & \E \Biggl\{\frac{\Pr (S_i = 0 \mid \bW_i)}{\Pr (S_i = 0)} \E[Y_i(1) - Y_i(0) \mid S_i = 0, \bW_i]\Biggr\}\\ \hspace{-0.3in}& = & \int_{\bW} \Biggl\{\frac{\Pr (S_i = 0 \mid \bW_i)}{\Pr (S_i = 0)} \E[Y_i(1) - Y_i(0) \mid S_i = 0, \bW_i]\Biggr\} p(\bW) d\bW\\ \hspace{-0.3in}& = & \int_{\bW} \E[Y_i(1) - Y_i(0) \mid S_i = 0, \bW_i] p(\bW \mid S_i = 0) d\bW = \E[Y_i(1) - Y_i(0) \mid S_i = 0], \end{eqnarray*}} \noindent where the first equality follows from the law of conditional expectation given $\bW$, the second from the conditional expectation given $S$, the third from the linearity
of expectation, the fourth from the conditional expectation given $T$, the fifth from the linearity of expectation, the sixth from the definition of the separating set $\bW$, the seventh from the definition of $\pi,$ the eighth from the definition of expectation, the ninth from Bayes' rule, and the tenth from the law of iterated expectations. \section{Markov Random Fields: Review} \label{sec:mrf-si} A Markov random field (MRF), also known as an undirected graphical model, is a popular statistical model that encodes the conditional independence structure over multiple observed random variables. The main advantage of the MRF is that it encodes the conditional independence relationships of many random variables compactly. While many important results have been derived for MRFs, we focus on one key property, the so-called global Markov property, which we use in our paper. MRFs define conditional independence relationships via simple graph separation rules \citep{lauritzen1996graphical}. For sets of nodes $A$, $B$, and $C$, $A \ \mbox{$\perp\!\!\!\perp$} \ B \mid C$ if and only if there is no path connecting $A$ and $B$ when the nodes in $C$ are removed from the graph (i.e., the nodes in $C$ separate nodes $A$ and $B$). For example, in Figure~\ref{fig:mrf}, suppose $A = \{V_1, V_2, V_3\}$ and $B = \{V_6, V_7\}$. Then, if we define $C = \{V_4, V_5\}$, there is no path connecting $A$ and $B$ once the nodes in $C$ are removed from the graph. Therefore, Figure~\ref{fig:mrf} encodes the conditional independence relationship, $\{V_1, V_2, V_3\} \ \mbox{$\perp\!\!\!\perp$} \ \{V_6, V_7\} \mid V_4, V_5$. As emphasized in the paper, we use the MRF as the statistical model to characterize the conditional independence relationships between observed random variables. We do not use the MRF as a step to estimate the underlying causal DAG.
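This graph-separation rule is easy to check mechanically. The following Python sketch uses \texttt{networkx}; the edge list is hypothetical (one graph consistent with the separation just described, since the exact edges of Figure~\ref{fig:mrf} are not reproduced here).

```python
# Checking the global Markov property by graph separation.
# The edge list is an assumption: one undirected graph in which
# removing {V4, V5} disconnects {V1, V2, V3} from {V6, V7}.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("V1", "V2"), ("V2", "V3"), ("V3", "V4"),
    ("V1", "V4"), ("V4", "V6"), ("V3", "V5"),
    ("V5", "V7"), ("V6", "V7"),
])

def is_separated(G, A, B, C):
    """A independent of B given C: no path between any a in A and
    b in B once the nodes in C are removed from the graph."""
    H = G.copy()
    H.remove_nodes_from(C)
    return not any(
        nx.has_path(H, a, b)
        for a in A for b in B
        if a in H and b in H
    )

print(is_separated(G, {"V1", "V2", "V3"}, {"V6", "V7"}, {"V4", "V5"}))  # True
```

Once \texttt{V4} and \texttt{V5} are removed, only the edges among $\{V_1,V_2,V_3\}$ and among $\{V_6,V_7\}$ remain, so the two sets are separated.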
\begin{figure} \caption{Example of a Markov Random Field (MRF).} \label{fig:mrf} \end{figure} \section{Additional Results on Empirical Analysis} \label{sec:add-result} In Section~\ref{sec:app_ana}, we focused on the inverse probability weighting estimator (equation~\eqref{eq:ipw}) to maintain a clear comparison with the original analysis, which uses the weighting approach. In this section, we report results based on an outcome-model-based estimator\footnote{ For outcome-model-based estimators, it is unclear whether adjusting for a smaller set of covariates leads to an increase in estimation efficiency; it depends on how predictive those covariates are. However, at least in our application, we see below in Table~\ref{tab:tab-exact-out} that outcome-model-based estimators based on estimated separating sets have smaller standard errors than those based on the original sampling set for 16 out of 17 outcomes. For outcome-model-based estimators, another benefit of having a smaller valid separating set is that it is easier for analysts to model the conditional expectation correctly with fewer variables --- the key necessary assumption for outcome-model-based estimators. We leave a more thorough technical investigation of outcome-model-based estimators for future work.} and a doubly robust estimator \citep{hernan2019}. In particular, for the outcome-model-based estimator, we use a fully-interacted linear model. Within the experimental data, we estimate a linear regression with a specified set of covariates separately for the treatment and control groups. Then, we use the estimated models to predict potential outcomes under treatment and control for the target population data. This outcome-model-based estimator is consistent under the assumption that the outcome model is correctly specified.
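As a minimal illustration of this estimator, the following Python sketch (with simulated, illustrative data rather than the application data) fits a linear model separately in each arm, which is numerically equivalent to a fully-interacted linear model, and projects the predicted potential outcomes onto a target population.

```python
# Sketch of the outcome-model-based estimator: fit a linear model
# separately for treated and control units in the experiment, predict
# both potential outcomes for the target population, and average the
# difference. Data below are illustrative, not from the application.
import numpy as np

rng = np.random.default_rng(0)

def fit_ols(X, y):
    # Least-squares coefficients with an intercept column.
    Z = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta

def outcome_model_pate(X_exp, T, Y, X_pop):
    beta1 = fit_ols(X_exp[T == 1], Y[T == 1])   # treated-arm model
    beta0 = fit_ols(X_exp[T == 0], Y[T == 0])   # control-arm model
    Z_pop = np.column_stack([np.ones(len(X_pop)), X_pop])
    return float(np.mean(Z_pop @ (beta1 - beta0)))

# Toy example: the true effect is 2 + X, and the population mean of X
# is 0.5, so the true PATE is 2.5.
X_exp = rng.normal(size=(2000, 1))
T = rng.integers(0, 2, size=2000)
Y = 1.0 + X_exp[:, 0] + T * (2.0 + X_exp[:, 0]) + rng.normal(size=2000)
X_pop = rng.normal(0.5, 1.0, size=(10000, 1))

est = outcome_model_pate(X_exp, T, Y, X_pop)
print(est)  # close to the true PATE of 2.5
```

Separate per-arm fits are used for clarity; pooling the arms with a full set of treatment-covariate interactions gives identical predictions.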
For the doubly robust estimator, we use an augmented IPW estimator \citep{robins1994, hernan2019} where the outcome model is a fully-interacted linear model and the sampling model is the logistic regression specified in Section~\ref{subsec:est_pate}. This doubly robust estimator is consistent if at least one of the two models --- the outcome model or the sampling model --- is correctly specified. We first extend our analyses in Section~\ref{subsec:exact-app}. Table~\ref{tab:tab-exact-out} reports results based on the outcome-model-based estimator (an extension of Table~\ref{tab:estimate_XH}). As with the IPW estimator, we find that (1) point estimates based on estimated separating sets are similar to those based on the original sampling set, and (2) standard errors based on our proposed estimated separating sets are smaller for 16 out of 17 outcomes. Table~\ref{tab:tab-exact-aipw} reports results based on the doubly robust estimator (an extension of Table~\ref{tab:estimate_XH}). As with the IPW estimator and the outcome-model-based estimator, we find that (1) point estimates based on estimated separating sets are similar to those based on the original sampling set, and (2) standard errors based on our proposed estimated separating sets are smaller for 15 out of 17 outcomes. Therefore, for all three classes of estimators, our proposed approach of using the estimated separating set improves estimation accuracy. Finally, we also compare estimates across the three classes of estimators in Table~\ref{tab:tab-exact}. Across 17 outcomes, we find that estimates of the PATE are relatively stable across different estimators (none of the differences in estimates are statistically significant at the conventional $0.05$ level), which suggests model misspecification is of little concern. We next extend our analyses in Section~\ref{subsec:mar-app}.
Table~\ref{tab:tab-mar} reports results for Section~\ref{subsec:mar-app} by comparing estimates from the outcome-model-based estimator and the doubly robust estimator to estimates from the IPW estimator. While the point estimate for ``Agricultural'' is unstable due to a relatively large standard error (the first row in Table~\ref{tab:tab-mar}), estimates of the PATE are relatively stable across different estimators (none of the differences in estimates are statistically significant at the conventional $0.05$ level), which again suggests model misspecification is of little concern. \begin{table}[!h] \centering \scalebox{0.75}{ \begin{tabular}{|l|cc|cc|} \hline & \multicolumn{2}{c|}{Original} & \multicolumn{2}{c|}{Estimated} \\ & \multicolumn{2}{c|}{Sampling Set} & \multicolumn{2}{c|}{Separating Set } \\ \hline & Estimate & S.E. & Estimate & S.E. \\ \hline Average employment hours & 4.58 & 2.35 & 3.57 & 1.80 \\ Agricultural & -0.00 & 1.61 & -1.22 & 1.45 \\ Nonagricultural & 4.58 & 1.77 & 4.79 & 1.45 \\ Skilled trades only & 3.70 & 1.03 & 4.08 & 0.86 \\ No employment hours & -0.04 & 0.03 & -0.03 & 0.02 \\ Any skilled trade & 0.27 & 0.05 & 0.25 & 0.04 \\ Works mostly in a skilled trade & 0.02 & 0.02 & 0.04 & 0.02 \\ Cash earnings & 5.20 & 7.31 & 8.22 & 7.02 \\ Durable assets & 0.08 & 0.10 & 0.06 & 0.08 \\ Vocational training & 0.52 & 0.05 & 0.50 & 0.04 \\ Hours of vocational training & 250.32 & 34.71 & 280.24 & 27.43 \\ Business assets & 340.79 & 141.74 & 367.61 & 127.23 \\ Maintain records & 0.14 & 0.05 & 0.14 & 0.04 \\ Registered & 0.03 & 0.04 & 0.04 & 0.03 \\ Pays taxes & 0.01 & 0.05 & 0.02 & 0.05 \\ Changed parish & 0.04 & 0.06 & -0.02 & 0.03 \\ Lives in Urban area & -0.01 & 0.03 & -0.01 & 0.03 \\ \hline \end{tabular}} \caption{Estimates of the PATEs based on Outcome-Model-Based Estimator, comparing the Original Sampling Set and Estimated Exact Separating Sets. 
Extension of Table~\ref{tab:estimate_XH} in Section~\ref{subsec:exact-app}.}\label{tab:tab-exact-out} \end{table} \begin{table}[!h] \centering \scalebox{0.75}{ \begin{tabular}{|l|cc|cc|} \hline & \multicolumn{2}{c|}{Original} & \multicolumn{2}{c|}{Estimated} \\ & \multicolumn{2}{c|}{Sampling Set} & \multicolumn{2}{c|}{Separating Set } \\ \hline & Estimate & S.E. & Estimate & S.E. \\ \hline Average employment hours & 2.49 & 3.16 & 2.48 & 2.73 \\ Agricultural & -2.27 & 2.73 & -2.08 & 1.83 \\ Nonagricultural & 5.10 & 2.99 & 4.56 & 2.24 \\ Skilled trades only & 2.29 & 1.66 & 3.71 & 1.14 \\ No employment hours & 0.01 & 0.03 & -0.01 & 0.03 \\ Any skilled trade & 0.24 & 0.07 & 0.24 & 0.06 \\ Works mostly in a skilled trade & -0.03 & 0.03 & 0.02 & 0.04 \\ Cash earnings & 4.16 & 8.06 & 9.02 & 7.54 \\ Durable assets & 0.02 & 0.15 & 0.14 & 0.15 \\ Vocational training & 0.49 & 0.07 & 0.50 & 0.05 \\ Hours of vocational training & 228.35 & 50.14 & 283.26 & 34.55 \\ Business assets & 326.58 & 178.44 & 371.18 & 139.65 \\ Maintain records & 0.16 & 0.07 & 0.17 & 0.07 \\ Registered & 0.05 & 0.06 & 0.06 & 0.05 \\ Pays taxes & -0.02 & 0.07 & 0.01 & 0.07 \\ Changed parish & 0.06 & 0.07 & -0.04 & 0.05 \\ Lives in Urban area & 0.01 & 0.05 & -0.01 & 0.04 \\ \hline \end{tabular}} \caption{Estimates of the PATEs based on Doubly Robust Estimator, comparing the Original Sampling Set and Estimated Exact Separating Sets. Extension of Table~\ref{tab:estimate_XH} in Section~\ref{subsec:exact-app}.}\label{tab:tab-exact-aipw} \end{table} \begin{table}[!h] \centering \scalebox{0.875}{ \begin{tabular}{|l|cc|cc|cc|} \hline & \multicolumn{2}{c|}{IPW} & \multicolumn{2}{c|}{Outcome-Model-based} & \multicolumn{2}{c|}{AIPW} \\ & \multicolumn{2}{c|}{Estimator} & \multicolumn{2}{c|}{Estimator} & \multicolumn{2}{c|}{Estimator} \\ \hline & Estimate & S.E. & Estimate & S.E. & Estimate & S.E. 
\\ \hline Average employment hours & 4.79 & 2.39 & 3.57 & 1.80 & 2.48 & 2.73 \\ Agricultural & 0.30 & 1.69 & -1.22 & 1.45 & -2.08 & 1.83 \\ Nonagricultural & 4.49 & 1.79 & 4.79 & 1.45 & 4.56 & 2.24 \\ Skilled trades only & 4.36 & 0.99 & 4.08 & 0.86 & 3.71 & 1.14 \\ No employment hours & -0.03 & 0.03 & -0.03 & 0.02 & -0.01 & 0.03 \\ Any skilled trade & 0.27 & 0.06 & 0.25 & 0.04 & 0.24 & 0.06 \\ Works mostly in a skilled trade & 0.04 & 0.03 & 0.04 & 0.02 & 0.02 & 0.04 \\ Cash earnings & 12.54 & 5.11 & 8.22 & 7.02 & 9.02 & 7.54 \\ Durable assets & 0.18 & 0.13 & 0.06 & 0.08 & 0.14 & 0.15 \\ Vocational training & 0.53 & 0.05 & 0.50 & 0.04 & 0.50 & 0.05 \\ Hours of vocational training & 337.59 & 40.77 & 280.24 & 27.43 & 283.26 & 34.55 \\ Business assets & 425.02 & 135.65 & 367.61 & 127.23 & 371.18 & 139.65 \\ Maintain records & 0.20 & 0.07 & 0.14 & 0.04 & 0.17 & 0.07 \\ Registered & 0.09 & 0.05 & 0.04 & 0.03 & 0.06 & 0.05 \\ Pays taxes & 0.05 & 0.05 & 0.02 & 0.05 & 0.01 & 0.07 \\ Changed parish & -0.01 & 0.04 & -0.02 & 0.03 & -0.04 & 0.05 \\ Lives in Urban area & -0.01 & 0.04 & -0.01 & 0.03 & -0.01 & 0.04 \\ \hline \end{tabular}} \caption{Estimates of the PATEs based on Estimated Exact Separating Sets for Three Estimators. Extension of Section~\ref{subsec:exact-app}.}\label{tab:tab-exact} \end{table} \begin{table}[!h] \centering \begin{tabular}{|l|cc|cc|cc|} \hline & \multicolumn{2}{c|}{IPW} & \multicolumn{2}{c|}{Outcome-Model-based} & \multicolumn{2}{c|}{AIPW} \\ & \multicolumn{2}{c|}{Estimator} & \multicolumn{2}{c|}{Estimator} & \multicolumn{2}{c|}{Estimator} \\ \hline & Estimate & S.E. & Estimate & S.E. & Estimate & S.E. \\ \hline Agricultural & 0.64 & 1.63 & -1.10 & 1.31 & -1.32 & 1.56 \\ Changed parish & 0.05 & 0.03 & 0.03 & 0.03 & 0.04 & 0.03 \\ Lives in Urban area & -0.02 & 0.03 & -0.02 & 0.03 & -0.01 & 0.04 \\ \hline \end{tabular} \caption{Estimates of the PATEs based on Estimated Marginal Separating Sets for Three Estimators. 
Extension of Section~\ref{subsec:mar-app}.}\label{tab:tab-mar} \end{table} \section{Simulation Studies} \label{sec:sims} We now turn to simulations to explore how well the proposed algorithm can recover the PATE. We first verify that our proposed algorithm can obtain a consistent estimator of the PATE. More importantly, we find that estimators based on estimated separating sets often have standard errors similar to those based on the true sampling set. Although our approach introduces an additional estimation step of finding separating sets in order to relax data requirements for the target population, it does not suffer from substantial efficiency loss. Both results hold with and without user constraints on which variables can be measured in the target population. \subsection{Simulation Design} In this subsection, we articulate our simulation design step by step. \paragraph{Pre-treatment Covariates and Potential Outcome Model.} To consider different types of separating sets, we assume the causal directed acyclic graph (DAG) in Figure \ref{Dag_sep}, which encodes causal relationships among the outcome, the sampling indicator, and pre-treatment covariates. In this DAG, there are three conceptually distinct sets that we consider -- (1) a sampling set, $X4$ and $X5$, depicted in green, (2) a heterogeneity set, $X2$ and $X3$, depicted in orange, and (3) the minimum separating set, $X1$, depicted in purple. Three root nodes $X1$, $X6$, $X7$ are normally distributed and the other pre-treatment covariates are linear functions of their parents in the DAG. In particular, pre-treatment covariates are generated as follows.
\begin{align*} X1 &\sim \cN(0, 1) \\ X2 & = 0.7 \times X1 + \sqrt{1 - 0.7^2} \times \epsilon_2 \\ X3 & = 0.7 \times X1 + \sqrt{1 - 0.7^2} \times \epsilon_3 \\ X4 & = 0.7 \times X1 + \sqrt{1 - 0.7^2} \times \epsilon_4 \\ X5 & = 0.3 \times X9 + \sqrt{1 - 0.3^2} \times \epsilon_5 \\ X6 & \sim \cN(0, 1) \\ X7 & \sim \cN(0, 1) \\ X8 & = -0.7 \times X2 + \sqrt{1 - 0.7^2} \times \epsilon_8 \\ X9 & = 0.6 \times X1 + \sqrt{1 - 0.6^2} \times \epsilon_9 \end{align*} where $\epsilon_2, \epsilon_3, \epsilon_4, \epsilon_5, \epsilon_8, \epsilon_9$ are drawn independently and identically from a standard normal distribution, $\cN(0, 1).$ This results in the following correlation structure for variables $X1 - X9$. \begin{small} \begin{align*} cor(\textbf{X}) &= \begin{pmatrix} 1.00 & -0.70 & 0.70 & 0.70 & -0.21 & 0.00 & 0.00 & 0.50 & -0.70 \\ -0.70 & 1.00 & -0.50 & -0.50 & 0.15 & 0.00 & 0.00 & -0.70 & 0.50 \\ 0.70 & -0.50 & 1.00 & 0.50 & -0.15 & 0.00 & 0.00 & 0.33 & -0.50 \\ 0.70 & -0.50 & 0.50 & 1.00 & -0.15 & 0.00 & 0.00 & 0.33 & -0.50 \\ -0.21 & 0.15 & -0.15 & -0.15 & 1.00 & 0.00 & 0.00 & -0.10 & 0.30 \\ 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 1.00 & 0.00 & 0.00 & 0.00 \\ 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 1.00 & 0.00 & 0.00 \\ 0.50 & -0.70 & 0.33 & 0.33 & -0.10 & 0.00 & 0.00 & 1.00 & -0.33 \\ -0.70 & 0.50 & -0.50 & -0.50 & 0.30 & 0.00 & 0.00 & -0.33 & 1.00 \end{pmatrix} \end{align*} \end{small} We then draw the potential outcomes as follows. \begin{equation*} Y_i(T_i) = 5 T_i + 10 \times X_{3i} \times T_i -10 \times X_{2i} \times T_i + X_{6i} - 3 \times X_{8i} + \epsilon_i \end{equation*} where $\epsilon_i \sim \cN(0, 1)$. Thus, the true PATE is set to $5$. \begin{figure} \caption{Causal DAG underlying the simulation study. Note: We consider three conceptually distinct sets (1) a sampling set, $X4$ and $X5$ (green), (2) a heterogeneity set, $X2$ and $X3$ (orange) and (3) the minimum separating set, $X1$ (purple).
Three root nodes $X1$, $X6$, $X7$ are normally distributed and other pre-treatment covariates are linear functions of their parents. } \label{Dag_sep} \end{figure} \paragraph{Sampling Mechanism and Treatment Assignment.} We randomly sample a set of $n$ units for a randomized experiment. The sampling mechanism is a logit model based on the sampling set, $X4$ and $X5$. The treatment assignment mechanism is defined only for the experimental sample ($S_i=1$). After being sampled into the experiment, every unit has the same probability of receiving the treatment, $\Pr(T_i = 1 \mid S_i=1) = 0.5$. For the sake of simplicity, we omit the arrow from the sampling indicator $S$ to the treatment $T$ in Figure~\ref{Dag_sep}. In particular, we draw the sampling indicator $S_i$ from a Bernoulli distribution whose success probability is constructed as follows; the second step scales the linear predictor so that the probability is bounded away from zero and one. \begin{align*} & S'_{i,lp} = -20 \times X_{4i} + 20 \times X_{5i} \\ & S_{i,lp} = 0.25(S'_{i,lp} - \overline{S'_{lp}})/sd(S'_{lp}) \\ & \Pr(S_i = 1) = \frac{1}{1 + e^{-S_{i,lp}}} \end{align*} \paragraph{Simulation Procedure.} We conduct 5000 simulations for each of six experimental sample sizes, $n = \{ 100, 200, 500, 1000, 2000, 3000 \}$. Within each simulation, we first randomly sample $n$ units for the experiment based on the sampling mechanism and randomly assign units to treatment according to the specified treatment assignment mechanism. We also randomly sample a target population of size $m = 10000$. We then estimate both an exact and a marginal separating set using the experimental data. An advantage of our method is that researchers can specify variables that cannot be measured in the target population. To illustrate this benefit, we also estimate a marginal separating set with the constraint that variable $X1$ is unmeasurable in the target population, thus making the minimum separating set unobservable in the target population. We compare these sets to an oracle sampling set, oracle heterogeneity set, and oracle minimum separating set.
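One simulation draw can be sketched compactly. The sketch below is an illustration of the weighting logic rather than the exact simulation code: for brevity it collapses the nine-variable DGP onto a single separating variable, and it uses the known sampling probabilities in place of an estimated logistic model.

```python
# Condensed sketch of one simulation draw. The covariate structure is
# collapsed onto a single separating variable x1 (through which both the
# sampling and heterogeneity sets of the full DAG are connected); the
# actual simulations use the nine-variable DGP described above.
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

x1 = rng.normal(size=N)
# Sampling depends on x1, and the treatment effect also depends on x1,
# so x1 is a valid (minimum) separating set in this toy DGP.
p_sample = 1.0 / (1.0 + np.exp(-(-1.5 + 1.0 * x1)))
S = rng.binomial(1, p_sample)
T = np.where(S == 1, rng.binomial(1, 0.5, size=N), 0)
Y = 5.0 * T + 3.0 * x1 * T + x1 + rng.normal(size=N)

# Target estimand: the PATE in the non-sampled population.
true_pate = np.mean(5.0 + 3.0 * x1[S == 0])

# IPW weights proportional to the sampling odds Pr(S=0|W)/Pr(S=1|W),
# computed here from the known sampling model for clarity.
pi = (1.0 - p_sample) / (p_sample * np.mean(S == 0))
w1 = S * pi * T
w0 = S * pi * (1 - T)
tau_hat = np.sum(w1 * Y) / np.sum(w1) - np.sum(w0 * Y) / np.sum(w0)

print(round(tau_hat - true_pate, 2))  # close to zero: weighting removes the sampling bias
```

The constant treatment-assignment factor $1/\Pr(T_i=1 \mid S_i=1)=2$ cancels in the ratio form of the estimator and is therefore omitted.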
For each estimated and oracle set, we compute the PATE using the inverse probability weighting estimator described in Section~\ref{subsec:est_pate}. In the supplementary material, we repeat these simulations with a calibration estimator discussed in \citet{Hartman:2015hq} and a linear regression projection estimator. \begin{figure} \caption{Simulation Results. Note: The left figure shows bias for the PATE and the right figure presents standard error estimates. As expected, bias is close to zero for all estimators. More importantly, estimators based on the estimated separating sets (red) and estimated separating sets with user constraints (pink) have similar standard errors to the oracle sampling set (green) and the oracle minimum separating set (purple).} \label{fig:sim_res_1} \end{figure} \subsection{Results} We present results in Figure~\ref{fig:sim_res_1}. Not shown in the graph are the results for the naive difference-in-means estimator, which has substantial bias ($-1.0$). As expected, we see that the bias goes to zero for the oracle and estimated separating sets, and that the estimators are consistent for the PATE. More importantly, we see that estimators based on the selected marginal separating sets (red), exact separating sets (dark blue), and the marginal separating set with user constraints (pink) have similar standard errors to the oracle sampling set (green) and the oracle minimum separating set (purple). An estimator based on the oracle heterogeneity set (orange) has smaller standard errors than the other estimators, partly because it contains variables that are direct predictors of the outcome. \begin{figure} \caption{Types of Estimated Separating Sets. Note: We present the frequency of estimated separating sets by conceptual type.
While the algorithm picks an inappropriate set when the sample size is small, as $n$ increases, the most likely set is the minimal separating set.} \label{fig:sim_res_2_type} \end{figure} Figure~\ref{fig:sim_res_2_type} shows the breakdown of the types of estimated separating sets. We group sets that are conceptually similar and present the frequency with which each set is chosen. For example, if our algorithm selects the variables in the sampling set ($X4$ and $X5$) as well as an additional variable, we group these as ``similar to'' the sampling set. As can be seen, in these simulations, as $n$ gets large, the minimal separating set (purple) is selected over 75\% of the time. A small sample size can lead to misestimation of the MRF, and therefore to the selection of inappropriate sets (gray) that do not remove bias; however, the rate at which inappropriate sets are selected drops off rapidly with sample size. In the supplementary material, we show that, when incorporating user constraints that make adjustment by the minimum separating set infeasible, the algorithm selects sets similar to the sampling and heterogeneity sets with higher frequency. \subsection{Additional Simulation Results} \label{app:simulation_res} In the previous subsection, we discussed the breakdown of the different types of estimated separating sets in the simulated data generating process. Here we show the breakdown of the types of estimated separating sets when incorporating user constraints in Figure~\ref{fig:sim_res_2_type_unobs}. In this case, where $X1$, the alternative separating set, cannot be measured in the target population, we see that the algorithm selects the sampling and heterogeneity sets with higher frequency. \begin{figure} \caption{Type of Estimated Marginal Separating Set with User Constraints. Note: We present the frequency of estimated separating sets by conceptual type.
With user constraints, the algorithm selects each of the other types of separating sets more frequently.} \label{fig:sim_res_2_type_unobs} \end{figure} Figure \ref{fig:sim_res_estimated_by_type} presents the bias and standard error results by selected estimated separating set type. We refer to sets that are ``similar to'' different conceptual sets in order to group sets that control for a specific type of separating set, but which may include extra variables. For example, if the estimated set includes $X4$, $X5$, and $X8$, we say this is similar to a sampling set ($X4$ and $X5$). As our theorems tell us, it does not matter which type of separating set the algorithm estimates in the experimental data: all of them produce unbiased estimates so long as the set is an appropriate separating set (see Figure~\ref{fig:sim_res_estimated_by_type}). When an inappropriate set is chosen, which is common in the $n = 100$ case but rare as $n$ increases, bias is not removed. As we expect, when estimated separating sets are similar to a heterogeneity set, standard errors are the smallest. \begin{figure} \caption{Simulation Results for Estimated Separating Set by Type. Note: The left figure shows bias for the PATE and the right figure presents standard error estimates. As expected, bias is close to zero for all estimators. Estimated sets are categorized by type: similar to the oracle sampling set (green), the oracle minimum separating set (purple), or the oracle heterogeneity set (orange).} \label{fig:sim_res_estimated_by_type} \end{figure} Finally, we present the simulation results for two alternative estimators in Figure~\ref{fig:sim_res_1_alt}: a calibration estimator and a linear regression projection. The calibration estimator matches population means for the estimated separating set using a maximum entropy (raking) algorithm \citep{Hartman:2015hq}.
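To make the calibration idea concrete, here is a minimal sketch of mean-calibration via maximum-entropy (raking-style) weights. This is an illustration only, not the estimator used in this paper or in \citet{Hartman:2015hq}: the simulated data, the targets, and the plain gradient-descent solver for the entropy-balancing dual are choices made for this sketch.

```python
import numpy as np

def entropy_balance(X, target_means, n_iter=5000, lr=0.05):
    """Find weights w_i proportional to exp(X_i @ lam) whose weighted
    covariate means match target_means (maximum-entropy calibration).
    Solved by gradient descent on the convex dual; lam holds one tilting
    parameter per calibrated covariate."""
    lam = np.zeros(X.shape[1])
    for _ in range(n_iter):
        w = np.exp(X @ lam)
        w /= w.sum()
        lam -= lr * (w @ X - target_means)  # moment imbalance = dual gradient
    w = np.exp(X @ lam)
    return w / w.sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))        # experimental-sample covariates
target = np.array([0.3, -0.2])       # population means to match
w = entropy_balance(X, target)
# the weighted sample means now sit on the population targets
print(np.round(w @ X, 3))
```

In the pipeline above, the columns of `X` would be the covariates in the estimated separating set, and the weights `w` would feed a weighted estimator of the PATE.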
The linear projection estimator estimates a fully interacted linear regression model using the estimated separating set, and projects the model onto the target population. \begin{figure} \caption{Simulation Results for Alternative Estimators. Note: The left figure shows bias for the PATE and the right figure presents standard error estimates. As expected, bias is close to zero for all estimators. More importantly, estimators based on the estimated separating sets (red) and estimated separating set with user constraints (pink) have similar standard errors to the oracle sampling set (green) and the oracle minimum separating set (purple). An estimator based on the heterogeneity set (orange) has significantly smaller standard errors than other estimators, but this estimator might be unavailable in practice.} \label{fig:sim_res_1_alt} \end{figure} \section{R Function to Estimate Separating Sets} {\footnotesize \begin{verbatim} # ################################ # Estimating the separating set # ################################ # X.data: all pre-treatment covariates in the experimental data # X.type: types of each covariate. "g" for continuous variables, and "c" for categorical variables. # X.level: the number of levels in each covariate. For continuous variables, set it to 1. # Y: outcome variable in the experimental data # Treat: treatment variable in the experimental data # XS: names of the sampling set # XH: names of the heterogeneity set # XU: names of variables unmeasurable in the target population # type: when "Y", we estimate the marginal separating set. When "XH", we estimate the exact separating set.
# print_graph: whether we print the estimated Markov Random Fields library(igraph); library(qgraph); library(lpSolve); library(mgm); library(Hmisc) Separating <- function(X.data, X.type, X.level, Y, Treat, XS, XH = NULL, XU=NULL, type = "Y", print_graph = FALSE) { ## Setup n.var <- ncol(X.data) if(type == "Y"){ if(missing(X.type) == TRUE){ type.sim <- rep("g", n.var + 2) level.sim <- rep(1, n.var + 2) }else{ type.sim <- c(X.type, rep("g", 2)) level.sim <- c(X.level, rep(1, 2)) } X.data.g <- cbind(X.data, Y, Treat) name.label <- c(colnames(X.data), "Y") }else if(type == "XH"){ if(missing(X.type) == TRUE){ type.sim <- rep("g", n.var + 1) level.sim <- rep(1, n.var + 1) }else{ type.sim <- c(X.type, rep("g", 1)) level.sim <- c(X.level, rep(1, 1)) } X.data.g <- cbind(X.data, Treat) name.label <- colnames(X.data) } ## ########################################### ## Step 1: Estimate the Markov Random Graph ## ########################################### fit.sim <- mgm( data = X.data.g, type = type.sim, level = level.sim, threshold = "LW", k = 2, verbatim = TRUE, signInfo = FALSE, lambdaSel = "EBIC" ) ## Remove T from the Graph treat_ind <- which(colnames(X.data.g) == "Treat") Ad <- as.matrix(fit.sim$pairwise$wadj > 0) Ad <- Ad[-treat_ind,-treat_ind] Ad.w <- fit.sim$pairwise$wadj Ad.w <- Ad.w[-treat_ind, -treat_ind] edge.col <- fit.sim$pairwise$edgecolor edge.col <- edge.col[-treat_ind,-treat_ind] graph.u <- graph_from_adjacency_matrix(Ad) ## Show the graph if(print_graph) qgraph( Ad.w, edge.color = edge.col, layout = 'spring', labels = name.label ) ## ################################################################# ## Step 2: Estimate the Separating Set based on an estimated MRF ## ################################################################# if (type == "Y") { base <- rep(0, (n.var + 1)) XS.ind <- which(is.element(colnames(X.data), XS)) path.cons <- matrix(NA, nrow = 0, ncol = (n.var + 1)) ## Enumerate all path for (w in 1:length(XS)) { ind.path.mat <- do.call("rbind", 
lapply(all_simple_paths(graph.u, (n.var + 1), XS.ind[w]), FUN=function(x) ind.path(x, base))) path.cons <- rbind(path.cons, ind.path.mat) } }else if (type == "XH") { base <- rep(0, n.var) XJ <- intersect(XS, XH) all.pair <- expand.grid(XH, XS) path.cons <- matrix(NA, nrow = 0, ncol = n.var) ## Enumerate all path for (w in 1:nrow(all.pair)) { ind_1 <- which(colnames(X.data.g) == all.pair[w, 1]) ind_2 <- which(colnames(X.data.g) == all.pair[w, 2]) ind.path.mat <- do.call("rbind", lapply( all_simple_paths(graph.u, ind_1, ind_2), FUN=function(x) ind.path(x, base))) path.cons <- rbind(path.cons, ind.path.mat) } } if(dim(path.cons)[1] == 0) { solution <- NULL status <- 0 }else{ ## Removing Y and XU from the separating set if (length(XU) == 0) { if(type == "Y"){ path.cons2 <- rbind(path.cons, c(rep(0, n.var), 1)) f.dir <- c(rep(">=", nrow(path.cons)), "=") f.rhs <- c(rep(1, nrow(path.cons)), 0) }else if(type == "XH"){ path.cons2 <- path.cons f.dir <- rep(">=", nrow(path.cons)) f.rhs <- rep(1, nrow(path.cons)) } } else{ XU.ind <- which(is.element(colnames(X.data), XU)) path.cons2.u <- matrix(0, nrow = length(XU.ind), ncol = n.var) for (i in 1:nrow(path.cons2.u)) { path.cons2.u[i, XU.ind[i]] <- 1 } if(type == "Y"){ path.cons2.u2 <- cbind(path.cons2.u, 0) path.cons2 <- rbind(path.cons, c(rep(0, n.var), 1), path.cons2.u2) f.dir <- c(rep(">=", nrow(path.cons)), rep("=", (nrow(path.cons2.u2) + 1))) f.rhs <- c(rep(1, nrow(path.cons)), rep(0, (nrow(path.cons2.u2) + 1))) }else if(type == "XH"){ path.cons2.u2 <- path.cons2.u path.cons2 <- rbind(path.cons, path.cons2.u2) f.dir <- c(rep(">=", nrow(path.cons)), rep("=", nrow(path.cons2.u2))) f.rhs <- c(rep(1, nrow(path.cons)), rep(0, nrow(path.cons2.u2))) } } if(type == "Y"){f.obj <- c(rep(1, n.var), 0)} else if(type == "XH"){f.obj <- rep(1, n.var)} f.con <- path.cons2 num.solutions <- max.solutions.calculate <- 1 sp.out <- lp("min", f.obj, f.con, f.dir, f.rhs, all.bin = TRUE, num.bin.solns = max.solutions.calculate) if(sp.out$status 
== 0) { if(max.solutions.calculate > 1) { solution <- sp.out$solution[1:(length(f.obj)*max.solutions.calculate)] # chunk the stacked lpSolve output into one vector per candidate solution solution <- split(solution, sort(rep(1:max.solutions.calculate, length(f.obj)))) if(num.solutions == 1) { solution <- sample(solution, num.solutions) } if(length(solution) == 1) { solution <- as.vector(unlist(solution)) } } else { solution <- sp.out$solution } } status <- sp.out$status } ## Final Adjustment if (status == 0) { if(is.null(solution)==TRUE) { ## the empty set is enough for generalizability solution.name <- NULL }else{ if(type == "Y"){ solution.ind <- which(solution[-length(solution)] == 1) XJ <- NULL }else if (type == "XH") { solution.ind <- which(solution == 1) } solution.name <- colnames(X.data)[solution.ind] if(type == "XH" & length(XJ)!=0){ solution.name <- union(solution.name, XJ)} } }else if (status==2){ cat("\nNo Feasible Solution.\n") solution.name <- "No Feasible Solution." } if(print_graph==TRUE){cat("\n"); cat(solution.name)} return(solution.name) } # Auxiliary function ind.path <- function(x, base) { base[x] <- 1 return(base) } \end{verbatim} } \spacingset{1.4} {\small \pdfbookmark[1]{Supplementary Material References}{Supplementary Material References} \printbibliography[title = Supplementary Material References]} \end{refsection} \end{document}
Advances in computational Lyapunov analysis using sum-of-squares programming
James Anderson and Antonis Papachristodoulou
Department of Engineering Science, University of Oxford, Parks Road, Oxford, OX1 3PJ, United Kingdom
Received June 2014. Revised November 2014. Published August 2015.
The stability of an equilibrium point of a nonlinear dynamical system is typically determined using Lyapunov theory. This requires the construction of an energy-like function, termed a Lyapunov function, which satisfies certain positivity conditions. Unlike linear dynamical systems, there is no algorithmic method for constructing Lyapunov functions for general nonlinear systems. However, if the systems of interest evolve according to polynomial vector fields and the Lyapunov functions are constrained to be sum-of-squares polynomials, then stability verification can be cast as a semidefinite (convex) optimization programme. In this paper we describe recent advances in sum-of-squares programming that facilitate advanced stability analysis and control design.
Keywords: time-delay systems, sum-of-squares, Lyapunov functions, semidefinite programming, hybrid systems.
Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C3.
Citation: James Anderson, Antonis Papachristodoulou. Advances in computational Lyapunov analysis using sum-of-squares programming. Discrete & Continuous Dynamical Systems - B, 2015, 20 (8) : 2361-2381. doi: 10.3934/dcdsb.2015.20.2361
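To illustrate the Lyapunov conditions in the abstract without any semidefinite programming machinery, the sketch below checks a hand-picked certificate for a toy polynomial system. This is not the sum-of-squares approach the paper describes (that requires an SDP solver such as SeDuMi or SDPT3); the example system and the candidate function V are choices made for this sketch, with the derivative verified both by hand and numerically.

```python
import random

# Toy polynomial system  x' = -x**3 - y,  y' = x - y**3,
# candidate Lyapunov function V(x, y) = x**2 + y**2.
# By hand: Vdot = 2x*x' + 2y*y' = -2x**4 - 2y**4 <= 0, with equality
# only at the origin, so V certifies asymptotic stability.
def f(x, y):
    return (-x**3 - y, x - y**3)

def vdot(x, y):
    fx, fy = f(x, y)
    return 2 * x * fx + 2 * y * fy

random.seed(1)
pts = [(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(1000)]
assert all(abs(vdot(x, y) - (-2 * x**4 - 2 * y**4)) < 1e-9 for x, y in pts)
assert all(vdot(x, y) <= 0 for x, y in pts)
print("Vdot matches -2x^4 - 2y^4 and is nonpositive at all sampled points")
```

For systems where no such certificate is obvious, the SOS programme instead searches over polynomial V subject to sum-of-squares constraints on V and -Vdot, which is exactly the semidefinite optimization described above.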
What does "formal" mean? I know the definition of formal power series, power series and polynomials. But what does the adjective "formal" mean? In the Google English dictionary, does it mean "9. Of or relating to linguistic or logical form as opposed to function or meaning", or maybe another one in the link? Or does "formal" have some mathematical meaning other than the usual dictionary meaning? soft-question terminology Gobi I see "formal" used in at least two senses in mathematics: 1. Rigorous, i.e. "here is a formal proof" as opposed to "here is an informal demonstration." 2. "Formal manipulation," that is, manipulating expressions according to certain rules without caring about convergence, etc. Confusingly they can mean opposite things in certain contexts, although "formal manipulations" can be made rigorous in many cases. Qiaochu Yuan $\begingroup$ Isn't there a notion of "formal" in algebraic geometry? $\endgroup$ – Damien Jul 27 '11 at 1:44 $\begingroup$ You mean en.wikipedia.org/wiki/Formal_scheme ? Well, "formal" here seems to mean something like "including infinitesimal information." Morally the etymology comes from making formal manipulations with infinitesimals rigorous. $\endgroup$ – Qiaochu Yuan Jul 27 '11 at 1:54 $\begingroup$ I think the etymology of the word shows how the senses are related. 'Formal' comes from 'form'; the association with rigour is via Hilbert's formalist school. By 'rigour' what is really meant is formal manipulation of logical propositions in accordance with the rules of inference, rather than following 'intuition'. $\endgroup$ – Zhen Lin Jul 27 '11 at 2:00 $\begingroup$ It should be noted that the first sense includes a really vast spectrum of degrees of formality. The way I see it, it includes usual textbook proofs (e.g. Folland's proof of the Radon-Nikodým theorem is 'formal'), and 'logically' formal proofs, as in en.wikipedia.org/wiki/Formal_proof , which also serve as input for automated proof checking.
$\endgroup$ – Bruno Stonek Jul 27 '11 at 3:17 $\begingroup$ I've always thought of the latter meaning as the meaning "of or related to form" — a formal power series is something that has the form of a power series, formal manipulations are those that work on the form directly (without caring about what the expression may "mean" in the analysis sense), etc. $\endgroup$ – ShreevatsaR Jul 27 '11 at 4:42 As an example, formal power series are analyzed without regard to convergence. Really, what is of interest is the sequence of coefficients. ncmathsadist And don't forget the notion of formal space arising in rational homotopy theory. Cheerful Parsnip When I was learning about logic as an undergraduate, I recall being told that the word "formal", with respect to "formal languages", meant that the "form" of expressions written in that language had primacy. In other words, rules for manipulating expressions in a formal language could be given in terms of the form of the expression only, without needing to know to what values the variables in the expression were bound. So a formal language permits us to use relatively simple pattern-matching algorithms to decide which transformations of an expression are valid at any given time. In this context, formality is linked to the simplicity of the rules that define the set of valid transformations of an expression. William Payne The word "formal" in "formal power series" indicates that you are considering all objects that are algebraically "like a power series". This is opposed to its use in analysis, where you spend a lot of time figuring out for which $x$ the series converges. Basic analysis goes like this: "$\displaystyle\sum_{n=1}^{\infty} x^n$ is a series which converges for $|x|<1$ and therefore the function $f(x) = \displaystyle\sum_{n=1}^{\infty} x^n$ has the domain $|x| < 1$". You then proceed to use the function and talk about derivatives and integrals on the restricted domain.
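The contrast can be made concrete in code: in the formal viewpoint a power series is just its coefficient sequence, and operations such as multiplication are defined coefficient-wise, with no convergence question ever asked. A small sketch (an illustration added here, not from the answers), truncating everything to the first N coefficients:

```python
import math

def cauchy_product(a, b):
    """Coefficients of the product of two formal power series, given as
    coefficient lists; c_k = sum_i a_i * b_{k-i}. Convergence never enters."""
    n = min(len(a), len(b))
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]

N = 8
g = [math.factorial(k) for k in range(N)]   # sum k! x^k, radius of convergence 0
geometric = [1] * N                         # sum x^k
one_minus_x = [1, -1] + [0] * (N - 2)       # 1 - x

print(cauchy_product(geometric, one_minus_x))  # [1, 0, 0, 0, 0, 0, 0, 0]
print(cauchy_product(g, one_minus_x))          # [1, 0, 1, 4, 18, 96, 600, 4320]
```

So even a series that converges only at $x=0$ multiplies perfectly well as a formal object: the "algebraic entity" point of view.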
If the series has very few points of convergence, such as $\displaystyle\sum_{n=1}^{\infty} n!x^n$ which converges only for $x=0$, then the function $g(x) = \displaystyle\sum_{n=1}^{\infty} n!x^n$ can only have domain $x=0$, and its value is $g(0)=0$. Pretty boring function when it comes to derivatives and integrals! When you study formal power series, you ignore the consideration of convergence and use the series as it is presented, as an algebraic entity, so even though $g$ only converges at $x=0$, you ignore that and focus on other properties of the series. Another common use of the word "formal" is with a "formal system", which is basically a big rulebook for an artificial language comprising an alphabet (a list of symbols), a grammar (a way of arranging those symbols), and axioms (initial lists of symbols to start from). The word "formal" here is needed because it is very prim and proper and only allows manipulations according to the grammar and axioms; you can't combine symbols in any way like you can in English (for example this ee cummings poem is an "acceptable" combination of the symbols of English, but is also seemingly "wrong" according to our standard grammar). tomcuchta $\begingroup$ The first series certainly converges for $|x| < 1$; why wouldn't you consider it a power series with a finite radius of convergence? (I would have chosen an example with zero radius of convergence, such as $\sum n! x^n$.) $\endgroup$ – Qiaochu Yuan Jul 27 '11 at 1:39 $\begingroup$ Yes, @Qiaochu's series is an excellent example; even with the zero radius of convergence, it can be manipulated formally to produce... interesting and useful identities. $\endgroup$ – J. M. is a poor mathematician Jul 27 '11 at 2:23 $\begingroup$ Thank you for the suggestion! $\endgroup$ – tomcuchta Jul 27 '11 at 4:39 $\begingroup$ "not considered as a power series in analysis since it does not converge for any $x\in \mathbf R$" - actually, it does... if $x=0$ that is.
$\endgroup$ – J. M. is a poor mathematician Jul 27 '11 at 4:45 Formal proof systems One context in which the word "formal" comes up is that of formal proof systems. A formal proof system is a way to write theorems and their proofs in the computer, such that, once this has been done, the proofs can be automatically verified by a computer. In such systems, theorems are just sequences of characters (strings), and starting from your axiom strings, you use a few well defined rules to transform those strings mechanically and obtain new true strings (thus making a proof). The huge advantage of such systems is that since each proof step is so simple and mechanical, computers can verify proofs, which can be an extremely difficult and error prone task for humans to do! There are also cases where the proof itself requires tedious verification of thousands of cases and would be too time consuming for any human. One notable example of this is the four color theorem. The downside of such systems is that they are much harder to write your proofs in, because in order to communicate with the computer you have to write everything in a very precise way. I do believe, however, that if such systems are done well enough, with good tooling and standard libraries, writing a proof should be no harder than writing a computer program, and the benefits would largely outweigh the greater difficulty of writing the proof. For a concrete, well presented example, have a look at Metamath's awesome proof that 2 + 2 = 4: http://us.metamath.org/mpeuni/mmset.html#trivia Metamath is an older proof system, and there are likely better choices today, as I have mentioned at "What is the current state of formalized mathematics?", but their web presentation is very nice! Such proofs require, of course, defining everything in terms of things that the proof system understands. In the case of Metamath, a Zermelo–Fraenkel-like set theory is used, to my understanding.
A TL;DR version of the classic set theory approach would be:

- we can use sets, forall, exists and modus ponens
- the naturals can be defined in terms of sets like this: https://en.wikipedia.org/wiki/Set-theoretic_definition_of_natural_numbers
- the rationals can be defined easily as an ordered pair of integers. Ordered pairs can be defined in terms of sets easily with Kuratowski's definition, see also: Please Explain Kuratowski Definition of Ordered Pairs
- the reals can be defined in terms of sets with Dedekind cuts, see also: True Definition of the Real Numbers
- functions are just a set of pairs: Is $f(x) = (x + 1)/(x +2)$ a function?
- once we have reals and functions, note how the epsilon-delta definition of limits only uses concepts that we have previously defined: functions, reals, forall and exists!

Once you see this, it is easy to believe that, at least, we can formalize real analysis with this simple system. The fact that mathematics can be fully formalized is surprising, and was arguably only fully realized in the early 20th century, in particular through the seminal Principia Mathematica, and materialized with the invention of computers. This is in my opinion the property of mathematics that best defines it, and that which clearly separates its preciseness from other arts such as poetry. Once we have maths formally modelled, one of the coolest results is Gödel's incompleteness theorems, which state that for any reasonable proof system, there are necessarily theorems that can be proven neither true nor false starting from any given set of axioms: those theorems are independent of those axioms. Therefore, there are three possible outcomes for any hypothesis: true, false or independent! Some famous theorems have even been proven to be independent of some famous axioms. One of the most notable is that the Continuum Hypothesis is independent of ZFC!
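The set-theoretic constructions in the list above can be toy-modelled in a few lines. The sketch below is only an illustration (Python frozensets stand in for ZF sets, and the helper names are invented for this sketch); it is not how Metamath actually encodes anything:

```python
def succ(n):
    """Von Neumann successor: n + 1 = n ∪ {n}; 0 is the empty set."""
    return n | {n}

zero = frozenset()

def nat(k):
    """Build the von Neumann natural k by iterating succ from zero."""
    n = zero
    for _ in range(k):
        n = succ(n)
    return n

def pred(n):
    return max(n, key=len)  # the largest element of n is n - 1

def add(m, n):
    """Recursive addition: m + 0 = m, m + succ(k) = succ(m + k)."""
    return m if n == zero else succ(add(m, pred(n)))

def pair(a, b):
    """Kuratowski ordered pair: (a, b) := {{a}, {a, b}}."""
    return frozenset({frozenset({a}), frozenset({a, b})})

print(add(nat(2), nat(2)) == nat(4))                 # True: a set-theoretic 2 + 2 = 4
print(pair(nat(1), nat(2)) == pair(nat(2), nat(1)))  # False: order is encoded
```

Note also that `len(nat(k)) == k`: each von Neumann natural is a set with exactly that many elements, which is the cardinal reading of the same definition.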
Such independence proofs rely on modelling the proof system inside another proof system, and forcing is one of the main techniques used for this. — Ciro Santilli 新疆改造中心法轮功六四事件
Applied Discrete Structures
Al Doerr, Ken Levasseur
Authored in PreTeXt

Section 5.1 Basic Definitions and Operations

Subsection 5.1.1 Matrix Order and Equality

Definition 5.1.1. matrix. A matrix is a rectangular array of elements of the form \begin{equation*} A = \left( \begin{array}{ccccc} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn} \\ \end{array} \right) \end{equation*} A convenient way of describing a matrix in general is to designate each entry via its position in the array. That is, the entry \(a_{34}\) is the entry in the third row and fourth column of the matrix \(A\text{.}\) Depending on the situation, we will decide in advance to which set the entries in a matrix will belong.
For example, we might assume that each entry \(a_{ij}\) (\(1 \leq i\leq m\text{,}\) \(1 \leq j \leq n\)) is a real number. In that case we would use \(M_{m\times n}(\mathbb{R})\) to stand for the set of all \(m\) by \(n\) matrices whose entries are real numbers. If we decide that the entries in a matrix must come from a set \(S\text{,}\) we use \(M_{m\times n}(S)\) to denote all such matrices. Definition 5.1.2. The Order of a Matrix. A matrix \(A\) that has \(m\) rows and \(n\) columns is called an \(m\times n\) (read "\(m\) by \(n\)") matrix, and is said to have order \(m \times n\text{.}\) Since it is rather cumbersome to write out the large rectangular array above each time we wish to discuss the generalized form of a matrix, it is common practice to replace the above by \(A = \left(a_{ij}\right)\text{.}\) In general, matrices are often given names that are capital letters and the corresponding lower case letter is used for individual entries. For example the entry in the third row, second column of a matrix called \(C\) would be \(c_{32}\text{.}\) Example 5.1.3. Orders of Some Matrices. \(A =\left( \begin{array}{cc} 2 & 3 \\ 0 & -5 \\ \end{array} \right)\) , \(B =\left( \begin{array}{c} 0 \\ \frac{1}{2} \\ 15 \\ \end{array} \right)\) , and \(D =\left( \begin{array}{ccc} 1 & 2 & 5 \\ 6 & -2 & 3 \\ 4 & 2 & 8 \\ \end{array} \right)\) are \(2\times 2\text{,}\) \(3\times 1\text{,}\) and \(3\times 3\) matrices, respectively. Since we now understand what a matrix looks like, we are in a position to investigate the operations of matrix algebra for which users have found the most applications. First we ask ourselves: Is the matrix \(A =\left( \begin{array}{cc} 1 & 2 \\ 3 & 4 \\ \end{array} \right)\) equal to the matrix \(B =\left( \begin{array}{cc} 1 & 2 \\ 3 & 5 \\ \end{array} \right)\text{?}\) No, they are not because the corresponding entries in the second row, second column of the two matrices are not equal. 
Next, is \(A =\left( \begin{array}{ccc} 1 & 2 & 3 \\ 4 & 5 & 6 \\ \end{array} \right)\) equal to \(B=\left( \begin{array}{cc} 1 & 2 \\ 4 & 5 \\ \end{array} \right)\text{?}\) No, although the corresponding entries in the first two columns are identical, \(B\) doesn't have a third column to compare to that of \(A\text{.}\) We formalize these observations in the following definition.

Definition 5.1.4. Equality of Matrices. A matrix \(A\) is said to be equal to matrix \(B\) (written \(A = B\)) if and only if: \(A\) and \(B\) have the same order, and all corresponding entries are equal: that is, \(a_{ij}\) = \(b_{ij}\) for all appropriate \(i\) and \(j\text{.}\)

Subsection 5.1.2 Matrix Addition and Scalar Multiplication

The first two operations we introduce are very natural and are not likely to cause much confusion. The first is matrix addition. It seems natural that if \(A =\left( \begin{array}{cc} 1 & 0 \\ 2 & -1 \\ \end{array} \right)\) and \(B =\left( \begin{array}{cc} 3 & 4 \\ -5 & 2 \\ \end{array} \right)\) , then \begin{equation*} A + B =\left( \begin{array}{cc} 1+3 & 0+4 \\ 2-5 & -1+2 \\ \end{array} \right)=\left( \begin{array}{cc} 4 & 4 \\ -3 & 1 \\ \end{array} \right). \end{equation*} However, if \(A=\left( \begin{array}{ccc} 1 & 2 & 3 \\ 0 & 1 & 2 \\ \end{array} \right)\) and \(B = \left( \begin{array}{cc} 3 & 0 \\ 2 & 8 \\ \end{array} \right)\text{,}\) is there a natural way to add them to give us \(A+B\text{?}\) No, the orders of the two matrices must be identical.

Definition 5.1.5. Matrix Addition. Let \(A\) and \(B\) be \(m\times n\) matrices. Then \(A+B\) is an \(m\times n\) matrix where \((A + B)_{ij} = a_{ij} + b_{ij}\) (read "The \(i\)th \(j\)th entry of the matrix \(A + B\) is obtained by adding the \(i\)th \(j\)th entry of \(A\) to the \(i\)th \(j\)th entry of \(B\)"). If the orders of \(A\) and \(B\) are not identical, \(A+B\) is not defined. In short, \(A + B\) is defined if and only if \(A\) and \(B\) are of the same order.
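The two definitions just given translate almost word for word into code. Here is a small Python sketch (ours, not from the book), with a matrix represented as a list of rows:

```python
def order(A):
    """The order (m, n) of a matrix represented as a list of rows."""
    return (len(A), len(A[0]))

def mat_equal(A, B):
    """Definition 5.1.4: same order, and all corresponding entries equal."""
    return order(A) == order(B) and all(
        A[i][j] == B[i][j]
        for i in range(len(A)) for j in range(len(A[0])))

def mat_add(A, B):
    """Definition 5.1.5: (A + B)_ij = a_ij + b_ij, defined only for equal orders."""
    if order(A) != order(B):
        raise ValueError("A + B is not defined: A and B have different orders")
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]
```

For the matrices of the example above, `mat_add([[1, 0], [2, -1]], [[3, 4], [-5, 2]])` returns `[[4, 4], [-3, 1]]`, and attempting to add matrices of different orders raises an error, mirroring "not defined."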
Another frequently used operation is that of multiplying a matrix by a number, commonly called a scalar in this context. Scalars normally come from the same set as the entries in a matrix. For example, if \(A\in M_{m\times n}(\mathbb{R})\text{,}\) a scalar can be any real number. Example 5.1.6. A Scalar Product. If \(c = 3\) and if \(A =\left( \begin{array}{cc} 1 & -2 \\ 3 & 5 \\ \end{array} \right)\) and we wish to find \(c A\text{,}\) it seems natural to multiply each entry of \(A\) by 3 so that \(3 A =\left( \begin{array}{cc} 3 & -6 \\ 9 & 15 \\ \end{array} \right)\text{,}\) and this is precisely the way scalar multiplication is defined. Definition 5.1.7. Scalar Multiplication. Let \(A\) be an \(m \times n\) matrix and \(c\) a scalar. Then \(c A\) is the \(m\times n\) matrix obtained by multiplying \(c\) times each entry of \(A\text{;}\) that is \((c A)_{ij} = c a_{ij}\text{.}\) Subsection 5.1.3 Matrix Multiplication A definition that is more awkward to motivate is the product of two matrices. See Exercise 5.1.4.8 for an attempt to do so. In time, the reader will see that the following definition of the product of matrices will be very useful, and will provide an algebraic system that is quite similar to elementary algebra. Definition 5.1.8. Matrix Multiplication. Let \(A\) be an \(m\times n\) matrix and let \(B\) be an \(n\times p\) matrix. The product of \(A\) and \(B\text{,}\) denoted by \(AB\text{,}\) is an \(m\times p\) matrix whose \(i\)th row \(j\)th column entry is \begin{equation*} \begin{split} (A B)_{ij}&= a_{i 1}b_{1 j}+a_{i 2}b_{2 j}+ \cdots +a_{i n}b_{n j}\\ &= \sum_{k=1}^n a_{i k} b_{k j} \end{split} \end{equation*} for \(1\leq i\leq m\) and \(1\leq j\leq p\text{.}\) The mechanics of computing one entry in the product of two matrices is illustrated in Figure 5.1.9. Figure 5.1.9.
Computation of one entry in the product of two 3 by 3 matrices The computation of a product can take a considerable amount of time in comparison to the time required to add two matrices. Suppose that \(A\) and \(B\) are \(n\times n\) matrices; then \((A B)_{ij}\) is determined by performing \(n\) multiplications and \(n-1\) additions. The full product takes \(n^3\) multiplications and \(n^3 - n^2\) additions. This compares with \(n^2\) additions for the sum of two \(n\times n\) matrices. The product of two 10 by 10 matrices will require 1,000 multiplications and 900 additions, clearly a job that you would assign to a computer. The sum of two matrices requires a more modest 100 additions. This analysis is based on the assumption that matrix multiplication will be done using the formula that is given in the definition. There are more advanced methods that, in theory, reduce operation counts. For example, Strassen's algorithm (https://en.wikipedia.org/wiki/Strassen\_algorithm) computes the product of two \(n\) by \(n\) matrices in \(7\cdot 7^{\log _2n}-6\cdot 4^{\log _2n}\approx 7 n^{2.808}\) operations. There are practical issues involved in actually using the algorithm in many situations. For example, round-off error can be more of a problem than with the standard formula. Example 5.1.10. A Matrix Product. Let \(A =\left( \begin{array}{cc} 1 & 0 \\ 3 & 2 \\ -5 & 1 \\ \end{array} \right)\text{,}\) a \(3\times 2\) matrix, and let \(B =\left( \begin{array}{c} 6 \\ 1 \\ \end{array} \right)\text{,}\) a \(2\times 1\) matrix.
Then \(A B\) is a \(3 \times 1\) matrix: \begin{equation*} A B = \left( \begin{array}{cc} 1 & 0 \\ 3 & 2 \\ -5 & 1 \\ \end{array} \right) \left( \begin{array}{c} 6 \\ 1 \\ \end{array} \right) = \left( \begin{array}{c} 1\cdot 6+0\cdot 1 \\ 3 \cdot 6 + 2\cdot 1 \\ -5 \cdot 6 + 1\cdot 1 \\ \end{array} \right) = \left( \begin{array}{c} 6 \\ 20 \\ -29 \\ \end{array} \right) \end{equation*} The product \(A B\) is defined only if \(A\) is an \(m\times n\) matrix and \(B\) is an \(n\times p\) matrix; that is, the two "inner" numbers must be equal. Furthermore, the order of the product matrix \(A B\) is the "outer" numbers, in this case \(m\times p\text{.}\) It is wise to first determine the order of a product matrix. For example, if \(A\) is a \(3\times 2\) matrix and \(B\) is a \(2\times 2\) matrix, then \(A B\) is a \(3\times 2\) matrix of the form \begin{equation*} A B =\left( \begin{array}{cc} c_{11} & c_{12} \\ c_{21} & c_{22} \\ c_{31} & c_{32} \\ \end{array} \right) \end{equation*} Then to obtain, for example, \(c_{31}\text{,}\) we multiply corresponding entries in the third row of \(A\) times the first column of \(B\) and add the results. Example 5.1.11. Multiplication with a diagonal matrix. Let \(A =\left( \begin{array}{cc} -1 & 0 \\ 0 & 3 \\ \end{array} \right)\) and \(B =\left( \begin{array}{cc} 3 & 10 \\ 2 & 1 \\ \end{array} \right)\) . Then \(A B =\left( \begin{array}{cc} -1\cdot 3 + 0\cdot 2 & -1\cdot 10+0\cdot 1 \\ 0\cdot 3+3\cdot 2 & 0\cdot 10+3\cdot 1 \\ \end{array} \right)= \left( \begin{array}{cc} -3 & -10 \\ 6 & 3 \\ \end{array} \right)\) The net effect is to multiply the first row of \(B\) by \(-1\) and the second row of \(B\) by 3. Note: \(B A =\left( \begin{array}{cc} -3 & 30 \\ - 2 & 3 \\ \end{array} \right) \neq A B\text{.}\) The columns of \(B\) are multiplied by \(-1\) and 3 when the order is switched. An \(n\times n\) matrix is called a square matrix. 
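Definition 5.1.8 also translates directly into code. The following Python sketch (ours, not from the book) computes a product with the sum from the definition and refuses the product when the "inner" numbers disagree:

```python
def mat_mul(A, B):
    """Product by Definition 5.1.8: for A of order m×n and B of order n×p,
    (AB)_ij = sum over k of a_ik * b_kj, giving a matrix of order m×p."""
    m, n = len(A), len(A[0])
    if len(B) != n:                 # the "inner" numbers must be equal
        raise ValueError("AB is not defined: inner orders differ")
    p = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]
```

This reproduces Example 5.1.10, and for the matrices of Example 5.1.11 it confirms that \(AB\) and \(BA\) are different.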
If \(A\) is a square matrix, \(A A\) is defined and is denoted by \(A^2\) , and \(A A A = A^3\text{,}\) etc. The \(m\times n\) matrices whose entries are all 0 are denoted by \(\pmb{0}_{m\times n}\text{,}\) or simply \(\pmb{ 0}\text{,}\) when no confusion arises regarding the order. Exercises 5.1.4 Exercises for Section 5.1 Let \(A=\left( \begin{array}{cc} 1 & -1 \\ 2 & 3 \\ \end{array} \right)\text{,}\) \(B =\left( \begin{array}{cc} 0 & 1 \\ 3 & -5 \\ \end{array} \right)\) , and \(C=\left( \begin{array}{ccc} 0 & 1 & -1 \\ 3 & -2 & 2 \\ \end{array} \right)\) Compute \(A B\) and \(B A\text{.}\) Compute \(A + B\) and \(B + A\text{.}\) If \(c = 3\text{,}\) show that \(c(A + B) = c A + c B\text{.}\) Show that \((A B)C = A(B C)\text{.}\) Compute \(A^2 C\text{.}\) Compute \(B + \pmb{0}\text{.}\) Compute \(A \pmb{0}_{2\times 2}\) and \(\pmb{0}_{2\times 2} A\text{,}\) where \(\pmb{0}_{2\times 2}\) is the \(2\times 2\) zero matrix. Compute \(0A\text{,}\) where 0 is the real number (scalar) zero. Let \(c = 2\) and \(d = 3\text{.}\) Show that \((c + d)A = c A + d A\text{.}\) For parts c, d and i of this exercise, only a verification is needed. Here, we supply the result that will appear on both sides of the equality. 
\(AB=\left( \begin{array}{cc} -3 &6 \\ 9 & -13 \\ \end{array} \right) \quad BA=\left( \begin{array}{cc} 2 & 3 \\ -7 & -18 \\ \end{array} \right)\) \(\left( \begin{array}{cc} 1 & 0 \\ 5 & -2 \\ \end{array} \right)\) \(\left( \begin{array}{cc} 3 & 0 \\ 15 & -6 \\ \end{array} \right)\) \(\left( \begin{array}{ccc} 18 & -15 & 15 \\ -39 & 35 & -35 \\ \end{array} \right)\) \(\left( \begin{array}{ccc} -12 & 7 & -7 \\ 21 & -6 & 6 \\ \end{array} \right)\) \(B+0=B\) \(\left( \begin{array}{cc} 0 & 0 \\ 0 & 0 \\ \end{array} \right)\) \(\left( \begin{array}{cc} 5 & -5 \\ 10 & 15 \\ \end{array} \right)\) Let \(A = \left( \begin{array}{ccc} 1 & 0 & 2 \\ 2 & -1 & 5 \\ 3 & 2 & 1 \\ \end{array} \right)\) , \(B =\left( \begin{array}{ccc} 0 & 2 & 3 \\ 1 & 1 & 2 \\ -1 & 3 & -2 \\ \end{array} \right)\) , and \(C=\left( \begin{array}{cccc} 2 & 1 & 2 & 3 \\ 4 & 0 & 1 & 1 \\ 3 & -1 & 4 & 1 \\ \end{array} \right)\) Compute, if possible; \(A - B\) \(A B\) \(A C - B C\) \(A(B C)\) \(C A - C B\) \(C \left( \begin{array}{c} x \\ y \\ z \\ w \\ \end{array} \right)\) Let \(A =\left( \begin{array}{cc} 2 & 0 \\ 0 & 3 \\ \end{array} \right)\) . Find a matrix \(B\) such that \(A B = I\) and \(B A = I\text{,}\) where \(I = \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \end{array} \right)\text{.}\) \(\left( \begin{array}{cc} 1/2 & 0 \\ 0 & 1/3 \\ \end{array} \right)\) Find \(A I\) and \(B I\) where \(I\) is as in Exercise 3, where \(A = \left( \begin{array}{cc} 1 & 8 \\ 9 & 5 \\ \end{array} \right)\) and \(B = \left( \begin{array}{cc} -2 & 3 \\ 5 & -7 \\ \end{array} \right)\text{.}\) What do you notice? Find \(A^3\) if \(A=\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \\ \end{array} \right)\) . What is \(A^{15}\) equal to? 
\(A^3=\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 8 & 0 \\ 0 & 0 & 27 \\ \end{array} \right)\) \(A^{15}=\left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 32768 & 0 \\ 0 & 0 & 14348907 \\ \end{array} \right)\) Determine \(I^2\) and \(I^3 \text{ if }\) \(I = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right)\text{.}\) What is \(I^n\) equal to for any \(n\geq 1\text{?}\) Prove your answer to part (b) by induction. If \(A =\left( \begin{array}{cc} 2 & 1 \\ 1 & -1 \\ \end{array} \right)\text{,}\) \(X =\left( \begin{array}{c} x_1 \\ x_2 \\ \end{array} \right)\text{,}\) and \(B =\left( \begin{array}{c} 3 \\ 1 \\ \end{array} \right)\) , show that \(A X =B\) is a way of expressing the system \(\begin{array}{c}2x_1 + x_2 = 3\\ x_1 - x_2= 1\\ \end{array}\) using matrices. Express the following systems of equations using matrices: \(\begin{array}{c} 2 x_1- x_2= 4\\ x_1+ x_2= 0\\ \end{array}\) \(\begin{array}{c} x_1+ x_2+ 2 x_3= 1\\ x_1+ 2 x_2-x_3= -1\\ x_1+ 3 x_2+x_3= 5\\ \end{array}\) \(\begin{array}{c} x_1+x_2\quad\quad= 3\\ \quad \quad x_2\quad\quad= 5\\ x_1 \quad \quad+ 3x_3= 6\\ \end{array} \) \(Ax=\left( \begin{array}{c} 2x_1+1x_2 \\ 1x_1-1x_2 \\ \end{array} \right)\) equals \(\left( \begin{array}{c} 3 \\ 1 \\ \end{array} \right)\) if and only if both of the equalities \(2x_1+x_2=3 \textrm{ and } x_1-x_2=1\) are true. 
(i) \(A=\left( \begin{array}{cc} 2 & -1 \\ 1 & 1 \\ \end{array} \right)\) \(x=\left( \begin{array}{c} x_1 \\ x_2 \\ \end{array} \right)\) \(B=\left( \begin{array}{c} 4 \\ 0 \\ \end{array} \right)\) \(A=\left( \begin{array}{ccc} 1 & 1 & 2 \\ 1 & 2 & -1 \\ 1 & 3 & 1 \\ \end{array} \right)\) \(x=\left( \begin{array}{c} x_1 \\ x_2 \\ x_3 \\ \end{array} \right)\) \(B=\left( \begin{array}{c} 1 \\ -1 \\ 5 \\ \end{array} \right)\) \(A=\left( \begin{array}{ccc} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 3 \\ \end{array} \right)\) \(x=\left( \begin{array}{c} x_1 \\ x_2 \\ x_3 \\ \end{array} \right)\) \(B=\left( \begin{array}{c} 3 \\ 5 \\ 6 \\ \end{array} \right)\) In this exercise, we propose to show how matrix multiplication is a natural operation. Suppose a bakery produces bread, cakes and pies every weekday, Monday through Friday. Based on past sales history, the bakery produces various numbers of each product each day, summarized in the \(5 \times 3\) matrix \(D\text{.}\) It should be noted that the order could be described as "number of days by number of products." For example, on Wednesday (the third day) the number of cakes (second product in our list) that are produced is \(d_{3,2} = 4\text{.}\) \begin{equation*} D =\left( \begin{array}{ccc} 25 & 5 & 5 \\ 14 & 5 & 8 \\ 20 & 4 & 15 \\ 18 & 5 & 7 \\ 35 & 10 & 9 \\ \end{array} \right) \end{equation*} The main ingredients of these products are flour, sugar and eggs. We assume that other ingredients are always in ample supply, but we need to be sure to have the three main ones available. For each of the three products, the amount of each ingredient that is needed is summarized in the \(3 \times 3\text{,}\) or "number of products by number of ingredients" matrix \(P\text{.}\) For example, to bake a cake (second product) we need \(P_{2,1}=1.5\) cups of flour (first ingredient). Regarding units: flour and sugar are given in cups per unit of each product, while eggs are given in individual eggs per unit of each product.
\begin{equation*} P =\left( \begin{array}{ccc} 2 & 0.5 & 0 \\ 1.5 & 1 & 2 \\ 1 & 1 & 1 \\ \end{array} \right) \end{equation*} These amounts are "made up", so don't use them to do your own baking! How many cups of flour will the bakery need every Monday? Pay close attention to how you compute your answer and the units of each number. How many eggs will the bakery need every Wednesday? Compute the matrix product \(D P\text{.}\) What do you notice? Suppose the costs of ingredients are \(\$0.12\) for a cup of flour, \(\$0.15\) for a cup of sugar and \(\$0.19\) for one egg. How can this information be put into a matrix that can meaningfully be multiplied by one of the other matrices in this problem?
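Once you have attempted the parts above by hand, they can be checked numerically. The sketch below (ours, not part of the exercise) uses the matrices \(D\) and \(P\) above and the interpretation that \((DP)_{ij}\) is the amount of ingredient \(j\) needed on day \(i\):

```python
D = [[25, 5, 5],    # Mon: loaves of bread, cakes, pies
     [14, 5, 8],    # Tue
     [20, 4, 15],   # Wed
     [18, 5, 7],    # Thu
     [35, 10, 9]]   # Fri

P = [[2,   0.5, 0],  # bread: flour, sugar, eggs per loaf
     [1.5, 1,   2],  # cake
     [1,   1,   1]]  # pie

# (D P)_ij = amount of ingredient j needed on day i.
DP = [[sum(D[i][k] * P[k][j] for k in range(3)) for j in range(3)]
      for i in range(5)]

monday_flour = DP[0][0]     # cups of flour needed on Monday
wednesday_eggs = DP[2][2]   # eggs needed on Wednesday
```

For the cost question, note that a \(3\times 1\) column matrix of costs per ingredient could meaningfully be multiplied on the right of \(D P\), giving the total ingredient cost for each day.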
\begin{document} \title{Bandwidth and Distortion Revisited} \newcommand{{\sc{Bandwidth}}}{{\sc{Bandwidth}}} \newcommand{{\sc{Distortion}}}{{\sc{Distortion}}} \newcommand{4.383}{4.383} \newcommand{9.363}{9.363} \newcommand{0.5475}{0.5475} \begin{abstract} In this paper we merge recent developments on exact algorithms for finding an ordering of vertices of a given graph that minimizes bandwidth (the {\sc{Bandwidth}}{} problem) and for finding an embedding of a given graph into a line that minimizes distortion (the {\sc{Distortion}}{} problem). For both problems we develop algorithms that work in $O(9.363^n)$ time and polynomial space. For {\sc{Bandwidth}}{}, this improves the $O^*(10^n)$ algorithm by Feige and Kilian from 2000; for {\sc{Distortion}}{} this is the first polynomial-space exact algorithm working in $O(c^n)$ time that we are aware of. As a by-product, we enhance the $O(5^{n + o(n)})$--time and $O^*(2^n)$--space algorithm for {\sc{Distortion}}{} by Fomin et al. to an algorithm working in $O(4.383^n)$ time and space. \end{abstract} \section{Introduction}\label{s:intro} Recently the NP--complete {\sc{Bandwidth}}{} problem, together with a similar problem of embedding a graph into a real line with minimal distortion (called {\sc{Distortion}}{} in this paper), attracted some attention from the side of exact (and therefore not polynomial-time) algorithms. Given a graph $G$ with $n$ vertices, an {\em{ordering}} is a bijective function $\pi: V(G) \to \{1, 2, \ldots, n\}$. The bandwidth of $\pi$ is the maximal length of an edge, i.e., $\ensuremath{\mathrm{bw}}(\pi) = \max_{uv\in E(G)} |\pi(u) - \pi(v)|$. The {\sc{Bandwidth}}{} problem, given a graph $G$ and a positive integer $b$, asks if there exists an ordering of bandwidth at most $b$. Given a graph $G$, an {\em{embedding}} of $G$ into a real line is a function $\pi : V(G) \to \ensuremath{\mathbb{R}}$.
For every pair of distinct vertices $u, v \in V(G)$ we define the distortion of $u$ and $v$ by $\ensuremath{\mathrm{dist}}(u, v) = |\pi(u) - \pi(v)| / d_G(u, v)$, where $d_G$ denotes the distance in the graph $G$. The contraction and the expansion of $\pi$, denoted $\ensuremath{\mathrm{contr}}(\pi)$ and $\ensuremath{\mathrm{expan}}(\pi)$ respectively, are the minimal and maximal distortion over all pairs of distinct vertices in $V(G)$. The distortion of $\pi$, denoted $\ensuremath{\mathrm{dist}}(\pi)$, equals $\ensuremath{\mathrm{expan}}(\pi)/\ensuremath{\mathrm{contr}}(\pi)$. The {\sc{Distortion}}{} problem, given a graph $G$ and a positive real number $d$, asks if there exists an embedding with distortion at most $d$. Note that the distortion of an embedding does not change if we change $\pi$ affinely, and we can rescale $\pi$ by $1/\ensuremath{\mathrm{contr}}(\pi)$ and obtain an embedding with contraction exactly $1$. Therefore, in this paper, we limit ourselves only to embeddings with contraction at least $1$ and we optimize the expansion of the embedding, that is, we try to construct embeddings with contraction at least $1$ and with expansion at most $d$. The first non--trivial exact algorithm for the {\sc{Bandwidth}}{} problem was developed by Feige and Kilian in 2000 \cite{feige:exp}. It works in polynomial space and $O^*(10^n)$ time. Recently we improved the time bound to $O^*(5^n)$ \cite{naszewg}, $O(4.83^n)$ \cite{nasze483} and $O^*(20^{n/2})$ \cite{naszicalp}. However, the cost of the improvements was exponential space complexity: $O^*(2^n)$, $O^*(4^n)$ and $O^*(20^{n/2})$, respectively. In 2009 Fomin et al. \cite{fomin:distortion-5n} adapted some ideas from \cite{naszewg} to the {\sc{Distortion}}{} problem and obtained an $O(5^{n + o(n)})$--time and $O^*(2^n)$--space algorithm. It is worth mentioning that the considered problems, although very similar from the point of view of exact computation, differ from the point of view of parameterized computation.
The {\sc{Bandwidth}}{} problem is hard for any level of the $W$ hierarchy \cite{fellows:hardness}, whereas {\sc{Distortion}}{} is fixed parameter tractable when parameterized by $d$ \cite{fomin:distortion-fpt}. However, the FPT algorithm for {\sc{Distortion}}{} works in $O(nd^4(2d+1)^{2d})$ time, which does not reach the $O(c^n)$ complexity for $d = \Omega(n)$. In this paper we present a link between the aforementioned results and develop $O(9.363^n)$--time and polynomial space algorithms for both {\sc{Bandwidth}}{} and {\sc{Distortion}}{}. First, we develop an $O(4.383^n)$--time and space algorithm for {\sc{Distortion}}{}, using ideas both from the $O^*(20^{n/2})$ algorithm for {\sc{Bandwidth}}{}\footnote{The complexity analysis of our algorithm, in particular the proof in Appendix \ref{a:20n2}, proves that the algorithm from \cite{naszicalp} works in $O(4.383^n)$ time and space too. However, we do not state it as a new result in this paper, since analysis based on this approach will be published in the journal version of \cite{naszicalp}.} \cite{naszicalp} and the $O(5^{n+o(n)})$ algorithm for {\sc{Distortion}}{}~\cite{fomin:distortion-5n}. Then, we use an approach somewhat similar to that of Feige and Kilian \cite{feige:exp} to reduce the space to polynomial, at the cost of time complexity, obtaining the aforementioned algorithms. We are not aware of any exact polynomial--space algorithms that work in $O(c^n)$ time for {\sc{Distortion}}{} or are faster than Feige and Kilian's algorithm for {\sc{Bandwidth}}{}. In Section \ref{s:fun} we gather results on partial bucket functions: a tool that was used in all previous algorithms for {\sc{Distortion}}{} and {\sc{Bandwidth}}{}. In Section \ref{s:bwpoly} we recall the $O^*(20^{n/2})$ algorithm \cite{naszicalp} and show how to transform it into an $O(9.363^n)$--time and polynomial space algorithm for {\sc{Bandwidth}}{}.
Section \ref{s:dist} is devoted to {\sc{Distortion}}{}: first, we merge ideas from \cite{naszewg} and \cite{fomin:distortion-5n} to obtain an $O^*(4.383^n)$--time and space algorithm for {\sc{Distortion}}{}. Then we apply the same trick as for {\sc{Bandwidth}}{} to obtain an $O(9.363^n)$--time and polynomial space algorithm. In the following sections we assume that we are given a connected undirected graph $G=(V,E)$ with $n = |V|$. Note that {\sc{Bandwidth}}{} trivially decomposes into subproblems on connected components, whereas the answer to {\sc{Distortion}}{} is always negative for a disconnected graph. Proofs of results marked with $\clubsuit$ are postponed to Appendix \ref{a:proofs}. \section{Partial bucket functions}\label{s:fun} \newcommand{{\bar{f}}}{{\bar{f}}} In this section we gather results on {\em{partial bucket functions}}, a tool used in algorithms for both {\sc{Bandwidth}}{} and {\sc{Distortion}}{}. Most ideas here are based on the $O^*(20^{n/2})$ algorithm for {\sc{Bandwidth}}{} \cite{naszicalp}. \begin{definition}\label{def:buckfun} A {\em{partial bucket function}} is a pair $(A, f)$, such that $A \subseteq V$, $f:A \to \ensuremath{\mathbb{Z}}$ and there exists ${\bar{f}}: V \to \ensuremath{\mathbb{Z}}$ satisfying: \begin{enumerate} \item ${\bar{f}}|_A = f$; \item if $uv \in E$ then $|{\bar{f}}(u) - {\bar{f}}(v)| \leq 1$, in particular, if $u, v \in A$ then $|f(u) - f(v)| \leq 1$; \item if $uv \in E$, $u \in A$ and $v \notin A$ then ${\bar{f}}(u) \geq {\bar{f}}(v)$, i.e., ${\bar{f}}(u) = {\bar{f}}(v)$ or ${\bar{f}}(u) = {\bar{f}}(v) + 1$. \end{enumerate} We say that such a function ${\bar{f}}$ is a {\em{bucket extension}} of $f$. \end{definition} \begin{definition} Assume we have two partial bucket functions $(A, f)$ and $(A', f')$ such that $A' = A \cup \{v\}$, $v \notin A$ and $f'|_A = f$; we say that $(A', f')$ is a {\em{successor}} of $(A, f)$ with vertex $v$ if there does not exist any $uv \in E$, $u \in A$ such that $f(u) < f'(v)$.
\end{definition} \begin{lemma}\label{lem:checkext} Assume that $A \subseteq V$ and $f: A \to \ensuremath{\mathbb{Z}}$. Moreover, let $A \subseteq B \subseteq V$, $f': B \to \ensuremath{\mathbb{Z}}$ and $f'|_A = f$. Then one can find in polynomial time a bucket extension ${\bar{f}}$ of $f$ such that ${\bar{f}}|_B = f'$ or state that such a bucket extension does not exist. \end{lemma} \begin{proof} The case $A = B = \emptyset$ is trivial, so we may assume there exists some $v_0 \in B$. W.l.o.g. we may assume $f'(v_0) = 0$. Therefore any valid bucket extension should satisfy ${\bar{f}}(V) \subseteq \{-n, -n+1, \ldots, n\}$. We calculate for every $v \in V \setminus A$ the set $p(v) \subseteq \{-n, -n+1, \ldots, n\}$, intuitively the set of possible values for ${\bar{f}}(v)$, by the following algorithm. \begin{algorithm} \caption{\label{alg:checkext}Calculate values $p(v)$ --- the sets of valid values for ${\bar{f}}(v)$.} \begin{minipage}{\textwidth} \small \begin{algorithmic}[1] \State Set $p(v) := \{-n, -n+1, \ldots, n\}$ for all $v \in V \setminus B$. \State Set $p(v) := \{f'(v)\}$ for all $v \in B \setminus A$. \Repeat \For{all $v \in V \setminus B$} \State $p(v) := p(v) \cap \bigcap_{u \in N(v) \cap A} \{f(u)-1, f(u)\} \cap \bigcap_{u \in N(v) \setminus A} \bigcup_{i \in p(u)} \{i-1, i, i+1\}$ \EndFor \Until{some $p(v)$ is empty or we do not change any $p(v)$ in the inner loop} \State {\Return True iff all $p(v)$ remain nonempty.} \end{algorithmic} \end{minipage} \end{algorithm} To prove that Algorithm \ref{alg:checkext} correctly checks if there exists a valid bucket extension ${\bar{f}}$ note the following: \begin{enumerate} \item Let ${\bar{f}}$ be a bucket extension of $(A, f)$ such that ${\bar{f}}|_B = f'$. Then, at every step of the algorithm, ${\bar{f}}(v) \in p(v)$ for every $v \in V \setminus A$.
\item If the algorithm returns nonempty $p(v)$ for every $v \in V \setminus A$, setting ${\bar{f}}(v) = \min p(v)$ constructs a valid bucket extension of $(A, f)$. Moreover, since we start with $p(v) = \{f'(v)\}$ for $v \in B \setminus A$, we obtain ${\bar{f}}|_B = f'$. \end{enumerate} \end{proof} \begin{corollary}\label{cor:pbecheck} One can check in polynomial time whether a given pair $(A, f)$ is a partial bucket function. Moreover, one can check whether $(A', f')$ is a successor of $(A, f)$ in polynomial time too. \end{corollary} \begin{proof} To check if $(A, f)$ is a partial bucket function we simply run the algorithm from Lemma \ref{lem:checkext} for $B=A$ and $f'=f$. Conditions for being a successor of $(A, f)$ are trivial to check. \end{proof} \begin{lemma}\label{lem:5n} Let $N \in \ensuremath{\mathbb{Z}}_+$. Then there are at most $2N \cdot 5^{n-1}$ triples $(A, f, {\bar{f}})$ such that $(A, f)$ is a partial bucket function and ${\bar{f}}$ is a bucket extension of $f$ satisfying ${\bar{f}}(V) \subseteq \{1, 2, \ldots, N\}$. \end{lemma} \begin{proof} Note that if $(A, f)$ is a partial bucket function in the graph $G$ and ${\bar{f}}$ is a bucket extension, and $G'$ is a graph created from $G$ by removing an edge, then $(A, f)$ and ${\bar{f}}$ remain a partial bucket function and its bucket extension in $G'$. Therefore we may assume that $G$ is a tree, rooted at $v_r$. There are $2N$ possibilities to choose the value of ${\bar{f}}(v_r)$ and whether $v_r \in A$ or $v_r \notin A$. We now construct all interesting triples $(A, f, {\bar{f}})$ in a root--to-leaves order in the graph $G$. If we are at a node $v$ with parent $w$, then ${\bar{f}}(v) \in \{{\bar{f}}(w)-1, {\bar{f}}(w), {\bar{f}}(w)+1\}$. However, if $w\in A$ then we cannot both have ${\bar{f}}(v) = {\bar{f}}(w)+1$ and $v \notin A$. Similarly, if $w \notin A$ then we cannot both have ${\bar{f}}(v) = {\bar{f}}(w)-1$ and $v \in A$. Therefore we have $5$ options to choose ${\bar{f}}(v)$ and whether $v \in A$ or $v \notin A$.
Finally, we obtain at most $2N \cdot 5^{n-1}$ triples $(A, f, {\bar{f}})$. \end{proof} \begin{lemma}[$\clubsuit$]\label{lem:extpoly} Let $(A, f)$ be a partial bucket function. Then all bucket extensions of $f$ can be generated with a polynomial delay, using polynomial space. \end{lemma} The proof of the theorem below adjusts and improves the bound on the number of states in the $O^*(20^{n/2})$ algorithm for {\sc{Bandwidth}}{} \cite{naszicalp}. The proof can be found in Appendix \ref{a:20n2}. \begin{theorem}\label{thm:20n2} Let $N \in \ensuremath{\mathbb{Z}}_+$. There exists a constant $c < 4.383$ such that there are $O(N \cdot c^n)$ partial bucket functions $(A, f)$ such that there exists a bucket extension ${\bar{f}}$ satisfying ${\bar{f}}(V) \subseteq \{1, 2, \ldots, N\}$. Moreover, all such partial bucket functions can be generated in $O^*(N \cdot c^n)$ time using polynomial space. \end{theorem} \section{Poly-space algorithm for {\sc{Bandwidth}}}\label{s:bwpoly} In this section we describe an $O(9.363^n)$-time and polynomial-space algorithm solving {\sc{Bandwidth}}{}. As an input, the algorithm takes a graph $G=(V,E)$ with $|V|=n$ and an integer $1 \leq b < n$ and decides whether $G$ has an ordering with bandwidth at most $b$. \subsection{Preliminaries} First, let us recall some important observations made in \cite{naszewg}. An ordering $\pi$ is called a $b$-ordering if $\ensuremath{\mathrm{bw}}(\pi) \leq b$. Let $\ensuremath{\mathbf{Pos}} = \{1,2,\ldots,n\}$ be the set of possible positions and for every position $i \in \ensuremath{\mathbf{Pos}}$ we define the {\it{segment}} it belongs to by $\ensuremath{\mathtt{segment}}(i) = \lceil \frac{i}{b+1} \rceil$ and the {\it{color}} of it by $\ensuremath{\mathtt{color}}(i) = (i - 1) \mod (b+1) + 1$. By $\ensuremath{\mathbf{Seg}} = \{1,2,\ldots,\lceil \frac{n}{b+1}\rceil\}$ we denote the set of possible segments, and by $\ensuremath{\mathbf{Col}} = \{1,2,\ldots,b+1\}$ the set of possible colors.
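As a quick illustration, the segment/color decomposition and the color order of positions can be sketched in Python (the helper names below are ours, not from \cite{naszewg}):

```python
def segment(i: int, b: int) -> int:
    # segment(i) = ceil(i / (b + 1)); positions are 1-based
    return -(-i // (b + 1))


def color(i: int, b: int) -> int:
    # color(i) = (i - 1) mod (b + 1) + 1
    return (i - 1) % (b + 1) + 1


def color_order(n: int, b: int) -> list:
    # positions sorted lexicographically by (color, segment);
    # Pos_i is then the set of the first i entries of this list
    return sorted(range(1, n + 1), key=lambda i: (color(i, b), segment(i, b)))
```

For example, for $n = 7$ and $b = 2$ the color order of positions is $1, 4, 7, 2, 5, 3, 6$.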
The pair $(\ensuremath{\mathtt{color}}(i), \ensuremath{\mathtt{segment}}(i))$ defines the position $i$ uniquely. We order positions lexicographically by pairs $(\ensuremath{\mathtt{color}}(i), \ensuremath{\mathtt{segment}}(i))$, i.e., the color has higher order than the segment number, and call this order the {\it{color order}} of positions. By $\ensuremath{\mathbf{Pos}}_i$ we denote the set of the first $i$ positions in the color order. Given some (maybe partial) ordering $\pi$, and $v \in V$ for which $\pi(v)$ is defined, by $\ensuremath{\mathtt{color}}(v)$ and $\ensuremath{\mathtt{segment}}(v)$ we understand $\ensuremath{\mathtt{color}}(\pi(v))$ and $\ensuremath{\mathtt{segment}}(\pi(v))$ respectively. Let us recall the crucial observation made in \cite{naszewg}. \begin{lemma}[\cite{naszewg}, Lemma 8]\label{lem:najwazniejsze} Let $\pi$ be an ordering. It is a $b$-ordering iff, for every $uv \in E$, $|\ensuremath{\mathtt{segment}}(u) - \ensuremath{\mathtt{segment}}(v)| \leq 1$ and if $\ensuremath{\mathtt{segment}}(u) + 1 = \ensuremath{\mathtt{segment}}(v)$ then $\ensuremath{\mathtt{color}}(u) > \ensuremath{\mathtt{color}}(v)$ (equivalently, $\pi(u)$ is later in color order than $\pi(v)$). \end{lemma} \subsection{$O^*(20^{n/2})$ algorithm from \cite{naszicalp}} First let us recall the $O^*(20^{n/2})$-time and space algorithm from \cite{naszicalp}. \begin{definition} A {\em{state}} is a partial bucket function $(A, f)$ such that the multiset $\{f(v): v \in A\}$ is equal to the multiset $\{\ensuremath{\mathtt{segment}}(i) : i \in \ensuremath{\mathbf{Pos}}_{|A|}\}$. A state $(A \cup \{v\}, f')$ is {\em{a successor of}} a state $(A, f)$ with a vertex $v \notin A$ if $(A \cup \{v\}, f')$ as a partial bucket function is a successor of the partial bucket function $(A, f)$. \end{definition} \begin{theorem}[\cite{naszicalp}, Lemmas 16 and 17]\label{thm:icalpeq} \begin{enumerate} \item Let $\pi$ be a $b$-ordering.
For $0 \leq k \leq n$ let $A_k = \{v\in V: \pi(v) \in \ensuremath{\mathbf{Pos}}_k\}$ and $f_k = \ensuremath{\mathtt{segment}}|_{A_k}$. Then every $(A_k, f_k)$ is a state and for every $0 \leq k < n$ the state $(A_{k+1}, f_{k+1})$ is a successor of the state $(A_k, f_k)$. \item Assume we have states $(A_k, f_k)$ for $0 \leq k \leq n$ and for all $0 \leq k < n$ the state $(A_{k+1}, f_{k+1})$ is a successor of the state $(A_k, f_k)$ with the vertex $v_{k+1}$. Let $\pi$ be an ordering assigning $v_k$ to the $k$-th position in the color order. Then $\pi$ is a $b$-ordering. \end{enumerate} \end{theorem} The algorithm of \cite{naszicalp} works as follows: we do a depth--first search from the state $(\emptyset, \emptyset)$ and seek a state $(V, \cdot)$. At a state $(A, f)$ we generate in polynomial time all successors of this state and memoize visited states. Theorem \ref{thm:icalpeq} implies that we reach a state $(V, \cdot)$ iff there exists a $b$-ordering. Moreover, Theorem \ref{thm:20n2} (with $N = n$) implies that we visit at most $O(4.383^n)$ states; generating all successors of a given state can be done in polynomial time due to Corollary \ref{cor:pbecheck}, so the algorithm works in $O(4.383^n)$ time and space. \subsection{The $O(9.363^n)$--time and polynomial space algorithm} \begin{lemma}\label{lem:stateguess} Let $(A, f)$ and $(B, g)$ be a pair of states such that $A \subseteq B$ and $g|_A = f$. Let $a = |A|$ and $b = |B|$. Then one can check in $O^*(4^{b-a})$--time and polynomial space if there exists a sequence of states $(A, f) = (A_a, f_a), (A_{a+1}, f_{a+1}), \ldots, (A_b, f_b) = (B, g)$ such that $(A_{i+1}, f_{i+1})$ is a successor of $(A_i, f_i)$ for $a \leq i < b$. \end{lemma} \begin{proof} First note that a set $A_i$ determines the function $f_i$, since $f_i = g|_{A_i}$. Let $m := b-a$. If $m = 1$, we need to check only if $(B, g)$ is a successor of $(A, f)$, which can be done in polynomial time.
Otherwise, let $k = \lfloor \frac{a+b}{2} \rfloor$ and guess $A_k$: there are roughly $2^m$ choices. Set $f_k = g|_{A_k}$. Recursively, check if there is a path of states from $(A, f)$ to $(A_k, f_k)$ and from $(A_k, f_k)$ to $(B, g)$. The algorithm clearly works in polynomial space; now let us estimate the time it consumes. At one step, it does some polynomial computation and invokes itself recursively roughly $2^{m+1}$ times for $b - a \sim m/2$. Therefore it works in $O^*(4^m)$ time. \end{proof} Let $\alpha = 0.5475$. The algorithm works in the same fashion as in \cite{naszicalp}: it seeks a path of states $(A_i, f_i)_{i=0}^n$ from $(\emptyset, \emptyset)$ to $(V, \cdot)$ such that $(A_{i+1}, f_{i+1})$ is a successor of $(A_i, f_i)$ for $0 \leq i < n$. However, since we are limited to polynomial space, we cannot do a simple search. Instead, we guess middle states on the path, similarly as in Lemma \ref{lem:stateguess}. The algorithm works as follows: \begin{enumerate} \item Let $k := \lfloor \alpha n \rfloor$ and guess the state $(A_k, f_k)$. By Theorem \ref{thm:20n2} with $N = n$, we can enumerate all partial bucket functions in $O(4.383^n)$. We enumerate them and drop those that are not states or have the size of the domain different from $k$. \item Using Lemma \ref{lem:stateguess}, check if there is a path of states from $(\emptyset, \emptyset)$ to $(A_k, f_k)$. This phase works in time $4^{\alpha n}$. In total, for all $(A_k, f_k)$, this phase works in time $O(4.383^n \cdot 4^{\alpha n}) = O(9.363^n)$. \item Guess the state $(V, f_n)$: $f_n$ needs to be a bucket extension of the partial bucket function $(A_k, f_k)$. By Lemma \ref{lem:extpoly}, bucket extensions can be enumerated with polynomial delay; we simply drop those that are not states. By Lemma \ref{lem:5n} with $N = n$, there will be at most $O^*(5^n)$ pairs of states $(A_k, f_k)$ and $(V, f_n)$.
\item Using Lemma \ref{lem:stateguess}, check if there is a path from the state $(A_k, f_k)$ to $(V, f_n)$. This phase works in time $O^*(4^{(1-\alpha)n})$. In total, for all $(A_k, f_k)$ and $(V, f_n)$, this phase works in time $O^*(5^n 4^{(1-\alpha)n}) = O(9.363^n)$. \item Return true if for any $(A_k, f_k)$ and $(V, f_n)$ both applications of Lemma \ref{lem:stateguess} return success. \end{enumerate} Theorem \ref{thm:icalpeq} ensures that the algorithm is correct. In memory we keep only the states $(A_k, f_k)$, $(V, f_n)$, the recursion stack generated by the algorithm from Lemma \ref{lem:stateguess} and the states of the generators of the states $(A_k, f_k)$ and $(V, f_n)$, so the algorithm works in polynomial space. The comments above prove that it consumes at most $O(9.363^n)$ time. \section{Algorithms for {\sc{Distortion}}{}}\label{s:dist} We consider algorithms that, given a connected graph $G$ with $n$ vertices and a positive real number $d$, decide whether $G$ can be embedded into a line with distortion at most $d$. First, let us recall the basis of the approach of Fomin et al. \cite{fomin:distortion-5n}. Recall that $d_G(u, v)$ denotes the distance between vertices $u$ and $v$ in the graph $G$. \begin{definition} Given an embedding $\pi: V \to \ensuremath{\mathbb{Z}}$, we say that $v$ {\em{pushes}} $u$ iff $d_G(u, v) = |\pi(u) - \pi(v)|$. An embedding is called {\em{pushing}} if, writing $V = \{v_1, v_2, \ldots, v_n\}$ with $\pi(v_1) < \pi(v_2) < \ldots < \pi(v_n)$, the vertex $v_i$ pushes $v_{i+1}$ for all $1 \leq i < n$. \end{definition} \begin{lemma}[\cite{fomin:distortion-arxiv}]\label{lem:dist:wstep} If $G$ can be embedded into the line with distortion $d$, then there is a pushing embedding of $G$ into the line with distortion $d$. Every pushing embedding of $G$ into the line has contraction at least $1$.
Moreover, let $\pi$ be a pushing embedding of a connected graph $G$ into the line with distortion at most $d$ and let $V = \{v_1, v_2, \ldots, v_n\}$ be ordered so that $\pi(v_1) < \pi(v_2) < \ldots < \pi(v_n)$. Then $\pi(v_{i+1}) - \pi(v_i) \leq d$ for all $1 \leq i < n$. \end{lemma} Therefore, we only consider pushing embeddings and hence assume that $d$ is a positive integer. Note that a pushing embedding of a connected graph with at least $2$ vertices has contraction exactly $1$, since $d_G(v_1, v_2) = |\pi(v_2) - \pi(v_1)|$. Therefore distortion equals expansion. As any connected graph with $n$ vertices can be embedded into a line with distortion at most $2n-1$ \cite{badoiu:distortion}, this decision approach suffices to find the minimal distortion of $G$. We may assume that $\pi(V) \subseteq \{1, 2, \ldots, n(d+1)\}$. Now, let us introduce the concept of segments, adjusted for the {\sc{Distortion}}{} problem. Here the set of available positions is $\ensuremath{\mathbf{Pos}} = \{1, 2, \ldots, n(d+1)\}$ and the segment of a position $i$ is $\ensuremath{\mathtt{segment}}(i) = \lceil \frac{i}{d+1} \rceil$, i.e., the $j$-th segment is an integer interval of the form $\{(j-1)(d+1) + 1, (j-1)(d+1) + 2, \ldots, j(d+1)\}$. The color of a position is $\ensuremath{\mathtt{color}}(i) = (i-1) \mod (d+1) + 1$. By $\ensuremath{\mathbf{Seg}} = \{1, 2, \ldots, n\}$ we denote the set of possible segments and by $\ensuremath{\mathbf{Col}} = \{1, 2, \ldots, d+1\}$ the set of possible colors. The pair $(\ensuremath{\mathtt{color}}(i), \ensuremath{\mathtt{segment}}(i))$ defines the position $i$ uniquely. We order the positions lexicographically by pairs $(\ensuremath{\mathtt{color}}(i), \ensuremath{\mathtt{segment}}(i))$ and call this order the {\em{color order}} of positions. By $\ensuremath{\mathbf{Pos}}_i$ we denote the set of the first $i$ positions in the color order and by $\ensuremath{\mathbf{Seg}}_i$ we denote the set of positions in the $i$-th segment.
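The pushing condition and the distortion of a concrete integer embedding can be checked directly; a minimal Python sketch under our own graph representation (adjacency lists; for a pushing embedding the contraction is $1$, so the distortion equals the expansion, which is attained on an edge):

```python
from collections import deque


def bfs_dist(adj, s):
    # unweighted shortest-path distances d_G(s, .) in an adjacency-list graph
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist


def is_pushing(adj, pi):
    # v pushes the next vertex w in line order iff d_G(v, w) = pi(w) - pi(v)
    order = sorted(pi, key=pi.get)
    return all(bfs_dist(adj, u)[v] == pi[v] - pi[u]
               for u, v in zip(order, order[1:]))


def distortion(adj, pi):
    # valid for non-contracting embeddings: distortion = expansion,
    # and the expansion is attained on an edge
    return max(abs(pi[u] - pi[v]) for u in adj for v in adj[u])
```

For instance, placing the $4$-cycle $0$--$1$--$2$--$3$--$0$ at consecutive integers gives a pushing embedding of distortion $3$ (the edge $\{3, 0\}$ is stretched).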
Given some, maybe partial, embedding $\pi$, by $\ensuremath{\mathtt{color}}(v)$ and $\ensuremath{\mathtt{segment}}(v)$ we denote $\ensuremath{\mathtt{color}}(\pi(v))$ and $\ensuremath{\mathtt{segment}}(\pi(v))$ respectively. Similarly as in the case of {\sc{Bandwidth}}{}, the following equivalence holds (cf. Lemma \ref{lem:najwazniejsze}). \begin{lemma}[$\clubsuit$]\label{lem:najwazniejsze-dist} Let $\pi$ be a pushing embedding. Then $\pi$ has distortion at most $d$ iff for every $uv \in E$, $|\ensuremath{\mathtt{segment}}(u) - \ensuremath{\mathtt{segment}}(v)| \leq 1$ and if $\ensuremath{\mathtt{segment}}(u) + 1 = \ensuremath{\mathtt{segment}}(v)$ then $\ensuremath{\mathtt{color}}(u) > \ensuremath{\mathtt{color}}(v)$, i.e., $\pi(u)$ is later in the color order than $\pi(v)$. \end{lemma} Similarly as in \cite{fomin:distortion-5n}, we solve the following extended case of {\sc{Distortion}}{} as a subproblem. As an input to the subproblem, we are given an induced subgraph $G[X]$ of $G$, an integer $r$ (called the number of segments), a subset $Z \subseteq X$ and a function $\bar{\pi} : Z \to \ensuremath{\mathbf{Seg}}_0 \cup \ensuremath{\mathbf{Seg}}_{r+1}$. Given this input, we ask if there exists a pushing embedding $\pi: X \to \{-d, -d+1, \ldots, (r+1)(d+1)\}$ with distortion at most $d$ such that $\pi|_Z = \bar{\pi}$ and $\pi(X\setminus Z) \subseteq \{1, 2, \ldots, r(d+1)\}$. Moreover, we demand that $\pi$ does not leave any empty segment, i.e., for every $1 \leq i \leq r$, $\pi^{-1}(\ensuremath{\mathbf{Seg}}_i) \neq \emptyset$. \begin{theorem}\label{thm:dist:fewseg} The extended {\sc{Distortion}}{} problem can be solved in $O(4.383^{|X \setminus Z|} n^{O(r)})$ time and space. If we are restricted to polynomial space, the extended {\sc{Distortion}}{} problem can be solved in $O(9.363^{|X \setminus Z|} n^{O(r \log n)})$ time. \end{theorem} \newcommand{{\hat{n}}}{{\hat{n}}} Let ${\hat{n}} = |X \setminus Z|$.
The algorithm for Theorem \ref{thm:dist:fewseg} goes as follows. First, for each segment $1 \leq i \leq r$ we guess the vertex $v_i$ and the position $1 \leq p_i \leq r(d+1)$ such that $\ensuremath{\mathtt{segment}}(p_i) = i$. There are $n^{O(r)}$ possible guesses (if $r > {\hat{n}}$ the answer is immediately negative). We seek embeddings $\pi$ such that for every $1 \leq i \leq r$ we have $\pi(v_i) = p_i$ and there is no vertex assigned to any position in segment $i$ with a color earlier than $\ensuremath{\mathtt{color}}(p_i)$, i.e., $v_i$ is the first vertex in segment $i$. If there exists $z \in Z$ such that $\bar{\pi}(z) \leq 0$, then we require that $v_1$ is pushed by the $z$ for which $\bar{\pi}(z)$ is the largest nonpositive value. Along the lines of the algorithm for {\sc{Bandwidth}}{} \cite{naszicalp} and the algorithm for {\sc{Distortion}}{} by Fomin et al. \cite{fomin:distortion-5n}, we define a state and a state successor as follows: \begin{definition} A {\em{state}} is a triple $(p, (A, f), (H, h))$ such that: \begin{enumerate} \item $0 \leq p \leq r(d+1)$ is an integer, \item $(A, f)$ is a partial bucket function, \item $H \subseteq A$ is a set of vertices such that $H \cap f^{-1}(i)$ is nonempty iff $f^{-1}(i)$ is nonempty, \item $h : H \to \ensuremath{\mathbf{Pos}}_p$ and if $v \in H$ then $f(v) = \ensuremath{\mathtt{segment}}(h(v))$, \item if for any segment $1 \leq i \leq r$ the vertex $v_i \in H$, then $h(v_i) = p_i$, \item if for any segment $1 \leq i \leq r$ the position $p_i \in \ensuremath{\mathbf{Pos}}_p$, then $v_i \in A$ and $f(v_i) = i$.
\end{enumerate} \end{definition} \begin{definition} We say that a state $(p+1, (A_2, f_2), (H_2, h_2))$ is a {\em{successor}} of a state $(p, (A_1, f_1), (H_1, h_1))$ iff: \begin{enumerate} \item $A_2 = A_1$ or $A_2 = A_1 \cup \{v\}$, \item if $A_2 = A_1$ then $f_2 = f_1$, $H_1 = H_2$ and $h_1 = h_2$, \item if $A_2 = A_1 \cup \{v\}$, then: \begin{enumerate} \item the partial bucket function $(A_2, f_2)$ is a successor of the partial bucket function $(A_1, f_1)$ with the vertex $v$, such that $f_2(v) = \ensuremath{\mathtt{segment}}(p+1)$, \item $H_2 = (H_1 \setminus f_1^{-1}(\ensuremath{\mathtt{segment}}(p+1))) \cup \{v\}$, \item $h_2 = h_1|_{H_1 \cap H_2} \cup \{(v, p+1)\}$, \item if $H_1 \cap f_1^{-1}(\ensuremath{\mathtt{segment}}(p+1)) = \{w\}$, then $d_G(v, w) = h_2(v) - h_1(w)$, \item for any $z \in Z$, $d_G(z, v) \leq |\bar{\pi}(z) - (p+1)| \leq d \cdot d_G(z, v)$. \end{enumerate} \end{enumerate} \end{definition} \begin{definition} We say that a state $(r(d+1), (V, f), (H, h))$ is a {\em{final state}} iff for each segment $1 \leq i \leq r$ we have $\{w_i\} = H \cap f^{-1}(i)$ (in particular, $H \cap f^{-1}(i)$ is nonempty), $w_i$ pushes $v_{i+1}$ for $i<r$ and $w_r$ pushes the first $z \in Z$ such that $\bar{\pi}(z) \in \ensuremath{\mathbf{Seg}}_{r+1}$ (if such $z$ exists). \end{definition} The following equivalence holds: \begin{lemma}\label{lem:dist:eq1} Let $\pi$ be a pushing embedding and a solution to the extended {\sc{Distortion}}{} problem with distortion at most $d$. Assume that $\pi(v_i) = p_i$ and $v_i$ is the first vertex in segment $i$ for every segment $1 \leq i \leq r$, i.e., the initial guesses are correct with respect to the solution $\pi$.
For each $0 \leq p \leq r(d+1)$ we define $(A_p, f_p)$ and $(H_p, h_p)$ as follows: \begin{enumerate} \item $A_p = \pi^{-1}(\ensuremath{\mathbf{Pos}}_p)$ and $f_p = \ensuremath{\mathtt{segment}}|_{A_p}$, \item for each segment $1 \leq i \leq r$ we take $w_i$ as the vertex in $\pi^{-1}(\ensuremath{\mathbf{Pos}}_p \cap \ensuremath{\mathbf{Seg}}_i)$ with the greatest color of its position and put $w_i \in H_p$ with $h_p(w_i) = \pi(w_i)$; if $\pi^{-1}(\ensuremath{\mathbf{Pos}}_p \cap \ensuremath{\mathbf{Seg}}_i) = \emptyset$, then $H_p$ contains no vertex of segment $i$. \end{enumerate} Then $S_p = (p, (A_p, f_p), (H_p, h_p))$ is a state and $S_{p+1} = (p+1, (A_{p+1}, f_{p+1}), (H_{p+1}, h_{p+1}))$ is its successor if $p < r(d+1)$. Moreover, $S_{r(d+1)}$ is a final state. \end{lemma} \begin{proof} First note that, similarly as in the case of {\sc{Bandwidth}}{}, $(A_p, f_p)$ is a partial bucket function and $(A_{p+1}, f_{p+1})$ is a successor of $(A_p, f_p)$. Indeed, the conditions for a partial bucket function and its successor are implied by Lemma \ref{lem:najwazniejsze-dist}. The check that $(H_p, h_p)$ satisfies the conditions for being a state is straightforward. Let us now look at the conditions for the successor. The only nontrivial part is that if in $H_p$ the vertex $w$ is replaced by $v$ in $H_{p+1}$, then $d_G(v, w) = h_{p+1}(v) - h_p(w)$. However, this is implied by the fact that $\pi$ is a pushing embedding. To see that $S_{r(d+1)}$ is a final state, recall that $\pi$ leaves no segment $\ensuremath{\mathbf{Seg}}_i$, $1 \leq i \leq r$, empty and is a pushing embedding. \end{proof} \begin{lemma}\label{lem:dist:eq2} Assume that we have a sequence of states $(S_p)_{p=0}^{r(d+1)}$, $S_p = (p, (A_p, f_p), (H_p, h_p))$ such that $S_{p+1}$ is a successor of $S_p$ for $0 \leq p < r(d+1)$ and $S_{r(d+1)}$ is a final state. Let $\pi = \bigcup_{p=0}^{r(d+1)} h_p$.
Then $\pi$ is a solution to the extended {\sc{Distortion}}{} problem with distortion at most $d$. Moreover, $\pi(v_i) = p_i$ for all $1 \leq i \leq r$. \end{lemma} \begin{proof} Note that the conditions for the final state imply that $\pi$ leaves every segment from $1$ to $r$ nonempty. Moreover, the conditions for $(H_p, h_p)$ imply that $\pi(v_i) = p_i$ and $v_i$ is the first vertex assigned in segment $i$. First we check that $\pi$ is a pushing embedding. Let $v$ and $w$ be two vertices such that $\pi(v) < \pi(w)$ and there is no $u$ with $\pi(v) < \pi(u) < \pi(w)$. If $\ensuremath{\mathtt{segment}}(v) = \ensuremath{\mathtt{segment}}(w)$, then $\pi(w) - \pi(v) = d_G(v, w)$ is ensured by the state successor definition at the step where $S_{p+1}$ is a successor of the state $S_p$ with the vertex $w$. Otherwise, if $\ensuremath{\mathtt{segment}}(v) + 1 = \ensuremath{\mathtt{segment}}(w)$, then $w = v_{\ensuremath{\mathtt{segment}}(w)}$ or $w$ is the first vertex of $Z$ in segment $r+1$, and the fact that $v$ pushes $w$ is implied by the condition of the final state. The possibility that $\ensuremath{\mathtt{segment}}(v) + 1 < \ensuremath{\mathtt{segment}}(w)$ is forbidden since in the final state $H_{r(d+1)}$ contains a vertex of every segment $1 \leq i \leq r$. Now we check that for each edge $uv$, $|\pi(u) - \pi(v)| \leq d$. Assume not; let $\pi(u) + d < \pi(v)$ and let $S_k$ be a successor of the state $S_{k-1}$ with the vertex $v$. By the conditions for a partial bucket function $(A_k, f_k)$, $|\ensuremath{\mathtt{segment}}(u) - \ensuremath{\mathtt{segment}}(v)| \leq 1$, so $\ensuremath{\mathtt{segment}}(u) + 1 = \ensuremath{\mathtt{segment}}(v)$. However, by the conditions for a partial bucket function successor, $\ensuremath{\mathtt{color}}(u) > \ensuremath{\mathtt{color}}(v)$, a contradiction, since consecutive positions of the same color are at distance $d+1$. \end{proof} Let us now limit the number of states.
There are at most $O^*(4.383^{{\hat{n}}})$ partial bucket functions. The integer $p$ is $O(rd)$ and $h_p$ keeps the position of at most one vertex in each segment, so there are $O(n^{O(r)})$ possible pairs $(H_p, h_p)$. Therefore, in total, we have $O(4.383^{{\hat{n}}} n^{O(r)})$ states. Note that there are at most ${\hat{n}}+1$ successors of a given state, since choosing $A_2 \setminus A_1$ defines the successor uniquely. Note that, as checking if a pair $(A, f)$ is a partial bucket function can be done in polynomial time, checking if a given triple is a state or checking if one state is a successor of the other can be done in polynomial time too. To obtain the $O(4.383^{{\hat{n}}} n^{O(r)})$--time and space algorithm, we simply seek a path of states as in Lemma \ref{lem:dist:eq2}, memoizing visited states. To limit the algorithm to polynomial space, we apply the same trick as in the $O(9.363^n)$ algorithm for {\sc{Bandwidth}}{}. \begin{lemma}\label{lem:dist:stateguess} Assume that we have states $S_p = (p, (A_p, f_p), (H_p, h_p))$ and $S_q = (q, (A_q, f_q), (H_q, h_q))$ such that $p < q$, $A_p \subseteq A_q$ and $f_p = f_q|_{A_p}$. Let $m = |A_q \setminus A_p|$. Then one can check if there exists a sequence of states $S_i = (i, (A_i, f_i), (H_i, h_i))$ for $i = p, p+1, \ldots, q$ such that the state $S_{i+1}$ is a successor of the state $S_i$ in time $O(4^m n^{O(r \log m)})$. \end{lemma} \begin{proof} First, let us consider the case when $m = 1$. We guess an index $k$, $p < k \leq q$, such that $A_k = A_q$ and $f_k = f_q$, but $A_{k-1} = A_p$ and $f_{k-1} = f_p$. Note that then all states $S_i$ for $p \leq i \leq q$ are defined uniquely: $h_i = h_p$ for $i < k$ and $h_i = h_q$ for $i \geq k$. We only need to check if all consecutive pairs of states are successors. Now let us assume $m > 1$ and let $s = |A_p| + \lfloor m/2 \rfloor$. Let us guess the state $S_k$ such that $|A_k| = s$.
We need $A_p \subseteq A_k \subseteq A_q$ and $f_k = f_q|_{A_k}$, so we have only roughly $2^m$ possibilities for $(A_k, f_k)$ and $O(dr) = O(n{\hat{n}})$ possibilities for the index $k$. As always, there are $n^{O(r)}$ possible guesses for $(H_k, h_k)$. We recursively check if there is a sequence of states from $S_p$ to $S_k$ and from $S_k$ to $S_q$. Since at each step we divide $m$ by $2$, finally we obtain an $O(4^m n^{O(r \log m)})$ time bound. \end{proof} Again we set $\alpha := 0.5475$. \begin{enumerate} \item We guess the state $S_k = (k, (A_k, f_k), (H_k, h_k))$ such that $|A_k| = \lfloor \alpha {\hat{n}} \rfloor$. By Theorem \ref{thm:20n2} with $N = n$, we can enumerate all partial bucket functions in $O(4.383^{{\hat{n}}})$. We enumerate all partial bucket functions, guess the index $k$ and $(H_k, h_k)$ and drop those combinations that are not states. Note that there are $O(n^{O(r)})$ possible guesses for $(H_k, h_k)$ and $O(dr) = O(n^2)$ guesses for $k$. \item Using Lemma \ref{lem:dist:stateguess}, check if there is a path of states from $(0, (\emptyset, \emptyset), (\emptyset, \emptyset))$ to $S_k$. This phase works in time $4^{\alpha {\hat{n}}} n^{O(r \log n)}$. In total, for all $(A_k, f_k)$, this phase works in time $O^*(4.383^{{\hat{n}}} \cdot 4^{\alpha {\hat{n}}} n^{O(r \log n)}) = O(9.363^{\hat{n}} n^{O(r \log n)})$. \item Guess the final state $S_{r(d+1)} = (r(d+1), (V, f_{r(d+1)}), (H_{r(d+1)}, h_{r(d+1)}))$: $f_{r(d+1)}$ needs to be a bucket extension of the partial bucket function $(A_k, f_k)$. By Lemma \ref{lem:extpoly}, bucket extensions can be enumerated with polynomial delay. We guess $h_{r(d+1)}$ and simply drop those guesses that do not form states. By Lemma \ref{lem:5n} with $N = r$, there will be at most $O^*(5^{\hat{n}})$ pairs of states $(A_k, f_k)$ and $(V, f_{r(d+1)})$. We have $n^{O(r)}$ possibilities for $h_{r(d+1)}$.
This phase works in time $4^{(1-\alpha){\hat{n}}} n^{O(r \log n)}$. In total, for all $S_k$ and $S_{r(d+1)}$ this phase works in time $$O^*(5^{\hat{n}} 4^{(1-\alpha){\hat{n}}} n^{O(r \log n)}) = O(9.363^{\hat{n}} n^{O(r \log n)}).$$ \end{enumerate} \begin{theorem} The {\sc{Distortion}}{} problem can be solved in $O(4.383^n)$ time and space. If we are restricted to polynomial space, the {\sc{Distortion}}{} problem can be solved in $O(9.363^n)$ time. \end{theorem} \begin{proof} We almost repeat the argument from \cite{fomin:distortion-5n}. First, we may guess the number of nonempty segments needed to embed $G$ into a line with a pushing embedding $\pi$ with distortion at most $d$. Denote this number by $r$, i.e., $r = \lceil \max \{\pi(v) : v \in V(G)\} / (d+1) \rceil$. Note that the original {\sc{Distortion}}{} problem can be represented as an extended case with $X = V(G)$, $Z = \emptyset$ (hence $\bar{\pi} = \emptyset$) and with the guessed $r$. If $r < n / \log^3(n)$, the claim follows directly by applying Theorem \ref{thm:dist:fewseg}. Therefore, let us assume $r \geq n / \log^3(n)$. As every segment from $1$ to $r$ contains at least one vertex in a required pushing embedding $\pi$, by a simple counting argument, there needs to be a segment $r/4 \leq k \leq 3r/4$ such that there are at most $4n/r \leq 4\log^3(n)$ vertices assigned to segments $k$ and $k+1$ in total by $\pi$. We guess the segment number $k$, the vertices assigned to segments $k$ and $k+1$ and the values of $\pi$ for these vertices. We discard any guess that already makes some edge between guessed vertices longer than $d$. As $d, r = O(n)$, we have $n^{O(\log^3 n)}$ possible guesses. Let $Y$ be the set of vertices assigned to segments $k$ and $k+1$ and look at any connected component $C$ of $G[V \setminus Y]$. Note that if $C$ has neighbours in both segments $k$ and $k+1$, the answer is immediately negative. Moreover, as $G$ was connected, $C$ has a neighbour in segment $k$ or $k+1$.
Therefore we know whether vertices from $C$ should be assigned to segments $1, 2, \ldots, k-1$ or $k+2, \ldots, r$. The problem now decomposes into two subproblems: graphs $H_1$ and $H_2$, such that $H_1$ should be embedded into segments $1$ to $k$ and $H_2$ should be embedded into segments $k+1$ to $r$; moreover, we demand that the embeddings meet the guessed values of $\pi$ on $Y$. The subproblems are in fact instances of the extended {\sc{Distortion}}{} problem and can be decomposed further in the same fashion until there are at most $n / \log^3(n)$ segments in one instance. The depth of this recurrence is $O(\log r) = O(\log n)$, and each subproblem with at most $n / \log^3(n)$ segments can be solved by the algorithm described in Theorem \ref{thm:dist:fewseg}. Therefore, finally, we obtain an algorithm that works in $O(4.383^n)$ time and space and an algorithm that works in $O(9.363^n)$ time and polynomial space. \end{proof} \appendix \section{Bound on the number of partial bucket functions}\label{a:20n2} In this section we prove Theorem \ref{thm:20n2}; namely, that for some constant $c < 4.383$ in a connected, undirected graph $G = (V, E)$ with $|V| = n$ there are at most $O(N \cdot c^n)$ partial bucket functions, where we are allowed to assign values from $\{1, 2, \ldots, N\}$ only. Let $c = 4.383 - \varepsilon$ for some sufficiently small $\varepsilon$. We use $c$ instead of simply the constant $4.383$ to hide polynomial factors at the end, i.e., to say $O^*(c^n) = O(4.383^n)$. Let us start with the following observation. \begin{lemma} Let $G' = (V, E')$ be a graph formed by removing one edge from the graph $G$ in a way that $G'$ is still connected. If $(A, f)$ is a partial bucket function in $G$, then it is also a partial bucket function in $G'$. \end{lemma} Therefore we can assume that $G = (V, E)$ is a tree. Take any vertex $v_r$ with degree $1$ and make it the root of $G$. In this proof we bound not the number of partial bucket functions, but the number of {\it{prototypes}}, defined below.
It is quite clear that the number of prototypes is at least the number of partial bucket functions, and we prove that there are at most $O(Nc^n)$ prototypes. Then we show that one can generate all prototypes in $O^*(Nc^n)$ time and in polynomial space. This proves that all partial bucket functions can be generated in $O^*(Nc^n)$ time and polynomial space. \begin{definition}\label{def:genex-prestate} Assume we have a fixed subset $B \subseteq V$. A {\it{prototype}} is a pair $(A, f)$, where $A \subseteq V$, $f : A \cup B \to \ensuremath{\mathbb{Z}}$, such that $(A, f|_A)$ is a partial bucket function, and there exists a bucket extension ${\bar{f}}$ that is an extension of $f$, not only of $f|_A$. \end{definition} \begin{lemma} For any fixed $B \subseteq V$ the number of partial bucket functions is not greater than the number of prototypes. \end{lemma} \begin{proof} Let us assign to every prototype $(A, f)$ the partial bucket function $(A, f|_A)$. To prove our lemma we need to show that this assignment is surjective. Having a partial bucket function $(A, f)$, take any of its bucket extensions ${\bar{f}}$ and look at the pair $(A, {\bar{f}}|_{A \cup B})$. This is clearly a prototype, and the aforementioned assignment maps it to $(A, f)$. \end{proof} Before we proceed to the main estimations, we need a few calculations. Let $\alpha = 4.26$, $\beta = 3$ and $\gamma = 5.02$.
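These particular constants are chosen so that the two corollaries below hold for every $c$ sufficiently close to $4.383$; a floating-point sanity check of the corresponding inequalities, after dividing both sides by $c^n$ (a numerical illustration, not a substitute for the proofs):

```python
c, alpha, beta, gamma = 4.383, 4.26, 3.0, 5.02

# first closed form: (2c^{n-1} + sum_{k>=1} (2k-1)c^{n-k}) / c^n
lhs1 = 2 / c + 2 * c / (c - 1) ** 2 - 1 / (c - 1)
rhs1 = 1 - max(6 / (alpha * c ** 2), 15 / (gamma * c ** 3))

# second closed form: (sum_{k>=1} 2k c^{n-k}) / c^n
lhs2 = 2 * c / (c - 1) ** 2
rhs2 = 1 - max(7 / (beta * c ** 2), 13 / (gamma * c ** 2))

assert lhs1 <= rhs1 and lhs2 <= rhs2
```

The first inequality is nearly tight ($\mathrm{lhs}_1 \approx 0.92666$ versus $\mathrm{rhs}_1 \approx 0.92668$), which indicates why this argument cannot push $c$ much below $4.383$.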
\begin{lemma}\label{lem:suma357} \begin{equation*} 2c^{n-1} + \sum_{k=1}^\infty (2k-1)c^{n-k} = c^n \Big(\frac{2}{c} + \frac{2c}{(c-1)^2} - \frac{1}{c-1} \Big) \end{equation*} \end{lemma} \begin{proof} \begin{equation} \sum_{k=1}^\infty kc^{-k} = \frac{1}{c} \sum_{k=0}^\infty (k+1)c^{-k} = \frac{1}{c} \Big( \frac{1}{1-x} \Big)' \Big|_{x=\frac{1}{c}} = \frac{c}{(c-1)^2} \label{r:sumakck} \end{equation} \begin{align*} 2c^{n-1} + \sum_{k=1}^\infty (2k-1)c^{n-k} = \\ = c^n \Big( 2\sum_{k=1}^\infty kc^{-k} - \sum_{k=1}^\infty c^{-k} + 2c^{-1} \Big) = \\ = c^n \Big( \frac{2}{c} + \frac{2c}{(c-1)^2} - \frac{1}{c-1}\Big) \end{align*} \end{proof} \begin{corollary}\label{cor:suma357} For our choice of values for $\alpha$, $\gamma$ and $c$ we obtain \begin{equation*} 2c^{n-1} + \sum_{k=1}^\infty (2k-1)c^{n-k} \leq c^n \left( 1 - \max\left(\frac{6}{\alpha c^2}, \frac{15}{\gamma c^3}\right) \right). \end{equation*} \end{corollary} \begin{lemma}\label{lem:suma246} \begin{equation*} \sum_{k=1}^\infty 2kc^{n-k} = c^n \frac{2c}{(c-1)^2} \end{equation*} \end{lemma} \begin{proof} This is a straightforward corollary of Equation \ref{r:sumakck}. \end{proof} \begin{corollary}\label{cor:suma246} For our choice of values for $\beta$, $\gamma$ and $c$ we obtain \begin{equation*} \sum_{k=1}^\infty 2kc^{n-k} \leq c^n \left(1 - \max\left(\frac{7}{\beta c^2}, \frac{13}{\gamma c^2}\right)\right). \end{equation*} \end{corollary} Let us proceed to the main estimations. \begin{lemma}\label{lem:Tn} Let $G$ be a path on $n+1$ vertices --- a graph with $V = \{v_0, v_1, v_2, \ldots, v_n\}$, $E = \{(v_i, v_{i+1}): 0 \leq i < n\}$. Let $B = \{v_0\}$. Fix any $j \in \ensuremath{\mathbb{Z}}$. Let $T(n)$ be the number of prototypes $(A, f)$ satisfying $v_0 \in A$ and $f(v_0) = j$. Then $T(n) \leq \alpha \cdot c^{n-1}$. \end{lemma} \begin{proof} Let us set $T(x) = 0$ for $x \leq 0$. This satisfies $T(x) \leq \alpha c^{x-1}$. We use induction and start by calculating $T(1)$ and $T(2)$ manually.
If $n=1$ we have $f(v_1) \in \{j-1, j, j+1\}$ if $v_1 \in A$, and one prototype if $v_1 \notin A$, so $T(1) = 4 < \alpha$. If $n=2$, we consider several cases. If $v_1 \in A$ we have $f(v_1) \in \{j-1, j, j+1\}$ and $T(1)$ possibilities for $A \setminus \{v_0\}$ and $f|_{A \setminus \{v_0\}}$. If $A = \{v_0, v_2\}$, $f(v_2) \in \{j-1, j, j+1\}$ due to the conditions for a bucket extension $\bar{f}$. There is also one prototype with $A = \{v_0\}$, ending up with $T(2) = 3 \cdot 4 + 3 + 1 = 16 < \alpha c$. Let us recursively count interesting prototypes for $n \geq 3$. There is exactly one prototype $(A, f)$ with $A = \{v_0\}$. Otherwise let $k(A) > 0$ be the smallest positive integer satisfying $v_{k(A)} \in A$. Let us count the number of prototypes $(A, f)$ such that $k(A) = k$ for fixed $k$. For $k=1$ we have $f(v_1) \in \{j-1, j, j+1\}$, and, having fixed the value $f(v_1)$, we have $T(n-1)$ ways to choose $A \setminus \{v_0\}$ and $f|_{A \setminus \{v_0\}}$. For $k>1$ we have $j - k + 1 \leq f(v_k) \leq j + k - 1$, due to the conditions for a bucket extension $\bar{f}$, so we have $(2k-1)$ ways to choose $f(v_k)$ and $T(n-k)$ ways to choose $A \setminus \{v_0, v_1, \ldots, v_{k-1}\}$ and $f|_{A \setminus \{v_0, v_1, \ldots, v_{k-1}\}}$ if $k < n$ and $1$ way if $k = n$. Therefore we have for $n \geq 3$: \begin{align*} T(n) \leq 1 + 3T(n-1) + \sum_{k=2}^{n-1} (2k-1) T(n-k) + 2n-1 \leq\\ \leq 2n + 2T(n-1) + \sum_{k=1}^{\infty} (2k-1)T(n-k) \end{align*} Note that for $n \geq 3$ we have $2n \leq \frac{6}{\alpha c^2} \cdot \alpha c^{n-1}$, as we have equality for $n=3$ and the right side grows significantly faster than the left side for $n \geq 3$. Using Corollary \ref{cor:suma357} we obtain: \begin{equation*} T(n) \leq \alpha c^{n-1} \end{equation*} \end{proof} \begin{lemma}\label{lem:Tpn} Let $G$ be a path on $n+1$ vertices --- a graph with $V = \{v_0, v_1, v_2, \ldots, v_n\}$, $B = \{v_0\}$ and $E = \{(v_i, v_{i+1}): 0 \leq i < n\}$.
Fix any $j \in \ensuremath{\mathbb{Z}}$. Let $T'(n)$ be the number of prototypes $(A, f)$ satisfying $v_0 \notin A$ and $f(v_0) = j$. Then $T'(n) \leq \beta c^{n-1}$. \end{lemma} \begin{proof} We write a formula for $T'$ in terms of the already bounded $T$. We start with calculating $T'(1)$ and $T'(2)$ manually. If $n=1$: if $v_1 \in A$ we have $f(v_1) \in \{j, j+1\}$, and there is one prototype with $A = \emptyset$, so $T'(1) = 3 \le \beta$. If $n=2$, we have one prototype with $A = \emptyset$, four prototypes if $A = \{v_2\}$ (since then $f(v_2) \in \{j-1, j, j+1, j+2\}$) and $2 \cdot T(1)$ prototypes if $v_1 \in A$ (since $f(v_1) \in \{j, j+1\}$). Therefore $T'(2) = 1 + 4 + 2 \cdot 4 = 13 < \beta c$. Let us assume $n \geq 3$. There is exactly one prototype $(A, f)$ with $A = \emptyset$. Otherwise let $k(A) > 0$ be the smallest positive integer satisfying $v_{k(A)} \in A$. Let us count the number of prototypes $(A, f)$ such that $k(A) = k$ for fixed $k$. Note that, due to the conditions for a partial bucket extension $\bar{f}$, $j - k + 1 \leq f(v_k) \leq j + k$; there are $2k$ ways to choose $f(v_k)$. There are $T(n-k)$ ways to choose $A \setminus \{v_0, v_1, \ldots, v_{k-1}\}$ and $f|_{A \setminus \{v_0, v_1, \ldots, v_{k-1}\}}$ for $k < n$ and $1$ way for $k = n$, leading us to the inequality \begin{equation*} T'(n) \leq 1 + 2n + \sum_{k=1}^\infty 2kT(n-k) \end{equation*} Note that for $n \geq 3$ we have $2n + 1 \leq \frac{7}{\beta c^2} \cdot \beta c^{n-1}$, as we have equality for $n=3$ and the right-hand side grows significantly faster than the left-hand side as $n$ increases. Therefore, using Corollary \ref{cor:suma246}, we obtain \begin{equation*} T'(n) \leq \beta c^{n-1} \end{equation*} \end{proof} \begin{lemma}\label{lem:Sn} Let $G$ be a path of length $n+1$ --- a graph with $V = \{v_0, v_1, v_2, \ldots, v_n\}$, $B = \{v_0, v_n\}$ and $E = \{(v_i, v_{i+1}): 0 \leq i < n\}$. Fix any $j \in \ensuremath{\mathbb{Z}}$. 
Let $S(n)$ be the number of prototypes $(A, f)$ satisfying $v_0 \in A$ and $f(v_0) = j$. Then $S(n) \leq \gamma c^{n-1}$. Moreover, at least $0.4 S(n)$ of these prototypes $(A, f)$ satisfy $v_n \notin A$. \end{lemma} \begin{proof} As in the estimation of $T(n)$, we use induction and write a recursive formula for $S$. Let $S(x) = 0$ for $x \leq 0$. We start with calculating $S(1)$, $S(2)$ and $S(3)$ manually. If $n=1$: if $v_1 \in A$ we have $f(v_1) \in \{j-1, j, j+1\}$ and if $v_1 \notin A$ we have $f(v_1) \in \{j-1, j\}$, thus $S(1) = 5 \leq \gamma$ and $2 = 0.4S(1)$ of these prototypes satisfy $v_1 \notin A$. If $n=2$, we consider several cases, as in the calculation of $T(2)$. If $v_1 \in A$, we have $f(v_1) \in \{j-1, j, j+1\}$, thus $3 \cdot S(1)$ possibilities, and out of them $3 \cdot 2$ possibilities satisfy $v_2 \notin A$. If $A = \{v_0, v_2\}$ we have $f(v_2) \in \{j-1, j, j+1\}$, $3$ possibilities. If $A = \{v_0\}$ we have $f(v_2) \in \{j-2, j-1, j, j+1\}$, $4$ possibilities. In total, $S(2) = 15 + 3 + 4 = 22 \leq \gamma c$ and $3 \cdot 2 + 4 > 0.4S(2)$ of these prototypes satisfy $v_2 \notin A$. If $n=3$, we proceed similarly. If $v_1 \in A$, we have $f(v_1) \in \{j-1, j, j+1\}$, thus $3 \cdot S(2)$ possibilities, and out of them $3 \cdot 10$ possibilities satisfy $v_3 \notin A$. If $v_1 \notin A$ but $v_2 \in A$ we have $f(v_2) \in \{j-1, j, j+1\}$, $3 \cdot S(1)$ possibilities, and out of them $3 \cdot 2$ possibilities satisfy $v_3 \notin A$. If $A = \{v_0, v_3\}$ we have $f(v_3) \in \{j-2, j-1, j, j+1, j+2\}$, $5$ possibilities. If $A = \{v_0\}$ we have $f(v_3) \in \{j-3, j-2, j-1, j, j+1, j+2\}$, $6$ possibilities. In total $S(3) = 3 \cdot 22 + 3 \cdot 5 + 5 + 6 = 92 \leq \gamma c^2$, and $3 \cdot 10 + 3 \cdot 2 + 6 = 42 > 0.4S(3)$ of these prototypes satisfy $v_3 \notin A$. Let us assume $n \geq 4$. If $A = \{v_0\}$, we have $j - n \leq f(v_n) \leq j + n - 1$, giving $2n$ possible prototypes, all of which satisfy $v_n \notin A$. 
Otherwise let $k(A)$ be the smallest positive integer such that $v_{k(A)} \in A$. Let us once again count the number of prototypes $(A, f)$ such that $k(A) = k$ for fixed $k$. As in the estimate of $T(n)$, we have $3$ possible values for $f(v_k)$ when $k=1$ and $(2k-1)$ possible values when $k > 1$. For $k < n$ there are $S(n-k)$ possible ways to choose $A \setminus \{v_0, v_1, \ldots, v_{k-1}\}$ and $f|_{A \setminus \{v_0, v_1, \ldots, v_{k-1}\}}$ and $1$ way if $k = n$. Moreover, for $k < n$, at least $0.4S(n-k)$ of the choices satisfy $v_n \notin A$. Therefore: \begin{equation*} S(n) = 2n-1 + 2n + 2S(n-1) + \sum_{k=1}^{n-1}(2k-1)S(n-k) \end{equation*} And at least \begin{equation*} 2n + 0.4\left(2S(n-1) + \sum_{k=1}^{n-1}(2k-1)S(n-k)\right) \geq 0.4S(n) \end{equation*} of these prototypes satisfy $v_n \notin A$. For $n \geq 4$ we have $4n-1 \leq \frac{15}{\gamma c^3} \cdot \gamma c^{n-1}$, so using Corollary \ref{cor:suma357} we obtain: \begin{equation*} S(n) \leq \gamma c^{n-1} \end{equation*} \end{proof} \begin{lemma}\label{lem:Spn} Let $G$ be a path of length $n+1$ --- a graph with $V = \{v_0, v_1, v_2, \ldots, v_n\}$, $B = \{v_0, v_n\}$ and $E = \{(v_i, v_{i+1}): 0 \leq i < n\}$. Fix any $j \in \ensuremath{\mathbb{Z}}$. Let $S'(n)$ be the number of prototypes $(A, f)$ satisfying $v_0 \notin A$ and $f(v_0) = j$. Then $S'(n) \leq \gamma c^{n-1}$. Moreover, at least $0.4 S'(n)$ of these prototypes $(A, f)$ satisfy $v_n \notin A$. \end{lemma} \begin{proof} Similarly to the estimate of $T'$, we write a formula bounding $S'$ in terms of $S$ and use the already proved bounds for $S$. We start with calculating $S'(1)$ and $S'(2)$ manually. If $n=1$ we have $f(v_1) \in \{j, j+1\}$ if $v_1 \in A$ and $f(v_1) \in \{j-1, j, j+1\}$ if $v_1 \notin A$, thus $S'(1) = 5 \leq \gamma$ and $3 > 0.4S'(1)$ of these prototypes satisfy $v_1 \notin A$. If $n=2$ we consider several cases. 
If $v_1 \in A$ we have $f(v_1) \in \{j, j+1\}$, thus $2 \cdot S(1)$ possibilities, and out of them $2 \cdot 2$ possibilities satisfy $v_2 \notin A$. If $A = \{v_2\}$ we have $f(v_2) \in \{j-1, j, j+1, j+2\}$, $4$ possibilities. If $A = \emptyset$ we have $f(v_2) \in \{j-2, j-1, j, j+1, j+2\}$, $5$ possibilities. In total $S'(2) = 2 \cdot 5 + 4 + 5 = 19 \leq \gamma c$, and $2 \cdot 2 + 5 = 9 > 0.4S'(2)$ of these prototypes satisfy $v_2 \notin A$. Let us assume $n \geq 3$. If $A = \emptyset$, we have $j - n \leq f(v_n) \leq j + n$, giving $2n+1$ possible prototypes, all satisfying $v_n \notin A$. Otherwise let $k(A)$ be the smallest positive integer such that $v_{k(A)} \in A$. Let us once again count the number of prototypes $(A, f)$ such that $k(A) = k$ for fixed $k$. As in the estimate of $T'(n)$, we have $2k$ possible values for $f(v_k)$. For $k < n$ there are $S(n-k)$ possible ways to choose $A \setminus \{v_0, v_1, \ldots, v_{k-1}\}$ and $f|_{A \setminus \{v_0, v_1, \ldots, v_{k-1}\}}$ and $1$ way if $k = n$. Moreover, for $k < n$, at least $0.4S(n-k)$ of the choices satisfy $v_n \notin A$. Therefore: \begin{equation*} S'(n) \leq 2n+1 + 2n + \sum_{k=1}^\infty 2kS(n-k) \end{equation*} and at least \begin{equation*} 2n+1 + 0.4\sum_{k=1}^\infty 2kS(n-k)\geq 0.4 S'(n) \end{equation*} of these prototypes satisfy $v_n \notin A$. For $n \geq 3$ we have $4n+1 \leq \frac{13}{\gamma c^2} \cdot \gamma c^{n-1}$. Using Corollary \ref{cor:suma246} we obtain \begin{equation*} S'(n) \leq \gamma c^{n - 1} \end{equation*} \end{proof} Let us proceed to the final lemma in this proof. By $B_0 \subseteq V$ we denote the set consisting of the root $v_r$ and all vertices with at least two children in $G$, i.e., vertices of degree at least $3$. Recall that $v_r$ has degree $1$. \begin{lemma} Let $v_r$ be a degree-$1$ root of an $n$-vertex graph $G=(V, E)$ and let $B = B_0$. Assume that $G$ is not a path. Fix $j \in \ensuremath{\mathbb{Z}}$. 
Then both the number of prototypes $(A, f)$ with $f(v_r) = j$, $v_r \in A$ and the number of prototypes $(A, f)$ with $f(v_r) = j$, $v_r \notin A$ are at most $\delta c^{n-2}$, where $\delta = \sqrt{0.6 \alpha^2 + 0.4 \beta^2}$. \end{lemma} \begin{proof} We prove it by induction over $n = |V|$. Let $v$ be the vertex of $B_0$ other than $v_r$ that is closest to $v_r$ ($v$ exists, as $G$ is not a path). Let $P$ be the path from $v$ to $v_r$, including $v$ and $v_r$, and let $|P|$ be the number of vertices on $P$. Due to Lemma \ref{lem:Sn} and Lemma \ref{lem:Spn}, there are at most $\gamma c^{|P|-2}$ ways to choose $(A \cap P, f|_{(A \cup B) \cap P})$, and at least $0.4$ of these possibilities satisfy $v \notin A$. Let us now fix one such choice. Let $G_1$, $G_2$, \ldots, $G_k$ be the connected components of $G$ with $P$ removed. Let $V_i$ be the set of vertices of $G_i$ and $B_i = B \cap V_i$. For each $1 \leq i \leq k$, we bound the number of possible choices for $(A \cap V_i, f|_{(A \cup B) \cap V_i})$. If $B_i = \emptyset$ (equivalently, $G_i$ is a path) then one can choose $(A \cap V_i, f|_{(A \cup B) \cap V_i})$ in $T(|V_i|) \leq \alpha c^{|V_i|-1}$ or $T'(|V_i|) \leq \beta c^{|V_i|-1}$ ways, depending on whether $v= v_0 \in A$ or $v = v_0 \notin A$ (we use here Lemma \ref{lem:Tn} or Lemma \ref{lem:Tpn} for $v_0 = v$ and $\{v_1, v_2, \ldots, v_{|V_i|}\} = V_i$). Otherwise, we use the inductive assumption for $G_i$ with added root $v$. In this case we have at most $\delta c^{|V_i|-1}$ possibilities to choose $(A \cap V_i, f|_{(A \cup B) \cap V_i})$. Let $\mathcal{B} = \{1 \leq i \leq k: B_i = \emptyset\}$, and $\mathcal{A} = \{1, 2, \ldots, k\} \setminus \mathcal{B}$. 
If $v \in A$, the number of choices for all graphs $G_i$ is bounded by: \begin{equation*} \left( \prod_{i \in \mathcal{A}} \delta c^{|V_i|-1} \right) \cdot \left(\prod_{i \in \mathcal{B}} \alpha c^{|V_i| - 1}\right) = \delta^{|\mathcal{A}|}\alpha^{|\mathcal{B}|} c^{n-|P|-k} \end{equation*} If $v \notin A$, the number of choices for all graphs $G_i$ is bounded by: \begin{equation*} \left( \prod_{i \in \mathcal{A}} \delta c^{|V_i|-1} \right) \cdot \left(\prod_{i \in \mathcal{B}} \beta c^{|V_i| - 1}\right) = \delta^{|\mathcal{A}|}\beta^{|\mathcal{B}|} c^{n-|P|-k} \end{equation*} Therefore, as $\alpha \geq \beta$, the total number of prototypes for $G$ is bounded by \begin{equation*} \gamma c^{|P|-2} \delta^{|\mathcal{A}|} c^{n-|P|-k} \left(0.6 \alpha^{|\mathcal{B}|} + 0.4 \beta^{|\mathcal{B}|}\right) = c^{n-2} \left(\gamma c^{-k} \delta^{|\mathcal{A}|} \left(0.6 \alpha^{|\mathcal{B}|} + 0.4 \beta^{|\mathcal{B}|}\right)\right) \end{equation*} Note that $\delta \gamma \leq c^2$. If $|\mathcal{B}| \leq 1$ we have, using that $k \geq 2$ and $0.6\alpha +0.4\beta \leq \delta \leq c$: \begin{equation*} \gamma c^{-k} \delta^{|\mathcal{A}|} \left(0.6 \alpha^{|\mathcal{B}|} + 0.4 \beta^{|\mathcal{B}|}\right) \leq \gamma c^{-k} \delta^{k} \leq \delta. \end{equation*} Otherwise, if $|\mathcal{B}| \geq 2$ we have, as $\beta \leq \alpha \leq c$ and $\delta \leq c$: \begin{align*} \gamma c^{-k} \delta^{|\mathcal{A}|} \left(0.6 \alpha^{|\mathcal{B}|} + 0.4 \beta^{|\mathcal{B}|}\right) &\leq \gamma c^{-k} \delta^{|\mathcal{A}|} \left(0.6 \alpha^{|\mathcal{B}|} + 0.4 \alpha^{|\mathcal{B}|-2} \beta^2\right) \\ &= \gamma c^{-k}\delta^{|\mathcal{A}|} \alpha^{|\mathcal{B}|-2} \delta^2 \leq \delta. \end{align*} Thus the bound is proven. \end{proof} \begin{corollary} The number of all prototypes satisfying $f(v_r) \in \{1,2, \ldots, N\}$ is at most $N \cdot \max(\alpha, \delta) \cdot c^{n-2} = O(Nc^n)$. 
\end{corollary} To finish up the proof of Theorem \ref{thm:20n2}, we need to show the following lemma. \begin{lemma} Fix $B = B_0$. All prototypes can be generated in polynomial space and in $O^*(Nc^n)$ time. \end{lemma} \begin{proof} We assume that $G=(V,E)$ is a tree rooted at $v_r$. Otherwise, we may take any spanning tree of $G$, generate all prototypes for this tree, and finally, for each prototype in the spanning tree, check if it is a prototype in the original graph $G$ too. First we guess $f(v_r)$ and the set $A$. Then we go in the root-to-leaves order in $G$ and guess the values of $f$ for vertices in $A \cup B$. Whenever we encounter a vertex $v \in A \cup B$ we look at its closest predecessor $w \in A \cup B$. Let $d$ be the distance between $v$ and $w$. We iterate over all possibilities $f(v) \in \{f(w) -d, f(w)-d+1, \ldots, f(w)+d\}$; however, the following options are forbidden due to the conditions for the bucket extension: \begin{itemize} \item if $v \in A$, $w \in A$ and $d > 1$ then $f(v) = f(w)-d$ and $f(v)=f(w)+d$ are forbidden; \item if $v \in A$ and $w \notin A$ then $f(v) = f(w) -d$ is forbidden; \item if $v \notin A$ and $w \in A$ then $f(v) = f(w)+d$ is forbidden. \end{itemize} Since every branch in our search ends up with a valid prototype, the algorithm takes $O^*(Nc^n)$ time. In memory, we keep only the recursion stack of the search algorithm, and therefore we use polynomial space. \end{proof} \section{Omitted proofs}\label{a:proofs} \begin{proof}[Proof of Lemma \ref{lem:extpoly}] We construct all valid bucket extensions by a brute-force search. We start with $f'=f$ and $B=A$. At each step we have $A \subseteq B \subseteq V$ and $f' \colon B \to \ensuremath{\mathbb{Z}}$ such that $f'|_A = f$ and there exists a bucket extension ${\bar{f}}$ of $(A, f)$ such that ${\bar{f}}|_B = f'$. 
We take any $v \in V \setminus B$ such that there exists a neighbour $w$ of $v$ that belongs to $B$, and try to assign $f'(v) = f'(w) + \varepsilon$, for each $\varepsilon \in \{-1, 0, 1\}$. At every step, we use the algorithm from Lemma \ref{lem:checkext} to check whether $f'$ can be extended to a valid bucket extension of $(A, f)$. This check ensures that every branch in our search algorithm ends up with a bucket extension. Therefore we generate all bucket extensions with polynomial delay and in polynomial space. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:najwazniejsze-dist}] First, assume $\pi$ has distortion at most $d$. Then for each $uv \in E$ we have $|\pi(u) - \pi(v)| \leq d$. Since segments are of size $d+1$, this implies that $|\ensuremath{\mathtt{segment}}(u) - \ensuremath{\mathtt{segment}}(v)| \leq 1$. Moreover, the distance between positions of the same color in consecutive segments is exactly $d+1$, which implies that if $\ensuremath{\mathtt{segment}}(u) + 1 = \ensuremath{\mathtt{segment}}(v)$ then $\ensuremath{\mathtt{color}}(u) > \ensuremath{\mathtt{color}}(v)$. In the other direction, assume that for some $u, v \in V$ we have $k = d_G(u, v)$ and $|\pi(u) - \pi(v)| > dk$. Let $u = u_0, u_1, \ldots, u_k = v$ be the path of length $k$ between $u$ and $v$. Then, for some $0 \leq i < k$ we have $|\pi(u_{i+1}) - \pi(u_i)| > d$. This implies that $\ensuremath{\mathtt{segment}}(u_{i+1}) \neq \ensuremath{\mathtt{segment}}(u_i)$; w.l.o.g. assume that $\ensuremath{\mathtt{segment}}(u_i) + 1 = \ensuremath{\mathtt{segment}}(u_{i+1})$. However, since consecutive positions of the same color are in distance $d+1$, this implies that $\ensuremath{\mathtt{color}}(u_i) \leq \ensuremath{\mathtt{color}}(u_{i+1})$, a contradiction. \end{proof} \end{document}
\begin{document} \leftmargini=2em \title{A characterisation of algebraic exactness} \author{Richard Garner} \address{Department of Computing, Macquarie University, NSW 2109, Australia} \email{[email protected]} \subjclass[2010]{} \date{\today} \begin{abstract} An \emph{algebraically exact} category is one that admits all of the limits and colimits which every variety of algebras possesses and every forgetful functor between varieties preserves, and which verifies the same interactions between these limits and colimits as hold in any variety. Such categories were studied by Ad{\'a}mek, Lawvere and Rosick{\'y}: they characterised them as the categories with small limits and sifted colimits for which the functor taking sifted colimits is continuous. They conjectured that a complete and sifted-cocomplete category should be algebraically exact just when it is Barr-exact, finite limits commute with filtered colimits, regular epimorphisms are stable by small products, and filtered colimits distribute over small products. We prove this conjecture. \end{abstract} \thanks{The support of the Australian Research Council and DETYA is gratefully acknowledged} \maketitle \section{Introduction} The notion of an \emph{algebraically exact} category was introduced and studied in the series of papers~\cite{Adamek2001How-algebraic,Adamek2001On-algebraically,Adamek2004Toward}. A category ${\mathcal C}$ is said to be algebraically exact if, firstly, it admits all of the operations ${\mathcal C}^{\mathcal A} \to {\mathcal C}$ of small arity which every variety of (finitary, many-sorted) algebras supports and every forgetful functor between varieties preserves, and secondly, it obeys all of the equations between such operations as are satisfied in every variety. 
Any variety admits small limits and sifted colimits, and every forgetful functor between varieties preserves them; recall from~\cite{Adamek2001On-sifted} that \emph{sifted colimits} are those which commute with finite products in $\cat{Set}$, most important amongst these being the filtered colimits, and the coequalisers of reflexive pairs. It follows that any algebraically exact category also admits small limits and sifted colimits; and it turns out that these two kinds of operations in fact generate all of those required of an algebraically exact category. As regards the equations that hold between these operations, we observe that in any variety, the following four exactness properties are verified: \begin{enumerate}[(E1)] \item Regular epimorphisms are stable under pullback, and equivalence relations are effective (i.e., the category is Barr-exact); \item Finite limits commute with filtered colimits; \item Regular epimorphisms are stable by small products; \item Filtered colimits distribute over small products. \end{enumerate} It follows that these same conditions are verified in any algebraically exact category, and it was conjectured in~\cite{Adamek2001How-algebraic} that, in fact, these four conditions completely characterise the algebraically exact categories amongst those categories with small limits and sifted colimits. The conjecture was proved in~\cite{Adamek2001On-algebraically} for the case of cocomplete categories with a regular generator, and in~\cite{Adamek2004Toward} for the case of categories with finite coproducts; the purpose of this article is to prove it in its full generality. We shall do so using techniques developed in~\cite{Garner2011Lex-colimits}, though the arguments are straightforward enough that we can reproduce them in full here, so making this article entirely self-contained. In order to state the conjecture more precisely, we will make use of a different description of the algebraically exact categories. 
We recall from~\cite{Adamek2001On-sifted} the construction which to every locally small category ${\mathcal C}$ assigns its free completion $\mathcal S \mathrm{ind}({\mathcal C})$ under sifted colimits. As in~\cite[Theorem 5.35]{Kelly1982Basic}, we may obtain $\mathcal S \mathrm{ind}({\mathcal C})$ as the closure of the representables in $[{\mathcal C}^\mathrm{op}, \cat{Set}]$ under sifted colimits, and now the restricted Yoneda embedding $W \colon {\mathcal C} \to \mathcal S \mathrm{ind}({\mathcal C})$ provides the unit at ${\mathcal C}$ of a Kock-Z\"oberlein pseudomonad~\cite{Kock1995Monads} on $\cat{CAT}$, whose pseudoalgebras are the sifted-cocomplete categories. Thus a category ${\mathcal C}$ admits sifted colimits just when $W \colon {\mathcal C} \to \mathcal S \mathrm{ind}({\mathcal C})$ admits a left adjoint. It was shown in~\cite[Theorem 3.11]{Adamek2001How-algebraic} that if ${\mathcal C}$ is complete, then so too is $\mathcal S \mathrm{ind}({\mathcal C})$; that if $F \colon {\mathcal C} \to {\mathcal D}$ is a continuous functor between complete categories, then so too is $\mathcal S \mathrm{ind}(F)$; and that the unit ${\mathcal C} \to \mathcal S \mathrm{ind}({\mathcal C})$ and multiplication $\mathcal S \mathrm{ind}(\mathcal S \mathrm{ind}({\mathcal C})) \to \mathcal S \mathrm{ind}({\mathcal C})$ are always continuous functors. It follows that the pseudomonad $\mathcal S \mathrm{ind}$ restricts and corestricts to one on $\cat{CONTS}$, the $2$-category of complete categories and continuous functors; and it was shown in~\cite[Corollary 4.4]{Adamek2001How-algebraic} that the pseudoalgebras for this restricted pseudomonad are precisely the algebraically exact categories described above. Thus a complete and sifted-cocomplete category ${\mathcal C}$ is algebraically exact just when $W \colon {\mathcal C} \to \mathcal S \mathrm{ind}({\mathcal C})$ admits a left adjoint which is \emph{continuous}. 
For the purposes of this paper, we will take this last as our definition of an algebraically exact category; and our goal, then, is to prove: \begin{Thm}\label{totalthm} A complete and sifted-cocomplete category ${\mathcal C}$ is algebraically exact just when it satisfies conditions (E1)--(E4). \end{Thm} In fact, as remarked above, any algebraically exact category does indeed satisfy (E1)--(E4); and so our task is to show that these conditions in turn imply algebraic exactness. \section{The result} The basic idea behind the proof of Theorem~\ref{totalthm} is to show that any category ${\mathcal C}$ satisfying (E1)--(E4) admits a full structure-preserving embedding into some ${\mathcal E}$ which is an essential localisation of a presheaf topos. Any such ${\mathcal E}$ will be algebraically exact; and now we may reflect this property along the full embedding, so concluding that ${\mathcal C}$ itself is algebraically exact. This argument does not quite work as it stands, for reasons of size. The ${\mathcal E}$ into which we would like to embed is a topos of sheaves on ${\mathcal C}$, but only when ${\mathcal C}$ is small may such a topos be constructed; in which situation, with ${\mathcal C}$ being small, and also small-complete, it is necessarily a preorder, which is far too restrictive. To overcome this problem, we will first prove a variant of Theorem~\ref{totalthm}, in which suitable bounds have been introduced on the size of the limits and colimits required, and then deduce the general result from this. Our cardinality bounds will be governed by an infinite regular cardinal $\kappa$. Given any such $\kappa$, we define $\kappa'$ to be the cardinal $(\Sigma_{\gamma < \kappa} 2^{\gamma})^+$, and the pair $(\kappa, \kappa')$ now has the property that whenever $\mu < \kappa$ and $\lambda < \kappa'$, we have $\lambda^\mu < \kappa'$: see~\cite[Proposition 2.3.5]{Makkai1989Accessible}. 
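For instance, taking $\kappa = \aleph_0$ we obtain
\begin{equation*}
\kappa' = \Big(\textstyle\sum_{\gamma < \aleph_0} 2^{\gamma}\Big)^+ = \aleph_0^+ = \aleph_1\rlap{ ,}
\end{equation*}
and indeed whenever $\mu < \aleph_0$ (so that $\mu$ is finite) and $\lambda < \aleph_1$ (so that $\lambda$ is countable), we have $\lambda^\mu \leq \aleph_0 < \aleph_1$.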
By a $\kappa$-limit we shall mean one indexed by a diagram of cardinality $< \kappa$, and we attach a corresponding meaning to the term $\kappa'$-colimit. We shall now describe a variant of the notion of algebraic exactness, which we term \emph{$\kappa$-algebraic exactness}, that deals only with $\kappa$-limits and $\kappa'$-colimits. \looseness=-1 There is a slight delicacy here as to the kinds of $\kappa'$-colimit we will consider. The obvious choice would be the sifted $\kappa'$-colimits---which we emphasise means the $\kappa'$-small sifted colimits, and \emph{not} the colimits which commute in $\cat{Set}$ with $\kappa'$-small products---but this choice is in fact inappropriate. It follows from~\cite[Proposition 5.1]{Adamek2001On-algebraically} that if ${\mathcal C}$ is complete then $\mathcal S \mathrm{ind}({\mathcal C})$ is the closure of the representables in $[{\mathcal C}^\mathrm{op}, \cat{Set}]$ under reflexive coequalisers and filtered colimits, so that a complete ${\mathcal C}$ admits sifted colimits just when it admits reflexive coequalisers and filtered colimits. When we bound the cardinality of our colimits, it turns out to be the reflexive coequalisers together with the filtered $\kappa'$-colimits which are relevant, and not the sifted $\kappa'$-colimits; recall from~\cite{Adamek2010What} that the latter class of colimits is in general \emph{strictly} larger. We consider the $2$-category $\kappa\text-\cat{CONTS}$ of $\kappa$-complete categories and $\kappa$-continuous functors between them; on this, we will describe a pseudomonad whose pseudoalgebras will be the $\kappa$-algebraically exact categories we seek to define. Observe first that as well as the pseudomonad $\mathcal S \mathrm{ind}$ on $\cat{CAT}$ we also have the pseudomonad ${\mathcal P}$ which freely adds small colimits. 
Proposition 4.3 and Remark 6.6 of~\cite{Day2007Limits} prove that if ${\mathcal C}$ is $\kappa$-complete, then so is ${\mathcal P} {\mathcal C}$; that if $F \colon {\mathcal C} \to {\mathcal D}$ is a $\kappa$-continuous functor between such categories, then so is ${\mathcal P} F$; and that ${\mathcal P}$'s unit and multiplication are always $\kappa$-continuous. Thus we may restrict and corestrict ${\mathcal P}$ to a pseudomonad on $\kappa\text-\cat{CONTS}$; and the pseudomonad of interest to us will be a submonad of this, defined as follows. For each ${\mathcal C}$ in $\kappa\text-\cat{CONTS}$, we let $\mathcal S_{\kappa'}({\mathcal C})$ denote the closure of ${\mathcal C}$ in ${\mathcal P}{\mathcal C}$ under $\kappa$-limits, reflexive coequalisers, and filtered $\kappa'$-colimits, and let $V \colon {\mathcal C} \to \mathcal S_{\kappa'}({\mathcal C})$ denote the restricted Yoneda embedding. Now~\cite[Proposition 3.1]{Garner2011Lex-colimits} ensures that this $V$ provides the unit at ${\mathcal C}$ of a Kock-Z\"oberlein pseudomonad on $\kappa\text-\cat{CONTS}$; and a $\kappa$-algebraically exact category will be, by definition, a pseudoalgebra for this pseudomonad. In other words, a $\kappa$-complete category ${\mathcal C}$ is \emph{$\kappa$-algebraically exact} just when the embedding $V \colon {\mathcal C} \to \mathcal S_{\kappa'}({\mathcal C})$ admits a $\kappa$-continuous left adjoint. Observe that this implies that ${\mathcal C}$ has reflexive coequalisers and filtered $\kappa'$-colimits, but may not imply that it has all sifted $\kappa'$-colimits; this is in accordance with the remarks of the preceding paragraph. We shall now prove the following refinement of Theorem~\ref{totalthm}. 
\begin{Thm}\label{mainthm} A category ${\mathcal C}$ with $\kappa$-limits, reflexive coequalisers and filtered $\kappa'$-colimits is $\kappa$-algebraically exact just when: \begin{enumerate}[(E1')] \item It is Barr-exact; \item Finite limits commute with filtered $\kappa'$-colimits; \item Regular epimorphisms are stable by $\kappa$-small products; \item Filtered $\kappa'$-colimits distribute over $\kappa$-small products. \end{enumerate} \end{Thm} Clearly, a complete and sifted-cocomplete ${\mathcal C}$ satisfies (E1')--(E4') for each regular $\kappa$ if and only if it satisfies (E1)--(E4). On the other hand, we have: \begin{Prop}\label{propreduce} A complete and sifted-cocomplete category ${\mathcal C}$ is algebraically exact if and only if it is $\kappa$-algebraically exact for each regular $\kappa$. \end{Prop} By virtue of this Proposition and the comment preceding it, we may prove Theorem~\ref{totalthm} by proving Theorem~\ref{mainthm}, and then taking the conjunction of all its instances as $\kappa$ ranges across the small regular cardinals. \begin{proof}[Proof of Proposition~\ref{propreduce}] For every $\kappa$, we observe that $\mathcal S \mathrm{ind}({\mathcal C})$ is closed under $\kappa$-limits, reflexive coequalisers and filtered $\kappa'$-colimits in $[{\mathcal C}^\mathrm{op}, \cat{Set}]$; whence $\mathcal S_{\kappa'}({\mathcal C}) \subset \mathcal S \mathrm{ind}({\mathcal C})$ with the inclusion preserving all $\kappa$-limits. Hence if $W \colon {\mathcal C} \to \mathcal S \mathrm{ind}({\mathcal C})$ admits a continuous left adjoint, then by restriction each $V \colon {\mathcal C} \to \mathcal S_{\kappa'}({\mathcal C})$ will admit a $\kappa$-continuous left adjoint. Conversely, suppose that each $V \colon {\mathcal C} \to \mathcal S_{\kappa'}({\mathcal C})$ admits a $\kappa$-continuous left adjoint. 
As observed above, since ${\mathcal C}$ is complete, it follows by~\cite[Proposition 5.1]{Adamek2001On-algebraically} that $\mathcal S \mathrm{ind}({\mathcal C})$ is the closure of the representables in $[{\mathcal C}^\mathrm{op}, \cat{Set}]$ under reflexive coequalisers and filtered colimits. But it is easy to see that the collection of $\varphi \in \mathcal S \mathrm{ind}({\mathcal C})$ which lie in some $\mathcal S_{\kappa'}({\mathcal C})$ contains the representables and is closed under reflexive coequalisers and filtered colimits, and so must be all of $\mathcal S \mathrm{ind}({\mathcal C})$; which is to say that $\mathcal S \mathrm{ind}({\mathcal C}) = \bigcup_\kappa \mathcal S_{\kappa'}({\mathcal C})$. Thus, since each $V \colon {\mathcal C} \to \mathcal S_{\kappa'}({\mathcal C})$ admits a left adjoint, so too does $W \colon {\mathcal C} \to \mathcal S \mathrm{ind}({\mathcal C})$, and it remains to show that this left adjoint is continuous. Given a small diagram $D \colon {\mathcal I} \to \mathcal S \mathrm{ind}({\mathcal C})$, we may choose a regular cardinal $\kappa$ such that $DI \in \mathcal S_{\kappa'}({\mathcal C})$ for each $I \in {\mathcal I}$ and also $\abs{{\mathcal I}} < \kappa$; now the diagram $D$ factors as $D' \colon {\mathcal I} \to \mathcal S_{\kappa'}({\mathcal C})$, and the left adjoint of ${\mathcal C} \to \mathcal S_{\kappa'}({\mathcal C})$ preserves the limit of $D'$: from which it follows that the left adjoint of $W$ preserves that of $D$, as required. \end{proof} We now prove Theorem~\ref{mainthm} for the case of a small ${\mathcal C}$. Given such a ${\mathcal C}$ satisfying the conditions of the theorem, we shall embed it into a $\kappa$-algebraically exact category via a functor preserving $\kappa$-limits, reflexive coequalisers and filtered $\kappa'$-colimits. It will then follow that ${\mathcal C}$ is $\kappa$-algebraically exact by virtue of the following result. 
\begin{Prop}\label{embedding} Let $J \colon {\mathcal C} \to {\mathcal E}$ be fully faithful; suppose moreover that ${\mathcal C}$ has, and that $J$ preserves, $\kappa$-limits, reflexive coequalisers and filtered $\kappa'$-colimits, and that ${\mathcal E}$ is $\kappa$-algebraically exact. Then ${\mathcal C}$ is also $\kappa$-algebraically exact. \end{Prop} \begin{proof} Because ${\mathcal E}$ is $\kappa$-algebraically exact, the functor $J$ admits a left Kan extension \begin{equation*}\cd[@-0.5em]{ {\mathcal C} \ar[d]_V \ar[r]^J \twocong[0.3]{dr}{} & {\mathcal E}\\ \mathcal S_{\kappa'}({\mathcal C}) \ar[ur]_{\Lan_V J} & {} } \end{equation*} along $V$, which may be calculated as the composite \begin{equation*} \mathcal S_{\kappa'}({\mathcal C}) \xrightarrow{\mathcal S_{\kappa'}(J)} \mathcal S_{\kappa'}({\mathcal E}) \xrightarrow{\quad L \quad} {\mathcal E} \end{equation*} with $L$ the $\kappa$-continuous left adjoint of $V \colon {\mathcal E} \to \mathcal S_{\kappa'}({\mathcal E})$. Now $\mathcal S_{\kappa'}(J)$ is an algebra morphism between free $\mathcal S_{\kappa'}$-algebras, and as such, preserves $\kappa$-limits, reflexive coequalisers and filtered $\kappa'$-colimits; whilst $L$ preserves all colimits, being a left adjoint. It follows that $\Lan_V J$, like $J$, preserves $\kappa$-limits, reflexive coequalisers and filtered $\kappa'$-colimits; whence the collection of $\varphi \in \mathcal S_{\kappa'}({\mathcal C})$ for which $\Lan_V J$ lands in the essential image of $J$ contains the representables and is closed under $\kappa$-limits, reflexive coequalisers and filtered $\kappa'$-colimits, and so must be all of $\mathcal S_{\kappa'}({\mathcal C})$. 
Hence $\Lan_V J$ factors through $J$, up-to-isomorphism; and the factorisation $\mathcal S_{\kappa'}({\mathcal C}) \to {\mathcal C}$ so induced, which is clearly $\kappa$-continuous, may also be shown to be left adjoint to $V \colon {\mathcal C} \to \mathcal S_{\kappa'}({\mathcal C})$, so that ${\mathcal C}$ is indeed $\kappa$-algebraically exact. \end{proof} Given a small, $\kappa$-complete ${\mathcal C}$, admitting reflexive coequalisers and filtered $\kappa'$-colimits, and satisfying (E1')--(E4'), we now exhibit an embedding of the above form; as anticipated at the start of this section, it will in fact be an embedding into a topos. We consider the smallest topology on ${\mathcal C}$ for which all regular epimorphisms are covering, and for which the colimit injections into each filtered $\kappa'$-colimit are covering. (E1') and (E2') ensure that this topology is subcanonical and so we have a full embedding $J \colon {\mathcal C} \to \cat{Sh}({\mathcal C})$. \begin{Prop} The full embedding $J \colon {\mathcal C} \to \cat{Sh}({\mathcal C})$ preserves $\kappa$-limits, reflexive coequalisers and filtered $\kappa'$-colimits. \end{Prop} \begin{proof} Clearly $J$ preserves all limits that exist, so in particular $\kappa$-limits. It also preserves regular epimorphisms, since the given topology contains the regular one, and we will show below that it preserves filtered $\kappa'$-colimits. It will then follow that it preserves reflexive coequalisers too, since in ${\mathcal C}$ and in $\cat{Sh}({\mathcal C})$, we may exploit (E1') and (E2') to construct such coequalisers from finite limits, countable filtered colimits and coequalisers of equivalence relations, all of which are preserved by $J$; the argument is standard and given in precisely the form we need in~\cite[Theorem 2.6]{Adamek2004Toward}. It remains to show that $J$ preserves filtered $\kappa'$-colimits. 
Observe that if $(p_k \colon Dk \to X \mid k \in {\mathcal K})$ is such a colimit in ${\mathcal C}$, then $J$ will preserve it just when every sheaf ${\mathcal C}^\mathrm{op} \to \cat{Set}$ sends it to a limit in $\cat{Set}$. Let $F$ be such a sheaf. Since the family $(p_k \mid k \in {\mathcal K})$ is covering, we may identify $FX$ with the set of matching families for this covering. In other words, if \begin{equation*} \cd[@-0.5em]{ D_{jk} \ar[r]^-{d_{jk}} \ar[d]_{c_{jk}} & Dj \ar[d]^{p_j} \\ Dk \ar[r]_{p_k} & X } \end{equation*} is a pullback for each $j, k \in {\mathcal K}$, then we may identify $FX$ with the set \begin{equation}\label{eq:theset}\tag{$\ast$} \{ \vec x \in \Pi_k FDk \,\mid\, Fd_{jk}(x_j) = Fc_{jk}(x_k) \text{ for all $j, k \in {\mathcal K}$}\}\rlap{ .} \end{equation} Under this identification, the canonical comparison map $FX \to \lim FD$ is just the inclusion between these sets, seen as subobjects of $\Pi_k FDk$, and so injective; it remains to show that it is also surjective. Thus we must show that each $\vec x \in \lim FD$ lies in~\eqref{eq:theset}, or in other words, that $Fd_{jk}(x_j) = Fc_{jk}(x_k)$ for each such $\vec x$ and each $j, k \in {\mathcal K}$. To this end, we consider the category ${\mathcal K}'$ of cospans from $j$ to $k$ in ${\mathcal K}$; since ${\mathcal K}$ is filtered and $\kappa'$-small, it follows easily that ${\mathcal K}'$ is too. We define a functor $E \colon {\mathcal K}' \to {\mathcal C}$ by sending each cospan $f \colon j \to \ell \leftarrow k \colon g$ in ${\mathcal K}'$ to the apex of the pullback square\[ \cd[@-0.5em]{ E(f,g) \ar[r]^-{u_{f,g}} \ar[d]_{v_{f,g}} & Dj \ar[d]^{Df} \\ Dk \ar[r]_{Dg} & D\ell } \] in ${\mathcal C}$. A simple calculation shows that $p_k.v_{f,g} = p_j.u_{f,g}$, so that we have induced maps $q_{f,g} \mathrel{\mathop:}= (u_{f,g}, v_{f,g}) \colon E(f,g) \to D_{jk}$, constituting a cocone $q$ under $E$ with vertex $D_{jk}$. 
We claim that this cocone is colimiting; whereupon, by the preceding part of the argument, the comparison $FD_{jk} \to \lim FE$ induced by $q$ will be monic, and so the family $(Fq_{f,g} \mid (f,g) \in {\mathcal K}')$ jointly monic. Thus in order to verify that $Fd_{jk}(x_j) = Fc_{jk}(x_k)$, and so complete the proof, it will be enough to observe that for each $f \colon j \to \ell \leftarrow k \colon g$ in ${\mathcal K}'$, we have: \begin{align*}Fq_{f,g}(Fd_{jk}(x_j)) &= Fu_{f,g}(x_j) = Fu_{f,g}(FDf(x_\ell))\\ & = Fv_{f,g}(FDg(x_\ell)) = Fv_{f,g}(x_k) \\ &= Fq_{f,g}(Fc_{jk}(x_k))\rlap{ .} \end{align*} It remains to verify that $q$ is colimiting. For this, let $V \colon {\mathcal K}' \to {\mathcal K}$ denote the functor sending a $j,k$-cospan to its central object, and $\iota_1 \colon \Delta j \to V \leftarrow \Delta k \colon \iota_2$ the evident natural transformations. Now we have a commutative cube \begin{equation*} \cd[@[email protected]@C+0.5em]{ E \ar[rr]^u \ar[dd]_v \ar[dr]^{q} & & \Delta Dj \ar[dd]_(0.25){D\iota_1} \ar@{=}[dr] \\ & \Delta(D_{jk}) \ar[rr]_(0.7){\Delta d_{jk}} \ar[dd]^(0.75){\Delta c_{jk}} & & \Delta(Dj) \ar[dd]^{\Delta p_j} \\ \Delta(Dk) \ar[rr]^(0.33){D \iota_2} \ar@{=}[dr] & & DV \ar[dr]^{pV} \\ & \Delta(Dk) \ar[rr]_{\Delta p_k} & & \Delta X } \end{equation*} in $[{\mathcal K}', {\mathcal C}]$; its front and rear faces are pullbacks, and by (E2') will remain so on applying the functor $\mathrm{colim} \colon [{\mathcal K}', {\mathcal C}] \to {\mathcal C}$. To show that $q$ is colimiting is equally to show that it is inverted by $\mathrm{colim}$; for which, by the previous sentence, it is enough to show that $pV$ is likewise inverted. But ${\mathcal K}$'s filteredness implies easily that $V \colon {\mathcal K}' \to {\mathcal K}$ is a final functor, so that $pV$, like $p$, is a colimiting cocone, and so inverted by $\mathrm{colim}$ as required. 
\end{proof} We thus have a full structure-preserving embedding ${\mathcal C} \to \cat{Sh}({\mathcal C})$ and the only thing left to verify is that $\cat{Sh}({\mathcal C})$ is in fact $\kappa$-algebraically exact. The key to doing so is the following proposition. \begin{Prop} If ${\mathcal E}$ is reflective in a presheaf category via a $\kappa$-continuous reflector, then ${\mathcal E}$ is $\kappa$-algebraically exact. \end{Prop} \begin{proof} If ${\mathcal C}$ is small, then ${\mathcal P} {\mathcal C} = [{\mathcal C}^\mathrm{op}, \cat{Set}]$, and now the restricted Yoneda embedding ${\mathcal P} {\mathcal C} \to {\mathcal P} {\mathcal P} {\mathcal C}$ admits a continuous left adjoint ${\mathcal P} {\mathcal P} {\mathcal C} \to {\mathcal P} {\mathcal C}$, this being the multiplication at ${\mathcal C}$ of the pseudomonad ${\mathcal P}$. Since $\mathcal S_{\kappa'}({\mathcal P} {\mathcal C})$ is closed in ${\mathcal P} {\mathcal P} {\mathcal C}$ under $\kappa$-limits, it follows by restriction that ${\mathcal P} {\mathcal C} \to \mathcal S_{\kappa'}({\mathcal P} {\mathcal C})$ admits a $\kappa$-continuous left adjoint; and so every presheaf category is $\kappa$-algebraically exact. Now if ${\mathcal E}$ is reflective in the $\kappa$-algebraically exact $[{\mathcal C}^\mathrm{op}, \cat{Set}]$ via a $\kappa$-continuous reflector, then it is an adjoint retract of $[{\mathcal C}^\mathrm{op}, \cat{Set}]$ in $\kappa\text-\cat{CONTS}$, and so by a standard property of Kock-Z\"oberlein pseudomonads, must itself be $\kappa$-algebraically exact. \end{proof} Thus it is enough to show that $\cat{Sh}({\mathcal C})$ is reflective in $[{\mathcal C}^\mathrm{op}, \cat{Set}]$ via a $\kappa$-continuous reflector. This will be a consequence of the following result, which may be found proven---though with ``small'' harmlessly replacing our ``$\kappa$-small''---in~\cite[Theorem 4.2]{Kelly1989On-the-complete}; we shall not recall the details, since we shall not need them in what follows. 
\begin{Prop} A left exact reflector $L \colon [{\mathcal C}^\mathrm{op}, \cat{Set}] \to {\mathcal E}$ preserves all $\kappa$-small limits if and only if the covering sieves for the corresponding topology are closed under $\kappa$-small intersections in $[{\mathcal C}^\mathrm{op}, \cat{Set}]$. \end{Prop} We are therefore required to show that any $\kappa$-small intersection of covering sieves for the above-defined topology on ${\mathcal C}$ is again covering. Clearly it is sufficient to consider the case where the sieves participating in the intersection are generating ones for the topology. We can decompose any such intersection of sieves as an intersection \begin{equation*} \bigcap_{i \in I} \mathcal S_i \cap \bigcap_{j \in J} {\mathcal T}_j \end{equation*} where each indexing set $I$ and $J$ is $\kappa$-small, each sieve $\mathcal S_i$ is generated by a regular epimorphism $e_i \colon A_i \twoheadrightarrow X$ and each sieve ${\mathcal T}_j$ is generated by a $\kappa'$-small filtered colimit cocone $((q_j)_{x} \colon D_j(x) \to X \mid x \in {\mathcal A}_j)$. Now we can form the $\kappa$-small product $\Pi_i e_i \colon \Pi_i A_i \to \Pi_i X$; by condition (E3') this is a regular epimorphism in ${\mathcal C}$, and by regularity, so also is its pullback $e \colon A \to X$ along the diagonal $X \to \Pi_i X$. Clearly a map $Z \to X$ factors through $e$ just when it factors through each $e_i$, and so the covering sieve $\mathcal S$ generated by $e$ is the intersection $\bigcap_i \mathcal S_i$. In a similar manner, we can form the filtered category $\Pi_j {\mathcal A}_j$; since $\abs{J} < \kappa$, and each $\abs{{\mathcal A}_j} < \kappa'$, we have also that $\abs{\Pi_j {\mathcal A}_j} < \kappa'$. 
Now on considering the diagram $D \colon \Pi_j {\mathcal A}_j \to {\mathcal C}$ defined by $D(x_j \mid j \in J) = \Pi_j D_j(x_j)$, condition (E4') asserts that $\Pi_j X$ is a colimit for it; so that on pulling back along the diagonal $X \to \Pi_j X$, we conclude that $X$ is a colimit for the diagram $D' \colon \Pi_j {\mathcal A}_j \to {\mathcal C}$ which sends $(x_j \mid j \in J)$ to the fibre product of the maps $(q_j)_{x_j} \colon D_j(x_j) \to X$. Now we see as before that the covering sieve ${\mathcal T}$ generated by this filtered $\kappa'$-colimit cocone is precisely $\bigcap_j {\mathcal T}_j$. It follows that $\bigcap_{i} \mathcal S_i \cap \bigcap_{j} {\mathcal T}_j = \mathcal S \cap {\mathcal T}$ is a covering sieve, since covering sieves are always closed under finite intersections, and this completes the proof of: \begin{Prop} If the small, $\kappa$-complete ${\mathcal C}$ with reflexive coequalisers and filtered $\kappa'$-colimits satisfies (E1')--(E4'), then it admits a full structure-preserving embedding into a $\kappa$-algebraically exact category, and so is itself $\kappa$-algebraically exact. \end{Prop} It remains to prove Theorem~\ref{mainthm} for categories of arbitrary size. So let ${\mathcal C}$ be a category with $\kappa$-limits, reflexive coequalisers and filtered $\kappa'$-colimits, satisfying (E1')--(E4'). We call a full, replete subcategory \emph{$\kappa$-closed} if it is closed in ${\mathcal C}$ under the limits and colimits just mentioned. Clearly, each small, $\kappa$-closed subcategory of ${\mathcal C}$ satisfies (E1')--(E4'), and so by the preceding proposition is $\kappa$-algebraically exact. We may now conclude that the same is true of ${\mathcal C}$ by way of the following result. \begin{Prop} A $\kappa$-complete ${\mathcal C}$ admitting reflexive coequalisers and filtered $\kappa'$-colimits is $\kappa$-algebraically exact so long as all of its small $\kappa$-closed subcategories are. 
\end{Prop} \begin{proof} Suppose that each $\kappa$-closed subcategory of ${\mathcal C}$ is $\kappa$-algebraically exact; we must show that ${\mathcal C}$ is too, or in other words, that $V \colon {\mathcal C} \to \mathcal S_{\kappa'}({\mathcal C})$ admits a $\kappa$-continuous left adjoint. To this end, consider the collection of $\varphi \in \mathcal S_{\kappa'}({\mathcal C})$ for which there exists a small $\kappa$-closed $J \colon {\mathcal D} \hookrightarrow {\mathcal C}$ with $\varphi$ lying in the essential image of the fully faithful $\mathcal S_{\kappa'}(J) \colon \mathcal S_{\kappa'}({\mathcal D}) \to \mathcal S_{\kappa'}({\mathcal C})$. It is easy to show that this collection contains the representables and is closed under $\kappa$-limits, reflexive coequalisers and filtered $\kappa'$-colimits, and so is all of $\mathcal S_{\kappa'}({\mathcal C})$. It follows that ${\mathcal C} \to \mathcal S_{\kappa'}({\mathcal C})$ admits a left adjoint, since each ${\mathcal D} \to \mathcal S_{\kappa'}({\mathcal D})$ does by assumption. To show that this left adjoint is moreover $\kappa$-continuous, consider a $\kappa$-small diagram $X \colon {\mathcal I} \to \mathcal S_{\kappa'}({\mathcal C})$. For each $I \in {\mathcal I}$ we can find a small $\kappa$-closed ${\mathcal D}_I \subset {\mathcal C}$ with $XI$ in the essential image of $\mathcal S_{\kappa'}({\mathcal D}_I) \to \mathcal S_{\kappa'}({\mathcal C})$; now taking ${\mathcal D}$ to be the closure of $\bigcup_I {\mathcal D}_I$ in ${\mathcal C}$ under $\kappa$-limits, reflexive coequalisers and filtered $\kappa'$-colimits, we obtain another small $\kappa$-closed subcategory. 
The diagram $X$ factors up-to-isomorphism through the fully faithful $\mathcal S_{\kappa'}({\mathcal D}) \rightarrow \mathcal S_{\kappa'}({\mathcal C})$ as $X' \colon {\mathcal I} \to \mathcal S_{\kappa'}({\mathcal D})$, say; and now by assumption, the left adjoint of ${\mathcal D} \to \mathcal S_{\kappa'}({\mathcal D})$ preserves the limit of $X'$, whence the left adjoint of ${\mathcal C} \to \mathcal S_{\kappa'}({\mathcal C})$ preserves that of $X$, as required. \end{proof} This completes the proof of Theorem~\ref{mainthm} for categories of any size; and now, as discussed previously, taking the conjunction of all instances of this theorem as $\kappa$ ranges over the small regular cardinals completes the proof of Theorem~\ref{totalthm}. \end{document}
At a particular school with 43 students, each student takes chemistry, biology, or both. The chemistry class is three times as large as the biology class, and 5 students are taking both classes. How many people are in the chemistry class? Let $x$ be the number of students in the biology class who aren't in the chemistry class and $y$ be the number of students in the chemistry class who aren't in the biology class. Then, since all students are in either one of the classes or in both, we know that $43=x+y+5$. We also know that $3(x+5)=y+5$. Solving for $y$ in terms of $x$ gives us $y=3x+10$, and substituting that into the first equation gives us $43=x+(3x+10)+5$, which gives us $x=7$. Substituting this into the other equation gives us $y=31$. However, $y$ is only the number of chemistry students who aren't taking biology, so we need to add the number of students taking both to get our final answer of $\boxed{36}$.
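The system above is small enough to check mechanically. The following brute-force verification is ours, not part of the original solution; it searches over the possible counts of biology-only and chemistry-only students and confirms the answer of 36.

```python
# Brute-force check of the word problem: 43 students, each in chemistry,
# biology, or both; chemistry is three times the size of biology; 5 take both.
def solve():
    both = 5
    for x in range(44):          # x = biology-only students
        for y in range(44):      # y = chemistry-only students
            total_ok = x + y + both == 43
            ratio_ok = 3 * (x + both) == y + both  # chemistry = 3 * biology
            if total_ok and ratio_ok:
                return x, y, y + both  # chemistry class size = y + both

x, y, chemistry = solve()
print(x, y, chemistry)  # 7 31 36
```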
Grade 5 Mathematics, Unit 2: Multiplication and Division of Whole Numbers

Topic A: Writing and Interpreting Numerical Expressions
- Evaluate numerical expressions involving addition, subtraction, multiplication, division, and grouping symbols.
- Write expressions that record calculations with numbers, and interpret expressions without evaluating them.
- Write expressions that represent real-world situations and evaluate them.

Topic B: Multi-Digit Whole Number Multiplication (5.NBT.B.5)
- Multiply multiples of powers of ten.
- Estimate multi-digit products by rounding numbers to their largest place value.
- Multiply two-digit, three-digit, and four-digit numbers by one-digit numbers.
- Multiply two-digit numbers by two-digit numbers.
- Multiply three-digit numbers by two-digit numbers.
- Multiply four-digit numbers by two-digit numbers.
- Multiply three- and four-digit numbers by three-digit numbers.
- Multiply multi-digit numbers and assess the reasonableness of the product.

Topic C: Multi-Digit Whole Number Division
- Divide multiples of powers of ten by multiples of ten without remainders.
- Estimate multi-digit quotients by rounding numbers to their largest place value.
- Estimate multi-digit quotients by rounding numbers to compatible numbers.
- Divide two-digit, three-digit, and four-digit dividends by one-digit divisors.
- Divide two- and three-digit dividends by multiples of 10 with one-digit quotients and remainders in the ones place.
- Divide two-digit dividends by two-digit divisors with one-digit quotients and remainders in the ones place.
- Divide three-digit dividends by two-digit divisors with one-digit quotients and remainders in the ones place.
- Divide three-digit dividends by two-digit divisors with two-digit quotients, reasoning about the decomposition of a remainder in any place.
- Divide four-digit dividends by two-digit divisors with two- and three-digit quotients, reasoning about the decomposition of a remainder in any place.
- Divide multi-digit numbers by one- and two-digit divisors and assess the reasonableness of the quotient.
- Solve word problems involving multi-digit multiplication and division.

Standards
- 5.OA.A.1: Use parentheses, brackets, or braces in numerical expressions, and evaluate expressions with these symbols.
- 5.OA.A.2: Write simple expressions that record calculations with numbers, and interpret numerical expressions without evaluating them. For example, express the calculation "add 8 and 7, then multiply by 2" as 2 × (8 + 7). Recognize that 3 × (18932 + 921) is three times as large as 18932 + 921, without having to calculate the indicated sum or product.
- 3.OA.D.8 (foundational): Solve two-step word problems using the four operations. Represent these problems using equations with a letter standing for the unknown quantity. Assess the reasonableness of answers using mental computation and estimation strategies including rounding. This standard is limited to problems posed with whole numbers and having whole-number answers; students should know how to perform operations in the conventional order when there are no parentheses to specify a particular order (Order of Operations).

Criteria for Success
- Understand the sum to be the result of adding two values, the difference to be the result of subtracting two values, the product to be the result of multiplying two numbers, and the quotient to be the result of dividing two numbers.
- Write numerical expressions based on verbal/written descriptions of calculations (e.g., write 2 × (8 + 7) to express the calculation "add 8 and 7, then multiply by 2") (MP.7).
- Write descriptions of calculations based on numerical expressions (e.g., write "add 8 and 7, then multiply by 2" to describe the expression 2 × (8 + 7)) (MP.7).
- Interpret expressions without evaluating them (MP.2).

Students should be familiar with the terms sum, difference, product, and quotient from prior grade levels. If you need to adapt or shorten this lesson for remote learning, we suggest prioritizing Anchor Tasks 2 and 3, which benefit from worked examples.

Anchor Tasks
1. Write an expression to represent the tape diagrams below. In each tape diagram, the units are equal. (Adapted from EngageNY Mathematics, Grade 5 Mathematics > Module 2 > Topic B > Lesson 3, © 2015 Great Minds; licensed by EngageNY of the New York State Education Department under the CC BY-NC-SA 3.0 US license; modified by Fishtank Learning, Inc.)
2. For each problem below, write an expression that records the calculations described, but do not evaluate.
   a. 3 times the sum of 26 and 4
   b. The quotient of 15 and 3 subtracted from 60
   Then write the following expressions in words: $$8\times(15-9)$$ and $$(y+4)\div20$$.
3. Below is a picture that represents $$9+2$$. Draw a picture that represents $$4 \times (9 + 2)$$. How many times bigger is the value of $$4 \times (9 + 2)$$ than $$9+2$$? Explain your reasoning. (Adapted from Illustrative Mathematics, "Seeing is Believing," licensed under CC BY 4.0 or CC BY-NC-SA 4.0.)

Discussion of Problem Set
- In #3b some of you wrote 4 × (14 + 26) and others wrote (14 + 26) × 4. Are both expressions acceptable? Explain.
- Were you able to answer #4 without actually solving Expressions A and B? How?
- A student got 85 for #5a. Can you identify the error in thinking?
- Look at #6. How were you able to answer parts (b) and (c) without knowing how to compute with some of the numbers in the expressions?
- Look at #6c. Without evaluating, what would you expect the digits in the first expression to look like compared to the digits in the second expression? Why?
- We don't yet know how to find 60 × 225. But how were you able to use what you learned today to answer #10?

Target Task
1. Which phrase is represented by the expression 5 × (36 + 9)?
   a. the product of 36 and 5, increased by 9
   b. the product of 36 and 9, multiplied by 5
   c. the sum of 36 and 9, multiplied by 5
   d. the sum of 36 and 5, increased by 9
   (From the New York State Testing Program Grade 5 Common Core Mathematics Test, Released Questions, June 2017, Question #7, made available by the New York State Education Department via EngageNY.)
2. Sam divided the difference of 17 and 5 by 6. Write an expression to match Sam's calculations.
3. Which of the following expressions represents a number that is $$3$$ times larger than the sum of 8105 and 186?
   a. $$({8105}+{186})\div3$$
   b. $$3 \times ({8105}+{186})$$
   c. $${8105}+{186}\div3$$
   d. $$3 \times {8105}+{186}$$
   (From the Spring 2013 Grade 5 Mathematics Test, Massachusetts Department of Elementary and Secondary Education, Question #14.)
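As a quick arithmetic check of the multiple-choice task about 8105 and 186 (our illustration, not part of the lesson materials), only choice (b) actually equals three times the sum; the other options group the operations differently:

```python
s = 8105 + 186                      # the sum: 8291
choices = {
    "a": (8105 + 186) / 3,          # the sum divided by 3
    "b": 3 * (8105 + 186),          # three times the sum
    "c": 8105 + 186 / 3,            # only 186 is divided by 3
    "d": 3 * 8105 + 186,            # only 8105 is tripled
}
correct = [k for k, v in choices.items() if v == 3 * s]
print(correct, choices["b"])  # ['b'] 24873
```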
\begin{document} \title{Construction of neural networks for realization of localized deep learning} \author{Charles K. Chui$^{1,2}$ \and Shao-Bo Lin$^3$ \and Ding-Xuan Zhou$^4$} \date{} \maketitle \begin{center} \footnotesize 1. Department of Mathematics, Hong Kong Baptist University, Kowloon, Hong Kong, China 2. Department of Statistics, Stanford University, CA 94305, USA 3. Department of Mathematics, Wenzhou University, Wenzhou 325035, China 4. Department of Mathematics, City University of Hong Kong, Kowloon, Hong Kong, China \begin{abstract} The subject of deep learning has recently attracted users of machine learning from various disciplines, including medical diagnosis and bioinformatics, financial market analysis and online advertisement, speech and handwriting recognition, computer vision and natural language processing, time series forecasting, and search engines. However, theoretical development of deep learning is still in its infancy. The objective of this paper is to introduce a deep neural network (also called deep-net) approach to localized manifold learning, with each hidden layer endowed with a specific learning task. For the purpose of illustration, we only focus on deep-nets with three hidden layers, with the first layer for dimensionality reduction, the second layer for bias reduction, and the third layer for variance reduction. A feedback component is also designed to eliminate outliers. 
The main theoretical result in this paper is the order $\mathcal O\left(m^{-2s/(2s+d)}\right)$ of approximation of the regression function with regularity $s$, in terms of the number $m$ of sample points, where the (unknown) manifold dimension $d$ replaces the dimension $D$ of the sampling (Euclidean) space for shallow nets.\\ {\bf Keywords:} Deep nets, learning theory, deep learning, manifold learning\\ \end{abstract} \end{center} \section{Introduction} The continual rapid growth in data acquisition and data updating has recently posed crucial challenges to the machine learning community on developing learning schemes to match or outperform human learning capability. Fortunately, the introduction of deep learning (see, for example, \cite{Hinton2006}) has led to the feasibility of getting around the bottleneck of classical learning strategies, such as the support vector machine and boosting algorithms, based on classical neural networks (see, for example, \cite{Lippmann1987,Funahashi1989,Cybenko1989,Chui1992}), by demonstrating remarkable successes in many applications, particularly computer vision \cite{Krizhevsky2012} and speech recognition \cite{Lee2009}, and more recently in other areas, including natural language processing, medical diagnosis and bioinformatics, financial market analysis and online advertisement, time series forecasting, and search engines. Furthermore, the exciting recent advances of deep learning schemes for such applications have motivated the current interest in re-visiting the development of classical neural networks (to be called ``shallow nets'' in later discussions), by allowing multiple hidden layers between the input and output layers. Such neural networks are called ``deep'' neural nets, or simply, deep nets (DN). Indeed, the advantages of DN's over shallow nets, at least in applications, have led to various popular research directions in the academic communities of Approximation Theory and Learning Theory. 
Explicit results on the existence of functions that are expressible by DN's but cannot be approximated by shallow nets with a comparable number of parameters are generally regarded as powerful evidence of the advantage of DN's in Approximation Theory. The first theoretical understanding of such results dates back to our early work \cite{Chui1994}, where by using the Heaviside activation function, it was shown that DN's with two hidden layers already provide localized approximation, while shallow nets fail. Later explicit results on DN approximation \cite{Eldan2015,Mhaskar2016,Telgarsky2016,Raghu2016,Poggio2017} further reveal various other advantages of DN's over shallow nets. From approximation to learning, the tug of war between bias and variance \cite{Cucker2007} indicates that explicit approximation results for DN's are insufficient to explain their success in machine learning, in that besides the bias, the capacity of a DN also governs the variance. In this direction, the capacity of DN's, as measured by the number of linear regions, the Betti number, neuron transitions, and DN trajectory length, was studied in \cite{Montufar2013}, \cite{Bianchini2014}, and \cite{Raghu2016}, respectively, showing that DN's allow for many more functionalities than shallow nets. Although these results certainly show the benefits of deep nets, they also pose more difficulties in analyzing deep learning performance, since a large capacity usually implies a large variance and requires more elaborate learning algorithms. One of the main difficulties is the development of a satisfactory learning rate analysis for DN learning, which has been well studied for shallow nets (see, for example, \cite{Maiorov2006a}). In this paper, we present an analysis of the advantages of DN's in the framework of learning theory \cite{Cucker2007}, taking into account the trade-off between bias and variance. 
Our starting point is to assume that the samples are located approximately on some unknown manifold in the sample ($D$-dimensional Euclidean) space. For simplicity, consider the set of sample inputs $x_1, \dots, x_m \in\mathcal X\subseteq[-1,1]^D$, with a corresponding set of outputs $y_1, \cdots, y_m \in\mathcal Y\subseteq [-M,M]$ for some positive number $M$, where $\mathcal X$ is an unknown data-dependent $d$-dimensional connected $C^\infty$ Riemannian manifold (without boundary). We will call $S_m=\{(x_i,y_i)\}_{i=1}^m$ the sample set, and construct a DN with three hidden layers, the first for dimensionality reduction, the second for bias reduction, and the third for variance reduction. The main tools for our construction are the ``local manifold learning'' for deep nets in \cite{Chui2016}, the ``localized approximation'' for deep nets in \cite{Chui1994}, and the ``local average'' in \cite{Gyorfi2002}. We will also introduce a feedback procedure to eliminate outliers during the learning process. Our constructions justify the common consensus that deep nets are intuitively capable of capturing data features via their architectural structures \cite{Bengio2009}. In addition, we will prove that the constructed DN can well approximate the so-called regression function \cite{Cucker2007} within the accuracy of $\mathcal O\left(m^{-2s/(2s+d)}\right)$ in expectation, where $s$ denotes the order of smoothness (or regularity) of the regression function. Noting that the best existing learning rates for shallow nets are $\mathcal O\left(m^{-2s/(2s+D)}\log^2m\right)$ \cite{Maiorov2006a} and $\mathcal O\left(m^{-s/(8s+4d)}(\log m)^{s/(4s+2d)}\right)$ \cite{Ye2008}, we observe the power of deep nets over shallow nets, at least theoretically, in the framework of Learning Theory. The organization of this paper is as follows. In the next section, we present a detailed construction of the proposed deep net. 
The main results of the paper will be stated in Section \ref{Sec.learning rate}, where tight learning rates of the constructed deep net are also deduced. Discussions of our contributions, along with a comparison with some related work, and proofs of the main results will be presented in Sections \ref{Sec.Comparison} and \ref{Sec.Proof1}, respectively. \section{Construction of Deep Nets}\label{Sec.Construction} In this section, we present a construction of deep neural networks (called deep nets, for simplicity) with three hidden layers to realize certain deep learning algorithms, by applying the mathematical tools of localized approximation in \cite{Chui1994}, local manifold learning in \cite{Chui2016}, and local average arguments in \cite{Gyorfi2002}. Throughout this paper, we will consider only two activation functions: the Heaviside function $\sigma_0$ and the square-rectifier $\sigma_2$, where the standard notation $t_{+}=\max\{0,t\}$ is used to define $ \sigma_n(t)=t_{+}^n= (t_{+})^n$, for any non-negative integer $n$. \subsection{Localized approximation and localized manifold learning}\label{Subsec:Localized approximation} Performance comparison between deep nets and shallow nets is a classical topic in Approximation Theory. It is well known from numerous publications (see, for example, \cite{Chui1994,Eldan2015,Raghu2016,Telgarsky2016}) that various functions can be well approximated by deep nets but not by any shallow net with the same order of magnitude in the number of neurons. In particular, it was proved in \cite{Chui1994} that deep nets can provide localized approximation, while shallow nets fail. For $r,q\in\mathbb N$ and an arbitrary ${\bf j}\in\mathbb N_{2q}^r$, where $\mathbb N_{2q}^r=\{1,2,\dots,2q\}^r$, let $\zeta_{\bf{j}} =\zeta_{{\bf j}, q} =(\zeta_{\bf j}^{(\ell)})_{\ell=1}^r \in(-1,1)^r$ with $\zeta_{\bf j}^{(\ell)}= -1+\frac{2{\bf j}^{(\ell)}-1}{2q} \in (-1,1)$. 
For $a>0$ and $\zeta\in\mathbb R^r$, let us denote by $A_{r,a, \zeta} =\zeta + \left[-\frac{a}{2}, \frac{a}{2}\right]^r$ the cube in $\mathbb R^r$ with center $\zeta$ and width $a$. Furthermore, we define $N_{1,r, q, \zeta_{\bf j}}: {\mathbb R}^r \to \mathbb R$ by \begin{equation}\label{NN for localization} N_{1,r, q, \zeta_{\bf j}}(\xi) = \sigma_0\left\{\sum_{\ell=1}^r\sigma_0\left[\frac1{2q}+\xi^{(\ell)}-\zeta_{\bf j}^{(\ell)}\right] +\sum_{\ell=1}^r\sigma_0\left[\frac1{2q}-\xi^{(\ell)}+\zeta_{\bf j}^{(\ell)}\right]- 2r+\frac12 \right\}. \end{equation} In what follows, the standard notation $I_A$ for the indicator function of a set (or an event) $A$ will be used. For $x\in \mathbb R$, since \begin{eqnarray*} \sigma_0 \left[\frac1{2q}+x\right] + \sigma_0 \left[\frac1{2q} - x\right] -2 &=& I_{[-1/(2q), \infty)} (x) + I_{(-\infty, 1/(2q)]}(x)-2\\ &=& \left\{\begin{array}{ll} 0, & \hbox{if} \ x\in [-1/(2q), 1/(2q)], \\ -1, & \hbox{otherwise}, \end{array}\right. \end{eqnarray*} we observe that $$ \sum_{\ell=1}^r\sigma_0\left[\frac1{2q}+\xi^{(\ell)}\right] +\sum_{\ell=1}^r\sigma_0\left[\frac1{2q}-\xi^{(\ell)}\right]- 2r+\frac12 \ \left\{\begin{array}{ll} =\frac{1}{2},& \mbox{ for}\ \xi \in [-1/(2q), 1/(2q)]^r,\\ \leq-\frac{1}{2},& \mbox{otherwise.}\end{array}\right. $$ This implies that $N_{1,r, q, \zeta_{\bf j}}$, as introduced in (\ref{NN for localization}), is the indicator function of the cube $\zeta_{\bf j} + [-1/(2q), 1/(2q)]^r = A_{r, 1/q, \zeta_{\bf j}}$. Thus, the following proposition, which describes the localized approximation property of $N_{1,r,q, \zeta_{\bf j}}$, can be easily deduced by applying Theorem 2.3 in \cite{Chui1994}. \begin{proposition}\label{Proposition:localization} Let $r,q\in\mathbb N$ be arbitrarily given. Then $N_{1,r, q, \zeta_{\bf j}} ={I}_{A_{r,1/q,\zeta_{\bf j}}}$ for all ${\bf j}\in\mathbb N_{2q}^r$. 
\end{proposition} On the other hand, it was proposed in \cite{DiCarlo2007,Basri2016}, based on practical arguments, that deep nets can tackle data in highly-curved manifolds, while any shallow net fails. These arguments were theoretically verified in \cite{Shaham2015,Chui2016}, with the implication that adding hidden layers to shallow nets should enable the neural networks to process massive data in a high-dimensional space from samples in lower-dimensional manifolds. More precisely, it follows from \cite{Docarmo1992,Shaham2015} that for a $d$-dimensional connected and compact $C^\infty$ Riemannian submanifold $\mathcal X \subseteq[-1,1]^D$ (without boundary), isometrically embedded in ${\mathbb R}^D$ and endowed with the geodesic distance $d_G$, there exists some $\delta>0$, such that for any $x, x'\in\mathcal X$ with $d_G(x,x')<\delta$, \begin{equation}\label{smooth manifold} \frac12d_G(x, x')\leq\|x-x'\|_D\leq2d_G(x,x'), \end{equation} where for any $r>0$, $\|\cdot\|_r$ denotes, as usual, the Euclidean norm of $\mathbb R^r$. In the following, let $B_G(\xi_0,\tau)$, $B_D(\xi_0,\tau)$, and $B_{d}(\xi_0, \tau)$ denote the closed geodesic ball, the closed $D$-dimensional Euclidean ball, and the closed $d$-dimensional Euclidean ball with center $\xi_0$ and radius $\tau>0$, respectively. Then the following proposition is a brief summary of Theorem 2.2, Theorem 2.3 and Remark 2.1 in \cite{Chui2016}, with the implication that neural networks can be used as a dimensionality-reduction tool.
\begin{proposition}\label{Proposition:local manifold learning} For each $\xi\in\mathcal X$, there exist a positive number $\delta_\xi$ and a neural network $$ \Phi_\xi =(\Phi^{(\ell)}_\xi)_{\ell=1}^d: {\mathcal X} \to {\mathbb R}^d$$ with \begin{equation}\label{representation for phi x} \Phi_\xi^{(\ell)}(x) =\sum_{k=1}^{(D+2)(D+1)}a_{k,\xi,\ell} \sigma_2(w_{k,\xi,\ell}\cdot x+b_{k,\xi,\ell}), \qquad w_{k,\xi,\ell}\in\mathbb R^D, a_{k,\xi,\ell}, b_{k,\xi,\ell}\in\mathbb R, \end{equation} that maps $B_G(\xi,\delta_\xi)$ diffeomorphically onto $[-1,1]^d$ and satisfies \begin{equation}\label{equality of distance} \alpha_\xi d_G(x,x') \leq \|\Phi_\xi(x)-\Phi_\xi(x')\|_d\leq \beta_\xi d_G(x,x'), \qquad \forall\ x,x'\in B_G(\xi,\delta_\xi) \end{equation} for some $\alpha_\xi,\beta_\xi>0$. \end{proposition} \subsection{Learning via deep nets}\label{Subsec: learning without feedback} Our construction of deep nets depends on the localized approximation and dimensionality-reduction techniques presented in Propositions \ref{Proposition:localization} and \ref{Proposition:local manifold learning}. To describe the learning process, first select a suitable $q^*$, so that for every ${\bf j}\in \mathbb N_{2q^*}^D$, there exists some point $\xi^*_{\bf j}$ in a finite set $\{\xi^*_{i}\}_{i=1}^{F_\mathcal X} \subset \mathcal X$ that satisfies \begin{equation}\label{target 1} A_{D,1/q^*,\zeta_{{\bf j},q^*}}\cap\mathcal X\subset B_G(\xi^*_{\bf j}, \delta_{\xi^*_{\bf j}}). \end{equation} To this end, we need a constant $C_0 \geq 1$, such that \begin{equation}\label{embedRD} d_G(x, x') \leq C_0 \|x-x'\|_D, \qquad \forall\ x,x'\in \mathcal X. \end{equation} The existence of such a constant is proved in the literature (see, for example, \cite{Ye2008}).
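To illustrate (\ref{smooth manifold}) and (\ref{embedRD}) on the simplest curved example, consider the unit circle $\mathcal X=S^1\subset\mathbb R^2$ (so $d=1$, $D=2$). The following toy check is ours and is not part of the construction: here $d_G$ is the arc length, the Euclidean distance between two points at geodesic distance $\theta$ is the chord length $2\sin(\theta/2)$, and $C_0=\pi/2$ is an admissible constant.

```python
import math

def chord(theta):
    # Euclidean distance in R^2 between two points of the unit circle
    # at geodesic (arc-length) distance theta
    return 2.0 * math.sin(theta / 2.0)

# On the circle, d_G <= (pi/2) * ||x - x'||_2 holds for all pairs of points,
# so C_0 = pi/2 works in the sense of (embedRD)
C0 = math.pi / 2.0

thetas = [0.01 * k for k in range(1, 315)]  # geodesic distances in (0, pi)
```

For this example the two-sided bound $\frac12 d_G\leq\|x-x'\|_2\leq 2d_G$ even holds for every pair of points, i.e. one may take $\delta=\pi$.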
Also, in view of the compactness of $\mathcal X$, since $\bigcup_{\xi\in\mathcal X}\{x\in \mathcal X: d_G(x, \xi) <\delta_\xi/2\}$ is an open covering of $\mathcal X$, there exists a finite set of points $\{\xi^*_{i}\}_{i=1}^{F_\mathcal X} \subset \mathcal X$, such that $\mathcal X\subset\bigcup_{i=1}^{F_{\mathcal X}}B_G(\xi^*_i,\delta_{\xi^*_i}/2).$ Hence, $q^* \in \mathbb N$ may be chosen to satisfy \begin{equation}\label{choose q} q^* \geq \frac{2 C_0 \sqrt{D}}{\min_{1\leq i \leq F_{\mathcal X}}\delta_{\xi^*_i}}. \end{equation} With this choice, we claim that (\ref{target 1}) holds. Indeed, if $A_{D,1/q^*,\zeta_{{\bf j},q^*}}\cap\mathcal X=\varnothing$, then (\ref{target 1}) obviously holds for any choice of $\xi^*_{\bf j}\in\mathcal X$. On the other hand, if $A_{D,1/q^*,\zeta_{{\bf j},q^*}}\cap\mathcal X \neq\varnothing$, then from the inclusion property $\mathcal X\subset\bigcup_{i=1}^{F_{\mathcal X}}B_G(\xi^*_i,\delta_{\xi^*_i}/2)$, it follows that there is some $i^* \in \{1, \ldots, F_\mathcal X\}$, depending on ${\bf j} \in \mathbb N_{2q^*}^D$, such that \begin{equation}\label{tool 1 for manifold} A_{D,1/q^*,\zeta_{{\bf j},q^*}}\cap B_G(\xi^*_{i^*},\delta_{\xi^*_{i^*}}/2)\neq \varnothing. \end{equation} Next, let $\eta^*\in A_{D,1/q^*,\zeta_{{\bf j},q^*}}\cap B_G(\xi^*_{i^*},\delta_{\xi^*_{i^*}}/2)$. By (\ref{embedRD}), we have, for any $x\in A_{D,1/q^*,\zeta_{{\bf j},q^*}}\cap\mathcal X$, $$d_G (x, \eta^*) \leq C_0 \|x-\eta^*\|_D \leq C_0 \sqrt{D} \frac{1}{q^*}.$$ Therefore, it follows from (\ref{choose q}) that \begin{eqnarray*} d_G(x,\xi^*_{i^*}) &\leq& d_G(x,\eta^*)+d_G(\eta^*,\xi_{i^*}^*) \leq C_0 \sqrt{D} \frac{1}{q^*} +\frac{\delta_{\xi_{i^*}^*}}{2} \leq \delta_{\xi_{i^*}^*}. \end{eqnarray*} This implies that $A_{D,1/q^*,\zeta_{{\bf j},q^*}}\cap\mathcal X\subset B_G(\xi_{i^*}^*,\delta_{\xi_{i^*}^*})$ and verifies our claim (\ref{target 1}) with the choice of $\xi^*_{\bf j} = \xi_{i^*}^*$.
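Before the layers are assembled, it may help to see the cube-indicator subnetwork (\ref{NN for localization}) in executable form. The following NumPy sketch is ours (all names are hypothetical); it evaluates $N_{1,r,q,\zeta_{\bf j}}$ at a point inside and a point outside the cube $A_{r,1/q,\zeta_{\bf j}}$.

```python
import numpy as np

def sigma0(t):
    # Heaviside activation: sigma_0(t) = 1 for t >= 0 and 0 otherwise
    return (np.asarray(t) >= 0).astype(float)

def N1(xi, zeta, q):
    # The cube-indicator network of zeta + [-1/(2q), 1/(2q)]^r,
    # cf. (NN for localization)
    r = len(zeta)
    inner = (sigma0(1.0 / (2 * q) + xi - zeta).sum()
             + sigma0(1.0 / (2 * q) - xi + zeta).sum()
             - 2 * r + 0.5)
    return float(sigma0(inner))

q, r = 2, 3
zeta = np.full(r, -1.0 + 1.0 / (2 * q))  # center of the cube indexed by j = (1, 1, 1)
inside = zeta + 0.1                      # within the cube of half-width 1/(2q) = 0.25
outside = zeta + 0.4                     # outside that cube
```

In agreement with Proposition \ref{Proposition:localization}, the network returns $1$ on the cube and $0$ off it.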
Observe that for every ${\bf j}\in\mathbb N_{2q^*}^D$ we may choose the point $\xi^*_{\bf j}\in \mathcal X$ to define $N_{2,\bf j}= (N_{2,\bf j}^{(\ell)})_{\ell=1}^d: {\mathcal X} \to {\mathbb R}^d$ by setting \begin{equation}\label{N2(x)} N_{2,\bf j}^{(\ell)}(x):= \Phi_{\xi^*_{\bf j}}^{(\ell)} (x) = \sum_{k=1}^{(D+2)(D+1)}a_{k,\xi^*_{\bf j},\ell} \sigma_2\left(w_{k,\xi^*_{\bf j},\ell}\cdot x +b_{k,\xi^*_{\bf j},\ell}\right), \qquad \ell =1, \ldots, d \end{equation} and apply Proposition \ref{Proposition:local manifold learning}, (\ref{target 1}), and (\ref{representation for phi x}) to obtain the following. \begin{proposition}\label{Proposition:Manifold learning} For each ${\bf j}\in\mathbb N_{2q^*}^D$, $N_{2, \bf j}$ maps $A_{D,1/q^*,\zeta_{{\bf j},q^*}}\cap\mathcal X$ diffeomorphically into $[-1,1]^d$ and \begin{equation}\label{equality of distance of N2} \alpha d_G(x,x') \leq \|N_{2,\bf j}(x)-N_{2,\bf j}(x')\|_d\leq \beta d_G(x,x'),\qquad\forall\ x,x'\in A_{D,1/q^*,\zeta_{{\bf j},q^*}}\cap\mathcal X, \end{equation} where $\alpha:=\min_{1\leq i \leq F_{\mathcal X}}\alpha_{\xi^*_{i}}$ and $\beta:=\max_{1\leq i\leq F_{\mathcal X}}\beta_{\xi^*_{i}}$. \end{proposition} As a result of Propositions \ref{Proposition:localization} and \ref{Proposition:Manifold learning}, we now present the construction of the deep nets for the proposed learning purpose. Start by selecting $(2n)^d$ points $t_{\bf k}= t_{{\bf k}, n}\in (-1,1)^d$, ${\bf k}\in\mathbb N_{2n}^d$ and $n\in\mathbb N$, with $t_{\bf k} = (t_{\bf k}^{(1)}, \cdots, t_{\bf k}^{(d)})$, where $t_{\bf k}^{(\ell)}=-1+\frac{2{\bf k}^{(\ell)}-1}{2n} \in (-1,1)$. Denote $C_{\bf k} =A_{d,1/n,t_{{\bf k}}}$ and $H_{{\bf k,j}} =\{x\in\mathcal X\cap A_{D,1/q^*,\zeta_{{\bf j},q^*}}: N_{2,{\bf j}}(x)\in C_{\bf k}\}$.
In view of Proposition \ref{Proposition:Manifold learning}, it follows that $H_{\bf k, j}$ is well defined, ${\mathcal X} \subseteq \cup_{{\bf j}\in\mathbb N_{2q^*}^D} A_{D,1/q^*,\zeta_{{\bf j},q^*}}$, and $ \bigcup_{{\bf k}\in\mathbb N_{2n}^d} H_{\bf k, j} =\mathcal X\cap A_{D,1/q^*,\zeta_{{\bf j},q^*}}.$ We also define $N_{3,{\bf k,j}}: {\mathcal X} \to {\mathbb R}$ by \begin{eqnarray}\label{N3k(x)} &&N_{3,{\bf k,j}}(x) =N_{1, d, n, t_{\bf k}} \circ N_{2,{\bf j}}(x) \\ && = \sigma_0\left\{ \sum_{\ell=1}^d\sigma_0 \left[\frac{1}{2n}+N^{(\ell)}_{2,{\bf j}}(x)-t_{\bf k}^{(\ell)}\right] + \sum_{\ell=1}^d\sigma_0 \left[\frac1{2n}-N^{(\ell)}_{2,{\bf j}}(x)+t_{\bf k}^{(\ell)}\right]-2d+\frac12\right\}. \nonumber \end{eqnarray} Then the desired deep net estimator with three hidden layers may be defined by \begin{eqnarray}\label{final estimator1} N_{3}(x)= \frac{\sum_{{\bf j}\in \mathbb N_{2 q^*}^D} \sum_{{\bf k}\in \mathbb N_{2n}^d} \sum_{i=1}^m N_{1,D, q^*, \zeta_{{\bf j}}} (x_i) N_{3,{\bf k,j}}(x_i) y_i N_{3,{\bf k,j}}(x)}{ \sum_{{\bf j}\in \mathbb N_{2 q^*}^D} \sum_{{\bf k}\in \mathbb N_{2n}^d} \sum_{i=1}^m N_{1,D, q^*, \zeta_{{\bf j}}} (x_i) N_{3,{\bf k,j}}(x_i)}, \end{eqnarray} where we set $N_{3}(x)=0$ if the denominator is zero. Observe that the above construction employs three hidden layers to perform three separate tasks: the first hidden layer reduces the dimension of the input space, while the second and third hidden layers perform localized approximation on $\mathbb R^d$ and data variance reduction via local averaging \cite{Gyorfi2002}, respectively. \subsection{Fine-tuning} For each $x\in {\mathcal X}$, it follows from $\mathcal X\subseteq\bigcup_{{\bf j} \in \mathbb N_{2q^*}^D} A_{D,1/q^*,\zeta_{{\bf j},q^*}}$ that there is some ${\bf j}\in\mathbb N_{2q^*}^D$, such that $x\in A_{D,1/q^*,\zeta_{{\bf j},q^*}}$, which implies that $N_{2,{\bf j}}(x) \in [-1, 1]^d$.
Since each $A_{D,1/q^*,\zeta_{{\bf j},q^*}}$, ${\bf j}\in\mathbb N_{2q^*}^D$, is a cube in $\mathbb R^D$, the cardinality of the set $\{{\bf j}:x\in A_{D,1/q^*,\zeta_{{\bf j},q^*}}\}$ is at most $2^D$. Also, because $[-1,1]^d=\bigcup_{{\bf k} \in \mathbb N_{2n}^d}A_{d, 1/n, t_{\bf k}}$, for each ${\bf j}\in\mathbb N_{2q^*}^D$ with $x\in A_{D,1/q^*,\zeta_{{\bf j},q^*}}$, there exists some ${\bf k}\in \mathbb N_{2n}^d$, such that $N_{2,{\bf j}}(x) \in A_{d, 1/n, t_{\bf k}}$, implying that $N_{3,{\bf k,j}}(x) =N_{1, d, n, t_{\bf k}}\circ N_{2,{\bf j}}(x) = 1$ and that the number of such integers ${\bf k}$ is bounded by $2^d$. For each $x\in\mathcal X$, we consider a non-empty subset \begin{equation}\label{Lambdaset} \Lambda_x =\left\{({\bf j,k})\in\mathbb N_{2q^*}^D \times \mathbb N_{2n}^d: x\in A_{D,1/q^*,\zeta_{{\bf j},q^*}}, N_{3,{\bf k,j}}(x)=1 \right\} \end{equation} of $\mathbb N_{2q^*}^D \times \mathbb N_{2n}^d$, with cardinality \begin{equation}\label{cap1} |\Lambda_x|\leq 2^{D+d},\qquad \forall\ x\in\mathcal X. \end{equation} Also, for each $x\in\mathcal X$, we further define $S_{\Lambda_x}=\cup_{({\bf j,k})\in \Lambda_x} H_{\bf k,j}\cap \{x_i\}_{i=1}^m$, as well as \begin{equation}\label{Lambdaset1} \Lambda_{x,S}=\left\{({\bf j,k})\in\mathbb N_{2q^*}^D\times\mathbb N_{2n}^d: N_{1,D, q^*, \zeta_{{\bf j}}} (x_i) N_{3,{\bf k,j}}(x_i)=1 \mbox{ for some } x_i\in S_{\Lambda_x}\right\}, \end{equation} and \begin{equation}\label{Lambdaset2} \Lambda'_{x,S}=\left\{({\bf j,k})\in\mathbb N_{2q^*}^D\times\mathbb N_{2n}^d: N_{1,D, q^*, \zeta_{{\bf j}}} (x_i) N_{3,{\bf k,j}}(x_i)N_{3,{\bf k,j}}(x)=1 \mbox{ for some } x_i\in S_{\Lambda_x}\right\}. \end{equation} Then it follows from (\ref{Lambdaset1}) and (\ref{Lambdaset2}) that $|\Lambda'_{x,S}|\leq|\Lambda_{x,S}|,$ and it is easy to see that if each $x_i\in S_{\Lambda_x}$ is an interior point of some $H_{\bf k,j}$, then $|\Lambda_{x,S}|=|\Lambda'_{x,S}|$. In this case, $N_3$ is a local average estimator.
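To make the local-average character of $N_3$ concrete, the following NumPy sketch (ours) implements a deliberately simplified, flat-space analogue in which the charts $N_{2,{\bf j}}$ are replaced by the identity, so that $\mathcal X=[-1,1]^d$ is partitioned into the $(2n)^d$ cells $C_{\bf k}$ and the estimator averages the $y_i$ over the cell containing the query point.

```python
import numpy as np

def cell_index(x, n):
    # Index of the width-1/n cell of [-1, 1] containing each coordinate of x,
    # a flat-space stand-in for the cells C_k = A_{d,1/n,t_k}
    return np.minimum(np.floor((x + 1.0) * n).astype(int), 2 * n - 1)

def local_average(x, X, y, n):
    # Average of the y_i over the samples lying in the same cell as x;
    # returns 0 for an empty cell, mirroring the vanishing-denominator convention
    same_cell = np.all(cell_index(X, n) == cell_index(x, n), axis=1)
    return float(y[same_cell].mean()) if same_cell.any() else 0.0

# toy data on [-1, 1] with n = 2, i.e. four cells of width 1/2
X = np.array([[0.1], [0.12], [0.9]])
y = np.array([1.0, 3.0, 10.0])
```

A query at $x=0.2$ falls into the same cell as the first two samples and returns their mean, while a query in an empty cell returns $0$.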
However, if $|\Lambda_{x,S}|\neq |\Lambda'_{x,S}|$ (and this is possible when some $x_i$ lies on the boundary of $H_{\bf k,j}$ for some $({\bf j,k})\in \mathbb N_{2q^*}^D\times\mathbb N_{2n}^d$), then the estimator $N_3$ in (\ref{final estimator1}) may perform badly, even on the training data. Note that to predict some $x_j\in S_m$, which is an interior point of $H_{{\bf k}_0,{\bf j}_0}$, we have $$ N_3(x_j)=\frac{\sum_{i=1}^mN_{1,D,q^*,\zeta_{{\bf j}_0}}(x_i)N_{3,{\bf k}_0,{\bf j}_0}(x_i)y_i}{|\Lambda'_{x_j,S}|}, $$ which is much smaller than $y_j$ when $|\Lambda'_{x,S}|<|\Lambda_{x,S}|$, the reason being that there are only $|\Lambda_{x,S}|$ summands in the numerator. Noting that the Riemannian measure of the boundary of $\cup_{({\bf j,k})\in\mathbb N_{2q^*}^D\times\mathbb N_{2n}^d}H_{\bf k,j}$ is zero, we regard the above phenomenon as outliers. Fine-tuning, often referred to as feedback in the literature of deep learning \cite{Bengio2009}, can essentially improve the learning performance of deep nets \cite{Larochelle2009}. We observe that fine-tuning can also be applied to avoid outliers for our constructed deep net in (\ref{final estimator1}), by counting the cardinalities of $\Lambda_{x,S}$ and $\Lambda'_{x,S}$. In the training process, besides computing $N_3(x)$ for some query point $x$, we may also record $|\Lambda_{x,S}|$ and $|\Lambda'_{x,S}|$. If the estimator is not big enough, we propose to correct $N_3(x)$ by the factor $\frac{|\Lambda'_{x,S}|} {|\Lambda_{x,S}|}$.
In this way, the deep net estimator with feedback can be mathematically represented by \begin{equation}\label{feedback 1} N_3^F(x)=\frac{|\Lambda'_{x,S}|}{|\Lambda_{x,S}|}N_3(x)=\frac{\sum_{{\bf j}\in \mathbb N_{2 q^*}^D} \sum_{{\bf k}\in \mathbb N_{2n}^d} \sum_{i=1}^m y_i \Phi_{\bf k,j}(x, x_i)}{ \sum_{{\bf j}\in \mathbb N_{2 q^*}^D} \sum_{{\bf k}\in \mathbb N_{2n}^d} \sum_{i=1}^m \Phi_{\bf k,j}(x, x_i)}, \end{equation} where $\Phi_{\bf k,j} = \Phi_{{\bf k,j}, D, q^*, n}: {\mathcal X}\times {\mathcal X} \to {\mathbb R}$ is defined by $$ \Phi_{\bf k,j}(x, u) = N_{1,D, q^*, \zeta_{{\bf j}}} (u) N_{3,{\bf k,j}}(u) N_{3,{\bf k,j}}(x); $$ and as before, we set $N_3^F(x)=0$ if the denominator $\sum_{{\bf j}\in \mathbb N_{2 q^*}^D} \sum_{{\bf k}\in \mathbb N_{2n}^d} \sum_{i=1}^m \Phi_{\bf k,j}(x, x_i)$ vanishes. \section{Learning Rate Analysis}\label{Sec.learning rate} We consider a standard regression setting in learning theory \cite{Cucker2007} and assume that the sample set $S= S_m=\{(x_i,y_i)\}_{i=1}^m$ of size $m$ is drawn independently according to some Borel probability measure $\rho$ on ${\mathcal Z} = {\mathcal X}\times {\mathcal Y}$. The regression function is then defined by $$ f_\rho(x)=\int_{\mathcal Y} y d\rho(y|x), \qquad x\in\mathcal X, $$ where $\rho(y|x)$ denotes the conditional distribution at $x$ induced by $\rho$. Let $\rho_X$ be the marginal distribution of $\rho$ on $\mathcal X$ and $(L^2_{\rho_{_X}}, \|\cdot\|_\rho)$ be the Hilbert space of square-integrable functions with respect to $\rho_X$ on $\mathcal X$. Our goal is to estimate the distance between the output function $N_3$ and the regression function $f_\rho$, measured by $\|N_3-f_\rho\|_\rho$, as well as the distance between $N_3^F$ and $f_\rho$, measured by $\|N_3^F-f_\rho\|_\rho$.
We say that a function $f$ on $\mathcal X$ is $(s,c_0)$-Lipschitz (continuous) with positive exponent $s \leq 1$ and constant $c_0 >0$, if \begin{equation}\label{Smoothness assumption} |f(x)-f(x')|\leq c_0(d_G(x,x'))^s, \qquad \forall x,x'\in\mathcal X; \end{equation} and denote by $Lip^{(s,c_0)}:=Lip^{(s,c_0)}(\mathcal X)$ the family of all $(s,c_0)$-Lipschitz functions that satisfy (\ref{Smoothness assumption}). Our error analysis of $N_3$ will be carried out based on the following two assumptions. \begin{assumption}\label{Assumption:frho} There exist an $s\in(0,1]$ and a constant $c_0\in\mathbb R_+$ such that $f_\rho\in Lip^{(s,c_0)}$. \end{assumption} This smoothness assumption is standard in learning theory for the study of approximation for regression (see, for example, \cite{Gyorfi2002,Kohler2005,Maiorov2006a,Cucker2007,Wu2008,Shi2011,Hu2015,Fan2016,Guo2016,Christmann2016,Chang2017,Lin2017}). \begin{assumption}\label{Assumption:rhox} $\rho_X$ is continuous with respect to the geodesic distance $d_G$ of the Riemannian manifold. \end{assumption} Note that Assumption \ref{Assumption:rhox}, which concerns the geometrical structure of $\rho_X$, is slightly weaker than the distortion assumption in \cite{Zhou2006,Shi2013} but somewhat similar to the assumption considered in \cite{Meister2016}. The purpose of this assumption is to describe the functionality of fine-tuning. We are now ready to state the main results of this paper. In the first theorem below, we obtain an upper bound on the learning rate of the constructed deep net $N_3$. \begin{theorem}\label{Theorem: optimal rate without feedback} Let $m$ be the number of samples and set $n=\lceil m^{1/(2s+d)}\rceil$, where $1/n$ is the uniform spacing of the points $t_{\bf k}= t_{{\bf k}, n}\in (-1,1)^d$ in the definition of $ N_3$ in (\ref{N3k(x)}).
Then under Assumptions \ref{Assumption:frho} and \ref{Assumption:rhox}, \begin{equation}\label{theorem1} \mathbf E\left[\|N_3-f_\rho\|_\rho^2\right] \leq C_1m^{-\frac{2s}{2s+d}} \end{equation} for some positive constant $C_1$ independent of $m$. \end{theorem} Observe that Theorem \ref{Theorem: optimal rate without feedback} provides a fast learning rate for the constructed deep net, which depends on the manifold dimension $d$ instead of the sample space dimension $D$. In the second theorem below, we show the necessity of the fine-tuning process as presented in (\ref{feedback 1}), when Assumption \ref{Assumption:rhox} is removed. \begin{theorem}\label{Theorem: optimal rate} Let $m$ be the number of samples and set $n=\lceil m^{1/(2s+d)}\rceil$, where $1/n$ is the uniform spacing of the points $t_{\bf k}= t_{{\bf k}, n}\in (-1,1)^d$ in the definition of $N_3$ in (\ref{N3k(x)}), which is used to define $N_3^F$ in (\ref{feedback 1}). Then under Assumption \ref{Assumption:frho}, \begin{equation}\label{theorem2} \mathbf E\left[\|N_3^F-f_\rho\|_\rho^2\right] \leq C_2m^{-\frac{2s}{2s+d}} \end{equation} for some positive constant $C_2$ independent of $m$. \end{theorem} Observe that while Assumption \ref{Assumption:rhox} is needed in Theorem \ref{Theorem: optimal rate without feedback}, it is not necessary for the validity of Theorem \ref{Theorem: optimal rate}, which theoretically shows the significance of fine-tuning in our construction. The proofs of these two theorems will be presented in the final section of this paper. \section{Related Work and Discussions}\label{Sec.Comparison} The success in practical applications, especially in the fields of computer vision \cite{Krizhevsky2012} and speech recognition \cite{Lee2009}, has triggered enormous research activities on deep learning.
Several other encouraging results, such as object recognition \cite{DiCarlo2007}, unsupervised training \cite{Erhan2010}, and artificial intelligence architecture \cite{Bengio2009}, have been obtained to demonstrate the significance of deep learning. We refer the interested readers to the 2016 MIT monograph, ``Deep Learning'' \cite{Goodfellow}, by Goodfellow, Bengio and Courville, for further study of this exciting subject, which is still in the infancy of its development. Indeed, deep learning has already created several challenges to the machine learning community. Among the main challenges are to show the necessity of the usage of deep nets and to theoretically justify the advantages of deep nets over shallow nets. This is essentially a classical topic in Approximation Theory. In particular, dating back to the early 1990's, it was already proved that deep nets can provide localized approximation while shallow nets fail (see, for example, \cite{Chui1994}). Furthermore, it was also shown that deep nets provide high approximation orders that are certainly not restricted by the lower error bounds for shallow nets (see \cite{Chui1996, Maiorov1999b}). More recently, stimulated by the avid enthusiasm for deep learning, numerous advantages of deep nets were also revealed from the point of view of function approximation.
In particular, certain functions discussed in \cite{Eldan2015} can be represented by deep nets but cannot be approximated by shallow nets; it was shown in \cite{Mhaskar2016} that deep nets, but not shallow nets, can approximate compositions of functions; it was exhibited in \cite{Poggio2017} that deep nets can avoid the curse of dimensionality suffered by shallow nets; a probability argument was given in \cite{Lin2017a} to show that deep nets have better approximation performance than shallow nets with high confidence; it was demonstrated in \cite{Shaham2015,Chui2016} that deep nets can improve the approximation capability of shallow nets when the data are located on data-dependent manifolds; and so on. All of these results give theoretical explanations of the significance of deep nets from the Approximation Theory point of view. As a departure from the work mentioned above, our present paper is devoted to exploring the better performance of deep nets over shallow nets in the framework of Learning Theory. In particular, we are concerned not only with the approximation accuracy but also with the cost to attain such accuracy. In this regard, learning rates of certain deep nets have been analyzed in \cite{Kohler2005}, in which Kohler and Krzy\.{z}ak provided certain near-optimal learning rates for a fairly complex regularization scheme, with the hypothesis space being the family of deep nets with two hidden layers proposed in \cite{Mhaskar1993}. More precisely, they derived a learning rate of order $\mathcal O(m^{-2s/(2s+D)}(\log m)^{4s/(2s+D)})$ for functions $f_\rho\in Lip^{(s,c_0)}$. This is close to the optimal learning rate of shallow nets in \cite{Maiorov2006a}, differing only by a logarithmic factor. Hence, the study in \cite{Kohler2005} theoretically showed that deep nets at least do not downgrade the learning performance of shallow nets.
In comparison with \cite{Kohler2005}, our study is focused on answering the question: ``What is to be gained by deep learning?'' The deep net constructed in our paper possesses a learning rate of order $\mathcal O(m^{-2s/(2s+d)})$, when $\mathcal X$ is an unknown $d$-dimensional connected $C^\infty$ Riemannian manifold (without boundary). This rate is the same as the optimal learning rate \cite[Chapter 3]{Gyorfi2002} for the special case of the cube $\mathcal X=[-1,1]^d$ under a similar condition, and it is smaller (hence faster) than the optimal learning rates for shallow nets \cite{Maiorov2006a}. Another line of related work is \cite{Ye2008,Ye2009}, where Ye and Zhou deduced learning rates for regularized least squares over shallow nets in the same setting as our paper. They derived a learning rate of $\mathcal O\left( m^{-s/(8s+4d)}(\log m)^{s/(4s+2d)}\right)$, which is slower than the rate established in our paper. It should be mentioned that in a more recent work \cite{Kohler2017}, some advantages of deep nets are revealed from the learning theory viewpoint. However, the results in \cite{Kohler2017} require a hierarchical interaction structure, which is totally different from what is presented in our present paper. Due to the high degree of freedom of deep nets, the number and types of parameters of deep nets far exceed those of shallow nets. Thus, it should be of great interest to develop scalable algorithms to reduce the computational burdens of deep learning. Distributed learning based on a divide-and-conquer strategy \cite{Zhang2014,Lin2015} could be a fruitful approach for this purpose. It is also of interest to establish results similar to those of Theorem \ref{Theorem: optimal rate} and Theorem \ref{Theorem: optimal rate without feedback} for deep nets, but with rectifier neurons, by using the rectifier (or ramp) function, $\sigma_1(t)=t_{+}$, as activation.
The reason is that the rectifier is one of the most widely used activations in the literature on deep learning. Our research in these directions is postponed to a later work. \section{Proofs of the main results}\label{Sec.Proof1} To facilitate our proofs of the theorems stated in Section \ref{Sec.learning rate}, we first establish the following two lemmas. Observe from Proposition \ref{Proposition:localization} and the definition (\ref{N3k(x)}) of the function $N_{3,{\bf k,j}}$ that \begin{equation}\label{N1&3} N_{1,D, q^*, \zeta_{{\bf j}}} (x) N_{3,{\bf k,j}}(x) = {I}_{A_{D,1/q^*,\zeta_{\bf j}}} (x) I_{A_{d, 1/n, t_{\bf k}}} (N_{2, {\bf j}} (x)) = I_{H_{\bf k, j}}(x). \end{equation} For ${\bf j}\in \mathbb N_{2 q^*}^D, {\bf k}\in \mathbb N_{2n}^d$, define a random function $T_{{\bf k, j}}: {\mathcal Z}^m \to \mathbb R$ in terms of the random sample $S=\{(x_i,y_i)\}_{i=1}^m$ by \begin{equation}\label{def. T} T_{{\bf k, j}}(S) = \sum_{i=1}^m N_{1,D, q^*, \zeta_{{\bf j}}} (x_i) N_{3,{\bf k,j}}(x_i), \end{equation} so that \begin{equation}\label{expressT} T_{{\bf k, j}}(S) =\sum_{i=1}^m I_{H_{\bf k, j}}(x_i). \end{equation} \begin{lemma}\label{Lemma:important} Let $\Lambda^*\subseteq \mathbb N_{2q^*}^D\times\mathbb N_{2n}^d$ be a non-empty subset and, for $({\bf j},{\bf k}) \in \Lambda^*$, let $T_{{\bf k, j}}(S)$ be defined as in (\ref{def. T}). Then \begin{equation}\label{lemmafor bio} \mathbf E_S \left[\frac{I_{\{z\in\mathcal Z^m:\sum_{({\bf j,k})\in \Lambda^*}T_{{\bf k, j}}(z)>0\}}(S)}{\sum_{({\bf j,k})\in\Lambda^*}T_{{\bf k, j}}(S)}\right]\leq \frac{2}{(m+1)\rho_X(\cup_{({\bf j,k})\in\Lambda^*}H_{\bf k, j})}, \end{equation} where if $\sum_{({\bf j,k})\in \Lambda^*}T_{{\bf k, j}}(S)=0$, we set $$\frac{I_{\{z\in\mathcal Z^m:\sum_{({\bf j,k})\in \Lambda^*}T_{{\bf k, j}}(z)>0\}}(S)}{\sum_{({\bf j,k})\in\Lambda^*}T_{{\bf k, j}}(S)}=0.
$$ \end{lemma} {\bf Proof.} Observe that it follows from (\ref{expressT}) that $T_{{\bf k, j}}(S)\in \{0, 1, \ldots, m\}$ and \begin{eqnarray*} &&\mathbf E_S \left[\frac{I_{\{z\in\mathcal Z^m:\sum_{({\bf j,k})\in \Lambda^*}T_{{\bf k, j}}(z)>0\}}(S)}{\sum_{({\bf j,k})\in\Lambda^*}T_{{\bf k, j}}(S)}\right]\\ &=& \sum_{\ell=0}^{m} \mathbf E_S \left[\frac{I_{\{z\in\mathcal Z^m:\sum_{({\bf j,k})\in \Lambda^*}T_{{\bf k, j}}(z)>0\}}(S)}{\sum_{({\bf j,k})\in\Lambda^*}T_{{\bf k, j}}(S)}\big| \sum_{({\bf j,k})\in\Lambda^*}T_{{\bf k, j}}(S)=\ell \right] Pr\left[\sum_{({\bf j,k})\in\Lambda^*}T_{{\bf k, j}}(S)=\ell\right]. \end{eqnarray*} By the convention adopted for the fraction $\frac{I_{\{z\in\mathcal Z^m:\sum_{({\bf j,k})\in \Lambda^*}T_{{\bf k,j}}(z)>0\}}(S)}{\sum_{({\bf j,k})\in\Lambda^*}T_{{\bf k, j}}(S)}$, the term with $\ell=0$ above vanishes, so that \begin{eqnarray*} \mathbf E_S \left[\frac{I_{\{z\in\mathcal Z^m:\sum_{({\bf j,k})\in \Lambda^*}T_{{\bf k, j}}(z)>0\}}(S)}{\sum_{({\bf j,k})\in\Lambda^*}T_{{\bf k, j}}(S)}\right] &=&\sum_{\ell=1}^{m}\mathbf E\left[\frac{1}{\ell} \big|\sum_{({\bf j, k})\in \Lambda^*}T_{{\bf k, j}}(S)=\ell\right] Pr\left[\sum_{({\bf j, k})\in \Lambda^*}T_{{\bf k, j}}(S)=\ell\right]\\ & =& \sum_{\ell=1}^{m} \frac{1}{\ell} Pr\left[\sum_{({\bf j, k})\in \Lambda^*}T_{{\bf k, j}}(S)=\ell\right]. \end{eqnarray*} On the other hand, from (\ref{expressT}), note that $\sum_{({\bf j,k})\in\Lambda^*}T_{{\bf k, j}}(S)=\ell$ is equivalent to $x_i \in \cup_{({\bf j,k})\in\Lambda^*}H_{\bf k,j}$ for exactly $\ell$ indices $i$ from $\{1,\cdots, m\}$, which in turn implies that $$ Pr\left[\sum_{({\bf j, k})\in \Lambda^*}T_{{\bf k, j}}(S)=\ell\right] =\left(\begin{array}{c}m\\ \ell\end{array}\right)[\rho_X(\cup_{({\bf j,k})\in\Lambda^*} H_{\bf k, j})]^\ell[1-\rho_X(\cup_{({\bf j,k})\in\Lambda^*}H_{\bf k, j})]^{m-\ell}.
$$ Thus, we obtain \begin{eqnarray*} &&\mathbf E_S \left[\frac{I_{\{z\in\mathcal Z^m:\sum_{({\bf j,k})\in \Lambda^*}T_{{\bf k, j}}(z)>0\}}(S)}{\sum_{({\bf j,k})\in\Lambda^*}T_{{\bf k, j}}(S)}\right]\\ & =& \sum_{\ell=1}^{m} \frac{1}{\ell} \left(\begin{array}{c}m\\ \ell\end{array}\right)[\rho_X(\cup_{({\bf j,k})\in\Lambda^*} H_{\bf k, j})]^\ell[1-\rho_X(\cup_{({\bf j,k})\in\Lambda^*}H_{\bf k, j})]^{m-\ell}\\ &\leq& \sum_{\ell=1}^{m} \frac{2}{\ell+1} \left(\begin{array}{c}m\\ \ell\end{array}\right)[\rho_X(\cup_{({\bf j,k})\in\Lambda^*} H_{\bf k, j})]^\ell[1-\rho_X(\cup_{({\bf j,k})\in\Lambda^*}H_{\bf k, j})]^{m-\ell}\\ &=& \frac{2}{(m+1)\rho_X(\cup_{({\bf j,k})\in\Lambda^*} H_{\bf k, j})}\sum_{\ell=1}^{m} \left(\begin{array}{c}m+1\\ \ell+1\end{array}\right)[\rho_X(\cup_{({\bf j,k})\in\Lambda^*} H_{\bf k, j})]^{\ell+1}[1-\rho_X({\cup}_{({\bf j,k})\in\Lambda^*} H_{\bf k, j})]^{m-\ell}. \end{eqnarray*} Since the last sum is bounded by $1$ (being part of a binomial expansion), the desired inequality (\ref{lemmafor bio}) follows. This completes the proof of Lemma \ref{Lemma:important}. $\Box$ \begin{lemma}\label{Lemma:important2} Let $S=\{(x_i,y_i)\}_{i=1}^m$ be a sample set drawn independently according to $\rho$. If $f_{S}(x)=\sum_{i=1}^my_i h_{\bf x}(x, x_i)$ with a measurable function $h_{\bf x}: \mathcal X \times \mathcal X \to \mathbb R$ that depends on ${\bf x}:=\{x_i\}_{i=1}^m$, then \begin{equation}\label{unbias} \mathbf E\left[\|f_S-f_\rho\|_\mu^2| {\bf x}\right]=\mathbf E\left[\left\|f_S-\sum_{i=1}^mf_\rho(x_i)h_{\bf x}(\cdot, x_i)\right\|_\mu^2| {\bf x}\right]+ \left\|\sum_{i=1}^mf_\rho(x_i)h_{\bf x}(\cdot, x_i)-f_\rho\right\|_\mu^2 \end{equation} for any Borel probability measure $\mu$ on $\mathcal X$. \end{lemma} {\bf Proof.} Since $f_\rho(x)$ is the conditional mean of $y$ given $x\in\mathcal X$, we have from $f_{S}(x)=\sum_{i=1}^my_ih_{{\bf x}}(x, x_i)$ that $ \mathbf E[f_S|{\bf x}]= \sum_{i=1}^mf_\rho(x_i)h_{\bf x}(\cdot, x_i)$.
Hence, \begin{eqnarray*} &&\mathbf E\left[\left\langle f_S-\sum_{i=1}^mf_\rho(x_i)h_{\bf x}(\cdot, x_i), \sum_{i=1}^mf_\rho(x_i)h_{\bf x}(\cdot, x_i) -f_\rho\right\rangle_\mu|{\bf x}\right]\\ &=& \left\langle \mathbf E\left[f_S|{\bf x}\right]-\sum_{i=1}^mf_\rho(x_i)h_{\bf x}(\cdot, x_i), \sum_{i=1}^mf_\rho(x_i)h_{\bf x}(\cdot, x_i)-f_\rho\right\rangle_\mu =0. \end{eqnarray*} Thus, along with the inner-product expression \begin{eqnarray*} \|f_S-f_\rho\|_\mu^2 &=&\left\|f_S-\sum_{i=1}^mf_\rho(x_i)h_{\bf x}(\cdot, x_i)\right\|_\mu^2+ \left\|\sum_{i=1}^mf_\rho(x_i)h_{\bf x}(\cdot, x_i)-f_\rho\right\|_\mu^2\\ &+& 2\left\langle f_S-\sum_{i=1}^mf_\rho(x_i)h_{\bf x}(\cdot, x_i), \sum_{i=1}^mf_\rho(x_i)h_{\bf x}(\cdot, x_i)-f_\rho\right\rangle_\mu \end{eqnarray*} the above equality yields the desired result (\ref{unbias}). This completes the proof of Lemma \ref{Lemma:important2}. $\Box$ We are now ready to prove the two main results of the paper. {\bf Proof of Theorem \ref{Theorem: optimal rate without feedback}.} We divide the proof into four steps, namely: error decomposition, sampling error estimation, approximation error estimation, and learning rate deduction. {\it Step 1: Error decomposition.} Let $\dot{H}_{\bf k,j}$ be the set of interior points of $H_{\bf k,j}$. For arbitrarily fixed ${\bf k',j'}$ and $x\in \dot{H}_{\bf k',j'}$, it follows from (\ref{N1&3}) that \begin{eqnarray*} \sum_{{\bf j}\in \mathbb N_{2 q^*}^D} \sum_{{\bf k}\in \mathbb N_{2n}^d} \sum_{i=1}^m N_{1,D, q^*, \zeta_{{\bf j}}} (x_i) N_{3,{\bf k,j}}(x_i) y_i N_{3,{\bf k,j}}(x) &=& \sum_{i=1}^my_iN_{1,D, q^*, \zeta_{{\bf j'}}} (x_i) N_{3,{\bf k',j'}}(x_i)\\ &=&\sum_{i=1}^my_iI_{H_{\bf k',j'}}(x_i). 
\end{eqnarray*} If, in addition, each $x_i\in \dot{H}_{\bf k,j}$ for some $({\bf j,k})\in \mathbb N_{2 q^*}^D\times \mathbb N_{2n}^d$, then we have, from (\ref{final estimator1}), that \begin{equation}\label{rewritten N3} N_{3}(x)=\frac{\sum_{i=1}^my_iI_{H_{\bf k',j'}}(x_i)}{ \sum_{i=1}^mI_{H_{\bf k',j'}}(x_i)} = \frac{\sum_{i=1}^my_iI_{H_{\bf k',j'}}(x_i)}{T_{\bf k',j'}(S)}. \end{equation} In view of Assumption \ref{Assumption:rhox}, it follows that for an arbitrary subset $A\subset \mathbb R^{D}$, $\lambda_G(A)=0$ implies $\rho_X(A)=0$, where $\lambda_G$ denotes the Riemannian measure on the manifold $\mathcal X$. In particular, for $A=H_{\bf k,j}\backslash\dot{H}_{\bf k,j}$ in the above analysis, we have $\rho_X(H_{\bf k,j}\backslash\dot{H}_{\bf k,j})=0$, which implies that (\ref{rewritten N3}) almost surely holds. Next, set \begin{equation}\label{Definition of tilde n4444} \widetilde{N_{3}} :=\mathbf E\left[N_3 |{\bf x}\right]. \end{equation} Then it follows from Lemma \ref{Lemma:important2}, with $\mu=\rho_X$, that \begin{equation}\label{Error decomposition 222} \mathbf E\left[\|N_3-f_\rho\|_\rho^2\right] = \mathbf E\left[\| N_3-\widetilde{N_3}\|_\rho^2\right] + \mathbf E\left[\|\widetilde{N_3}-f_\rho\|_\rho^2\right]. \end{equation} In what follows, the two terms on the right-hand side of (\ref{Error decomposition 222}) will be called the sampling error and the approximation error, respectively. {\it Step 2: Sampling error estimation.} Due to Assumption \ref{Assumption:rhox}, we have \begin{eqnarray}\label{sam 2.1} \mathbf E[\|N_3 -\widetilde{N_3}\|_\rho^2] = \sum_{({\bf j,k})\in\mathbb N_{2q^*}^D\times\mathbb N_{2n}^d} \int_{\dot{H}_{\bf k,j}} \mathbf E\left[(N_3 (x)-\widetilde{N_3} (x))^2\right]d\rho_X.
\end{eqnarray} On the other hand, (\ref{rewritten N3}) and (\ref{Definition of tilde n4444}) together imply that $$ N_{3}(x)-\widetilde{N_3}(x)= \frac{\sum_{i=1}^m(y_i-f_\rho(x_i))I_{H_{\bf k,j}}(x_i)}{T_{\bf k,j}(S)} $$ almost surely for $x\in \dot{H}_{\bf k,j}$, and that $$ \mathbf E\left[(N_3 (x)-\widetilde{N_3} (x))^2|{\bf x}\right] = \frac{\sum_{i=1}^m\int_{\mathcal Y}(y-f_\rho(x_i))^2d\rho(y|x_i)I^2_{H_{\bf k,j}}(x_i)}{[T_{\bf k,j}(S)]^2} \leq 4M^2\frac{I_{\{z:T_{\bf k,j}(z)>0\}}(S)}{T_{\bf k,j}(S)}, $$ where we have used $\mathbf E[y_i|x_i]=f_\rho(x_i)$ in the equality, and $I^2_{H_{\bf k,j}}(x_i)=I_{H_{\bf k,j}}(x_i)$ together with the almost sure bound $|y_i|\leq M$ in the inequality. It then follows from Lemma \ref{Lemma:important} and Assumption \ref{Assumption:rhox} that $$ \mathbf E\left[(N_3 (x)-\widetilde{N_3} (x))^2\right] \leq \frac{8 M^2}{(m+1)\rho_X(H_{\bf k, j})}. $$ This, together with (\ref{sam 2.1}), implies that \begin{eqnarray}\label{sam 2.est} \mathbf E[\|N_3 -\widetilde{N_3}\|_\rho^2] \leq \sum_{({\bf j,k})\in\mathbb N_{2q^*}^D\times\mathbb N_{2n}^d} \int_{\dot{H}_{\bf k,j}} \frac{8M^2}{(m+1)\rho_X(H_{\bf k, j})}d\rho_X \leq \frac{8(2q^*)^D(2n)^dM^2}{m+1}. \end{eqnarray} {\it Step 3: Approximation error estimation.} According to Assumption \ref{Assumption:rhox}, we have \begin{eqnarray}\label{app 2.1} \mathbf E[\|f_\rho -\widetilde{N_3}\|_\rho^2] = \sum_{({\bf j,k})\in\mathbb N_{2q^*}^D\times\mathbb N_{2n}^d} \int_{\dot{H}_{\bf k,j}} \mathbf E\left[(f_\rho (x)-\widetilde{N_3} (x))^2\right]d\rho_X. \end{eqnarray} For $x\in \dot{H}_{\bf k,j}$, it follows from Assumption \ref{Assumption:frho}, (\ref{rewritten N3}) and (\ref{Definition of tilde n4444}) that \begin{eqnarray*} &&\left|f_\rho (x)-\widetilde{N_3} (x)\right|\leq \frac{\sum_{i=1}^m|f_\rho(x)-f_\rho(x_i)|I_{H_{\bf k,j}}(x_i)}{T_{\bf k,j}(S)}\leq c_0(\max_{x,x'\in H_{\bf k,j}}d_G(x,x'))^s \end{eqnarray*} almost surely.
We then have, from (\ref{equality of distance of N2}) and $N_{2,{\bf j}}(x),N_{2,{\bf j}}(x') \in A_{d,1/n,t_{\bf k}}$, that $$ \max_{x,x'\in H_{{\bf k},{\bf j}}} d_G(x,x') \leq \max_{x,x'\in H_{{\bf k},{\bf j}}} \alpha^{-1} \|N_{2,{\bf j}}(x)-N_{2,{\bf j}}(x')\|_d. $$ Now, since $\max_{t,t'\in A_{d,1/n,t_{\bf k}}}\|t-t'\|_d\leq \frac{2\sqrt{d}}n$, we obtain $$ \max_{x,x'\in H_{{\bf k},{\bf j}}} d_G(x,x') \leq \frac{2 d^{1/2}}{\alpha }n^{-1}, $$ so that $$ \left|f_\rho (x)-\widetilde{N_3} (x)\right|\leq c_0\frac{2^s d^{s/2}}{\alpha^s}n^{-s} $$ almost surely holds. Inserting the above estimate into (\ref{app 2.1}), we obtain \begin{eqnarray}\label{app 2.est} \mathbf E[\|f_\rho -\widetilde{N_3}\|_\rho^2] \leq \sum_{({\bf j,k})\in\mathbb N_{2q^*}^D\times\mathbb N_{2n}^d} \rho_X({\dot{H}_{\bf k,j}})\frac{c_0^24^s d^{s}}{\alpha^{2s}}n^{-2s} \leq \frac{c_0^24^s d^{s}}{\alpha^{2s}}n^{-2s}. \end{eqnarray} {\it Step 4: Learning rate deduction.} Inserting (\ref{app 2.est}) and (\ref{sam 2.est}) into (\ref{Error decomposition 222}), we obtain \begin{eqnarray*} &&\mathbf E\left[\|N_3 -f_\rho\|_\rho^2\right] \leq \frac{8(2q^*)^D(2n)^dM^2}{m+1} + \frac{c_0^24^s d^{s}}{\alpha^{2s}}n^{-2s}. \end{eqnarray*} Since $n=\lceil m^{1/(2s+d)}\rceil$, we have $$ \mathbf E\left[\|N_3-f_\rho\|_\rho^2\right]\leq C_1m^{-\frac{2s}{2s+d}} $$ with $$ C_1:=8(2q^*)^D2^dM^2+\frac{c_0^24^{s}d^{s}}{\alpha^{2s}}. $$ As $q^*$ depends only on $\mathcal X$, $C_1$ is independent of $m$ and $n$. This completes the proof of Theorem \ref{Theorem: optimal rate without feedback}. $\Box$ {\bf Proof of Theorem \ref{Theorem: optimal rate}.} Similar to the proof of Theorem \ref{Theorem: optimal rate without feedback}, we also divide this proof into four steps.
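As a quick numerical sanity check of Step 4 above (not part of the proof), the choice $n=\lceil m^{1/(2s+d)}\rceil$ indeed balances the two error terms: both $n^{-2s}$ and $n^{d}/m$ are of order $m^{-2s/(2s+d)}$. The values of $s$, $d$ and $m$ below are illustrative.

```python
import math

# Check that n = ceil(m^(1/(2s+d))) balances the sampling term n^d / m
# against the approximation term n^(-2s); illustrative s, d, m values.
s, d = 1.0, 3
rate = 2 * s / (2 * s + d)

for m in [10**3, 10**5, 10**7]:
    n = math.ceil(m ** (1 / (2 * s + d)))
    target = m ** (-rate)
    assert n ** (-2 * s) <= target            # approximation term
    assert n ** d / m <= 2 ** d * target      # sampling term, since n <= 2 m^(1/(2s+d))
```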
{\it Step 1: Error decomposition.} From (\ref{feedback 1}), we have \begin{equation}\label{defienition of feedback} N_3^F(x)= \sum_{i=1}^m y_i h_{\bf x}(x, x_i), \end{equation} where $h_{\bf x}: \mathcal X \times \mathcal X \to \mathbb R$ is a function defined for $x, u \in \mathcal X$ by \begin{equation}\label{def h} h_{\bf x}(x, u) = \frac{\sum_{{\bf j}\in \mathbb N_{2 q^*}^D} \sum_{{\bf k}\in \mathbb N_{2n}^d} \Phi_{\bf k,j}(x, u)}{\sum_{{\bf j}\in \mathbb N_{2 q^*}^D} \sum_{{\bf k}\in \mathbb N_{2n}^d} \sum_{i=1}^m \Phi_{\bf k,j}(x, x_i)}, \end{equation} and $h_{\bf x}(x, u)=0$ when the denominator vanishes. Define $\widetilde{N^F_{3}}: \mathcal X \to \mathbb R$ by \begin{equation}\label{Definition of tilde n3} \widetilde{N^F_{3}}(x) =\mathbf E\left[N^F_3(x)| {\bf x}\right]=\sum_{i=1}^m f_\rho (x_i) h_{\bf x}(x, x_i). \end{equation} Then it follows from Lemma \ref{Lemma:important2} with $\mu=\rho_X$, that \begin{equation}\label{Error decomposition} \mathbf E\left[\|N_3^F-f_\rho\|_\rho^2\right] = \mathbf E\left[\| N_3^F-\widetilde{N^F_3}\|_\rho^2\right] + \mathbf E\left[\|\widetilde{N^F_3}-f_\rho\|_\rho^2\right]. \end{equation} In what follows, the terms on the right-hand side of (\ref{Error decomposition}) will be called sampling error and approximation error, respectively. By (\ref{N1&3}), for each $x\in \mathcal X$ and $i\in\{1, \cdots, m\}$, we have $\Phi_{\bf k,j}(x, x_i) = I_{H_{\bf k,j}}(x_i) N_{3,{\bf k,j}}(x) = I_{H_{\bf k, j}}(x_i)$ for $({\bf j,k})\in\Lambda_x$ and $\Phi_{\bf k,j}(x, x_i) = 0$ for $({\bf j,k})\notin\Lambda_x$, where $\Lambda_x$ is defined by (\ref{Lambdaset}). 
This, together with (\ref{Definition of tilde n3}), (\ref{defienition of feedback}) and (\ref{def h}), yields both \begin{equation}\label{fdiff} N_3^F(x) - \widetilde{N^F_{3}}(x) = \sum_{i=1}^m \left(y_i - f_\rho (x_i)\right) \frac{\sum_{({\bf j}, {\bf k})\in \Lambda_x} I_{H_{\bf k, j}}(x_i)}{\sum_{({\bf j}, {\bf k})\in \Lambda_x} T_{{\bf k, j}}(S)}, \qquad \forall x\in \mathcal X \end{equation} and \begin{equation}\label{fdiff11} \widetilde{N^F_{3}}(x)-f_\rho(x) = \sum_{i=1}^m [f_\rho (x_i)-f_\rho(x)] \frac{\sum_{({\bf j}, {\bf k})\in \Lambda_x} I_{H_{\bf k, j}}(x_i)}{\sum_{({\bf j}, {\bf k})\in \Lambda_x} T_{{\bf k, j}}(S)}, \qquad \forall x\in \mathcal X, \end{equation} where $T_{{\bf k, j}}(S)=\sum_{i=1}^mI_{H_{\bf k, j}}(x_i).$ {\it Step 2: Sampling error estimation.} First consider \begin{equation}\label{sam 1} \mathbf E \left[\|N_3^F-\widetilde{N^F_3}\|_\rho^2\right] \leq \sum_{({\bf j,k}) \in \mathbb N_{2q^*}^D \times \mathbb N_{2n}^d} \int_{H_{\bf k,j}} \mathbf E \left[\left(N^F_3(x)-\widetilde{N_3^F}(x)\right)^2\right] d\rho_X. \end{equation} Then for each $x\in H_{\bf k,j}$, since $\mathbb E[y|x]=f_\rho(x)$, it follows from (\ref{fdiff}) and $|y|\leq M$ that \begin{eqnarray*} &&\mathbf E \left[\left(N^F_3(x)-\widetilde{N_3^F}(x)\right)^2|{\bf x}\right] = \mathbf E\left[\left(\sum_{i=1}^m \left(y_i - f_\rho (x_i)\right) \frac{\sum_{({\bf j}, {\bf k})\in \Lambda_x} I_{H_{\bf k, j}}(x_i)}{\sum_{({\bf j}, {\bf k})\in \Lambda_x} T_{{\bf k, j}}(S)}\right)^2\big|{\bf x}\right]\\ &=& \mathbf E\left[ \sum_{i=1}^m \left(y_i - f_\rho (x_i)\right)^2 \left(\frac{\sum_{({\bf j}, {\bf k})\in \Lambda_x} I_{H_{\bf k, j}}(x_i)}{\sum_{({\bf j}, {\bf k})\in \Lambda_x} T_{{\bf k, j}}(S)}\right)^2\big|{\bf x}\right]\\ &\leq& 4M^2\sum_{i=1}^m\left(\frac{\sum_{({\bf j}, {\bf k})\in \Lambda_x} I_{H_{\bf k, j}}(x_i)}{\sum_{({\bf j}, {\bf k})\in \Lambda_x} T_{{\bf k, j}}(S)}\right)^2 \end{eqnarray*} almost surely holds. 
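The equality between the first and second lines of the preceding display is the usual vanishing of cross terms: conditionally on ${\bf x}$, the $y_i$ are independent with $\mathbf E[y_i|x_i]=f_\rho(x_i)$, so for $i\neq l$ (a one-line sketch):

```latex
\mathbf E\bigl[(y_i-f_\rho(x_i))(y_l-f_\rho(x_l))\,\big|\,{\bf x}\bigr]
 =\mathbf E\bigl[y_i-f_\rho(x_i)\,\big|\,x_i\bigr]\,
  \mathbf E\bigl[y_l-f_\rho(x_l)\,\big|\,x_l\bigr]=0.
```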
Hence, since $\sum_{i=1}^m I_{H_{\bf k, j}}(x_i)= T_{{\bf k, j}}(S)$, we may apply the Schwarz inequality to $\sum_{({\bf j}, {\bf k})\in \Lambda_x}I_{H_{\bf k, j}}(x_i)$ to obtain \begin{eqnarray*} &&\mathbf E \left[\left(N^F_3(x)-\widetilde{N_3^F}(x)\right)^2|{\bf x}\right] \leq \frac{4M^2|\Lambda_x|\sum_{({\bf j}, {\bf k})\in \Lambda_x}\sum_{i=1}^mI^2_{H_{\bf k, j}}(x_i)}{\left(\sum_{({\bf j}, {\bf k})\in \Lambda_x} T_{{\bf k, j}}(S)\right)^2}\\ &=& \frac{4M^2|\Lambda_x|I_{\{z\in\mathcal Z^m:\sum_{({\bf j}, {\bf k})\in \Lambda_x}T_{{\bf k,j}}>0\}} (S)}{\sum_{({\bf j}, {\bf k})\in \Lambda_x} T_{{\bf k, j}}(S)}. \end{eqnarray*} Thus, it follows from Lemma \ref{Lemma:important} and (\ref{cap1}) that \begin{eqnarray*} \mathbf E \left[\left(N^F_3(x)-\widetilde{N_3^F}(x)\right)^2\right] &=& \mathbf E\left[\mathbf E \left[\left(N^F_3(x)-\widetilde{N_3^F}(x)\right)^2|{\bf x}\right]\right]\\ &\leq& \frac{8M^22^{D+d} }{(m+1)\rho_X(\cup_{({\bf j,k})\in\Lambda_x}H_{\bf k, j})}. \end{eqnarray*} This, along with (\ref{sam 1}), implies that \begin{eqnarray}\label{Sam est} &&\mathbf E \left[\|N_3^F-\widetilde{N^F_3}\|_\rho^2\right] \leq \frac{2^{D+d+3}M^2 }{(m+1)} \sum_{({\bf j,k}) \in \mathbb N_{2q^*}^D \times \mathbb N_{2n}^d} \int_{H_{\bf k,j}} \frac{1}{\rho_X(\cup_{({\bf j,k})\in\Lambda_x}H_{\bf k, j})} d\rho_X\nonumber\\ &\leq& \frac{2^{D+d+3}M^2 }{(m+1)} \sum_{({\bf j,k}) \in \mathbb N_{2q^*}^D \times \mathbb N_{2n}^d} \int_{H_{\bf k,j}} \frac{1}{\rho_X(H_{\bf k, j})} d\rho_X \leq \frac{2^{D+d+3}{(2q^*)}^DM^2{(2n)}^d }{(m+1)}. 
\end{eqnarray} {\it Step 3: Approximation error estimation.} For each $x\in \mathcal X$, set $$ A_1(x):= \mathbf E\left[( \widetilde{N_3^F}(x)-f_\rho(x))^2| \sum_{({\bf j}, {\bf k})\in \Lambda_x}T_{{\bf k,j}}(S) =0\right] Pr\left[ \sum_{({\bf j}, {\bf k})\in \Lambda_x}T_{{\bf k,j}}(S) =0\right] $$ and $$ A_2(x):= \mathbf E\left[( \widetilde{N_3^F}(x)-f_\rho(x))^2| \sum_{({\bf j}, {\bf k})\in \Lambda_x}T_{{\bf k,j}}(S) \geq1\right] Pr\left[\sum_{({\bf j}, {\bf k})\in \Lambda_x}T_{{\bf k,j}}(S)\geq 1\right]; $$ and observe that \begin{equation}\label{app 1} \mathbf E \left[\| \widetilde{N^F_3}-f_\rho\|_\rho^2\right] = \int_{\mathcal X} \mathbf E \left[\left( \widetilde{N_3^F}(x)-f_\rho(x)\right)^2\right] d\rho_X = \int_{\mathcal X}A_1(x)d\rho_X+\int_{\mathcal X}A_2(x)d\rho_X. \end{equation} Let us first consider $\int_{\mathcal X}A_1(x)d\rho_X$ as follows. Since $\widetilde{N_3^F}(x)=0$ for $\sum_{({\bf j}, {\bf k})\in \Lambda_x}T_{{\bf k,j}}(S) =0$, we have, from $ |f_\rho(x)|\leq M$, that $$ \mathbf E\left[( \widetilde{N_3^F}(x)-f_\rho(x))^2| \sum_{({\bf j}, {\bf k})\in \Lambda_x}T_{{\bf k,j}}(S)=0\right]\leq M^2.
$$ On the other hand, since $$ Pr\left[\sum_{({\bf j}, {\bf k})\in \Lambda_x}T_{{\bf k,j}}(S)=0\right] =[1-\rho_X(\cup_{({\bf j,k})\in \Lambda_x}H_{\bf k,j })]^{m}, $$ it follows from the elementary inequality $$ v(1-v)^m\leq ve^{-mv}\leq\frac1{em},\qquad\forall 0\leq v\leq 1 $$ that \begin{eqnarray}\label{Bound A_1(x)} &&\int_{\mathcal X}A_1(x)d\rho_X \leq \int_{\mathcal X}M^2[1-\rho_X(\cup_{({\bf j,k})\in \Lambda_x}H_{\bf k,j })]^{m}d\rho_X \nonumber\\ &\leq& M^2\sum_{({\bf j',k'})\in\mathbb N_{2q^*}^D\times\mathbb N_{2n}^d}\int_{{H}_{\bf k',j'}}[1-\rho_X(\cup_{({\bf j,k})\in \Lambda_x}H_{\bf k,j })]^{m}d\rho_X \nonumber\\ &\leq& M^2\sum_{({\bf j,k})\in\mathbb N_{2q^*}^D\times\mathbb N_{2n}^d} \int_{{H}_{\bf k,j}}[1-\rho_X(H_{\bf k,j })]^{m}d\rho_X \leq M^2\sum_{({\bf j,k})\in\mathbb N_{2q^*}^D\times\mathbb N_{2n}^d}[1-\rho_X(H_{\bf k,j })]^{m}\rho_X({H}_{\bf k,j})\nonumber\\ &\leq& \frac{(2n)^d(2q^*)^DM^2}{em}. \end{eqnarray} We next consider $\int_{\mathcal X}A_2(x)d\rho_X$. Let $x\in \mathcal X$ be such that $\sum_{({\bf j,k})\in \Lambda_x}T_{{\bf k,j}}(S)\geq 1$. Then $x_i\in H_x:=\cup_{({\bf j,k})\in\Lambda_x}H_{\bf k,j}$ for at least one $i\in\{1,2,\dots,m\}$. For those $x_i\notin H_x$, we have $\sum_{({\bf j,k})\in \Lambda_x}I_{H_{\bf k,j}}(x_i)=0$, so that \begin{eqnarray*} &&\left|\widetilde{N^F_{3}}(x)-f_\rho(x)\right| \leq \sum_{i:x_i\in H_x} |f_\rho (x_i)-f_\rho(x)| \frac{\sum_{({\bf j}, {\bf k})\in \Lambda_x} I_{H_{\bf k, j}}(x_i)}{\sum_{({\bf j}, {\bf k})\in \Lambda_x} T_{{\bf k, j}}(S)}. \end{eqnarray*} For $x_i\in H_x$, we have $x_i\in H_{\bf k,j}$ for some $({\bf j,k})\in\Lambda_x$. But $x\in H_{\bf k,j}$, so that \begin{eqnarray*} |\widetilde{N_3^F}(x) -f_\rho(x)| \leq \max_{u,u'\in H_{{\bf k},{\bf j}}}|f_\rho(u)-f_\rho(u')|\leq c_0 \max_{u,u'\in H_{{\bf k},{\bf j}}}[d_G(u,u')]^s.
\end{eqnarray*} But (\ref{equality of distance of N2}) implies that \begin{eqnarray*} \max_{u,u'\in { H_{{\bf k},{\bf j}}}} [d_G(u,u')]^s &\leq& \max_{u,u'\in { H_{{\bf k},{\bf j}}}}\alpha^{-s} \|N_{2,{\bf j}_x}(u)-N_{2,{\bf j}_x}(u')\|^s_d \leq \alpha^{-s}\max_{t,t'\in A_{d,1/n,t_{{\bf k}}}}\|t-t'\|^s_d\\ &\leq& \frac{2^sd^{s/2}}{\alpha^s}n^{-s}. \end{eqnarray*} Hence, for $x\in\mathcal X$ with $\sum_{({\bf j,k})\in\Lambda_x}T_{\bf k,j}(S)\geq 1$, we have $$ |\widetilde{N_3^F}(x) -f_\rho(x)| \leq \frac{c_02^{s}d^{s/2}}{\alpha^s}n^{-s}\frac{\sum_{i:x_i\in H_x} \sum_{({\bf j,k})\in\Lambda_x}I_{H_{\bf k,j}}(x_i)}{\sum_{({\bf j,k})\in\Lambda_x}T_{\bf k,j}(S)} \leq \frac{c_02^{s}d^{s/2}}{\alpha^s}n^{-s}, $$ and consequently, \begin{equation}\label{Bound A_2} \int_{\mathcal X}A_2(x)d\rho_X \leq \int_{\mathcal X}\mathbf E\left[( \widetilde{N_3^F}(x)-f_\rho(x))^2| \sum_{({\bf j}, {\bf k})\in \Lambda_x}T_{{\bf k,j}}(S) \geq1\right]d\rho_X \leq \frac{c_0^24^{s}d^{s}}{\alpha^{2s}}n^{-2s}. \end{equation} Therefore, putting (\ref{Bound A_1(x)}) and (\ref{Bound A_2}) into (\ref{app 1}), we have \begin{equation}\label{app est} \mathbf E \left[\| \widetilde{N^F_3}-f_\rho\|_\rho^2\right] \leq \frac{c_0^24^{s}d^{s}}{\alpha^{2s}}n^{-2s}+\frac{M^2(2n)^d(2q^*)^D}{em}. \end{equation} {\it Step 4: Learning rate deduction.} By inserting (\ref{Sam est}) and (\ref{app est}) into (\ref{Error decomposition}), we obtain \begin{eqnarray*} &&\mathbf E\left[\|N_3^F-f_\rho\|_\rho^2\right] \leq \frac{2^{D+d+3}{(2q^*)}^DM^2{(2n)}^d }{m+1} + \frac{c_0^24^{s}d^{s}}{\alpha^{2s}}n^{-2s}+\frac{M^2(2n)^d(2q^*)^D}{em}. \end{eqnarray*} Hence, in view of $n=\lceil m^{1/(2s+d)}\rceil$, we have $$ \mathbf E\left[\|N_3^F-f_\rho\|_\rho^2\right]\leq C_2m^{-\frac{2s}{2s+d}} $$ with $$ C_2:= 2^{D+d+4}{(2q^*)}^D2^dM^2 +\frac{c_0^24^{s}d^{s}}{\alpha^{2s}}.
$$ Since $q^*$ depends only on $\mathcal X$, $C_2$ is independent of $m$ and $n$. This completes the proof of Theorem \ref{Theorem: optimal rate}. $\Box$ \end{document}
Circular ensemble In the theory of random matrices, the circular ensembles are measures on spaces of unitary matrices introduced by Freeman Dyson as modifications of the Gaussian matrix ensembles.[1] The three main examples are the circular orthogonal ensemble (COE) on symmetric unitary matrices, the circular unitary ensemble (CUE) on unitary matrices, and the circular symplectic ensemble (CSE) on self-dual unitary quaternionic matrices. Probability distributions The distribution of the unitary circular ensemble CUE(n) is the Haar measure on the unitary group U(n). If U is a random element of CUE(n), then $U^{T}U$ is a random element of COE(n); if U is a random element of CUE(2n), then $U^{R}U$ is a random element of CSE(n), where $U^{R}=\left({\begin{array}{ccccccc}0&-1&&&&&\\1&0&&&&&\\&&0&-1&&&\\&&1&0&&&\\&&&&\ddots &&\\&&&&&0&-1\\&&&&&1&0\end{array}}\right)U^{T}\left({\begin{array}{ccccccc}0&1&&&&&\\-1&0&&&&&\\&&0&1&&&\\&&-1&0&&&\\&&&&\ddots &&\\&&&&&0&1\\&&&&&-1&0\end{array}}\right)~.$ Each element of a circular ensemble is a unitary matrix, so it has eigenvalues on the unit circle: $\lambda _{k}=e^{i\theta _{k}}$ with $0\leq \theta _{k}<2\pi $ for k=1,2,...,n, where the $\theta _{k}$ are also known as eigenangles or eigenphases. In the CSE each of these n eigenvalues appears twice. The distributions have densities with respect to the eigenangles, given by $p(\theta _{1},\cdots ,\theta _{n})={\frac {1}{Z_{n,\beta }}}\prod _{1\leq k<j\leq n}|e^{i\theta _{k}}-e^{i\theta _{j}}|^{\beta }~$ on $\mathbb {R} _{[0,2\pi ]}^{n}$ (symmetrized version), where β=1 for COE, β=2 for CUE, and β=4 for CSE. The normalisation constant $Z_{n,\beta }$ is given by $Z_{n,\beta }=(2\pi )^{n}{\frac {\Gamma (\beta n/2+1)}{\left(\Gamma (\beta /2+1)\right)^{n}}}~,$ as can be verified via Selberg's integral formula, or Weyl's integral formula for compact Lie groups.
Generalizations Generalizations of the circular ensemble restrict the matrix elements of U to real numbers [so that U is in the orthogonal group O(n)] or to real quaternion numbers [so that U is in the symplectic group Sp(2n)]. The Haar measure on the orthogonal group produces the circular real ensemble (CRE) and the Haar measure on the symplectic group produces the circular quaternion ensemble (CQE). The eigenvalues of orthogonal matrices come in complex conjugate pairs $e^{i\theta _{k}}$ and $e^{-i\theta _{k}}$, possibly complemented by eigenvalues fixed at +1 or -1. For n=2m even and det U=1, there are no fixed eigenvalues and the phases $\theta _{k}$ have probability distribution[2] $p(\theta _{1},\cdots ,\theta _{m})=C\prod _{1\leq k<j\leq m}(\cos \theta _{k}-\cos \theta _{j})^{2}~,$ with C an unspecified normalization constant. For n=2m+1 odd there is one fixed eigenvalue σ=det U equal to ±1. The phases have distribution $p(\theta _{1},\cdots ,\theta _{m})=C\prod _{1\leq i\leq m}(1-\sigma \cos \theta _{i})\prod _{1\leq k<j\leq m}(\cos \theta _{k}-\cos \theta _{j})^{2}~.$ For n=2m+2 even and det U=-1 there is a pair of eigenvalues fixed at +1 and -1, while the phases have distribution $p(\theta _{1},\cdots ,\theta _{m})=C\prod _{1\leq i\leq m}(1-\cos ^{2}\theta _{i})\prod _{1\leq k<j\leq m}(\cos \theta _{k}-\cos \theta _{j})^{2}~.$ This is also the distribution of the eigenvalues of a matrix in Sp(2m). These probability density functions are referred to as Jacobi distributions in the theory of random matrices, because correlation functions can be expressed in terms of Jacobi polynomials. Calculations Averages of products of matrix elements in the circular ensembles can be calculated using Weingarten functions. For large dimension of the matrix these calculations become impractical, and a numerical method is advantageous.
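One such numerical method, described in reference [3], draws a Haar-distributed (CUE) matrix by QR-decomposing a complex Ginibre matrix and correcting the phases of R's diagonal. A minimal NumPy sketch (function names are ours), which also produces a COE sample as $U^{T}U$:

```python
import numpy as np

def haar_unitary(n, rng):
    """Sample a CUE(n) matrix: QR of a complex Ginibre matrix,
    with the phases of R's diagonal divided out (Mezzadri's correction)."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    lam = np.diagonal(r) / np.abs(np.diagonal(r))
    return q * lam  # rescale each column of Q by a unit phase

rng = np.random.default_rng(0)
u = haar_unitary(4, rng)
assert np.allclose(u.conj().T @ u, np.eye(4))            # unitary
assert np.allclose(np.abs(np.linalg.eigvals(u)), 1.0)    # eigenvalues on the unit circle

coe = u.T @ u   # a COE(4) sample: symmetric and unitary
assert np.allclose(coe, coe.T)
```

Without the phase correction, plain `np.linalg.qr` does not return a Haar-distributed Q, which is the point of the diagonal rescaling above.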
There exist efficient algorithms to generate random matrices in the circular ensembles, for example by performing a QR decomposition on a Ginibre matrix.[3] References 1. F.M. Dyson (1962). "The threefold way. Algebraic structure of symmetry groups and ensembles in quantum mechanics". Journal of Mathematical Physics. 3 (6): 1199. Bibcode:1962JMP.....3.1199D. doi:10.1063/1.1703863. 2. V.L. Girko (1985). "Distribution of eigenvalues and eigenvectors of orthogonal random matrices". Ukrainian Mathematical Journal. 37 (5): 457. doi:10.1007/bf01061167. S2CID 120597749. 3. F. Mezzadri (2007). "How to generate random matrices from the classical compact groups" (PDF). Notices of the AMS. 54: 592. arXiv:math-ph/0609050. Bibcode:2006math.ph...9050M. Software Implementations • "Wolfram Mathematica circular ensembles". Wolfram Language. • Suezen, Mehmet (2017). "Bristol: A Python package for Random Matrix Ensembles (Parallel implementation of circular ensemble generation)". doi:10.5281/zenodo.579642. • "Bristol: A Python package for Random Matrix Ensembles". pypi. External links • Mehta, Madan Lal (2004), Random matrices, Pure and Applied Mathematics (Amsterdam), vol. 142 (3rd ed.), Elsevier/Academic Press, Amsterdam, ISBN 978-0-12-088409-4, MR 2129906 • Forrester, Peter J. (2010), Log-gases and random matrices, Princeton University Press, ISBN 978-0-691-12829-0
Universal property In mathematics, more specifically in category theory, a universal property is a property that characterizes up to an isomorphism the result of some constructions. Thus, universal properties can be used for defining some objects independently from the method chosen for constructing them. For example, the definitions of the integers from the natural numbers, of the rational numbers from the integers, of the real numbers from the rational numbers, and of polynomial rings from the field of their coefficients can all be done in terms of universal properties. In particular, the concept of universal property allows a simple proof that all constructions of real numbers are equivalent: it suffices to prove that they satisfy the same universal property. Technically, a universal property is defined in terms of categories and functors by means of a universal morphism (see § Formal definition, below). Universal morphisms can also be thought of more abstractly as initial or terminal objects of a comma category (see § Connection with comma categories, below). Universal properties occur almost everywhere in mathematics, and the use of the concept allows the use of general properties of universal properties for easily proving properties that would otherwise require tedious verification. For example, given a commutative ring R, the field of fractions of the quotient ring of R by a prime ideal p can be identified with the residue field of the localization of R at p; that is $R_{p}/pR_{p}\cong \operatorname {Frac} (R/p)$ (all these constructions can be defined by universal properties).
Other objects that can be defined by universal properties include: all free objects, direct products and direct sums, free groups, free lattices, Grothendieck group, completion of a metric space, completion of a ring, Dedekind–MacNeille completion, product topologies, Stone–Čech compactification, tensor products, inverse limit and direct limit, kernels and cokernels, quotient groups, quotient vector spaces, and other quotient spaces. Motivation Before giving a formal definition of universal properties, we offer some motivation for studying such constructions. • The concrete details of a given construction may be messy, but if the construction satisfies a universal property, one can forget all those details: all there is to know about the construction is already contained in the universal property. Proofs often become short and elegant if the universal property is used rather than the concrete details. For example, the tensor algebra of a vector space is slightly complicated to construct, but much easier to deal with by its universal property. • Universal properties define objects uniquely up to a unique isomorphism.[1] Therefore, one strategy to prove that two objects are isomorphic is to show that they satisfy the same universal property. • Universal constructions are functorial in nature: if one can carry out the construction for every object in a category C then one obtains a functor on C. Furthermore, this functor is a right or left adjoint to the functor U used in the definition of the universal property.[2] • Universal properties occur everywhere in mathematics. By understanding their abstract properties, one obtains information about all these constructions and can avoid repeating the same analysis for each individual instance. Formal definition To understand the definition of a universal construction, it is important to look at examples. 
Universal constructions were not defined out of thin air, but were rather defined after mathematicians began noticing a pattern in many mathematical constructions (see Examples below). Hence, the definition may not make sense to one at first, but will become clear when one reconciles it with concrete examples. Let $F:{\mathcal {C}}\to {\mathcal {D}}$ be a functor between categories ${\mathcal {C}}$ and ${\mathcal {D}}$. In what follows, let $X$ be an object of ${\mathcal {D}}$, while $A$ and $A'$ are objects of ${\mathcal {C}}$, and $h:A\to A'$ is a morphism in ${\mathcal {C}}$. Thus, the functor $F$ maps $A$, $A'$ and $h$ in ${\mathcal {C}}$ to $F(A)$, $F(A')$ and $F(h)$ in ${\mathcal {D}}$. A universal morphism from $X$ to $F$ is a unique pair $(A,u:X\to F(A))$ in ${\mathcal {D}}$ which has the following property, commonly referred to as a universal property: For any morphism of the form $f:X\to F(A')$ in ${\mathcal {D}}$, there exists a unique morphism $h:A\to A'$ in ${\mathcal {C}}$ such that the following diagram commutes: We can dualize this categorical concept. A universal morphism from $F$ to $X$ is a unique pair $(A,u:F(A)\to X)$ that satisfies the following universal property: For any morphism of the form $f:F(A')\to X$ in ${\mathcal {D}}$, there exists a unique morphism $h:A'\to A$ in ${\mathcal {C}}$ such that the following diagram commutes: Note that in each definition, the arrows are reversed. Both definitions are necessary to describe universal constructions which appear in mathematics; but they also arise due to the inherent duality present in category theory. In either case, we say that the pair $(A,u)$ which behaves as above satisfies a universal property. Connection with comma categories Universal morphisms can be described more concisely as initial and terminal objects in a comma category (i.e. one where morphisms are seen as objects in their own right). Let $F:{\mathcal {C}}\to {\mathcal {D}}$ be a functor and $X$ an object of ${\mathcal {D}}$. 
Then recall that the comma category $(X\downarrow F)$ is the category where • Objects are pairs of the form $(B,f:X\to F(B))$, where $B$ is an object in ${\mathcal {C}}$ • A morphism from $(B,f:X\to F(B))$ to $(B',f':X\to F(B'))$ is given by a morphism $h:B\to B'$ in ${\mathcal {C}}$ such that the diagram commutes: Now suppose that the object $(A,u:X\to F(A))$ in $(X\downarrow F)$ is initial. Then for every object $(A',f:X\to F(A'))$, there exists a unique morphism $h:A\to A'$ such that the following diagram commutes. Note that the equality here simply means the diagrams are the same. Also note that the diagram on the right side of the equality is the exact same as the one offered in defining a universal morphism from $X$ to $F$. Therefore, we see that a universal morphism from $X$ to $F$ is equivalent to an initial object in the comma category $(X\downarrow F)$. Conversely, recall that the comma category $(F\downarrow X)$ is the category where • Objects are pairs of the form $(B,f:F(B)\to X)$ where $B$ is an object in ${\mathcal {C}}$ • A morphism from $(B,f:F(B)\to X)$ to $(B',f':F(B')\to X)$ is given by a morphism $h:B\to B'$ in ${\mathcal {C}}$ such that the diagram commutes: Suppose $(A,u:F(A)\to X)$ is a terminal object in $(F\downarrow X)$. Then for every object $(A',f:F(A')\to X)$, there exists a unique morphism $h:A'\to A$ such that the following diagrams commute. The diagram on the right side of the equality is the same diagram pictured when defining a universal morphism from $F$ to $X$. Hence, a universal morphism from $F$ to $X$ corresponds with a terminal object in the comma category $(F\downarrow X)$. Examples Below are a few examples, to highlight the general idea. The reader can construct numerous other examples by consulting the articles mentioned in the introduction. 
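Before the categorical examples, here is a concrete instance of a universal property rendered in ordinary code: the free monoid on a set (Python lists) together with the fold that uniquely extends a map into any monoid. The helper name `extend` is ours, not a standard library API.

```python
from functools import reduce

# Universal property of the free monoid: any map f : A -> M into a
# monoid (M, op, e) extends uniquely to a homomorphism on lists over A.
def extend(f, op, e):
    return lambda word: reduce(op, (f(a) for a in word), e)

# Example: sending every letter to 1 in the monoid (int, +, 0)
# extends to the length function on words.
length = extend(lambda a: 1, lambda x, y: x + y, 0)
assert length(list("abc")) == 3

# Homomorphism property: concatenation maps to the monoid operation.
w1, w2 = list("ab"), list("cde")
assert length(w1 + w2) == length(w1) + length(w2)
```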
Tensor algebras Let ${\mathcal {C}}$ be the category of vector spaces $K$-Vect over a field $K$ and let ${\mathcal {D}}$ be the category of algebras $K$-Alg over $K$ (assumed to be unital and associative). Let $U$ : $K$-Alg → $K$-Vect be the forgetful functor which assigns to each algebra its underlying vector space. Given any vector space $V$ over $K$ we can construct the tensor algebra $T(V)$. The tensor algebra is characterized by the fact: “Any linear map from $V$ to an algebra $A$ can be uniquely extended to an algebra homomorphism from $T(V)$ to $A$.” This statement is an initial property of the tensor algebra since it expresses the fact that the pair $(T(V),i)$, where $i:V\to U(T(V))$ is the inclusion map, is a universal morphism from the vector space $V$ to the functor $U$. Since this construction works for any vector space $V$, we conclude that $T$ is a functor from $K$-Vect to $K$-Alg. This means that $T$ is left adjoint to the forgetful functor $U$ (see the section below on relation to adjoint functors). Products A categorical product can be characterized by a universal construction. For concreteness, one may consider the Cartesian product in Set, the direct product in Grp, or the product topology in Top, where products exist. Let $X$ and $Y$ be objects of a category ${\mathcal {C}}$ with finite products. The product of $X$ and $Y$ is an object $X$ × $Y$ together with two morphisms $\pi _{1}$ : $X\times Y\to X$ $\pi _{2}$ : $X\times Y\to Y$ such that for any other object $Z$ of ${\mathcal {C}}$ and morphisms $f:Z\to X$ and $g:Z\to Y$ there exists a unique morphism $h:Z\to X\times Y$ such that $f=\pi _{1}\circ h$ and $g=\pi _{2}\circ h$. 
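The mediating morphism in this definition can be made concrete in the category of sets, where the product is the Cartesian product and the unique $h$ is the pairing map; a small sketch:

```python
# Universal property of the product in Set: given f : Z -> X and
# g : Z -> Y, the unique h : Z -> X x Y with pi1 . h = f and
# pi2 . h = g is the pairing h(z) = (f(z), g(z)).
def pair(f, g):
    return lambda z: (f(z), g(z))

pi1 = lambda p: p[0]
pi2 = lambda p: p[1]

f = lambda z: z * z     # Z -> X
g = lambda z: z + 1     # Z -> Y
h = pair(f, g)

# Both triangles of the product diagram commute on a sample of Z.
assert all(pi1(h(z)) == f(z) and pi2(h(z)) == g(z) for z in range(5))
```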
To understand this characterization as a universal property, take the category ${\mathcal {D}}$ to be the product category ${\mathcal {C}}\times {\mathcal {C}}$ and define the diagonal functor $\Delta :{\mathcal {C}}\to {\mathcal {C}}\times {\mathcal {C}}$ by $\Delta (X)=(X,X)$ and $\Delta (f:X\to Y)=(f,f)$. Then $(X\times Y,(\pi _{1},\pi _{2}))$ is a universal morphism from $\Delta $ to the object $(X,Y)$ of ${\mathcal {C}}\times {\mathcal {C}}$: if $(f,g)$ is any morphism from $(Z,Z)$ to $(X,Y)$, then it must equal a morphism $\Delta (h:Z\to X\times Y)=(h,h)$ from $\Delta (Z)=(Z,Z)$ to $\Delta (X\times Y)=(X\times Y,X\times Y)$ followed by $(\pi _{1},\pi _{2})$. As a commutative diagram: For the example of the Cartesian product in Set, the morphism $(\pi _{1},\pi _{2})$ comprises the two projections $\pi _{1}(x,y)=x$ and $\pi _{2}(x,y)=y$. Given any set $Z$ and functions $f,g$, the unique map such that the required diagram commutes is given by $h=\langle f,g\rangle $, defined by $h(z)=(f(z),g(z))$.[3] Limits and colimits Categorical products are a particular kind of limit in category theory. One can generalize the above example to arbitrary limits and colimits. Let ${\mathcal {J}}$ and ${\mathcal {C}}$ be categories with ${\mathcal {J}}$ a small index category and let ${\mathcal {C}}^{\mathcal {J}}$ be the corresponding functor category. The diagonal functor $\Delta :{\mathcal {C}}\to {\mathcal {C}}^{\mathcal {J}}$ is the functor that maps each object $N$ in ${\mathcal {C}}$ to the constant functor $\Delta (N):{\mathcal {J}}\to {\mathcal {C}}$ (i.e.
$\Delta (N)(X)=N$ for each $X$ in ${\mathcal {J}}$ and $\Delta (N)(f)=1_{N}$ for each $f:X\to Y$ in ${\mathcal {J}}$) and each morphism $f:N\to M$ in ${\mathcal {C}}$ to the natural transformation $\Delta (f):\Delta (N)\to \Delta (M)$ in ${\mathcal {C}}^{\mathcal {J}}$ defined as, for every object $X$ of ${\mathcal {J}}$, the component $\Delta (f)(X):\Delta (N)(X)\to \Delta (M)(X)=f:N\to M$ at $X$. In other words, the natural transformation is the one defined by having constant component $f:N\to M$ for every object of ${\mathcal {J}}$. Given a functor $F:{\mathcal {J}}\to {\mathcal {C}}$ (thought of as an object in ${\mathcal {C}}^{\mathcal {J}}$), the limit of $F$, if it exists, is nothing but a universal morphism from $\Delta $ to $F$. Dually, the colimit of $F$ is a universal morphism from $F$ to $\Delta $. Properties Existence and uniqueness Defining a quantity does not guarantee its existence. Given a functor $F:{\mathcal {C}}\to {\mathcal {D}}$ and an object $X$ of ${\mathcal {D}}$, there may or may not exist a universal morphism from $X$ to $F$. If, however, a universal morphism $(A,u)$ does exist, then it is essentially unique. Specifically, it is unique up to a unique isomorphism: if $(A',u')$ is another pair, then there exists a unique isomorphism $k:A\to A'$ such that $u'=F(k)\circ u$. This is easily seen by substituting $(A,u')$ in the definition of a universal morphism. It is the pair $(A,u)$ which is essentially unique in this fashion. The object $A$ itself is only unique up to isomorphism. Indeed, if $(A,u)$ is a universal morphism and $k:A\to A'$ is any isomorphism then the pair $(A',u')$, where $u'=F(k)\circ u$ is also a universal morphism. Equivalent formulations The definition of a universal morphism can be rephrased in a variety of ways. Let $F:{\mathcal {C}}\to {\mathcal {D}}$ be a functor and let $X$ be an object of ${\mathcal {D}}$. 
Then the following statements are equivalent: • $(A,u)$ is a universal morphism from $X$ to $F$ • $(A,u)$ is an initial object of the comma category $(X\downarrow F)$ • $(A,F(\bullet )\circ u)$ is a representation of ${\text{Hom}}_{\mathcal {D}}(X,F(-))$, where its components $(F(\bullet )\circ u)_{B}:{\text{Hom}}_{\mathcal {C}}(A,B)\to {\text{Hom}}_{\mathcal {D}}(X,F(B))$ are defined by $(F(\bullet )\circ u)_{B}(f:A\to B):X\to F(B)=F(f)\circ u:X\to F(B)$ for each object $B$ in ${\mathcal {C}}.$ The dual statements are also equivalent: • $(A,u)$ is a universal morphism from $F$ to $X$ • $(A,u)$ is a terminal object of the comma category $(F\downarrow X)$ • $(A,u\circ F(\bullet ))$ is a representation of ${\text{Hom}}_{\mathcal {D}}(F(-),X)$, where its components $(u\circ F(\bullet ))_{B}:{\text{Hom}}_{\mathcal {C}}(B,A)\to {\text{Hom}}_{\mathcal {D}}(F(B),X)$ are defined by $(u\circ F(\bullet ))_{B}(f:B\to A):F(B)\to X=u\circ F(f):F(B)\to X$ for each object $B$ in ${\mathcal {C}}.$ Relation to adjoint functors Suppose $(A_{1},u_{1})$ is a universal morphism from $X_{1}$ to $F$ and $(A_{2},u_{2})$ is a universal morphism from $X_{2}$ to $F$. By the universal property of universal morphisms, given any morphism $h:X_{1}\to X_{2}$ there exists a unique morphism $g:A_{1}\to A_{2}$ such that the following diagram commutes: If every object $X_{i}$ of ${\mathcal {D}}$ admits a universal morphism to $F$, then the assignment $X_{i}\mapsto A_{i}$ and $h\mapsto g$ defines a functor $G:{\mathcal {D}}\to {\mathcal {C}}$. The maps $u_{i}$ then define a natural transformation from $1_{\mathcal {D}}$ (the identity functor on ${\mathcal {D}}$) to $F\circ G$. The functors $(F,G)$ are then a pair of adjoint functors, with $G$ left-adjoint to $F$ and $F$ right-adjoint to $G$. Similar statements apply to the dual situation of terminal morphisms from $F$. 
If such morphisms exist for every $X$ in ${\mathcal {C}}$ one obtains a functor $G:{\mathcal {C}}\to {\mathcal {D}}$ which is right-adjoint to $F$ (so $F$ is left-adjoint to $G$). Indeed, all pairs of adjoint functors arise from universal constructions in this manner. Let $F$ and $G$ be a pair of adjoint functors with unit $\eta $ and co-unit $\epsilon $ (see the article on adjoint functors for the definitions). Then we have a universal morphism for each object in ${\mathcal {C}}$ and ${\mathcal {D}}$: • For each object $X$ in ${\mathcal {C}}$, $(F(X),\eta _{X})$ is a universal morphism from $X$ to $G$. That is, for all $f:X\to G(Y)$ there exists a unique $g:F(X)\to Y$ for which the following diagrams commute. • For each object $Y$ in ${\mathcal {D}}$, $(G(Y),\epsilon _{Y})$ is a universal morphism from $F$ to $Y$. That is, for all $g:F(X)\to Y$ there exists a unique $f:X\to G(Y)$ for which the following diagrams commute. Universal constructions are more general than adjoint functor pairs: a universal construction is like an optimization problem; it gives rise to an adjoint pair if and only if this problem has a solution for every object of ${\mathcal {C}}$ (equivalently, every object of ${\mathcal {D}}$). History Universal properties of various topological constructions were presented by Pierre Samuel in 1948. They were later used extensively by Bourbaki. The closely related concept of adjoint functors was introduced independently by Daniel Kan in 1958. See also • Free object • Natural transformation • Adjoint functor • Monad (category theory) • Variety of algebras • Cartesian closed category Notes 1. Jacobson (2009), Proposition 1.6, p. 44. 2. See for example, Polcino & Sehgal (2002), p. 133. exercise 1, about the universal property of group rings. 3. Fong, Brendan; Spivak, David I. (2018-10-12). "Seven Sketches in Compositionality: An Invitation to Applied Category Theory". arXiv:1803.05316 [math.CT]. 
References

• Cohn, Paul. Universal Algebra. D. Reidel Publishing, 1981. ISBN 90-277-1213-1.
• Mac Lane, Saunders. Categories for the Working Mathematician. Graduate Texts in Mathematics 5 (2nd ed.). Springer, 1998. ISBN 0-387-98403-8.
• Borceux, F. Handbook of Categorical Algebra, vol. 1: Basic Category Theory. Encyclopedia of Mathematics and its Applications. Cambridge University Press, 1994. ISBN 0-521-44178-1.
• Bourbaki, N. Livre II : Algèbre. Hermann, 1970. ISBN 0-201-00639-1.
• Milies, César Polcino; Sehgal, Sudarshan K. An Introduction to Group Rings. Algebras and Applications, Volume 1. Springer, 2002. ISBN 978-1-4020-0238-0.
• Jacobson, Nathan. Basic Algebra II. Dover, 2009. ISBN 0-486-47187-X.
Hall plane

In mathematics, a Hall plane is a non-Desarguesian projective plane constructed by Marshall Hall Jr. (1943).[1] There are examples of order $p^{2n}$ for every prime p and every positive integer n provided $p^{2n}>4$.[2]

Algebraic construction via Hall systems

The original construction of Hall planes was based on the Hall quasifield (also called a Hall system), H of order $p^{2n}$ for p a prime. The creation of the plane from the quasifield follows the standard construction (see quasifield for details). To build a Hall quasifield, start with a Galois field, $F=\operatorname {GF} (p^{n})$ for p a prime and a quadratic irreducible polynomial $f(x)=x^{2}-rx-s$ over F. Extend $H=F\times F$, a two-dimensional vector space over F, to a quasifield by defining a multiplication on the vectors by $(a,b)\circ (c,d)=(ac-bd^{-1}f(c),ad-bc+br)$ when $d\neq 0$ and $(a,b)\circ (c,0)=(ac,bc)$ otherwise. Writing the elements of H in terms of a basis <1, λ>, that is, identifying (x, y) with x + λy as x and y vary over F, we can identify the elements of F as the ordered pairs (x, 0), i.e. x + λ0. The properties of the defined multiplication which turn the right vector space H into a quasifield are:

1. every element α of H not in F satisfies the quadratic equation f(α) = 0;
2. F is in the kernel of H (meaning that (α + β)c = αc + βc, and (αβ)c = α(βc) for all α, β in H and all c in F); and
3. every element of F commutes (multiplicatively) with all the elements of H.[3]

Derivation

Another construction that produces Hall planes is obtained by applying derivation to Desarguesian planes. A process, due to T. G. Ostrom, which replaces certain sets of lines in a projective plane by alternate sets in such a way that the new structure is still a projective plane is called derivation. We give the details of this process.[4] Start with a projective plane $\pi $ of order $n^{2}$ and designate one line $\ell $ as its line at infinity. Let A be the affine plane $\pi \setminus \ell $.
A set D of $n+1$ points of $\ell $ is called a derivation set if for every pair of distinct points X and Y of A which determine a line meeting $\ell $ in a point of D, there is a Baer subplane containing X, Y and D (we say that such Baer subplanes belong to D). Define a new affine plane $\operatorname {D} (A)$ as follows: the points of $\operatorname {D} (A)$ are the points of A; the lines of $\operatorname {D} (A)$ are the lines of $\pi $ which do not meet $\ell $ at a point of D (restricted to A) and the Baer subplanes that belong to D (restricted to A). The set $\operatorname {D} (A)$ is an affine plane of order $n^{2}$ and it, or its projective completion, is called a derived plane.[5]

Properties

1. Hall planes are translation planes.
2. All finite Hall planes of the same order are isomorphic.
3. Hall planes are not self-dual.
4. All finite Hall planes contain subplanes of order 2 (Fano subplanes).
5. All finite Hall planes contain subplanes of order different from 2.
6. Hall planes are André planes.

The Hall plane of order 9

Order: 9
Lenz-Barlotti class: IVa.3
Automorphisms: $2^{8}\times 3^{5}\times 5$
Point orbit lengths: 10, 81
Line orbit lengths: 1, 90
Properties: translation plane

The Hall plane of order 9 is the smallest Hall plane, and one of the three smallest examples of a finite non-Desarguesian projective plane, along with its dual and the Hughes plane of order 9.

Construction

While usually constructed in the same way as other Hall planes, the Hall plane of order 9 was actually found earlier by Oswald Veblen and Joseph Wedderburn in 1907.[6] There are four quasifields of order nine which can be used to construct the Hall plane of order nine. Three of these are Hall systems generated by the irreducible polynomials $f(x)=x^{2}+1$, $g(x)=x^{2}-x-1$ or $h(x)=x^{2}+x-1$.[7] The first of these produces an associative quasifield,[8] that is, a near-field, and it was in this context that the plane was discovered by Veblen and Wedderburn.
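The Hall multiplication above can be checked by direct computation. The following sketch (not part of the article; helper names are illustrative) builds the order-9 Hall system from F = GF(3) and f(x) = x² + 1 (so r = 0 and s = −1 in f(x) = x² − rx − s), then verifies two quasifield properties: right distributivity and the absence of zero divisors.

```python
# Sketch: the Hall system of order 9 from F = GF(3) and f(x) = x^2 + 1.
P, R, S = 3, 0, -1          # prime, and coefficients in f(x) = x^2 - R*x - S

def f(x):
    return (x * x - R * x - S) % P          # here: (x^2 + 1) mod 3

def mult(u, v):
    """Hall multiplication (a,b) o (c,d) as defined in the article."""
    a, b = u
    c, d = v
    if d % P == 0:
        return ((a * c) % P, (b * c) % P)
    dinv = pow(d, P - 2, P)                  # inverse of d in GF(3)
    return ((a * c - b * dinv * f(c)) % P, (a * d - b * c + b * R) % P)

H = [(a, b) for a in range(P) for b in range(P)]
add = lambda u, v: ((u[0] + v[0]) % P, (u[1] + v[1]) % P)

# Right distributivity: (u1 + u2) o v == u1 o v + u2 o v (vector addition),
# since both branches of mult are linear in the left argument.
assert all(mult(add(u1, u2), v) == add(mult(u1, v), mult(u2, v))
           for u1 in H for u2 in H for v in H)

# No zero divisors: x -> x o v is a bijection for every nonzero v.
assert all(len({mult(x, v) for x in H}) == len(H)
           for v in H if v != (0, 0))
```

The same check runs for the other two polynomials g and h by changing R and S accordingly.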
This plane is often referred to as the nearfield plane of order nine.

Automorphism Group

The Hall plane of order 9 is the unique projective plane, finite or infinite, which has Lenz-Barlotti class IVa.3.[9] Its automorphism group acts on its (necessarily unique) translation line imprimitively, having 5 pairs of points that the group preserves set-wise; the automorphism group acts as $S_{5}$ on these 5 pairs.[10]

Unitals

The Hall plane of order 9 admits four inequivalent embedded unitals.[11] Two of these unitals arise from Buekenhout's[12] constructions: one is parabolic, meeting the translation line in a single point, while the other is hyperbolic, meeting the translation line in 4 points. The latter of these two unitals was shown by Grüning[13] to also be embeddable in the dual Hall plane. Another of the unitals arises from the construction of Barlotti and Lunardon.[14] The fourth has an automorphism group of order 8 isomorphic to the quaternions, and is not part of any known infinite family.

Notes

1. Hall (1943)
2. Although the constructions will provide a projective plane of order 4, the unique such plane is Desarguesian and is generally not considered to be a Hall plane.
3. Hughes & Piper (1973, pg. 183)
4. Hughes & Piper (1973, pp. 202–218, Chapter X. Derivation)
5. Hughes & Piper (1973, pg. 203, Theorem 10.2)
6. Veblen, Oscar; Wedderburn, Joseph H.M. (1907), "Non-Desarguesian and non-Pascalian geometries" (PDF), Transactions of the American Mathematical Society, 8 (3): 379–388, doi:10.2307/1988781, JSTOR 1988781
7. Stevenson, Frederick W. (1972), Projective Planes, San Francisco: W.H. Freeman and Company, ISBN 0-7167-0443-9, pp. 333–334.
8. Hughes, D.; Piper, F. (1973). Projective Planes. Springer-Verlag. ISBN 0-387-90044-6. p. 186.
9. Dembowski, Peter (1968). Finite Geometries. Berlin, Heidelberg: Springer. ISBN 978-3-642-62012-6. OCLC 851794158. p. 126.
10. André, Johannes (1955-12-01).
"Projektive Ebenen über Fastkörpern". Mathematische Zeitschrift (in German). 62 (1): 137–160. doi:10.1007/BF01180628. ISSN 1432-1823. S2CID 122641224.
11. Penttila, Tim; Royle, Gordon F. (1995-11-01). "Sets of type (m, n) in the affine and projective planes of order nine". Designs, Codes and Cryptography. 6 (3): 229–245. doi:10.1007/BF01388477. ISSN 1573-7586. S2CID 43638589.
12. Buekenhout, F. (July 1976). "Existence of unitals in finite translation planes of order $q^{2}$ with a kernel of order $q$". Geometriae Dedicata. 5 (2). doi:10.1007/BF00145956. ISSN 0046-5755. S2CID 123037502.
13. Grüning, Klaus (1987-06-01). "A class of unitals of order $q$ which can be embedded in two different planes of order $q^{2}$". Journal of Geometry. 29 (1): 61–77. doi:10.1007/BF01234988. ISSN 1420-8997. S2CID 117872040.
14. Barlotti, A.; Lunardon, G. (1979). "Una classe di unitals nei $\Delta $-piani". Rivista di Matematica della Università di Parma. 4: 781–785.

References

• Dembowski, P. (1968), Finite Geometries, Berlin: Springer-Verlag
• Hall, Marshall Jr. (1943), "Projective Planes" (PDF), Transactions of the American Mathematical Society, 54 (2): 229–277, doi:10.2307/1990331, ISSN 0002-9947, JSTOR 1990331, MR 0008892
• Hughes, D.; Piper, F. (1973). Projective Planes. Springer-Verlag. ISBN 0-387-90044-6.
• Stevenson, Frederick W. (1972), Projective Planes, San Francisco: W.H. Freeman and Company, ISBN 0-7167-0443-9
• Veblen, Oscar; Wedderburn, Joseph H.M. (1907), "Non-Desarguesian and non-Pascalian geometries" (PDF), Transactions of the American Mathematical Society, 8 (3): 379–388, doi:10.2307/1988781, JSTOR 1988781
• Weibel, Charles (2007), "Survey of Non-Desarguesian Planes" (PDF), Notices of the American Mathematical Society, 54 (10): 1294–1303
\begin{document} \title{A Proof of the CSP Dichotomy Conjecture} \author{Dmitriy Zhuk\\ Department of Mechanics and Mathematics \\ Lomonosov Moscow State University\\ Moscow, Russia } \date{} \maketitle \begin{abstract} Many natural combinatorial problems can be expressed as constraint satisfaction problems. This class of problems is known to be NP-complete in general, but certain restrictions on the form of the constraints can ensure tractability. The standard way to parameterize interesting subclasses of the constraint satisfaction problem is via finite constraint languages. The main problem is to classify those subclasses that are solvable in polynomial time and those that are NP-complete. It was conjectured that if a constraint language has a weak near-unanimity polymorphism then the corresponding constraint satisfaction problem is tractable, otherwise it is NP-complete. In this paper we present an algorithm that solves the Constraint Satisfaction Problem in polynomial time for constraint languages having a weak near-unanimity polymorphism, which proves the remaining part of the conjecture. \end{abstract} \section{Introduction} The \emph{Constraint Satisfaction Problem (CSP)} is the problem of deciding whether there is an assignment to a set of variables subject to some specified constraints. 
Formally, the \emph{Constraint Satisfaction Problem} is defined as a triple $\langle \mathbf{X} , \mathbf{D} , \mathbf{C} \rangle$, where \begin{itemize} \item $\mathbf{X}=\{x_1,\ldots ,x_n\}$ is a set of variables, \item $\mathbf{D}=\{D_{1},\ldots ,D_{n}\}$ is a set of the respective domains, \item $\mathbf{C}=\{C_{1},\ldots ,C_{m}\}$ is a set of constraints, \end{itemize} where each variable $x_{i}$ can take on values in the nonempty domain $D_{i}$, and every \emph{constraint} $C_{j}\in \mathbf{C}$ is a pair $(t_{j},\rho_{j})$, where $t_{j}$ is a tuple of variables of length $m_{j}$, called the \emph{constraint scope}, and $\rho_{j}$ is an $m_{j}$-ary relation on the corresponding domains, called the \emph{constraint relation}. The question is whether there exists \emph{a solution} to $\langle \mathbf{X} , \mathbf{D} , \mathbf{C} \rangle$, that is, a mapping that assigns a value from $D_{i}$ to every variable $x_{i}$ such that for each constraint $C_{j}$ the image of the constraint scope is a member of the constraint relation. In this paper we consider only CSP over finite domains. The general CSP is known to be NP-complete \cite{Num26, Num30}; however, certain restrictions on the allowed form of the constraints involved may ensure tractability (solvability in polynomial time) \cite{Num4,Num20,Num22,Num23,CSPconjecture,BulatovAboutCSP}. Below we provide a formalization of this idea. To simplify the formulation of the main result we assume that the domain of every variable is a finite set $A$. Later we will assume that the domain of every variable is a unary relation from the constraint language $\Gamma$ (see below). By $R_{A}$ we denote the set of all finitary relations on $A$, that is, subsets of $A^{m}$ for some $m$. Thus, all the constraint relations are from $R_{A}$. For a set of relations $\Gamma\subseteq R_{A}$ by $\CSP(\Gamma)$ we denote the Constraint Satisfaction Problem where all the constraint relations are from $\Gamma$. 
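For illustration only (this is not the algorithm of the paper, which runs in polynomial time), the $\langle \mathbf{X},\mathbf{D},\mathbf{C}\rangle$ formulation above can be sketched as an exponential-time brute-force checker:

```python
# A minimal brute-force sketch of the <X, D, C> decision problem above.
# Exponential in the number of variables; for illustration only.
from itertools import product

def csp_has_solution(domains, constraints):
    """domains: list D_1..D_n of sets; constraints: pairs (scope, relation),
    where scope is a tuple of variable indices and relation a set of tuples."""
    for assignment in product(*domains):
        if all(tuple(assignment[i] for i in scope) in rel
               for scope, rel in constraints):
            return True
    return False

# Example: x1 != x2, x2 != x3 over {0, 1} (2-colouring of a path).
neq = {(0, 1), (1, 0)}
assert csp_has_solution([{0, 1}] * 3, [((0, 1), neq), ((1, 2), neq)])
# A triangle is not 2-colourable:
assert not csp_has_solution([{0, 1}] * 3,
                            [((0, 1), neq), ((1, 2), neq), ((0, 2), neq)])
```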
The set $\Gamma$ is called \emph{a constraint language}. Another way to formalize the Constraint Satisfaction Problem is via conjunctive formulas. Every $h$-ary relation on $A$ can be viewed as a predicate, that is, a mapping $A^{h}\rightarrow \{0,1\}$. Suppose $\Gamma\subseteq R_{A}$, then $\CSP(\Gamma)$ is the following decision problem: given a formula $$\rho_{1}(v_{1,1},\ldots,v_{1,n_{1}}) \wedge \dots \wedge \rho_{s}(v_{s,1},\ldots,v_{s,n_{s}}),$$ where $\rho_{1},\dots,\rho_{s}\in \Gamma$, and $v_{i,j}\in \{x_{1},\dots,x_{n}\}$ for every $i,j$; decide whether this formula is satisfiable. It is well known that many combinatorial problems can be expressed as $\CSP(\Gamma)$ for some constraint language $\Gamma$. Moreover, for some sets $\Gamma$ the corresponding decision problem can be solved in polynomial time; while for others it is NP-complete. It was conjectured that $\CSP(\Gamma)$ is either in P, or NP-complete \cite{FederVardi}. \begin{conj}\label{FederVardiConj} Suppose $\Gamma\subseteq R_{A}$ is a finite set of relations. Then $\CSP(\Gamma)$ is either solvable in polynomial time, or $NP$-complete. \end{conj} We say that an operation $f\colon A^{n}\to A$ \emph{preserves} the relation $\rho\in R_{A}$ of arity $m$ if for any tuples $(a_{1,1},\ldots,a_{1,m}),\dots,(a_{n,1},\ldots,a_{n,m})\in \rho$ the tuple $(f(a_{1,1},\ldots,a_{n,1}),\ldots,f(a_{1,m},\ldots,a_{n,m}))$ is in $\rho$. We say that an operation \emph{preserves a set of relations $\Gamma$} if it preserves every relation in $\Gamma$. A mapping $f:A\to A$ is called \emph{an endomorphism} of $\Gamma$ if it preserves $\Gamma$. \begin{thm}\label{coreHasSameComplexity}\cite{jeavons1998algebraic} Suppose $\Gamma\subseteq R_{A}$. If $f$ is an endomorphism of $\Gamma$, then $\CSP(\Gamma)$ is polynomially reducible to $\CSP(f(\Gamma))$ and vice versa, where $f(\Gamma)$ is a constraint language with domain $f(A)$ defined by $f(\Gamma) = \{f(\rho)\colon \rho\in \Gamma\}$. 
\end{thm} A constraint language is \emph{a core} if every endomorphism of $\Gamma$ is a bijection. It is not hard to show that if $f$ is an endomorphism of $\Gamma$ with minimal range, then $f(\Gamma)$ is a core. Another important fact is that we can add all singleton unary relations to a core constraint language without increasing the complexity of its $\CSP$. By $\sigma_{=a}$ we denote the unary relation $\{a\}$. \begin{thm}\label{addingIdempotency}\cite{CSPconjecture} Let $\Gamma\subseteq R_{A}$ be a core constraint language, and $\Gamma' = \Gamma\cup \{\sigma_{=a}\mid a\in A\}$. Then $\CSP(\Gamma')$ is polynomially reducible to $\CSP(\Gamma)$. \end{thm} Therefore, to prove Conjecture~\ref{FederVardiConj} it is sufficient to consider only the case when $\Gamma$ contains all unary singleton relations. In other words, all the predicates $x = a$, where $a\in A$, are in the constraint language $\Gamma$. In \cite{Schaefer} Schaefer classified all tractable constraint languages over a two-element domain. In \cite{BulatovForThree} Bulatov generalized the result to the three-element domain. His dichotomy theorem was formulated in terms of a $G$-set. Later, the dichotomy conjecture was formulated in several different forms (see \cite{CSPconjecture}). The result of McKenzie and Mar{\'o}ti~\cite{miklos} allows us to formulate the dichotomy conjecture in the following nice way. An operation $f$ on a set $A$ is called \emph{a weak near-unanimity operation (WNU)} if it satisfies $f(y,x,\ldots,x) = f(x,y,x,\ldots,x) = \dots = f(x,x,\ldots,x,y)$ for all $x,y\in A$. An operation $f$ is called \emph{idempotent} if $f(x,x,\ldots,x) = x$ for all $x\in A$. \begin{conj}\label{mainconj} Suppose $\Gamma\subseteq R_{A}$ is a finite set of relations. Then $\CSP(\Gamma)$ can be solved in polynomial time if there exists a WNU preserving $\Gamma$; $\CSP(\Gamma)$ is NP-complete otherwise. 
\end{conj} It is not hard to see that the existence of a WNU preserving $\Gamma$ is equivalent to the existence of a WNU preserving a core of $\Gamma$, and also equivalent to the existence of an idempotent WNU preserving the core. Hence, Theorems~\ref{coreHasSameComplexity} and \ref{addingIdempotency} imply that it is sufficient to prove Conjecture~\ref{mainconj} for a core and an idempotent WNU. One direction of this conjecture follows from \cite{miklos}. \begin{thm}\label{MiklosMckenzie}\cite{miklos} Suppose $\Gamma\subseteq R_{A}$ and $\{\sigma_{=a}\mid a\in A\} \subseteq \Gamma$. If there exists no WNU preserving $\Gamma$, then $\CSP(\Gamma)$ is NP-complete. \end{thm} The dichotomy conjecture was proved for many special cases: for CSPs over undirected graphs \cite{hell1990complexity}, for CSPs over digraphs with no sources or sinks \cite{barto2009csp}, for constraint languages containing all unary relations~\cite{bulatov2003conservative}, and many others. More information about the algebraic approach to CSP can be found in \cite{bartopolymorphisms}. In this paper we present an algorithm that solves $\CSP(\Gamma)$ in polynomial time if $\Gamma$ is preserved by an idempotent WNU, and therefore prove the dichotomy conjecture. \begin{thm} Suppose $\Gamma\subseteq R_{A}$ is a finite set of relations. Then $\CSP(\Gamma)$ can be solved in polynomial time if there exists a WNU preserving $\Gamma$; $\CSP(\Gamma)$ is NP-complete otherwise. \end{thm} Another proof of the dichotomy conjecture was announced by Andrei Bulatov~\cite{BulatovProofCSP,BulatovProofCSPFOCS}. Even though both algorithms appeared at the same time, they are significantly different. Bulatov's algorithm uses the full strength of the few subpowers algorithm \cite{idziak2010tractability} and uses Mar{\'o}ti's trick for trees on top of Mal'tsev \cite{maroti2011tree}, while this one just checks some local consistency and solves linear equations over prime fields. 
Also, Bulatov's algorithm works for infinite constraint languages, which is not the case for the algorithm presented in this paper; however, a slight modification of it works even for infinite constraint languages \cite{zhuk2018modification}. The paper is organized as follows. In Section~\ref{ZFourExample} we explain the algorithm informally and give an example showing how the algorithm works for a system of linear equations in $\mathbb Z_{4}$. In Section~\ref{Definition} we give the main definitions, and in Section~\ref{Algorithm} we give a formal description of the algorithm, showing pseudocode for most functions and explaining the meaning of every function. In Section~\ref{CorretnessSection} we formulate all theorems that are necessary to prove the correctness of the algorithm. Then, we prove that on every algebra (domain) with a WNU operation there exists a subuniverse of one of four types, which is the main ingredient of the algorithm. Additionally, in this section we prove that some functions of the algorithm work properly and that the algorithm actually works in polynomial time. In Section~\ref{DefinitionSection} we give the remaining definitions. The proof of the main theorems is divided into three sections. In Section~\ref{AbsCenterPCLinear} we study properties of subuniverses of each of the four types (absorbing, central, PC, and linear subuniverses). In Section~\ref{AuxStatements} we prove all the auxiliary statements, and in the last section we prove the main theorems of this paper formulated in Section~\ref{CorretnessSection}. In Section~\ref{ConclusionsSection}, we discuss open questions and consequences of this result. In particular, we consider generalizations of the CSP such as Valued CSP, Infinite Domain CSP, Quantified CSP, Promise CSP and so on. \section{Outline of the algorithm}\label{ZFourExample} In this section we give an informal description of the algorithm and show how it works for a system of linear equations in $\mathbb Z_{4}$. 
The algorithm is based on the following three ingredients: \begin{itemize} \item Each domain has either one of three kinds of proper strong subsets (absorbing, central, polynomially complete) or an equivalence relation modulo which the domain is essentially a product of prime fields (Theorem \ref{NextReduction}). \item If a sufficient level of consistency (cycle consistency and irreducibility; see Section \ref{CSPInstancesDef}) is enforced, then we do not lose all the solutions when we reduce the domain to a proper strong subset (that is, if the original instance has a solution, then the reduced instance has a solution as well), which is guaranteed by Theorems \ref{AbsorptionCenterStep} and \ref{PCStepThm}. \item If we cannot reduce the domain in such a way, we are left with an instance each of whose domains has an equivalence relation modulo which it is a product of prime fields, and all relations are affine subspaces. Now we have: $A$ = the set of all solutions of the instance factorized by the equivalences; $B$ = the set of all solutions of the factorized instance (where all domains and relations are factorized). Both $A$ and $B$ are affine subspaces, with $A \subseteq B$. We would like to know whether $A$ is empty; what we can efficiently compute (using Gaussian elimination) is $B$. The algorithm gradually makes $B$ smaller (of smaller dimension), while maintaining the property $A \subseteq B$. First, for some solution from $B$ we check whether $A$ contains the same solution, which can be done by a recursive call of the algorithm for smaller domains. If it does, we are done; if it does not, then $A\neq B$. 
In this case we can make (see Theorem \ref{LinearStep}) the instance weaker maintaining the property $A'\subsetneq B$ (here $A'$ is $A$ for the weaker instance) until the moment when \begin{enumerate} \item $A'$ is a subspace of $B$ of codimension one, \item or $A=A'=\varnothing$, \item or the obtained instance is not linked (it splits into several instances on smaller domains, hence $A'$ can be calculated using recursion). \end{enumerate} In (1) and (2) $A'$ can be computed by linearly many recursive calls of the algorithm for smaller domains. In fact, $A'$ can be defined by a linear equation $c_1x_1 + \dots + c_h x_h = c_{0}$ in a prime field $\mathbb Z_{p}$. Then the coefficients $c_0,c_{1},\dots,c_{h}$ can be learned (up to a multiplicative constant) by $(p\cdot h+1)$ queries of the form ``$(a_1,\ldots,a_h) \in A'$?'' (see Subsection~\ref{FindingLinearEquationSection} for more details). To check each query we just need to call the algorithm recursively for the smaller domains that are the equivalence classes corresponding to $a_1,\ldots,a_h$. We update $B=A'$, return back to the original instance and continue tightening $B$. We eventually stop when $B = A$, which gives us the answer to our question: if $B \neq \varnothing$ then the original instance has a solution, if $B = \varnothing$ then it has no solutions. \end{itemize} We demonstrate the work of the algorithm on a system of linear equations in $\mathbb Z_{4}$: \begin{equation}\label{OriginalEquation} \left\{ \begin{aligned} x_{1}+2x_{2}+x_{3}+x_{4}&=2 \\ 2x_{1}+x_{2}+x_{3}+x_{4}&=2\\ x_{1}+x_{2} &= 2\\ x_{1}+x_{2} +2x_{4}&= 2 \end{aligned} \right. \end{equation} All the relations (equations) are invariants of the WNU $x_{1}+\dots+x_{5} \;(mod\ 4)$, therefore, this system of equations is an instance of $\CSP(\Gamma)$, where $\Gamma$ is the set of all relations of arity at most $4$ preserved by the WNU. Hence, we can apply the algorithm. 
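The claim that the equations are invariants of this WNU can be checked by direct computation; the following sketch (illustration only, not part of the algorithm) verifies that $w(x_1,\dots,x_5)=x_{1}+\dots+x_{5} \;(mod\ 4)$ is an idempotent WNU and that it preserves the relation defined by the first equation:

```python
# Checking that w(x1,...,x5) = x1+...+x5 (mod 4) is an idempotent WNU
# and preserves the 4-ary relation x1 + 2*x2 + x3 + x4 = 2 (mod 4).
from itertools import product
import random

def w(*xs):
    return sum(xs) % 4

# Idempotent: w(x,...,x) = 5x = x (mod 4).
assert all(w(x, x, x, x, x) == x for x in range(4))

# WNU: the value does not depend on the position of the odd argument y.
for x in range(4):
    for y in range(4):
        vals = {w(*((x,) * i + (y,) + (x,) * (4 - i))) for i in range(5)}
        assert len(vals) == 1

# The solution set of the first equation of the system ...
rho = [t for t in product(range(4), repeat=4)
       if (t[0] + 2 * t[1] + t[2] + t[3]) % 4 == 2]

# ... is preserved: w applied coordinatewise to any 5 tuples of rho lands
# in rho, since by linearity the image satisfies 5*2 = 2 (mod 4).
random.seed(0)
for _ in range(200):
    ts = [random.choice(rho) for _ in range(5)]
    image = tuple(w(*(t[i] for t in ts)) for i in range(4))
    assert image in rho
```

The same computation works verbatim for the other three equations.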
First, $\mathbb Z_{4}$ does not have a proper strong subset, which is why for every domain there should be an equivalence relation modulo which it is just a product of prime fields. In our example it is the modulo 2 equivalence relation. We factorize our instance modulo 2, and obtain a system of linear equations in $\mathbb Z_{2}$, where $x_{i}' = x_{i}\;(mod \ 2)$ for every $i$. \begin{equation}\label{ZTwoEquation} \left\{ \begin{aligned} x_{1}'+x_{3}'+x_{4}'&=0 \\ x_{2}'+x_{3}'+x_{4}'&=0\\ x_{1}'+x_{2}' &= 0\\ x_{1}'+x_{2}' &= 0 \end{aligned} \right. \end{equation} Using Gaussian elimination we solve this system of equations in a field, choose independent variables $x_{1}'$ and $x_{3}'$, and write the general solution (the set $B$ in the informal description): $x_{1}' = x_{1}', x_{2}' = x_{1}', x_{3}' = x_{3}', x_{4}' = x_{1}' + x_{3}'.$ We choose any solution from $B$. Let it be $(0,0,0,0)$ for $x_{1}' = x_{3}' = 0$. Then we check whether (\ref{OriginalEquation}) has a solution corresponding to $(0,0,0,0)$ by restricting every domain to the set $\{0,2\}$ ($x_{i}\mod 2 = 0$). We recursively call the algorithm for smaller domain and find out that (\ref{OriginalEquation}) has no solutions inside $\{0,2\}$. This means that $(0,0,0,0)$ does not belong to $A$ from the informal description, therefore $A\subsetneq B$. Then we try to make the instance weaker so that $A'\subsetneq B$, where $A'$ is the intersection of $B$ with the set of all solutions of the new instance factorized by the equivalences. Let us remove the last equation from~(\ref{OriginalEquation}) to obtain a new solution set $A'$. \begin{equation}\label{SimplifiedEquation} \left\{ \begin{aligned} x_{1}+2x_{2}+x_{3}+x_{4}&=2 \\ 2x_{1}+x_{2}+x_{3}+x_{4}&=2\\ x_{1}+x_{2} &= 2 \end{aligned} \right. \end{equation} Again, by solving an instance on the 2-element domain $\{0,2\}$ we find out that (\ref{SimplifiedEquation}) has no solutions corresponding to $(0,0,0,0)$. Therefore, we have $A'\subsetneq B$. 
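The two brute-force facts used above, the general solution of the $\mathbb Z_{2}$ system and the absence of solutions of (\ref{OriginalEquation}) inside $\{0,2\}$, can be reproduced directly (an illustrative check, not part of the algorithm):

```python
# Verifying the two computations above by exhaustive search.
from itertools import product

def z2_solutions():
    """All solutions of the factorized system (mod 2)."""
    sols = []
    for x in product(range(2), repeat=4):
        if ((x[0] + x[2] + x[3]) % 2 == 0 and (x[1] + x[2] + x[3]) % 2 == 0
                and (x[0] + x[1]) % 2 == 0):
            sols.append(x)
    return sols

# General solution: x2' = x1', x4' = x1' + x3'  (independent x1', x3').
assert set(z2_solutions()) == {(a, a, b, (a + b) % 2)
                               for a in range(2) for b in range(2)}

def satisfies_z4(x):
    """Membership in the solution set of the original system (mod 4)."""
    return ((x[0] + 2 * x[1] + x[2] + x[3]) % 4 == 2
            and (2 * x[0] + x[1] + x[2] + x[3]) % 4 == 2
            and (x[0] + x[1]) % 4 == 2
            and (x[0] + x[1] + 2 * x[3]) % 4 == 2)

# No solution in the class of (0,0,0,0), i.e. with every x_i in {0, 2}:
assert not any(satisfies_z4(x) for x in product((0, 2), repeat=4))
```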
We need to check that if we remove one more equation from (\ref{SimplifiedEquation}), then we get $A' = B$. Thus, for every weaker instance we need to check that for any $a_{1},a_{3}\in \mathbb Z_{2}$ there exists a solution corresponding to $(x_{1}',x_{3}') = (a_{1},a_{3})$. Since $A'$ is an affine subspace, it is sufficient to check this for $(x_{1}',x_{3}') = (0,0)$, $(x_{1}',x_{3}') = (1,0)$, and $(x_{1}',x_{3}') = (0,1)$, i.e. for $h+1$ tuples, where $h$ is the dimension of $B$. Again, to check a concrete solution from $B$ we recursively call the algorithm for 2-element domains. Since (\ref{SimplifiedEquation}) is linked, Theorem \ref{LinearStep} guarantees that the dimension of $A'$ equals the dimension of $B$ minus one or $A'$ is empty. Hence, we need exactly one equation to describe all pairs $(a_{1},a_{3})$ such that (\ref{SimplifiedEquation}) has a solution corresponding to $(x_{1}',x_{3}') = (a_{1},a_{3})$. Let the equation be $c_{1}x_{1}'+c_{3}x_{3}'=c_{0}$. We need to find $c_{1},c_{3},$ and $c_{0}$. Recursively calling the algorithm for smaller domains, we find out that (\ref{SimplifiedEquation}) has a solution $(3,3,0,1)$ corresponding to $(x_{1}',x_{3}')=(1,0)$ (the solution $(1,1,0,1)$ from $B$) but does not have a solution corresponding to $(x_{1}',x_{3}')=(0,1)$ (the solution $(0,0,1,1)$ from $B$). We have \begin{equation*} \left\{ \begin{aligned} c_{1}\cdot 0 + c_{3}\cdot 0 &\neq c_{0} \\ c_{1}\cdot 1 + c_{3}\cdot 0 &= c_{0} \\ c_{1}\cdot 0 + c_{3}\cdot 1 &\neq c_{0} \end{aligned} \right., \end{equation*} which implies that $c_{1} = 1$, $c_{3} = 0$, $c_{0} = 1$, and the equation we are looking for is $x_{1}'=1$. Thus, we found $A'$. We add this equation to (\ref{ZTwoEquation}) (update $B= A'$) and solve the new system of linear equations in $\mathbb Z_{2}$. 
\begin{equation}\label{ZTwoEquationNew} \left\{ \begin{aligned} x_{1}'+x_{3}'+x_{4}'&=0 \\ x_{2}'+x_{3}'+x_{4}'&=0\\ x_{1}'+x_{2}' &= 0\\ x_{1}'+x_{2}' &= 0\\ x_{1}' &= 1 \end{aligned} \right. \end{equation} The general solution of this system (the new set $B$) is $x_{1}'=1$, $x_{2}'=1$, $x_{3}' = x_{3}'$, $x_{4}' = x_{3}'+1$, where $x_{3}'$ is an independent variable. Thus, we decreased the dimension of the solution set $B$ by 1 and we still have the property that $A\subseteq B$. We go back to (\ref{OriginalEquation}), and check whether it has a solution corresponding to $x_{3}' = 0$ (the solution $(1,1,0,1)$ from $B$). Again, by solving an instance on the 2-element domain we find out that $(1,1,0,1)\notin A$. Therefore $A\subsetneq B$. The remaining part of the procedure looks trivial but we want to follow the algorithm till the end to make it clear. Again, we try to make the instance weaker so that $A'\subsetneq B$. Let us remove the third equation from~(\ref{OriginalEquation}). \begin{equation}\label{SimplifiedEquation2} \left\{ \begin{aligned} x_{1}+2x_{2}+x_{3}+x_{4}&=2 \\ 2x_{1}+x_{2}+x_{3}+x_{4}&=2\\ x_{1}+x_{2} +2x_{4}&= 2 \end{aligned} \right. \end{equation} By solving this instance on smaller domains we find out that (\ref{SimplifiedEquation2}) has no solutions corresponding to $x_{3}'=0$ (the solution $(1,1,0,1)$ of $B$). Therefore, we obtained a new set $A'\subsetneq B$. Then we try to remove one more equation from (\ref{SimplifiedEquation2}) maintaining the property $A'\subsetneq B$. We check for every weaker instance that for any $a_{3}\in \mathbb Z_{2}$ there exists a solution corresponding to $x_{3}' = a_{3}$. Again, the instance (\ref{SimplifiedEquation2}) is linked, and by Theorem \ref{LinearStep} we need exactly one equation to describe all elements $a_{3}$ such that (\ref{SimplifiedEquation2}) has a solution corresponding to $x_{3}' = a_{3}$. Let the equation be $c_{3}x_{3}'=c_{0}$. We already checked that it does not hold for $x_{3}'=0$. 
By solving an instance on 2-element domains we find out that (\ref{SimplifiedEquation2}) has a solution $(3,3,1,0)$ corresponding to the solution $(1,1,1,0)$ from $B$ and $x_{3}' = 1$. Thus we have \begin{equation*} \left\{ \begin{aligned} c_{3}\cdot 0 &\neq c_{0} \\ c_{3}\cdot 1 &= c_{0} \end{aligned} \right., \end{equation*} which implies $c_{3}=1$, $c_{0} = 1$, and the equation we are looking for is $x_{3}'=1$ (we calculated $A'$). We add this equation to (\ref{ZTwoEquationNew}) (update $B = A'$) and solve the new system of linear equations in $\mathbb Z_{2}$. \begin{equation}\label{ZTwoEquationNewNew} \left\{ \begin{aligned} x_{1}'+x_{3}'+x_{4}'&=0 \\ x_{2}'+x_{3}'+x_{4}'&=0\\ x_{1}'+x_{2}' &= 0\\ x_{1}'+x_{2}' &= 0\\ x_{1}' &= 1\\ x_{3}' &= 1 \end{aligned} \right. \end{equation} The only solution of this system is $(x_{1}',x_2',x_{3}',x_{4}') = (1,1,1,0)$. Thus, we decreased the dimension of the solution set $B$ to 0 and we still have the property that $A\subseteq B$. It remains to check whether the original system (\ref{OriginalEquation}) has a solution corresponding to the solution $(1,1,1,0)$ of $B$. Again, by solving an instance on the 2-element domain we find a solution $(3,3,1,0)$ of the original instance. Therefore, $(1,1,1,0)\in A$ and we finally reached the condition $A=B$. \section{Definitions}\label{Definition} A set of operations is called \emph{a clone} if it is closed under composition and contains all projections. For a set of operations $M$ by $\Clo(M)$ we denote the clone generated by $M$. An idempotent WNU $w$ is called \emph{special} if $x \circ (x \circ y) = x \circ y$, where $x \circ y = w(x,\dots,x,y)$. It is not hard to show that for any idempotent WNU $w$ on a finite set there exists a special WNU $w'\in\Clo(w)$ (see Lemma 4.7 in \cite{miklos}). A relation $\rho \subseteq A_{1}\times\dots\times A_{n}$ is called \emph{subdirect} if for every $i$ the projection of $\rho$ onto the $i$-th coordinate is $A_{i}$. 
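As a quick sanity check of the definition above (an illustration, not part of the paper): the WNU $x_{1}+\dots+x_{5}\;(mod\ 4)$ used in Section~\ref{ZFourExample} is already special, since $x \circ y = 4x + y = y \;(mod\ 4)$, so $x\circ(x\circ y) = x\circ y$ holds trivially.

```python
# Checking that w(x1,...,x5) = x1+...+x5 (mod 4) is a special WNU,
# where x o y = w(x,...,x,y).

def w(*xs):
    return sum(xs) % 4

def circ(x, y):
    return w(x, x, x, x, y)

# x o y = 4x + y = y (mod 4), hence x o (x o y) = x o y for all x, y.
assert all(circ(x, y) == y for x in range(4) for y in range(4))
assert all(circ(x, circ(x, y)) == circ(x, y)
           for x in range(4) for y in range(4))
```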
For a relation $\rho$ by $\proj_{i_1,\ldots,i_{s}}(\rho)$ we denote the projection of $\rho$ onto the coordinates $i_1,\ldots,i_{s}$. \subsection{Algebras} \emph{An algebra} is a pair $\mathbf{A}:=(A;F)$, where $A$ is a finite set, called the \emph{universe}, and $F$ is a family of operations on $A$, called \emph{basic operations of $\mathbf{A}$}. In this paper we always assume that we have a special WNU $w$ preserving all constraint relations. Therefore, every domain $D$ from the constraint language can be viewed as an algebra $(D;w)$. By $\Clo(\mathbf{A})$ we denote the clone generated by all basic operations of $\mathbf{A}$. An equivalence relation $\sigma$ on the universe of an algebra $\mathbf{A}$ is called \emph{a congruence} if it is preserved by every operation of the algebra. A congruence (an equivalence relation) is called \emph{proper} if it is not equal to the full relation $A\times A$. A subuniverse is called \emph{nontrivial} if it is proper and nonempty. We use the standard universal algebraic notions of term operation, subalgebra, factor algebra, and product of algebras; see~\cite{bergman2011universal}. We say that a subalgebra $\mathbf{R} = (R;F_R)$ is \emph{a subdirect subalgebra} of $\mathbf{A}\times \mathbf{B}$ if $R$ is a subdirect relation in $A\times B$. \subsection{Polynomially complete algebras} An algebra $(A;F_{A})$ is called \emph{polynomially complete (PC)} if the clone generated by $F_{A}$ and all constants on $A$ is the clone of all operations on $A$ (see \cite{istinger1979characterization,lausch2000algebra}). \subsection{Linear algebra} An idempotent finite algebra $(A;w_{A})$ is called \emph{linear} (similar to affine in \cite{freese1987commutator}) if it is isomorphic to $(\mathbb{Z}_{p_1}\times\dots\times \mathbb{Z}_{p_s};x_1+\ldots+x_m)$ for prime numbers $p_{1},\ldots,p_{s}$.
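As a sanity check of the last definition, one can verify by brute force that $x_{1}+x_{2}+x_{3}$ is an idempotent WNU on $\mathbb Z_{2}$ (since $3x=x$) but not on $\mathbb Z_{3}$ (since $3x=0\neq x$ in general); a hypothetical snippet, not part of the algorithm:

```python
def is_idempotent_wnu(w, domain, m):
    """Brute-force check that w is an idempotent WNU on the given domain:
    w(x,...,x) = x and w(y,x,...,x) = w(x,y,x,...,x) = ... = w(x,...,x,y)."""
    for x in domain:
        if w(*([x] * m)) != x:
            return False
    for x in domain:
        for y in domain:
            values = set()
            for i in range(m):
                t = [x] * m
                t[i] = y            # put y at position i, x everywhere else
                values.add(w(*t))
            if len(values) != 1:    # all placements of y must agree
                return False
    return True

# x1 + x2 + x3 is an idempotent WNU on Z_2 ...
w2 = lambda a, b, c: (a + b + c) % 2
# ... but on Z_3 it is not even idempotent.
w3 = lambda a, b, c: (a + b + c) % 3
```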
Since $\mathbf{A}/(\sigma\cap\tau)$ is always isomorphic to a subalgebra of $\mathbf{A}/\sigma\times \mathbf{A}/\tau$, and since linear algebras are closed under products and subalgebras by Corollary~\ref{LinearAlgebrasAreClosed}, for every idempotent finite algebra $(B;w_{B})$ there exists a least congruence $\sigma$, called \emph{the minimal linear congruence}, such that $(B;w_{B})/\sigma$ is linear. \subsection{Absorption} Let $B$ be a (possibly empty) subuniverse of $\mathbf{A}=(A;F_{A})$. We say that \emph{$B$ absorbs $\mathbf{A}$} if there exists $t\in \Clo(\mathbf{A})$ such that $t(B,B,\dots,B,A,B,\dots,B) \subseteq B$ for any position of $A$. In this case we also say that \emph{$B$ is an absorbing subuniverse of $\mathbf A$ with a term operation $t$}. If the operation $t$ can be chosen to be binary, then we say that $B$ is a binary absorbing subuniverse of $\mathbf A$. For more information about absorption and its connection with the CSP see \cite{barto2017absorption}. \subsection{Center} Suppose $\mathbf{A} = (A;w_{A})$ is a finite algebra with a special WNU operation. $C\subseteq A$ is called a \emph{center} if there exists an algebra $\mathbf{B} = (B;w_{B})$ with a special WNU operation of the same arity and a subdirect subalgebra $(R;w_{R})$ of $\mathbf{A}\times\mathbf{B}$ such that there is no nontrivial binary absorbing subuniverse in $\mathbf{B}$ and $C = \{a\in A\mid \forall b\in B\colon (a,b)\in R\}.$ This notion was motivated by central relations defining maximal clones on finite sets (see Section 5.2.5 in \cite{lau}), and it is very similar to ternary absorption (see Corollary~\ref{ternaryAbsorption}). \subsection{CSP instance}\label{CSPInstancesDef} An instance of the constraint satisfaction problem is called \emph{a CSP instance}. Sometimes we use the same letter for a CSP instance and for the set of all constraints of this instance. For a variable $z$ by $D_{z}$ we denote the domain of the variable $z$.
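The absorption condition above can be tested by brute force on small algebras. In the sketch below (our own illustration), the binary operation $\min$ on $\{0,1\}$ plays the role of the term operation $t$, and $\{0\}$ is a binary absorbing subuniverse:

```python
def is_binary_absorbing(B, A, t):
    """Check the definition directly: B is a nonempty subuniverse of (A; t)
    and t(B, A), t(A, B) both land inside B."""
    if not B or not set(B) <= set(A):
        return False
    closed = all(t(x, y) in B for x in B for y in B)        # subuniverse
    absorbing = all(t(b, a) in B and t(a, b) in B
                    for b in B for a in A)                   # absorption
    return closed and absorbing

# On A = {0,1} with t = min, {0} absorbs: min(0, a) = min(a, 0) = 0.
A = [0, 1]
```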
We say that $z_{1}-C_{1}-z_{2}-\dots - C_{l-1}-z_{l}$ is \emph{a path} in a CSP instance $\Theta$ if $z_{i},z_{i+1}$ are in the scope of $C_{i}$ for every $i$. We say that \emph{a path $z_{1}-C_{1}-z_{2}-\dots- C_{l-1}-z_{l}$ connects $b$ and $c$} if there exists $a_{i}\in D_{z_{i}}$ for every $i$ such that $a_{1} = b$, $a_{l} = c$, and the projection of $C_{i}$ onto $z_{i}, z_{i+1}$ contains the tuple $(a_{i},a_{i+1})$. A CSP instance is called \emph{1-consistent} if every constraint of the instance is subdirect. A CSP instance is called \emph{cycle-consistent} if it is 1-consistent and for every variable $z$ and $a\in D_{z}$ any path starting and ending with $z$ in $\Theta$ connects $a$ and $a$. Other types of local consistency and their connection with the complexity of the CSP are considered in \cite{kozik2016weak}. A CSP instance $\Theta$ is called \emph{linked} if for every variable $z$ occurring in the scope of a constraint of $\Theta$ and every $a,b\in D_{z}$ there exists a path starting and ending with $z$ in $\Theta$ that connects $a$ and $b$. Suppose $\mathbf{X'}\subseteq\mathbf{X}$. Then we can define the projection of $\Theta$ onto $\mathbf{X'}$, that is, a CSP instance whose variables are the elements of $\mathbf{X'}$ and whose constraints are the projections of the constraints of $\Theta$ onto the intersection of their scopes with $\mathbf{X'}$, ignoring any constraint whose scope does not intersect $\mathbf{X'}$. We say that an instance $\Theta$ is \emph{fragmented} if the set of variables $\mathbf X$ can be divided into two disjoint sets $\mathbf{X_1}$ and $\mathbf{X_2}$ such that each of them contains a variable from the scope of a constraint of $\Theta$, and the scope of any constraint of $\Theta$ has variables either only from $\mathbf{X_1}$, or only from $\mathbf{X_2}$. Thus, if an instance is fragmented, then it can be divided into several nontrivial instances.
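For a toy instance consisting of a single binary constraint $C=((z,y);\rho)$, the pairs of $D_{z}$ connected by paths of the form $z-C-y-C-z-\dots$ are exactly the pairs in the transitive closure of $\rho\circ\rho^{-1}$; the following sketch (our encoding, purely illustrative) computes them:

```python
def linked_pairs(rho, Dz):
    """Pairs (a, b) of D_z connected by a path z - C - y - C - z - ... :
    the transitive closure of the composition rho o rho^{-1}."""
    # One step z - C - y - C - z: a and b share a common neighbour in y.
    step = {(a, b) for (a, y1) in rho for (b, y2) in rho if y1 == y2}
    pairs = {(a, a) for a in Dz} | step
    changed = True
    while changed:                       # transitive closure (longer paths)
        new = pairs | {(a, c) for (a, b) in pairs
                       for (b2, c) in pairs if b == b2}
        changed = new != pairs
        pairs = new
    return pairs

# {(0,0),(0,1),(1,1)} is linked on D_z = {0,1}: 0 and 1 meet at y = 1.
rho = {(0, 0), (0, 1), (1, 1)}
# The equality constraint is not linked: 0 and 1 are never connected.
eq = {(0, 0), (1, 1)}
```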
A CSP instance $\Theta$ is called \emph{irreducible} if any instance $\Theta'=(\mathbf X',\mathbf D',\mathbf C')$ such that $\mathbf X'\subseteq\mathbf X$, $D_{x}'=D_{x}$ for every $x\in\mathbf X'$, and every constraint of $\Theta'$ is a projection of a constraint from $\Theta$ onto some set of variables, is fragmented, or linked, or has a subdirect solution set. We say that a constraint $C_{1}= ((y_{1},\ldots,y_{t});\rho_{1})$ is \emph{weaker or equivalent to} a constraint $C_{2}= ((z_{1},\ldots,z_s);\rho_{2})$ if $\{y_{1},\ldots,y_{t}\}\subseteq \{z_{1},\ldots,z_s\}$ and $C_{2}$ implies $C_{1}$. In other words, the second condition says that the solution set of $\Theta_{1}:=(\{z_{1},\dots,z_{s}\},(D_{z_{1}},\ldots,D_{z_{s}}),C_{1})$ contains the solution set of $\Theta_{2}:=(\{z_{1},\dots,z_{s}\},(D_{z_{1}},\ldots,D_{z_{s}}),C_{2})$. We say that $C_{1}$ is \emph{weaker than} $C_{2}$ if $C_{1}$ is weaker or equivalent to $C_{2}$ but $C_{1}$ does not imply $C_{2}$. The following remark justifies weakening constraints of the instance in the algorithm (this remark follows from Lemma~\ref{ExpandedConsistencyLemma}). \begin{remark} Suppose $\Theta = \langle\mathbf{X};\mathbf{D};\mathbf{C}\rangle$ and $\Theta' = \langle\mathbf{X'};\mathbf{D'};\mathbf{C'}\rangle$ are CSP instances such that $\mathbf{X'}\subseteq \mathbf{X}$, $D_{x}'=D_{x}$ for every $x\in \mathbf{X'}$, and every constraint of $\Theta'$ is weaker or equivalent to a constraint of $\Theta$. If $\Theta$ is cycle-consistent and irreducible, then so is $\Theta'$. \end{remark} We say that a variable $y_{i}$ of the constraint $((y_{1},\ldots,y_{t});\rho)$ is \emph{dummy} if $\rho$ does not depend on its $i$-th variable. \begin{remark} Adding a dummy variable to a constraint and removing a dummy variable do not affect the property of being cycle-consistent and irreducible. \end{remark} Let $D_{i}'\subseteq D_{i}$ for every $i$.
A constraint $C$ of $\Theta$ is called \emph{crucial in $(D_{1}',\ldots,D_{n}')$} if it has no dummy variables, $\Theta$ has no solutions in $(D_{1}',\ldots,D_{n}')$ but the replacement of $C\in\Theta$ by all weaker constraints gives an instance with a solution in $(D_{1}',\ldots,D_{n}')$. A CSP instance $\Theta$ is called \emph{crucial in $(D_{1}',\ldots,D_{n}')$} if it has at least one constraint and every constraint of $\Theta$ is crucial in $(D_{1}',\ldots,D_{n}')$. \begin{remark}\label{GetCrucialInstance} Suppose $\Theta$ has no solutions in $(D_{1}',\ldots,D_{n}')$. We can replace each constraint by its projection onto its non-dummy variables. Then we iteratively replace every constraint by all weaker constraints having no dummy variables until it is crucial. Finally, we get a CSP instance that is crucial in $(D_{1}',\ldots,D_{n}')$. \end{remark} \newcommand{\mbox{\textsc{CheckTuple}}}{\mbox{\textsc{CheckTuple}}} \newcommand{\mbox{\textsc{type}}}{\mbox{\textsc{type}}} \newcommand{\State \textbf{break} }{\State \textbf{break} } \newcommand{\mbox{Output}}{\mbox{Output}} \newcommand{\mbox{Changed}}{\mbox{Changed}} \newcommand{\mathbf{C}}{\mathbf{C}} \newcommand{\mathbf{D}}{\mathbf{D}} \newcommand{\mathbf{X}}{\mathbf{X}} \section{Algorithm}\label{Algorithm} \subsection{Main part}\label{AlgorithmMainPart} Suppose we have a constraint language $\Gamma_{0}$ that is preserved by an idempotent WNU operation. As it was mentioned before, $\Gamma_{0}$ is also preserved by a special WNU operation $w$. Let $k_{0}$ be the maximal arity of the relations in $\Gamma_{0}$. By $\Gamma$ we denote the set of all relations of arity at most $k_{0}$ that are preserved by $w$. Obviously, $\Gamma_{0}\subseteq \Gamma$, therefore every instance of $\CSP(\Gamma_{0})$ is an instance of $\CSP(\Gamma)$. In this section we provide an algorithm that solves $\CSP(\Gamma)$ in polynomial time. 
Suppose we have a CSP instance $\Theta = \langle \mathbf{X} , \mathbf{D} , \mathbf{C} \rangle$, where $\mathbf{X}=\{x_1,\ldots ,x_n\}$ is a set of variables, $\mathbf{D}=\{D_{1},\ldots ,D_{n}\}$ is a set of the respective domains, $\mathbf{C}=\{C_{1},\ldots ,C_{q}\}$ is a set of constraints. Let the arity of the WNU $w$ be equal to $m$. The main part of the algorithm (function \mbox{\textsc{Solve}}) is an iterative loop; in each pass through the loop, the algorithm calls a subroutine $\mbox{\textsc{AnswerOrReduce}}$ whose job is to find a reduction of a domain or to terminate with the final answer. The reduction returned by the function should satisfy the following property: if $\Theta$ has a solution, then it has a solution after the reduction. If the reduction was found then we apply the function $\mbox{\textsc{Reduce}}$, which takes an instance $\Theta=(\mathbf{X},\mathbf{D},\mathbf{C})$ and a domain set $\mathbf{D'} = (D_1',\ldots,D_{n}')$, and returns a new instance $(\mathbf{X},\mathbf{D'},\mathbf{C'})$, where $\mathbf{C'}=\{((x_{i_{1}},\dots,x_{i_{s}}),\rho\cap (D_{i_{1}}'\times\dots \times D_{i_{s}}'))\mid ((x_{i_{1}},\dots,x_{i_{s}}),\rho)\in\mathbf{C}\}$. 
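Under an ad-hoc encoding (scopes as tuples of variable names, relations as sets of tuples; the encoding is ours, not the paper's), the function $\mbox{\textsc{Reduce}}$ can be sketched as follows:

```python
def reduce_instance(X, D, C, D_new):
    """Reduce: intersect every constraint relation with the product of the
    reduced domains of its scope, keeping the variable set unchanged."""
    C_new = []
    for scope, rho in C:
        rho_new = {t for t in rho
                   if all(t[pos] in D_new[v] for pos, v in enumerate(scope))}
        C_new.append((scope, rho_new))
    return X, D_new, C_new

# Reducing D_{x1} from {0,1,2} to {0,1} shrinks the constraint accordingly.
X = ['x1', 'x2']
D = {'x1': {0, 1, 2}, 'x2': {0, 1, 2}}
C = [(('x1', 'x2'), {(0, 0), (1, 2), (2, 2)})]
X2, D2, C2 = reduce_instance(X, D, C, {'x1': {0, 1}, 'x2': {0, 1, 2}})
```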
\begin{algorithm} \begin{algorithmic}[1] \Function{\mbox{\textsc{Solve}}}{$\Theta$} \State{\textbf{Input:} CSP($\Gamma$) instance $\Theta=(\mathbf{X},\mathbf{D},\mathbf{C})$, $\mathbf{X}=(x_1,\ldots,x_n)$, $\mathbf{D} = (D_1,\ldots,D_n)$} \Repeat \State{$\mbox{Output} := \mbox{\textsc{AnswerOrReduce}}(\Theta)$} \If{$\mbox{Output} = \mbox{``Solution"}$} \Return{``Solution"} \EndIf \If{$\mbox{Output} = \mbox{``No solution"}$} \Return{``No solution"} \EndIf \If{$\mbox{Output} = (x_i,U)$} \Comment{$\varnothing \ne U\subset D_i$} \State{$\Theta:= \mbox{\textsc{Reduce}}(\Theta,(D_1,\dots,D_{i-1},U,D_{i+1},\dots,D_{n}))$} \Comment{Set $D_{i}=U$} \EndIf \Until{Done} \EndFunction \end{algorithmic} \end{algorithm} The function $\mbox{\textsc{AnswerOrReduce}}$ (see the pseudocode) checks different types of consistency such as cycle-consistency and irreducibility, and reduces a domain if the instance is not consistent. If it is consistent, then either it reduces a domain to a proper strong subset, or it uses $\mbox{\textsc{SolveLinearCase}}$ to solve the remaining case. First, the function $\mbox{\textsc{AnswerOrReduce}}$ checks whether the instance $\Theta$ is cycle-consistent (function $\mbox{\textsc{CheckCycleConsistency}}$). If it is not cycle-consistent then either some domain can be reduced, or the instance has no solutions. In both cases we terminate the function and return the result. If it is cycle-consistent then we go on. If the size of every domain is one, it returns that a solution was found. Then we check whether the instance is irreducible (function \mbox{\textsc{CheckIrreducibility}}). If it is not irreducible then we return how to reduce some domain or return that there are no solutions; otherwise we go on.
\begin{algorithm} \begin{algorithmic}[1] \Function{AnswerOrReduce}{$\Theta$} \State{\textbf{Input:} CSP($\Gamma$) instance $\Theta=(\mathbf{X},\mathbf{D},\mathbf{C})$, $\mathbf{X}=(x_1,\ldots,x_n)$, $\mathbf{D} = (D_1,\ldots,D_n)$} \State{$\mbox{Output} := \mbox{\textsc{CheckCycleConsistency}}(\Theta)$} \If{$\mbox{Output} \neq \mbox{``Ok"}$} \Return{$\mbox{Output}$} \EndIf \If{$|D_{i}|=1$ for every $i$} \Return{``Solution"} \EndIf \State{$\mbox{Output} := \mbox{\textsc{CheckIrreducibility}}(\Theta)$} \If{$\mbox{Output} \neq \mbox{``Ok"}$} \Return{$\mbox{Output}$} \EndIf \State{$\mbox{Output} := \mbox{\textsc{CheckWeakerInstance}}(\Theta)$} \If{$\mbox{Output} \neq \mbox{``Ok"}$} \Return{$\mbox{Output}$} \EndIf \If{$B_{i}$ is a nontrivial binary absorbing subuniverse of $D_{i}$} \Return{$(x_{i},B_{i})$} \EndIf \If{$C_{i}$ is a nontrivial center of $D_{i}$} \Return{$(x_{i},C_{i})$} \EndIf \If{$\sigma$ is a proper congruence on $D_{i}$ and $(D_{i};w)/\sigma$ is polynomially complete} \State{Choose an equivalence class $E$ of $\sigma$} \Return{$(x_{i},E)$} \EndIf \Return{$\mbox{\textsc{SolveLinearCase}}(\Theta)$} \EndFunction \end{algorithmic} \end{algorithm} After that we check a different type of consistency (function $\mbox{\textsc{CheckWeakerInstance}}$). We make a copy of $\Theta$, and simultaneously replace every constraint by all weaker constraints without dummy variables. Recursively calling the algorithm, we check that the obtained instance has a solution with $x_{i}=b$ for every $i\in\{1,2,\ldots,n\}$ and $b\in D_{i}$. If not, we reduce $D_{i}$ to the projection onto $x_{i}$ of the solution set of the obtained instance. Otherwise, we go on. By Theorem~\ref{AbsorptionCenterStep} we cannot pass from an instance having solutions to an instance having no solutions when reducing a domain to a nontrivial binary absorbing subuniverse or to a nontrivial center.
Thus, if $D_{i}$ has a nontrivial binary absorbing subuniverse $B_{i}\subsetneq D_{i}$ for some $i$, then we reduce $D_{i}$ to $B_{i}$. Similarly, if $D_{i}$ has a nontrivial center $C_{i}\subsetneq D_{i}$ for some $i$, then we reduce $D_{i}$ to $C_{i}$. By Theorem~\ref{PCStepThm} we cannot pass from an instance having solutions to an instance having no solutions when reducing a domain to an equivalence class of a proper congruence $\sigma$ such that $(D_{i};w)/\sigma$ is polynomially complete. Thus, if such a congruence on $D_{i}$ exists, we reduce $D_{i}$ to one of its equivalence classes. By Theorem~\ref{NextReduction}, it remains to consider the case when on every domain $D_{i}$ of size greater than 1 there exists a proper congruence $\sigma$ such that $(D_{i};w)/\sigma$ is isomorphic to $(\mathbb Z_{p};x_1+\dots+x_{m})$ for some prime $p$. In this case the problem is solved by the function $\mbox{\textsc{SolveLinearCase}}$, which will be described in the next subsection. A detailed description and pseudocode for the functions $\mbox{\textsc{CheckCycleConsistency}}$, $\mbox{\textsc{CheckIrreducibility}}$, and $\mbox{\textsc{CheckWeakerInstance}}$ will be given in Subsection~\ref{AlgorithmTechnicalities}. \subsection{Linear case}\label{AlgorithmLinearCase} In this section we define the function $\mbox{\textsc{SolveLinearCase}}$ (see the pseudocode). For every $i$ let $\sigma_{i}$ be the minimal linear congruence on $D_{i}$, which is the smallest congruence $\sigma$ such that $(D_{i};w)/\sigma$ is linear. Then $(D_{i};w)/\sigma_{i}$ is isomorphic to $(\mathbb Z_{p_{1}}\times \dots \times \mathbb Z_{p_{l}};x_{1}+\dots+x_{m})$ for prime numbers $p_{1},\ldots,p_{l}$. Recall that we apply the function $\mbox{\textsc{SolveLinearCase}}$ only if $\sigma_{i}$ is proper for every $i$ such that $|D_{i}|>1$. We will show that modulo these congruences the instance can be viewed as a system of linear equations over fields.
We denote $D_{i}/\sigma_{i}$ by $L_{i}$ and define a new CSP instance $\Theta_{L}$ with domains $L_{1},\ldots,L_{n}$ as follows. To every constraint $((x_{i_1},\ldots,x_{i_s});\rho)\in \Theta$ we assign a constraint $((x_{i_1}',\ldots,x_{i_s}');\rho')$, where $\rho'\subseteq L_{i_{1}}\times\dots\times L_{i_{s}}$ and $(E_{1},\ldots,E_{s})\in\rho'\Leftrightarrow (E_{1}\times\dots\times E_{s})\cap\rho\neq\varnothing.$ The constraints of $\Theta_{L}$ are all constraints that are assigned to the constraints of $\Theta$. The function generating the instance $\Theta_{L}$ from $\Theta$ is called $\mbox{\textsc{FactorizeInstance}}$ in the pseudocode. Note that $\Theta_{L}$ is a CSP instance but not necessarily an instance in the constraint language $\Gamma$. Since each $L_{i}$ is isomorphic to some $\mathbb Z_{m_{1}}\times \dots\times\mathbb Z_{m_s}$, we may define a natural bijective mapping $\psi:\mathbb Z_{p_{1}}\times\dots\times \mathbb Z_{p_r}\to L_{1}\times\dots\times L_{n}$, and assign a variable $z_{i}$ to every $\mathbb Z_{p_{i}}$. Since every relation on $\mathbb Z_{p_{1}}\times \dots \times \mathbb Z_{p_{r}}$ preserved by $x_{1}+\ldots+x_{m}$ is known (see Lemma~\ref{LinearAlgebrasFact}) to be a conjunction of linear equations, the instance $\Theta_{L}$ can be viewed as a system of linear equations over $z_{1},\ldots,z_{r}$. Note that every equation is an equation in $\mathbb Z_{p}$ but $p$ can be different for different equations, and only variables with the same domain $\mathbb Z_{p}$ may appear in one equation. 
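Applied to a single constraint, $\mbox{\textsc{FactorizeInstance}}$ collects exactly the tuples of classes whose product meets $\rho$; a sketch under an ad-hoc encoding (congruence classes named by labels, which is our assumption):

```python
def factorize_relation(rho, class_of):
    """(E_1,...,E_s) belongs to rho' iff rho meets E_1 x ... x E_s; taking
    the classes of the entries of each tuple of rho yields exactly these."""
    return {tuple(class_of[i][a] for i, a in enumerate(t)) for t in rho}

# D_1 = D_2 = {0,1,2,3} with congruence classes {0,2} and {1,3}.
cls = {0: 'even', 2: 'even', 1: 'odd', 3: 'odd'}
rho = {(0, 1), (2, 3), (1, 0)}
rho_factored = factorize_relation(rho, [cls, cls])
```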
\begin{algorithm} \begin{algorithmic}[1] \Function{SolveLinearCase}{$\Theta$} \State{\textbf{Input:} CSP($\Gamma$) instance $\Theta=(\mathbf{X},\mathbf{D},\mathbf{C})$, $\mathbf{X}=(x_1,\ldots,x_n)$, $\mathbf{D} = (D_1,\ldots,D_n)$} \State{$\Theta_{L} := \mbox{\textsc{FactorizeInstance}}(\Theta)$} \State{$Eq := \varnothing$} \Comment{The equations we add to $\Theta_{L}$} \Repeat \State{$\phi := \mbox{\textsc{SolveLinearSystem}}(\Theta_{L}\cup Eq)$} \Comment{$\phi(\mathbb Z_{q_{1}}\times\dots\times\mathbb Z_{q_{k}})$ is the solution set of $\Theta_{L}\cup Eq$} \If{$\phi = \varnothing$} \Return{``No solution"} \EndIf \If{$\mbox{\textsc{Solve}}(\mbox{\textsc{Reduce}}(\Theta, \phi(0,0,\ldots,0))) = \mbox{``Solution"}$} \Return{``Solution"} \ElsIf{k=0} \Return{``No solution"} \Comment{$\Theta_{L}$ has just one solution} \EndIf \State{$\Theta':= \mbox{\textsc{RemoveTrivialities}}(\Theta)$} \Repeat \Comment{Try to weaken $\Theta'$} \State{$\mbox{Changed}:= false$} \For{$C\in \Theta'$} \State{$\Omega:= \mbox{\textsc{RemoveTrivialities}}(\mbox{\textsc{WeakenConstraint}}(\Theta',C))$} \If{$\neg\mbox{\textsc{CheckAllTuples}}(\Omega,\phi)$} \State{$\Theta':=\Omega$} \State{$\mbox{Changed}:= true$} \State \textbf{break} \EndIf \EndFor \Until{$\neg\mbox{Changed}$} \Comment{$\Theta'$ cannot be weakened anymore} \If{$\Theta'$ is not linked} \State{$Eq := Eq\cup\mbox{\textsc{FindEquationsNonlinked}}(\Theta')$} \Else \State{$Eq := Eq\cup\{\mbox{\textsc{FindOneEquationLinked}}(\Theta',\phi)\}$} \EndIf \Until{Done} \EndFunction \end{algorithmic} \end{algorithm} As it was described in Section \ref{ZFourExample}, we consider the set $A$, which is the solution set of $\Theta$ factorized by the congruences $\sigma_{1},\ldots,\sigma_{n}$, and the set $B$, which is the solution set of $\Theta_{L}$. We know that $A\subseteq B$ and we want to check whether $A$ is empty. 
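$\mbox{\textsc{SolveLinearSystem}}$, invoked in the pseudocode above, is ordinary Gaussian elimination; a minimal single-modulus sketch follows (the actual system may mix different primes across equations, which we ignore here). Note that elimination may choose a different free variable than the hand computation in Section \ref{ZFourExample}: on the system (\ref{ZTwoEquationNew}) it happens to make $x_{4}'$ free instead of $x_{3}'$.

```python
def solve_mod_p(A, b, p):
    """Gaussian elimination over Z_p for a prime p. Returns a particular
    solution together with the list of free columns, or None if the
    system is inconsistent."""
    rows, cols = len(A), len(A[0])
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] % p), None)
        if piv is None:
            continue                                   # no pivot: c is free
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)                   # inverse mod prime p
        M[r] = [x * inv % p for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] % p:
                f = M[i][c]
                M[i] = [(x - f * y) % p for x, y in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
        if r == rows:
            break
    if any(all(x % p == 0 for x in row[:-1]) and row[-1] % p for row in M):
        return None                                    # row 0 = nonzero
    x = [0] * cols
    for i, c in enumerate(pivots):
        x[c] = M[i][-1]                                # free variables = 0
    free = [c for c in range(cols) if c not in pivots]
    return x, free

# The system (ZTwoEquationNew) over Z_2: x1+x3+x4=0, x2+x3+x4=0,
# x1+x2=0, x1=1 (the duplicate third equation is omitted).
A = [[1, 0, 1, 1], [0, 1, 1, 1], [1, 1, 0, 0], [1, 0, 0, 0]]
sol, free = solve_mod_p(A, [0, 0, 0, 1], 2)
```

The particular solution produced here is $(1,1,1,0)$, which is indeed an element of the set $B$ from the running example.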
We iteratively add new equations to the system $\Theta_{L}\cup Eq$, maintaining the property that $A\subseteq B$, and thereby reduce the dimension of $B$. We start with the empty set of equations $Eq$ (line 4 of the pseudocode). Then we apply the function $\mbox{\textsc{SolveLinearSystem}}$ that solves the system of linear equations $\Theta_{L}\cup Eq$ using Gaussian elimination. If the system has no solutions then $\Theta$ has no solutions and we are done. Otherwise, we choose independent variables $y_{1},\ldots,y_{k}$; then the general solution (the set $B$) can be written as an affine mapping $\phi\colon\mathbb Z_{q_{1}}\times \dots \times \mathbb Z_{q_{k}}\to L_{1}\times\dots\times L_{n}$. Denote ${Z} = \mathbb Z_{q_{1}}\times \dots \times \mathbb Z_{q_{k}}$; then any solution of $\Theta_{L}\cup Eq$ can be obtained as $\phi(a_{1},\ldots,a_{k})$ for some $(a_{1},\ldots,a_{k})\in {Z}$. Note that for any tuple $(a_{1},\ldots,a_{k})\in {Z}$ we can check recursively whether $\Theta$ has a solution in $\phi(a_{1},\ldots,a_{k})$ (i.e. whether $\phi(a_{1},\ldots,a_{k})\in A$). To do this, we just need to reduce the domains to the solution (function \mbox{\textsc{Reduce}}) and solve an easier CSP instance (on smaller domains). Similarly, we can check whether $\Theta$ has a solution in $\phi(a_{1},\ldots,a_{k})$ for every $(a_{1},\ldots,a_{k})\in {Z}$ (i.e. whether $A=B$). Since $A$ and $B$ are subuniverses of $L_{1}\times\dots\times L_{n}$ (almost subspaces), we just need to check the existence of a solution in $\phi(0,\ldots,0)$ and $\phi(0,\ldots,0,1,0,\ldots,0)$ for any position of $1$. See the pseudocode of the function $\mbox{\textsc{CheckAllTuples}}$ for the last procedure.
\begin{algorithm} \begin{algorithmic}[1] \Function{\mbox{\textsc{CheckAllTuples}}}{$\Theta$, $\phi$} \State{\textbf{Input:} CSP($\Gamma$) instance $\Theta$, a solution of a linear system of equations $\phi$} \If{$\mbox{\textsc{Solve}}(\mbox{\textsc{Reduce}}(\Theta,\phi(0,\ldots,0))) = \mbox{``No solution"}$} \Return{$false$} \EndIf \For{$i=1,2,\ldots,k$} \State{$t := (\underbrace{0,\ldots,0,1}_{i},0,\ldots,0)$} \If{$\mbox{\textsc{Solve}}(\mbox{\textsc{Reduce}}(\Theta,\phi(t))) = \mbox{``No solution"}$} \Return{$false$} \EndIf \EndFor \Return {$true$}; \EndFunction \end{algorithmic} \end{algorithm} Let us go back to the function $\mbox{\textsc{SolveLinearCase}}$. After solving the linear system we check whether there exists a solution of $\Theta$ corresponding to the solution $\phi(0,0,\ldots,0)$ of $\Theta_{L}\cup Eq$. If $k=0$, i.e. $\Theta_{L}\cup Eq$ has only one solution, then we denote this solution by $\phi(0,0,\ldots,0)$. If $\Theta$ has a solution in $\phi(0,\ldots,0)$, then it remains to return the result ``Solution''. If it has no solutions and $k=0$ then return the result ``No solution''. At this point (line 10 of the pseudocode of $\mbox{\textsc{SolveLinearCase}}$), we have the property that the set $B$ is of dimension at least 1, and $A\neq B$ since we found a solution $\phi(0,\ldots,0)$ of the system of linear equations without the corresponding solution of $\Theta$. Then we iteratively remove from $\Theta$ all constraints that are weaker than some other constraints of $\Theta$, remove all constraints without non-dummy variables, and replace every constraint by its projection onto non-dummy variables. This procedure we denote by the function $\mbox{\textsc{RemoveTrivialities}}$. In the pseudocode of $\mbox{\textsc{SolveLinearCase}}$ we denote the obtained instance by $\Theta'$. 
Then we try to make the constraints of $\Theta'$ weaker maintaining the property that $A'\neq B$, where $A'$ is the solution set of $\Theta'$ factorized by the congruences $\sigma_{1},\ldots,\sigma_{n}$. Precisely, we choose a constraint $C$, replace it by all weaker constraints without dummy variables (function $\mbox{\textsc{WeakenConstraint}}$), apply $\mbox{\textsc{RemoveTrivialities}}$, and check using the function $\mbox{\textsc{CheckAllTuples}}$ whether $A' = B$. If not, then we replace $\Theta'$ by the new weaker instance. Suppose we cannot make any constraint weaker maintaining the property $A'\neq B$. Then $\Theta'$ has no solutions in $\phi(b_{1},\ldots,b_{k})$ for some $(b_{1},\ldots,b_{k})\in {Z}$, but if we replace any constraint $C\in\Theta'$ by all weaker constraints, then we get an instance that has a solution in $\phi(a_{1},\ldots,a_{k})$ for every $(a_{1},\ldots,a_{k})\in {Z}$. Therefore, $\Theta'$ is crucial in $\phi(b_{1},\ldots,b_{k})$. Note that by Lemma~\ref{ExpandedConsistencyLemma} the instance $\Theta'$ is still cycle-consistent and irreducible. Also, $\Theta'$ is not fragmented because it is crucial. Then, in line 20 of the function $\mbox{\textsc{SolveLinearCase}}$ we have two options. If $\Theta'$ is not linked then using the function $\mbox{\textsc{FindEquationsNonlinked}}$ we calculate its solution set factorized by the congruences (the set $A'$). This solution set can be defined by a set of linear equations, which we add to $Eq$ and therefore replace $B$ by $A'\cap B$. Thus, we made $B$ smaller and we still have the property $A\subseteq B$, since $A'$ is the factorized solution set of the instance $\Theta'$, which is weaker than $\Theta$. 
If $\Theta'$ is linked then by Theorem~\ref{LinearStep} either $A' = \varnothing$, or the dimension of $A'$ is equal to the dimension of $B$ minus 1, which allows us to find a new linear equation by polynomially many queries ``Does there exist a solution of $\Theta'$ in $\phi(a_{1},\ldots,a_{k})$?''. We calculate this new equation by the function $\mbox{\textsc{FindOneEquationLinked}}$, which, like the function $\mbox{\textsc{FindEquationsNonlinked}}$, will be defined in the next section. Note that the new equation can be ``$0=1$'' if $A'=\varnothing$. After the new equations are found, we go back to line 6 of the function $\mbox{\textsc{SolveLinearCase}}$ and solve a system of linear equations again. Since each time we reduce the dimension of $B$ by at least one, the procedure stops after at most $r$ steps. \subsection{Finding linear equations}\label{FindingLinearEquationSection} In this section we define the functions $\mbox{\textsc{FindOneEquationLinked}}$, $\mbox{\textsc{FindOneEquationNonlinked}}$, and $\mbox{\textsc{FindEquationsNonlinked}}$, which allow us to find new equations defining the set $A'$.
\begin{algorithm} \begin{algorithmic}[1] \Function{\mbox{\textsc{FindOneEquationLinked}}}{$\Theta,\phi$} \State{\textbf{Input:} CSP($\Gamma$) instance $\Theta$, a solution of a system of linear equations $\phi$} \State{$t := \varnothing$} \Comment{We search for a tuple $t$ outside of the solution set} \If{$\mbox{\textsc{Solve}}(\mbox{\textsc{Reduce}}(\Theta,\phi(0,\ldots,0))) = \mbox{``No solution"}$} \State{$t:= (0,\ldots,0)$} \Else \For{$i=1,2,\ldots,k$} \State{$t' := (\underbrace{0,\ldots,0,1}_{i},0,\ldots,0)$} \If{$\mbox{\textsc{Solve}}(\mbox{\textsc{Reduce}}(\Theta,\phi(t'))) = \mbox{``No solution"}$} \State{$t := t'$} \State \textbf{break} \EndIf \EndFor \EndIf \If{$t = \varnothing$} \Return{``$0=0$''} \EndIf \For{$i=1,2,\ldots,k$} \State{$b_{i}:=0$} \For{$a\in \mathbb Z_{q_{i}}\setminus\{t(i)\}$} \State{$t' := t$} \State{$t'(i):= a$} \If{$\mbox{\textsc{Solve}}(\mbox{\textsc{Reduce}}(\Theta,\phi(t'))) = \mbox{``Solution"}$} \State{$b_{i} := 1/(a-t(i))$} \EndIf \EndFor \EndFor \Return {``$b_{1}(y_{1}-t(1)) +\dots + b_{k}(y_{k}-t(k)) = 1$''} \EndFunction \end{algorithmic} \end{algorithm} First, we explain how the function $\mbox{\textsc{FindOneEquationLinked}}$ works. Suppose $V$ is an affine subspace of $\mathbb Z_{p}^{k}$ of dimension $k-1$, thus $V$ is the solution set of a linear equation $c_1y_1 + \dots + c_k y_k = c_{0}$. Then the coefficients $c_0,c_{1},\dots,c_{k}$ can be learned (up to a multiplicative constant) by $(p\cdot k+1)$ queries of the form ``$(a_1,\ldots,a_k) \in V$?'' as follows. First, we need at most $(k+1)$ queries to find a tuple $(t_{1},\ldots,t_{k})\notin V$. To do this we just check all tuples with 0s and at most one 1 (lines 4-11 of the pseudocode). Then, to find this equation it is sufficient to check for every $a$ and every $i$ whether the tuple $(t_{1},\ldots,t_{i-1},a,t_{i+1},\ldots,t_{k})$ satisfies this equation (lines 13-19 of the pseudocode). 
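The learning procedure just described can be prototyped independently of the CSP machinery: given only a membership oracle for an affine subspace $V\subseteq\mathbb Z_{p}^{k}$ of codimension 1, the sketch below recovers an equation for $V$ with at most $p\cdot k+1$ queries (the oracle and the example hyperplane are our own illustration):

```python
def learn_hyperplane(member, p, k):
    """Learn b, t such that V = { y : sum b_i (y_i - t_i) = 1 } in Z_p,
    using at most p*k + 1 calls of the membership oracle `member`."""
    # Step 1: at most k+1 queries to find a tuple t outside V.
    t = None
    if not member((0,) * k):
        t = [0] * k
    else:
        for i in range(k):
            cand = [0] * k
            cand[i] = 1
            if not member(tuple(cand)):
                t = cand
                break
    if t is None:
        return None            # every checked tuple is in V: "0 = 0" case
    # Step 2: k*(p-1) queries; b_i = 1/(a - t_i) for the unique a that
    # puts the probe back into V (b_i stays 0 if no such a exists).
    b = [0] * k
    for i in range(k):
        for a in range(p):
            if a == t[i]:
                continue
            probe = t[:]
            probe[i] = a
            if member(tuple(probe)):
                b[i] = pow(a - t[i], p - 2, p)
    return b, t

# A hidden hyperplane in Z_3^2: y_1 + 2*y_2 = 1.
p = 3
member = lambda y: (y[0] + 2 * y[1]) % p == 1
b, t = learn_hyperplane(member, p, 2)
check = lambda y: sum(bi * (y[i] - t[i]) for i, bi in enumerate(b)) % p == 1
```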
Here the query is performed by the reduction of all domains to the corresponding solution (the function $\mbox{\textsc{Reduce}}$) and a recursive call of the main function $\mbox{\textsc{Solve}}$. As we said before, we may define a natural bijective mapping $\psi:\mathbb Z_{p_{1}}\times\dots\times \mathbb Z_{p_r}\to L_{1}\times\dots\times L_{n}$, and assume that all relations from $\Theta_{L}$ and $Eq$ are systems of linear equations over $z_{1},\ldots,z_{r}$. Below we explain how the function $\mbox{\textsc{FindEquationsNonlinked}}$ calculates the solution set of $\Theta'$ factorized by the congruences (the set $A'$) if $\Theta'$ is not linked. It describes the solution set by linear equations over $z_{1},\ldots,z_{r}$. \begin{algorithm} \begin{algorithmic}[1] \Function{\mbox{\textsc{FindEquationsNonlinked}}}{$\Theta$} \State{\textbf{Input:} CSP($\Gamma$) instance $\Theta$} \State{$I := \{1\}$} \Comment{$I$ is the set of independent variables} \State{$E := \varnothing$} \Comment{We start with an empty set of equations} \For{$j=1,2,\ldots,r$} \State{$e := \mbox{\textsc{FindOneEquationNonlinked}}(\Theta,I\cup\{j\})$} \If{$e = \mbox{``$0=0$"}$} \Comment{$j$-th variable is independent} \State{$I := I\cup \{j\}$} \ElsIf{$e = \mbox{``$0=1$"}$} \Return{$\mbox{``No solution"}$} \Else \State{$E := E\cup e$} \Comment{Add the equation we found} \EndIf \EndFor \Return {$E$} \EndFunction \end{algorithmic} \end{algorithm} We start with an empty set of equations $E$ and claim that the first variable is independent; by $I$ we denote the set of independent variables (see the pseudocode of $\mbox{\textsc{FindEquationsNonlinked}}$). Assume that we already found all the equations over $z_{1},\ldots,z_{j-1}$, i.e. we described the projection of $A'$ onto $z_{1},\ldots,z_{j-1}$. Then the projection of $A'$ onto the independent variables and the $j$-th variable is either full or of codimension 1.
Thus, we can learn this equation by queries of the form ``Does there exist $v\in A'$ such that $\proj_{I\cup\{j\}}(v) = (a_{1},\ldots,a_{h})$?'' in the same way as we did in $\mbox{\textsc{FindOneEquationLinked}}$, but now we use $\mbox{\textsc{FindOneEquationNonlinked}}$. The only difference between these functions is how we check a query: in $\mbox{\textsc{FindOneEquationNonlinked}}$ we use the function $\mbox{\textsc{CheckTuple}}$ instead of $\mbox{\textsc{Reduce}}$ and $\mbox{\textsc{Solve}}$ (see the pseudocode). If a new equation was found and this equation is not trivial, then we add it to $E$ and claim that $z_{j}$ is not independent. If the equation we found is ``$0=0$'', then we add $z_{j}$ to the set of independent variables and go to the next variable. \begin{algorithm} \begin{algorithmic}[1] \Function{\mbox{\textsc{FindOneEquationNonlinked}}}{$\Theta,I$} \State{\textbf{Input:} CSP($\Gamma$) instance $\Theta$, $I = \{i_1,\ldots,i_{h}\}$ a set of variables} \State{$t := \varnothing$} \Comment{We search for a tuple $t$ outside of the solution set} \If{$\neg\mbox{\textsc{CheckTuple}}(\Theta,I,(0,\ldots,0))$} \State{$t:= (0,\ldots,0)$} \Else \For{$j=1,2,\ldots,h$} \State{$t' := (\underbrace{0,\ldots,0,1}_{j},0,\ldots,0)$} \If{$\neg\mbox{\textsc{CheckTuple}}(\Theta,I,t')$} \State{$t := t'$} \State \textbf{break}  \EndIf \EndFor \EndIf \If{$t = \varnothing$} \Return{``$0=0$''} \EndIf \For{$j=1,2,\ldots,h$} \State{$b_{j}:=0$} \For{$a\in \mathbb Z_{p_{i_{j}}}\setminus\{t(j)\}$} \State{$t' := t$} \State{$t'(j):= a$} \If{$\mbox{\textsc{CheckTuple}}(\Theta,I,t')$} \State{$b_{j} := 1/(a-t(j))$} \EndIf \EndFor \EndFor \Return {``$b_{1}(z_{i_{1}}-t(1)) +\dots + b_{h}(z_{i_{h}}-t(h)) = 1$''} \EndFunction \end{algorithmic} \end{algorithm} It remains to explain how the function $\mbox{\textsc{CheckTuple}}$ works. As an input it takes an instance $\Theta$, a set of variables $I$, and a tuple $t$ of length $|I|$.
The restriction of the variables from $I$ to the tuple $t$ implies the restrictions $L_{1}',\ldots,L_{n}'$ of the domains $L_{1},\ldots,L_{n}$. Put $D_{i}' = \bigcup\limits_{E\in L_{i}'} E$ for every $i$. Then we add unary constraints $x_{i}\in D_{i}'$ to $\Theta$ and solve the obtained instance by the function $\mbox{\textsc{SolveNonlinked}}$, which works only for non-linked instances and will be defined in the next section. \begin{algorithm} \begin{algorithmic}[1] \Function{\mbox{\textsc{CheckTuple}}}{$\Theta,I,t$} \State{\textbf{Input:} CSP($\Gamma$) instance $\Theta$, $I$ a subset of variables, $t$ a tuple of length $|I|$} \State{$R:=\{\alpha\in \mathbb Z_{p_{1}}\times\dots\times\mathbb Z_{p_{r}} \mid \proj_{I}(\alpha) = t\}$} \Comment{We don't really calculate $R$} \For{$i=1,2,\ldots,n$} \State{$D_{i}':=\bigcup_{E\in\proj_{i}(\psi(R))} E$} \Comment{We calculate $D_{i}'$} \EndFor \If{$\mbox{\textsc{SolveNonlinked}}(\Theta\wedge(x_{1}\in D_{1}')\wedge\dots\wedge(x_{n}\in D_{n}')) = \mbox{``Solution''}$} \Return true \Else ~\Return false \EndIf \EndFunction \end{algorithmic} \end{algorithm} \subsection{Remaining functions}\label{AlgorithmTechnicalities} In this subsection we define the functions $\mbox{\textsc{CheckCycleConsistency}}$, $\mbox{\textsc{CheckIrreducibility}}$, and $\mbox{\textsc{CheckWeakerInstance}}$ which were used in Subsection~\ref{AlgorithmMainPart}, and function $\mbox{\textsc{SolveNonlinked}}$ from Subsection~\ref{FindingLinearEquationSection}. First, we define the function $\mbox{\textsc{CheckCycleConsistency}}$. To check cycle-consistency it is sufficient to use constraint propagation providing a variant of (2,3)-consistency (see the pseudocode). First, for every pair of variables $(x_{i},x_{j})$ we consider the intersections of projections of all constraints onto these variables. The corresponding relation we denote by $\rho_{i,j}$. 
Then, for every $i,j,k\in\{1,2,\ldots,n\}$ we replace $\rho_{i,j}$ by $\rho_{i,j}'$ where $\rho_{i,j}'(x,y) = \exists z \; \rho_{i,j}(x,y)\wedge \rho_{i,k}(x,z)\wedge \rho_{k,j}(z,y).$ We repeat this procedure while we can change some $\rho_{i,j}$. If in the end we get a relation $\rho_{i,j}$ that is not subdirect in $D_{i}\times D_{j}$, then we can either reduce $D_{i}$ or $D_{j}$, or, if $\rho_{i,j}$ is empty, state that there are no solutions. If every relation $\rho_{i,j}$ is subdirect in $D_{i}\times D_{j}$, then we claim (see Lemma \ref{ProofCycleConsistencyFunction}) that the original CSP instance is cycle-consistent. \begin{algorithm} \begin{algorithmic}[1] \Function{\mbox{\textsc{CheckCycleConsistency}}}{$\Theta$} \State{\textbf{Input:} CSP($\Gamma$) instance $\Theta$} \For{$i,j\in\{1,2,\ldots,n\}$} \Comment{Calculate binary projections $\rho_{i,j}$} \State{$\rho_{i,j} := D_{i}\times D_{j}$} \For{$C\in\Theta$} \State{$\rho_{i,j} :=\rho_{i,j}\cap \proj_{x_i,x_j} C$} \Comment{$\proj_{x_i,x_j} C$ is the projection of $C$ onto $x_{i},x_{j}$} \EndFor \EndFor \Repeat \Comment{Propagate constraints to reduce $\rho_{i,j}$} \State{$\mbox{Changed}:= false$} \For{$i,j,k\in\{1,2,\ldots,n\}$} \State{$\rho_{i,j}'(x,y) := \exists z\; \rho_{i,j}(x,y)\wedge \rho_{i,k}(x,z)\wedge \rho_{k,j}(z,y)$} \If{$\rho_{i,j} \neq \rho_{i,j}'$} \State{$\rho_{i,j}:=\rho_{i,j}'$} \State{$\mbox{Changed}:= true$} \EndIf \EndFor \Until{$\neg\mbox{Changed}$} \Comment{We cannot reduce $\rho_{i,j}$ anymore} \For{$i,j\in\{1,2,\ldots,n\}$} \If{$\rho_{i,j}=\varnothing$} \Return{``No solution"} \EndIf \If{$\proj_{1}(\rho_{i,j})\neq D_{i}$} \Return{$(x_{i},\proj_{1}(\rho_{i,j}))$} \EndIf \If{$\proj_{2}(\rho_{i,j})\neq D_{j}$} \Return{$(x_{j},\proj_{2}(\rho_{i,j}))$} \EndIf \EndFor \Return{\mbox{``Ok"}} \EndFunction \end{algorithmic} \end{algorithm} Let us explain how $\mbox{\textsc{CheckIrreducibility}}$ works. 
For every $k\in\{1,2,\ldots,n\}$ and every maximal congruence $\sigma_{k}$ on $D_{k}$ we do the following. We start with the partition $\sigma_{k}$ of the $k$-th variable, so we put $I=\{k\}$ (line 4 of the pseudocode), which is the set of variables with a partition. Then we try to extend the partition of $D_{k}$ to other domains. We choose a constraint $C$ having $x_{k}$ in its scope, choose another variable $x_{j}$ of $C$, and consider the projection of $C$ onto $x_{k},x_{j}$, which we denote by $\delta$. Since $\sigma_{k}$ is maximal, there are only two possibilities: either all equivalence classes of $\sigma_{k}$ are connected in $\delta$, or none of the equivalence classes are connected in $\delta$. In the second case the partition of $D_{k}$ generates a partition of $D_{j}$ with the same number of classes, and we add $j$ to $I$ (lines 10-15 of the pseudocode). We continue this procedure while we can add new variables to $I$. As a result we get a set $I$ and a partition of $D_{i}$ for every $i\in I$. Put $\mathbf{X'} = \{x_{i}\mid i\in I\}$. Then, the projection of $\Theta$ onto $\mathbf{X'}$ can be split into several instances on smaller domains, and each of them can be solved using recursion. Thus, we can check whether the solution set of the projection of the instance onto $\mathbf{X'}$ is subdirect or empty. If it is empty, then we state that there are no solutions. If it is not subdirect, then we can reduce the corresponding domain. If it is subdirect, then we go to the next $k\in\{1,2,\ldots,n\}$ and the next maximal congruence $\sigma_{k}$ on $D_{k}$, and repeat the procedure. If for all $k$ and all maximal congruences the solution set of the obtained instance is subdirect, then the instance is irreducible (see Lemma~\ref{CheckIrreducibilityCorrectness}).
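The key step, pushing the classes of $\sigma_{k}$ through a binary projection $\delta\subseteq D_{i}\times D_{j}$ and testing whether the images stay disjoint, can be sketched as follows (a minimal illustration; the names are ours, and relations are plain sets of pairs):

```python
def extend_partition(classes, delta):
    """classes: list of disjoint sets (the classes E^1,...,E^t on D_i);
    delta: set of pairs (a, b) in D_i x D_j.
    Return the induced classes on D_j, or None if two images intersect
    (i.e. the classes get glued and the partition does not extend)."""
    images = [{b for (a, b) in delta if a in E} for E in classes]
    for u in range(len(images)):
        for v in range(u + 1, len(images)):
            if images[u] & images[v]:
                return None  # classes connected in delta: no partition on D_j
    return images
```

For instance, the partition $\{0\},\{1\}$ extends through the equality relation but is glued by the full relation on a two-element domain.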
\begin{algorithm} \begin{algorithmic}[1] \Function{\mbox{\textsc{CheckIrreducibility}}}{$\Theta$} \State{\textbf{Input:} CSP($\Gamma$) instance $\Theta$} \For{$k=1,\ldots,n$} \For{$\sigma_{k}=\{E_{k}^{1},\ldots,E_{k}^{t}\}$ is a maximal congruence on $D_{k}$} \State{$I:=\{k\}$} \Repeat \State{$\mbox{Changed}:=false$} \For{$C\in\Theta$, $i\in I$, $j\notin I$ such that $x_{i}$ and $x_{j}$ are in the scope of $C$} \State{$\delta:=\proj_{x_{i},x_{j}} C$} \Comment{$\proj_{x_{i},x_{j}} C$ is the projection of $C$ onto $x_{i},x_{j}$} \For{$u=1,2,\dots,t$} \Comment{Calculate the partition on $D_{j}$} \State{$E_{j}^{u}:= \{b\in D_{j}\mid \exists a\in E_{i}^{u}: (a,b)\in \delta\}$} \EndFor \If{$E_{j}^{1},\dots,E_{j}^{t}$ are disjoint} \State{$I:=I\cup\{j\}$} \State{$\mbox{Changed}:=true$} \State \textbf{break} \EndIf \EndFor \Until{$\neg\mbox{Changed}$} \For{$i\in I$} \State{$D_{i}':=\varnothing$} \For{$a\in D_{i}$} \State{Choose $u$ such that $a\in E_{i}^{u}$} \For{$j=1,2,\ldots,n$} \If{$j=i$} \State{$E_{j} := \{a\}$} \ElsIf{$j\in I$} \State{$E_{j} := E_{j}^{u}$} \Else \State{$E_{j} := D_{j}$} \EndIf \EndFor \State{$\mathbf{X'} := \{x_{i}\mid i\in I\}$} \If{$\mbox{\textsc{Solve}}(\proj_{\mathbf{X'}}(\mbox{\textsc{Reduce}}(\Theta,(E_{1},\ldots,E_{n})))) = \mbox{``Solution"}$} \State{$D_{i}':=D_{i}'\cup\{a\}$} \EndIf \EndFor \If{$D_{i}'=\varnothing$} \Return{``No solution"} \ElsIf{$D_{i}'\neq D_{i}$} \Return{$(x_{i},D_{i}')$} \EndIf \EndFor \EndFor \EndFor \Return{\mbox{``Ok"}} \EndFunction \end{algorithmic} \end{algorithm} Define the function $\mbox{\textsc{CheckWeakerInstance}}$, which checks that if we simultaneously weaken every constraint then the solution set of the obtained instance is subdirect. Thus, we weaken every constraint of $\Theta$ (function $\mbox{\textsc{WeakenEveryConstraint}}$ in the pseudocode), that is, we make a copy of $\Theta$, and replace each constraint by all weaker constraints without dummy variables. 
Recursively calling the algorithm, check that the obtained instance has a solution with $x_{i}=b$ for every $i\in\{1,2,\ldots,n\}$ and $b\in D_{i}$. If not, reduce $D_{i}$ to the projection onto $x_{i}$ of the solution set of the obtained instance. Otherwise, go on. \begin{algorithm} \begin{algorithmic}[1] \Function{\mbox{\textsc{CheckWeakerInstance}}}{$\Theta$} \State{\textbf{Input:} CSP($\Gamma$) instance $\Theta$} \State{$\Theta'= \mbox{\textsc{WeakenEveryConstraint}}(\Theta)$} \For{$i=1,\ldots,n$} \State{$D_{i}':=\varnothing$} \For{$a\in D_{i}$} \State{$\mbox{Output} := \mbox{\textsc{Solve}}(\mbox{\textsc{Reduce}}(\Theta',(D_{1},\dots,D_{i-1}, \{a\},D_{i+1},\dots,D_{n})))$} \If{$\mbox{Output} = \mbox{``Solution"}$} \State{$D_{i}':=D_{i}'\cup \{a\}$} \EndIf \EndFor \If{$D_{i}'=\varnothing$} \Return{``No solution"} \ElsIf{$D_{i}'\neq D_{i}$} \Return{$(x_{i},D_{i}')$} \EndIf \EndFor \Return{``Ok"} \EndFunction \end{algorithmic} \end{algorithm} It remains to define the function $\mbox{\textsc{SolveNonlinked}}$, which solves an instance that is not linked and not fragmented (see the pseudocode). Such an instance can be split into several instances on smaller domains. First, we consider the set $\mathbf X'$ of all variables appearing in the constraints of the instance and take the projection of the instance onto $\mathbf{X'}$. Then we consider each linked component, that is, elements that can be connected by a path in the instance. Since the instance is cycle-consistent, the division into linked components defines a congruence on every domain (see Lemma~\ref{LinkedConIsCon}), and each block of this congruence is a subuniverse of the domain. Thus, each linked component can be viewed as a CSP instance in a constraint language $\Gamma$ on smaller domains, which can be solved using the recursion. If at least one of them has a solution, then the original instance has a solution. 
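The component split used by $\mbox{\textsc{SolveNonlinked}}$ can be sketched for the special case of binary constraints: the pairs $(\text{variable},\text{value})$ are linked whenever some constraint tuple contains both, and each connected component induces a reduced domain for every variable (an illustrative sketch; the names are ours, and values not occurring in any constraint tuple are dropped):

```python
from collections import defaultdict

def linked_components(n, constraints):
    """constraints: list of triples (i, j, rel) with rel a set of pairs.
    Return, for each linked component, a dict mapping each variable
    to its reduced domain inside that component."""
    adj = defaultdict(set)  # graph on (variable, value) pairs
    for (i, j, rel) in constraints:
        for (a, b) in rel:
            adj[(i, a)].add((j, b))
            adj[(j, b)].add((i, a))
    seen, components = set(), []
    for node in list(adj):
        if node in seen:
            continue
        comp, stack = defaultdict(set), [node]
        seen.add(node)
        while stack:  # depth-first search over one component
            (i, a) = stack.pop()
            comp[i].add(a)
            for nxt in adj[(i, a)]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        components.append(dict(comp))
    return components
```

For example, the single equality constraint on two Boolean variables splits into two components, one per value, and each component can then be solved recursively on the smaller domains.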
\begin{algorithm} \begin{algorithmic}[1] \Function{\mbox{\textsc{SolveNonlinked}}}{$\Theta$} \State{\textbf{Input:} CSP($\Gamma$) instance $\Theta$} \State{$\mathbf{X'}:=\Var(\Theta)$} \Comment{Choose variables that appear in $\Theta$} \State{$\Theta':=\proj_{\mathbf{X'}}(\Theta)$} \Comment{Remove variables that never occur} \For{a linked component $(D_{1}',\dots,D_{n'}')$ of $\Theta'$} \If{$\mbox{\textsc{Solve}}(\mbox{\textsc{Reduce}}(\Theta',(D_{1}',\dots,D_{n'}'))) = \mbox{``Solution"}$} \Return{``Solution"} \EndIf \EndFor \Return{``No solution"} \EndFunction \end{algorithmic} \end{algorithm} \section{Correctness of the Algorithm}\label{CorretnessSection} \subsection{Rosenberg completeness theorem}\label{RosenbergSection} The main idea of the algorithm is based on a beautiful result obtained by Ivo Rosenberg in 1970, who found all maximal clones on a finite set. Applying this result to the clone generated by a WNU together with all constant operations, we can show that every algebra with a WNU operation has a nontrivial binary absorbing subuniverse, or a nontrivial center, or it is polynomially complete or linear modulo some proper congruence. \begin{thm}\label{NextReduction} Suppose $\mathbf{A} = (A;w)$ is a finite algebra, where $w$ is a special WNU of arity $m$. Then one of the following conditions holds: \begin{enumerate} \item there exists a nontrivial binary absorbing subuniverse $B\subsetneq A$, \item there exists a nontrivial center $C\subsetneq A$, \item there exists a proper congruence $\sigma$ on $A$ such that $(A;w)/\sigma$ is polynomially complete, \item there exists a proper congruence $\sigma$ on $A$ such that $(A;w)/\sigma$ is isomorphic to $(\mathbb Z_{p};x_{1}+\dots +x_{m})$. \end{enumerate} \end{thm} \begin{proof} Let us prove this statement by induction on the size of $A$. If we have a nontrivial binary absorbing subuniverse in $A$ then there is nothing to prove. Assume that $A$ has no nontrivial binary absorbing subuniverse. 
Let $M$ be the clone generated by $w$ and all constant operations on $A$. If $M$ is the clone of all operations, then $(A;w)$ is polynomially complete. Otherwise, by Rosenberg's Theorem \cite{rosmax}, $M$ belongs to one of the following maximal clones. \begin{enumerate} \item Maximal clone of monotone operations, that is, the clone of operations preserving a partial order relation with the greatest and the least element; \item Maximal clone of autodual operations, that is, the clone of operations preserving the graph of a permutation of a prime order without a fixed element; \item Maximal clone defined by an equivalence relation; \item Maximal clone of quasi-linear operations; \item Maximal clone defined by a central relation; \item Maximal clone defined by an $h$-regularly generated (or $h$-universal) relation. \end{enumerate} Let us consider all the cases. \begin{enumerate} \item As we assumed, there is no nontrivial binary absorbing subuniverse on $A$. Hence, the least element of the partial order can be viewed as a center by letting $\mathbf B = \mathbf A$ and using the partial order relation as a subdirect subuniverse of $\mathbf{A}\times \mathbf{B}$ (the least element is connected with all other elements in the partial order relation). Thus, we have a nontrivial center in $A$. \item Constants are not autodual operations. This case cannot happen. \item Let $\delta$ be a maximal congruence on $\mathbf{A}$. We consider a factor algebra $(A;w)/\delta$ and apply the inductive assumption. \begin{enumerate} \item If $\mathbf{A}/\delta$ has a binary absorbing subuniverse $B'\subseteq A/\delta$, then $\bigcup_{E\in B'}E$ is a binary absorbing subuniverse of $A$ with the same term operation. \item If $\mathbf{A}/\delta$ has a nontrivial center $C'\subseteq A/\delta$ witnessed by a subdirect relation $R'\subseteq A/\delta\times B$, then $\bigcup_{E\in C'}E$ is a nontrivial center of $A$ witnessed by $R = \bigcup_{(E,b)\in R'} E\times \{b\}$. 
\item Suppose $(\mathbf{A}/\delta)/\sigma $ is polynomially complete. Since $\delta$ is a maximal congruence, $\sigma$ is the equality relation and $\mathbf{A}/\delta$ is polynomially complete. \item Suppose $(\mathbf{A}/\delta)/\sigma $ is isomorphic to $(\mathbb Z_{p};x_{1}+\dots +x_{m})$. Since $\delta$ is a maximal congruence, $\sigma$ is the equality relation and $\mathbf{A}/\delta$ is isomorphic to $(\mathbb Z_{p};x_{1}+\dots +x_{m})$. \end{enumerate} \item By Lemma 6.4 from \cite{KeyRelations}, we know that $w(x_{1},\ldots,x_{m}) = x_{1}+\dots +x_{m}$, where $+$ is the operation in an abelian group. We assume that $\mathbf{A}$ has no nontrivial congruences, otherwise we refer to case (3). Then the algebra $\mathbf{A}$ is simple and isomorphic to $(\mathbb Z_{p};x_{1}+\dots +x_{m})$ for a prime number $p$. \item Let $\rho$ be a central relation of arity $k$ preserved by $w$. It is not hard to see that the existence of a nontrivial binary absorbing subuniverse on $\underbrace{\mathbf{A}\times\dots\times\mathbf{A}}_{k-1}$ implies the existence of a nontrivial binary absorbing subuniverse on $\mathbf{A}$ (see Lemma~\ref{GenBinAbToBinAb}). Since there is no nontrivial binary absorbing subuniverse on $\mathbf{A}$ and the relation $\rho$ contains all tuples $(b_{1},\ldots,b_{k})$ such that $b_{1}$ is from the center of $\rho$, the center of $\rho$ is a center of $A$ by letting $\mathbf B = \underbrace{\mathbf{A}\times\dots\times\mathbf{A}}_{k-1}$. \item By Corollary 5.10 from \cite{KeyRelations} this case cannot happen. \end{enumerate} \end{proof} \subsection{The algorithm is polynomial} \begin{lem}\label{RecursionDepth} The depth of the recursion in the algorithm is less than $|A|+|\Gamma|$. \end{lem} \begin{proof} We use the recursion in the functions $\mbox{\textsc{SolveLinearCase}}, \mbox{\textsc{FindOneEquationLinked}},$ $\mbox{\textsc{CheckAllTuples}}, \mbox{\textsc{CheckIrreducibility}}, \mbox{\textsc{CheckWeakerInstance}}, \mbox{\textsc{SolveNonlinked}}. 
$ In each of them but $\mbox{\textsc{CheckWeakerInstance}}$ we reduce all domains of size greater than 1 before using the recursion and we never increase the domain. Therefore, every path in the recursion tree contains at most $|A|$ calls of the function $\mbox{\textsc{Solve}}$ in the above functions. Let us consider the function $\mbox{\textsc{CheckWeakerInstance}}$. First, we introduce a partial order on the set of relations in $\Gamma$. We say that $\rho_{1}\leqslant\rho_{2}$ if one of the following conditions holds: \begin{enumerate} \item the arity of $\rho_{1}$ is less than the arity of $\rho_{2}$. \item the arities of $\rho_{1}$ and $\rho_{2}$ are equal, $\proj_{i}(\rho_{1})\subseteq \proj_{i}(\rho_{2})$ for every $i$, and $\proj_{j}(\rho_{1})\neq \proj_{j}(\rho_{2})$ for some $j$. \item the arities of $\rho_{1}$ and $\rho_{2}$ are equal, $\proj_{i}(\rho_{1})= \proj_{i}(\rho_{2})$ for every $i$, and $\rho_{1}\supseteq \rho_{2}$. \end{enumerate} We can check that in the algorithm we never make any relation bigger, and every time we use recursion in $\mbox{\textsc{CheckWeakerInstance}}$ we make every constraint relation strictly smaller. Since our constraint language $\Gamma$ is finite, every path in the recursion tree contains at most $|\Gamma|$ calls of the function $\mbox{\textsc{Solve}}$ in $\mbox{\textsc{CheckWeakerInstance}}$. Therefore the depth of the recursion tree is bounded by $|A|+|\Gamma|$. \end{proof} \begin{cons} The algorithm is polynomial. \end{cons} \begin{proof} Since the depth of the recursion tree is bounded by $|A|+|\Gamma|$, it remains to show that each loop in each function is polynomial. In the function $\mbox{\textsc{Solve}}$ we go through the loop at most $n\cdot |A|$ times, which is polynomially many. In the function $\mbox{\textsc{SolveLinearCase}}$ we go through the external \textbf{repeat} loop at most $r$ times, where $r$ is the dimension of $L_{1}\times\dots\times L_{n}$. Therefore, $r$ is bounded by $|A|\cdot n$.
We go through the inner \textbf{repeat} loop at most $|\Gamma|\cdot N$ times, where $N$ is the number of constraints of the instance. In the function $\mbox{\textsc{CheckCycleConsistency}}$ we go through the \textbf{repeat} loop at most $|\Gamma|\cdot n^{2}$ times, because every time we change at least one relation $\rho_{i,j}$, which is from $\Gamma$, and we have $n^{2}$ of them. In the function $\mbox{\textsc{CheckIrreducibility}}$ we go through the \textbf{repeat} loop at most $n$ times, since we always add an element to $I$. All other loops are \textbf{for} loops, and polynomial bounds for them follow from the description of the algorithm. Therefore, the algorithm is polynomial. \end{proof} \subsection{Correctness of the auxiliary functions} \begin{lem}\label{ProofCycleConsistencyFunction} If the function $\mbox{\textsc{CheckCycleConsistency}}$ returns \mbox{``Ok"} then the instance is cycle-consistent, if it returns \mbox{``No solution"} then the instance has no solutions, if it returns $(x_{i},D)$ then any solution of the instance has $x_{i}\in D$. \end{lem} \begin{proof} Assume that the function returned \mbox{``Ok"}. Since every relation $\rho_{i,j}$ at the end of the algorithm is subdirect, the instance is 1-consistent. Consider a path $x_{i_1}-C_{1}-x_{i_2}-\dots-x_{i_{l-1}}-C_{l-1}- x_{i_l}$ starting and ending with the same variable, $x_{i_{1}}=x_{i_l}$. Since the projection of $C_{j}$ onto $x_{i_{j}},x_{i_{j+1}}$ contains $\rho_{i_{j},i_{j+1}}$ for every $j$, to show that the instance is cycle-consistent, it is sufficient to prove that the formula $$\delta(x_{i_1}) = \exists x_{i_{2}}\dots\exists x_{i_{l-1}}\;\rho_{i_{1},i_{2}}(x_{i_1},x_{i_2})\wedge\dots \wedge\rho_{i_{l-1},i_{l}}(x_{i_{l-1}},x_{i_l}) $$ defines $D_{i_1}$.
This follows from the fact that we terminated the function when for all $i,j,k$ $$\rho_{i,j}(x,y) = \exists z\; \rho_{i,j}(x,y)\wedge \rho_{i,k}(x,z)\wedge \rho_{k,j}(z,y).$$ The remaining part follows from the fact that all the constraints $\rho_{i,j}(x_{i},x_{j})$ were derived from the original constraints, and therefore they should hold for any solution. \end{proof} \begin{lem}\label{CheckIrreducibilityCorrectness} If the function $\mbox{\textsc{CheckIrreducibility}}$ returns \mbox{``Ok"} then the instance is irreducible, if it returns \mbox{``No solution"} then the instance has no solutions, if it returns $(x_{i},D)$ then any solution of the instance has $x_{i}\in D$. \end{lem} \begin{proof} Assume that $\mbox{\textsc{CheckIrreducibility}}$ returned \mbox{``Ok"} but the instance is not irreducible. Then, there exists an instance $\Theta'$ such that every constraint of $\Theta'$ is a projection of a constraint from the original instance $\Theta$ on some set of variables, and $\Theta'$ is not fragmented, not linked, and its solution set is not subdirect. Let $\mathbf{X'}$ be the set of all variables occurring in $\Theta'$. Choose a variable $x_{k}\in \mathbf{X'}$. If we consider the set of all pairs $(a,b)\in D_{k}^{2}$ such that $a$ and $b$ can be connected by a path in $\Theta'$ then we get a congruence (see Lemma~\ref{LinkedConIsCon}). Since $\Theta'$ is not linked, there should be a maximal congruence $\sigma_{k}$ containing the congruence. This congruence was chosen in the line 4 of the pseudocode. Since $\Theta'$ is not fragmented, there exists a path in $\Theta'$ from $x_{k}$ to any other variable from $\mathbf{X'}$. Following this path we can always define a partition on the next variable using the partition on the previous one. Since every constraint of $\Theta'$ is a projection of a constraint from $\Theta$, we could define the same partitions on $\Theta$ (see the pseudocode of the function). 
We just need to show that on every domain $D_{i}$ we can generate a unique partition using $\sigma_{k}$ (the order in which we add elements to $I$ and the way we choose constraints are not important). Consider two paths from $x_{k}$ to $x_{i}$ defining two partitions. We glue together the beginnings of these paths and get a path from $x_{i}$ to $x_{i}$ connecting these partitions. Since the instance is cycle-consistent, these partitions must be equal. Thus, we showed that starting from the congruence $\sigma_{k}$ (in the pseudocode) we get a unique partition on every variable $x_{i}\in\mathbf{X'}$. Therefore, we actually checked in the algorithm that the solution set of $\Theta'$ is subdirect, which gives us a contradiction. Hence, $\Theta$ is irreducible. The remaining part follows from the fact that $D_{i}'$ is the set of all possible evaluations of $x_{i}$ in solutions of a weaker instance. \end{proof} \subsection{Main theorems without proofs} To explain the correctness of the algorithm in Section~\ref{Algorithm} we used the following main facts, which will be proved in Section~\ref{MainProofs}. \begin{thm}\label{AbsorptionCenterStep} Suppose $\Theta$ is a cycle-consistent irreducible CSP instance, and $B$ is a nontrivial binary absorbing subuniverse or a nontrivial center of $D_{i}$. Then $\Theta$ has a solution if and only if $\Theta$ has a solution with $x_{i}\in B$. \end{thm} \begin{thm}\label{PCStepThm} Suppose $\Theta$ is a cycle-consistent irreducible CSP instance, there does not exist a nontrivial binary absorbing subuniverse or a nontrivial center on $D_{j}$ for every $j$, $(D_{i};w)/\sigma$ is a polynomially complete algebra, and $E$ is an equivalence class of $\sigma$. Then $\Theta$ has a solution if and only if $\Theta$ has a solution with $x_{i}\in E$.
\end{thm} \begin{thm}\label{LinearStep} Suppose the following conditions hold: \begin{enumerate} \item $\Theta$ is a linked cycle-consistent irreducible CSP instance with domain set $(D_{1},\ldots,D_{n})$; \item there does not exist a nontrivial binary absorbing subuniverse or a nontrivial center on $D_{j}$ for every $j$; \item if we replace every constraint of $\Theta$ by all weaker constraints then the obtained instance has a solution with $x_{i} = b$ for every $i$ and $b\in D_{i}$ (the obtained instance has a subdirect solution set); \item $L_{i} = D_{i}/\sigma_{i}$ for every $i$, where $\sigma_{i}$ is the minimal linear congruence on $D_{i}$; \item $\phi:\mathbb Z_{q_{1}}\times \dots \times \mathbb Z_{q_{k}} \to L_{1}\times\dots\times L_{n}$ is a homomorphism, where $q_{1},\dots,q_{k}$ are prime numbers; \item if we replace any constraint of $\Theta$ by all weaker constraints then for every $(a_{1},\ldots,a_{k})\in \mathbb Z_{q_{1}}\times \dots \times \mathbb Z_{q_{k}}$ there exists a solution of the obtained instance in $\phi(a_{1},\ldots,a_{k})$. \end{enumerate} Then $\{(a_{1},\dots,a_{k})\mid \Theta \text{ has a solution in }\phi(a_1,\dots,a_{k})\}$ is either empty, or is full, or is an affine subspace of $\mathbb Z_{q_{1}}\times \dots \times \mathbb Z_{q_{k}}$ of codimension 1 (the solution set of a single linear equation). \end{thm} \section{The Remaining Definitions}\label{DefinitionSection} \subsection{Variety of algebras} We consider the variety of all algebras $\mathbf A = (A;w)$ such that $w$ is a special WNU operation of arity $m$. As it was mentioned in Section~\ref{Definition} every domain $D$ will be viewed as a finite algebra $(D;w)$ from this variety. Note that in the remainder of this paper any claim or assumption ``$\rho$ is a relation'' should be understood as ``$\rho$ is a subalgebra of $\mathbf A_{1}\times\dots\times \mathbf A_{n}$'' for the corresponding finite algebras $\mathbf A_{1},\ldots,\mathbf A_{n}$ from this variety. 
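As a concrete illustration, the WNU identities $w(y,x,\ldots,x) = w(x,y,x,\ldots,x) = \dots = w(x,\ldots,x,y)$ can be tested directly for a finite operation given as a Python function; the sketch below checks only these identities, not the additional identity that makes a WNU special (the names are ours):

```python
def is_wnu(w, m, domain):
    """Check the weak near-unanimity identities for an m-ary operation w:
    all placements of a single y among x's must give the same value."""
    for x in domain:
        for y in domain:
            vals = {w(*[y if i == k else x for i in range(m)])
                    for k in range(m)}
            if len(vals) > 1:
                return False
    return True
```

For example, the ternary majority operation on $\{0,1\}$ is a WNU, while the first projection is not.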
\subsection{Additional notations} For a relation $\rho\subseteq A_{1}\times\dots\times A_{n}$ and a congruence $\sigma$ on $A_{i}$, we say that the $i$-th variable of the relation $\rho$ is \emph{stable under $\sigma$} if $(a_{1},\ldots,a_{n})\in\rho$ and $(a_{i},b_{i})\in\sigma$ imply $(a_{1},\ldots,a_{i-1},b_{i},a_{i+1},\ldots,a_{n})\in\rho$. We say that a relation is \emph{stable under} $\sigma$ if every variable of this relation is stable under $\sigma$. We say that a congruence $\sigma$ is \emph{irreducible} if it is proper and it cannot be represented as an intersection of other binary relations $\delta_{1},\ldots,\delta_{s}$ stable under $\sigma$. For an irreducible congruence $\sigma$ on a set $A$ by $\cover{\sigma}$ we denote the minimal binary relation $\delta\supsetneq \sigma$ stable under $\sigma$. For a relation $\rho$ by $\ConOne(\rho,i)$ we denote the binary relation $\sigma(y,y')$ defined by $$\exists x_{1}\dots\exists x_{i-1}\exists x_{i+1}\dots\exists x_{n}\;\rho(x_{1},\ldots,x_{i-1},y,x_{i+1},\ldots,x_{n})\wedge \rho(x_{1},\ldots,x_{i-1},y',x_{i+1},\ldots,x_{n}).$$ For a constraint $C = \rho(x_{1},\ldots,x_{n})$ by $\ConOne(C,x_{i})$ we denote $\ConOne(\rho,i)$. For a set of constraints $\Omega$ by $\Congruences(\Omega,x)$ we denote the set $\{\ConOne(C,x)\mid C\in \Omega\}$. A congruence $\sigma$ on $\mathbf{A}$ is called \emph{a PC congruence} if $\mathbf{A}/\sigma$ is a PC algebra without a nontrivial binary absorbing subuniverse or center. For an algebra $\mathbf A$ by $\PCCon(\mathbf A)$ we denote the intersection of all PC congruences. A subuniverse $A'\subseteq A$ is called a \emph{PC subuniverse} if $A' = E_{1}\cap\dots\cap E_{s}$, where each $E_{i}$ is an equivalence class of a PC congruence. Note that a PC subuniverse can be empty or full. A congruence $\sigma$ on $\mathbf{A}$ is called \emph{linear} if $\mathbf{A}/\sigma$ is a linear algebra. For an algebra $\mathbf A$ by $\LinCon(\mathbf A)$ we denote the minimal linear congruence. 
A subuniverse of $\mathbf A$ is called a \emph{linear subuniverse} if it is stable under $\LinCon(\mathbf A)$. Note that we could not define a PC subuniverse in the same way because not every subuniverse stable under $\PCCon(\mathbf A)$ is a PC subuniverse of $\mathbf A$ (see Subsection \ref{PCSubsection}). A subuniverse $B\subseteq A$ is called \emph{a one-of-four subuniverse} if it is a binary absorbing subuniverse, a center, a PC subuniverse, or a linear subuniverse. We say that $B$ is a one-of-four subuniverse of \emph{absorbing type}, \emph{central type}, \emph{PC type}, or \emph{linear type}, respectively. A subuniverse of type $\mathcal T$ is called \emph{minimal} if it is a minimal nontrivial subuniverse of this type. Note that a minimal PC/linear subuniverse is a block of $\PCCon(\mathbf A)$/$\LinCon(\mathbf A)$. \subsection{pp-formula, subconstraint, coverings} Every variable $x$ appearing in the paper has its domain, which we denote by $D_{x}$. In the paper we usually identify a CSP instance with a set of constraints. For an instance $\Omega$ by $\Var(\Omega)$ we denote the set of all variables occurring in constraints of $\Omega$ (the set of all variables $\mathbf{X}$ is not important; all the properties of the instance depend only on the variables that actually occur in the instance). For an instance $\Omega$ and two sets of variables $x_{1},\ldots,x_{n}$ and $y_{1},\ldots,y_{n}$ by $\Omega_{x_{1},\ldots,x_{n}}^{y_{1},\ldots,y_{n}}$ we denote the instance obtained from $\Omega$ by replacing every variable $x_{i}$ by $y_{i}$. Sometimes we write an instance $\{C_{1},\ldots,C_{n}\}$ as a conjunctive formula $C_{1}\wedge\dots\wedge C_{n}$. We say that an instance is \emph{a tree-formula} if there is no path $z_{1}-C_{1}-z_{2}-\dots -z_{l-1}-C_{l-1}-z_{l}$ such that $l\geqslant 3$, $z_{1} = z_{l}$, and all the constraints $C_{1},\ldots,C_{l-1}$ are different.
An expression $\exists y_{1}\dots\exists y_{s}\; (C_{1}\wedge \dots\wedge C_{n})$ is called \emph{a positive primitive formula (pp-formula)}. To simplify the notation, we write $\Omega(x_{1},\ldots,x_{n})$ for the pp-formula $\exists y_1\dots \exists y_s \Omega$, where $\Omega$ is an instance (or a conjunction of constraints) and $y_1,\ldots,y_s$ are all variables occurring in $\Omega$ except for $x_{1},\dots,x_{n}$. Then, we say that a pp-formula $\Omega(x_{1},\ldots,x_{n})$ defines a relation $\rho$ if $\rho(x_{1},\ldots,x_{n}) = \exists y_{1}\dots\exists y_{s}\; \Omega$. Sometimes, if it is convenient, we write $\Omega(x_{1},\dots,x_{n})$ to mean the relation defined by this pp-formula. A pp-formula $\Omega(x_{1},\ldots,x_{n})$ is called a \emph{subconstraint of $\Theta$} if $\Omega\subseteq \Theta$, and $\Omega$ and $\Theta\setminus\Omega$ do not have common variables except for $x_{1},\ldots,x_{n}$. Note that all relations that can be defined by a pp-formula are preserved by the WNU (see \cite{geiger1968closed,bond1,bond2}). For a formula $\Omega$ by $\ExpShort(\Omega)$ (\emph{Coverings}) we denote the set of all formulas $\Omega'$ such that there exists a mapping $S:\Var(\Omega')\to\Var(\Omega)$ satisfying the following conditions: \begin{enumerate} \item the domain of any variable $x$ from $\Omega'$ is equal to the domain of $S(x)$ in $\Omega$; \item for every constraint $((x_{1},\ldots,x_{n});\rho)$ of $\Omega'$, $((S(x_{1}),\ldots,S(x_{n}));\rho)$ is a constraint of $\Omega$; \item if a variable $x$ appears in both $\Omega$ and $\Omega'$ then $S(x) = x$.
\end{enumerate} Similarly, by $\Expanded(\Omega)$ (\emph{Expanded Coverings}) we denote the set of all formulas $\Omega'$ such that there exists a mapping $S:\Var(\Omega')\to\Var(\Omega)$ satisfying the following conditions: \begin{enumerate} \item the domain of any variable $x$ from $\Omega'$ is equal to the domain of $S(x)$ in $\Omega$; \item for every constraint $((x_{1},\ldots,x_{n});\rho)$ of $\Omega'$ either the variables $S(x_{1}),\ldots,S(x_{n})$ are different and the constraint $((S(x_{1}),\ldots,S(x_{n}));\rho)$ is weaker or equivalent to some constraint of $\Omega$, or $S(x_{1}) = \dots = S(x_{n})$ and $\{(a,a,\ldots,a)\mid a\in D_{x_{1}}\}\subseteq\rho$; \item if a variable $x$ appears in both $\Omega$ and $\Omega'$ then $S(x) = x$. \end{enumerate} For a variable $x$ we say that $S(x)$ is \emph{the parent of x}. The following easy facts about coverings can be derived from the definition. \begin{enumerate} \item every time we replace some constraints by weaker constraints we get an expanded covering of the original instance; \item any solution of the original instance can be naturally expanded to a solution of a covering (expanded covering); \item suppose $\Omega$ is a covering (expanded covering) of a 1-consistent instance and $\Omega$ is a tree-formula, then the solution set of $\Omega$ is subdirect; \item the union (union of all constraints) of two coverings (expanded coverings) is also a covering (expanded covering); \item a covering (expanded covering) of a covering (expanded covering) is a covering (expanded covering). \end{enumerate} Another important property is formulated in the following lemma. \begin{lem}\label{ExpandedConsistencyLemma} Suppose $\Theta$ is a cycle-consistent irreducible CSP instance and $\Theta'\in\Expanded(\Theta)$. Then $\Theta'$ is cycle-consistent and irreducible. \end{lem} \begin{proof} Let us prove that $\Theta'$ is cycle-consistent. Consider a path in $\Theta'$ starting and ending with $z$. 
Since $\Theta'$ is an expanded covering, for every constraint of $\Theta'$ either there exists a corresponding constraint in $\Theta$, or this constraint is reflexive (contains all tuples $(a,a,\ldots,a)$). Thus, to transform the path in $\Theta'$ to a path in $\Theta$ it is sufficient to replace every variable $x$ in the path by $S(x)$ (from the definition of expanded coverings), remove all reflexive constraints, and replace the remaining constraints by the corresponding constraints from $\Theta$. Since $\Theta$ is cycle-consistent, the obtained path connects $a$ with $a$ for any $a\in D_{z}$. Since constraints in the path in $\Theta'$ are weaker or equivalent to constraints in the path in $\Theta$ and the relations we removed are reflexive, the path in $\Theta'$ also connects $a$ with $a$ for every $a\in D_{z}$. Let us show that $\Theta'$ is irreducible. Assume the contrary; then there exists an instance $\Omega'$ consisting of projections of constraints from $\Theta'$ that is not linked, not fragmented, and whose solution set is not subdirect. By $\Omega$ we denote the set of projections of constraints from $\Theta$ corresponding to the constraints of $\Omega'$ (we ignore reflexive constraints from $\Omega'$). More precisely, suppose a constraint $C''\in\Omega'$ is equal to $\proj_{\mathbf X}(C')$ for a constraint $C'\in\Theta'$ and a set of variables $\mathbf X$, and $C'$ is weaker or equivalent to a constraint $C\in\Theta$. Then we add the constraint $\proj_{S(\mathbf X)}(C)$ to $\Omega$. Let us show that $\Omega$ is not linked. Assume the contrary. For any path in $\Omega$ connecting elements $a$ and $b$ of $D_{x}$ we can build a path connecting $a$ and $b$ in $\Omega'$ in the following way. We replace every constraint of $\Omega$ by the corresponding constraint of $\Omega'$, and glue them with any path in $\Omega'$ starting and ending with the corresponding variables having the same parent.
Since $\Omega'$ is not fragmented, we can always do this. Since $\Omega$ is cycle-consistent, the obtained path connects $a$ and $b$ in $\Omega'$. Hence, if $\Omega$ were linked then $\Omega'$ would be linked as well, which contradicts the choice of $\Omega'$. Thus, $\Omega$ is not linked. Any solution of $\Omega$ can be naturally extended to a solution of $\Omega'$; since the solution set of $\Omega'$ is not subdirect, the solution set of $\Omega$ cannot be subdirect either. Since $\Omega'$ is not fragmented, $\Omega$ is also not fragmented. Thus, $\Omega$ is not linked, not fragmented, and its solution set is not subdirect, which contradicts the fact that $\Theta$ is irreducible. \end{proof} For an instance $\Theta$ and its variable $x$ by $\LinkedCon(\Theta,x)$ we denote the binary relation on the set $D_{x}$ defined as follows: $(a,b)\in \LinkedCon(\Theta,x)$ if there exists a path in $\Theta$ that connects $a$ and $b$. \begin{lem}\label{LinkedConIsCon} Suppose $\Theta$ is a cycle-consistent CSP instance, $x\in \Var(\Theta)$. Then there exists a path in $\Theta$ connecting all pairs $(a,b)\in \LinkedCon(\Theta,x)$ and $\LinkedCon(\Theta,x)$ is a congruence. \end{lem} \begin{proof} Since the instance is cycle-consistent, gluing all the paths starting and ending at $x$ we can build a path connecting all pairs $(a,b)\in \LinkedCon(\Theta,x)$. The set of all pairs $(a,b)$ connected by this path can be defined by a pp-formula, therefore it is an invariant relation, which is also reflexive (by cycle-consistency) and transitive (we can glue paths). \end{proof} \subsection{Critical, key relations, and parallelogram property} \label{DefinitionRectangularitySubsection} We say that a relation $\rho$ \emph{has the parallelogram property} if any permutation of its variables gives a relation $\rho'$ satisfying $$\forall \alpha_{1},\beta_{1},\alpha_2,\beta_2\colon (\alpha_{1}\beta_2,\beta_1\alpha_2,\beta_1\beta_2\in\rho' \Rightarrow \alpha_1\alpha_2\in\rho')$$ (here $\alpha_{1},\beta_{1}$ are tuples of the same length, $\alpha_{2},\beta_{2}$ are tuples of the same length, and juxtaposition denotes concatenation). Note that the parallelogram property plays an important role in universal algebra (see \cite{agnes} for more details).
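On finite domains the parallelogram property can be verified by exhaustive search. The following sketch (not part of the original text; all names are ours) checks the property for a relation given as a set of tuples: an affine relation over $\mathbb{Z}_2$ has the property, while the Boolean OR relation does not.

```python
from itertools import permutations, product

def has_parallelogram_property(rho, n):
    """Check the parallelogram property of an n-ary relation rho:
    for every permutation of the coordinates and every split of the
    permuted tuples into a prefix and a suffix, membership of
    a1b2, b1a2 and b1b2 in the relation must force membership of a1a2."""
    for perm in permutations(range(n)):
        rho_p = {tuple(t[i] for i in perm) for t in rho}
        for k in range(1, n):
            prefixes = {t[:k] for t in rho_p}
            suffixes = {t[k:] for t in rho_p}
            for a1, b1 in product(prefixes, repeat=2):
                for a2, b2 in product(suffixes, repeat=2):
                    if (a1 + b2 in rho_p and b1 + a2 in rho_p
                            and b1 + b2 in rho_p and a1 + a2 not in rho_p):
                        return False
    return True

# x + y + z = 0 (mod 2) is a coset of a subgroup, so the property holds.
affine = {t for t in product((0, 1), repeat=3) if sum(t) % 2 == 0}
# Boolean OR fails: 01, 10, 11 are in the relation but 00 is not.
boolean_or = {(0, 1), (1, 0), (1, 1)}
print(has_parallelogram_property(affine, 3))      # True
print(has_parallelogram_property(boolean_or, 2))  # False
```

Iterating $a_1,b_1$ only over prefixes of tuples of the relation (and $a_2,b_2$ over suffixes) loses no generality: for any other choice the premises of the implication fail vacuously.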
We say that \emph{the $i$-th variable of a relation $\rho$ is rectangular}, if for every $(a_{i},b_{i})\in\ConOne(\rho,i)$ and $(a_{1},\ldots,a_{n})\in\rho$ we have $(a_{1},\ldots,a_{i-1},b_{i},a_{i+1},\ldots,a_{n})\in\rho$. We say that a relation is \emph{rectangular} if all of its variables are rectangular. The following facts can be easily seen: if the $i$-th variable of a subdirect relation $\rho$ is rectangular then $\ConOne(\rho,i)$ is a congruence; if a relation has the parallelogram property then it is rectangular. A relation $\rho\subseteq A_{1}\times\dots\times A_{n}$ is called \emph{essential} if it cannot be represented as a conjunction of relations with smaller arities. It is easy to see that any relation $\rho$ can be represented as a conjunction of essential relations that are projections of $\rho$ on some sets of variables (see Lemma 4.2 in \cite{MinimalClones}). A relation $\rho\subseteq A_{1}\times\dots\times A_{n}$ is called \emph{critical} if it cannot be represented as an intersection of other subalgebras of $\mathbf A_{1}\times\dots\times \mathbf A_{n}$ and it has no dummy variables. This notion was introduced in \cite{agnes} but had appeared earlier in \cite{mvlsc,mybook} under the name maximal. For a critical relation $\rho$ the minimal relation $\rho'$ (a subalgebra of $\mathbf A_{1}\times\dots\times \mathbf A_{n}$) such that $\rho'\supsetneq\rho$ is called \emph{the cover of $\rho$}. Suppose $\rho\subseteq A_{1}\times\dots\times A_{h}$. A tuple $\Psi =(\psi_1,\psi_2,\ldots,\psi_h)$, where $\psi_i:A_{i}\to A_{i}$, is called a \emph{unary vector-function}. We say that $\Psi$ \emph{preserves} $\rho$ if $\Psi\left(\begin{smallmatrix} a_1\\ a_2\\ \vdots\\ a_h \end{smallmatrix}\right):= \left(\begin{smallmatrix} \psi_1(a_1)\\ \psi_2(a_2)\\ \vdots\\ \psi_h(a_h) \end{smallmatrix}\right)\in \rho$ for every $\left(\begin{smallmatrix} a_1\\ a_2\\ \vdots\\ a_h \end{smallmatrix}\right)\in \rho$.
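The preservation condition for unary vector-functions is likewise a finite check. A minimal sketch (the relation and the maps below are chosen by us purely for illustration): the vector-function that flips both coordinates preserves the relation $x + y \equiv 0 \pmod 2$, while flipping only one coordinate does not.

```python
from itertools import product

def preserves(psi, rho):
    """Check that the unary vector-function Psi = (psi_1, ..., psi_h)
    maps every tuple of the relation rho back into rho."""
    return all(tuple(f(a) for f, a in zip(psi, t)) in rho for t in rho)

# The binary relation x + y = 0 (mod 2) over {0, 1}.
rho = {(x, y) for x, y in product((0, 1), repeat=2) if (x + y) % 2 == 0}
neg = lambda a: 1 - a      # flips 0 and 1
ident = lambda a: a
print(preserves((neg, neg), rho))    # True: flipping both coordinates keeps the parity
print(preserves((neg, ident), rho))  # False: (0, 0) is mapped to (1, 0), not in rho
```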
We say that $\rho$ is \emph{a key relation} if there exists a tuple $\beta\in (A_{1}\times\dots\times A_{h})\setminus \rho$ such that for every $\alpha\in (A_{1}\times\dots\times A_{h})\setminus \rho$ there exists a vector-function $\Psi$ which preserves $\rho$ and gives $\Psi(\alpha) = \beta$. A tuple $\beta$ is called a \emph{key tuple} for $\rho$. The notion of a key relation was introduced in \cite{KeyRelations}, where such relations were characterized for all algebras having a WNU term operation. A constraint is called \emph{critical/essential/key} if the constraint relation is critical/essential/key. The notions of critical, crucial, essential, and key relations are related to each other; namely, we can observe: \begin{enumerate} \item if $C$ is a constraint in a CSP instance and $C$ is crucial in some $(D_{1},\dots,D_{n})$ then the constraint relation of $C$ is critical; \item every critical relation of arity greater than 1 is essential; \item every critical relation of arity greater than 1 is a key relation (see Lemma 2.4 in \cite{KeyRelations}). \end{enumerate} The notions of essential, critical, and key relations (see \cite{KeyRelations} for their comparison) have proved their efficiency in clone theory and universal algebra (see \cite{mvlsc,mybook,MinimalClones,VardiProblem,dm_post,agnes}). Instead of considering all relations we consider only relations with one of these properties, and this is still fully general because any relation can be represented as a conjunction of essential/key/critical relations. For instance, we can always assume that all constraint relations are critical. \subsection{Reductions} Suppose the domain set of an instance $\Theta$ is $D = (D_{1},\ldots,D_{n})$. A domain set $D' = (D_{1}',\ldots,D_{n}')$ is called \emph{a reduction of $\Theta$} if $D_{i}'$ is a subuniverse of $D_{i}$ for every $i$. Note that, to avoid unnecessary bold font, starting from this subsection we do not use it for domain sets.
Thus, every time we write $D$ without a subscript we mean a domain set or a reduction. Note that any reduction of $\Theta$ can be naturally extended to a covering (expanded covering) of $\Theta$, thus we assume that any reduction is automatically defined on any covering (expanded covering). A reduction $D' = (D_{1}',\ldots,D_{n}')$ is called \emph{1-consistent} if the instance obtained after reduction of every domain is 1-consistent. We say that $D'$ is \emph{an absorbing reduction}, if there exists a term operation $t$ such that $D_{i}'$ is a binary absorbing subuniverse of $D_{i}$ with the term operation $t$ for every $i$. We say that $D'$ is \emph{a central reduction}, if $D_{i}'$ is a center of $D_{i}$ for every $i$. We say that $D'$ is \emph{a PC/linear reduction}, if $D_{i}'$ is a PC/linear subuniverse of $D_{i}$ and $D_{i}$ does not have a nontrivial binary absorbing subuniverse or a nontrivial center for every $i$. Additionally, we say that $D'$ is \emph{a minimal central/PC/linear reduction} if $D_{i}'$ is a minimal center/PC/linear subuniverse of $D_{i}$ for every $i$. We say that $D'$ is \emph{a minimal absorbing reduction} for a term operation $t$ if $D_{i}'$ is a minimal absorbing subuniverse of $D_{i}$ with $t$ for every $i$. A reduction is called \emph{nonlinear} if it is an absorbing, central, or PC reduction. A reduction $D'$ is called \emph{a one-of-four reduction} if it is an absorbing, central, PC, or linear reduction such that $D'\neq D$. We usually denote reductions by $D^{(j)}$ for some $j$ (or by $D^{(\top)}$). In this case by $C^{(j)}$ we denote the constraint obtained after the reduction of the constraint $C$. Similarly, by $\Theta^{(j)}$ we denote the instance obtained after the reduction of every constraint of $\Theta$. For a relation $\rho$ by $\rho^{(j)}$ we denote the relation $\rho$ restricted to the corresponding domains of $D^{(j)}$.
Sometimes we write $(a_{1},\ldots,a_{n})\in D^{(j)}$ meaning that every $a_{i}$ belongs to the corresponding $D_{i}^{(j)}$. A \emph{strategy} for a CSP instance $\Theta$ with a domain set $D$ is a sequence of reductions $D^{(0)},\ldots,D^{(s)}$, where $D^{(j)} = (D_{1}^{(j)},\ldots,D_{n}^{(j)})$, such that $D^{(0)} = D$ and $D^{(j)}$ is a one-of-four 1-consistent reduction of $\Theta^{(j-1)}$ for every $j\geqslant 1$. A strategy is called \emph{minimal} if every reduction in the sequence is minimal. \subsection{Bridges} Suppose $\sigma_{1}$ and $\sigma_{2}$ are congruences on $D_{1}$ and $D_{2}$, respectively. A relation $\rho\subseteq D_{1}^{2}\times D_{2}^{2}$ is called \emph{a bridge} from $\sigma_{1}$ to $\sigma_{2}$ if the first two variables of $\rho$ are stable under $\sigma_{1}$, the last two variables of $\rho$ are stable under $\sigma_{2}$, $\proj_{1,2}(\rho) \supsetneq \sigma_{1}$, $\proj_{3,4}(\rho) \supsetneq \sigma_{2}$, and $(a_{1},a_2,a_{3},a_{4})\in \rho$ implies $$(a_1,a_2)\in \sigma_{1}\Leftrightarrow (a_3,a_4)\in \sigma_{2}.$$ An example of a bridge is the relation $\rho=\{(a_{1},a_{2},a_{3},a_{4})\in \mathbb Z_{4}^{4}\mid a_{1}-a_{2} = 2 a_{3} - 2 a_{4}\}$. One can check that $\rho$ is a bridge from the equality relation (the 0-congruence) to the $(mod\;2)$ equivalence relation. For example, $\proj_{1,2}(\rho)$ is the $(mod\;2)$ equivalence relation and $\proj_{3,4}(\rho)$ is the full relation. The notion of a bridge is strongly related to other notions in Universal Algebra and Tame Congruence Theory such as similarity and centralizers (see \cite{RossSlides} for the detailed comparison). For a bridge $\rho$ by $\widetilde{\rho}$ we denote the binary relation defined by $\widetilde{\rho}(x,y) = \rho(x,x,y,y)$. The following lemma shows how we can compose bridges.
\begin{lem}\label{BridgeComposition} Suppose $\sigma_{1}$, $\sigma_{2}$, $\sigma_{3}$ are irreducible congruences, $\rho_{1}$ is a bridge from $\sigma_{1}$ to $\sigma_{2}$, $\rho_{2}$ is a bridge from $\sigma_{2}$ to $\sigma_{3}$. Then the formula $$\rho(x_1,x_2,z_{1},z_{2}) = \exists y_{1}\exists y_{2}\; \rho_{1}(x_{1},x_{2},y_{1},y_{2})\wedge \rho_{2}(y_{1},y_{2},z_{1},z_{2})$$ defines a bridge from $\sigma_{1}$ to $\sigma_{3}$. Moreover, $\widetilde{\rho} = \widetilde{\rho_{1}}\circ\widetilde{\rho_{2}}$. \end{lem} \begin{proof} Stability of the first two variables under $\sigma_{1}$ and of the last two variables under $\sigma_{3}$ follows from the definition. Let us prove that $\proj_{1,2}(\rho)\supsetneq \sigma_{1}$ (the inclusion $\proj_{3,4}(\rho)\supsetneq \sigma_{3}$ can be proved in the same way). By the definition, for every $(a,a')\in\sigma_{1}$ there exists $(b,b')\in\sigma_{2}$ such that $(a,a',b,b')\in\rho_{1}$, and then there exists $(c,c')\in\sigma_{3}$ such that $(b,b',c,c')\in\rho_{2}$. Then $(a,a',c,c')\in\rho$, and therefore $\proj_{1,2}(\rho)\supseteq \sigma_{1}$. Since $\sigma_2$ is irreducible, $\proj_{3,4}(\rho_{1})\supseteq \sigma_{2}^{*}$ and $\proj_{1,2}(\rho_{2})\supseteq \sigma_{2}^{*}$. Choose $(b_{1},b_{2})\in \sigma_{2}^{*}\setminus\sigma_{2}$, then there exist $a_1,a_2, c_{1},c_{2}$ such that $(a_{1},a_{2},b_{1},b_{2})\in\rho_{1}$ and $(b_{1},b_{2},c_{1},c_{2})\in\rho_{2}$. Then $(a_1,a_2,c_{1},c_2)\in\rho$; since $\rho_{1}$ is a bridge and $(b_{1},b_{2})\notin\sigma_{2}$, we have $(a_{1},a_{2})\notin\sigma_{1}$, which means that $\proj_{1,2}(\rho)\supsetneq \sigma_{1}$. Suppose $(a_1,a_2,c_{1},c_{2})\in\rho$. If $(a_{1},a_{2})\in\sigma_{1}$ then, since $\rho_{1}$ is a bridge, the corresponding values of $y_{1}$ and $y_2$ are equivalent modulo $\sigma_{2}$. Since $\rho_{2}$ is a bridge we obtain that $c_{1}$ and $c_{2}$ are equivalent modulo $\sigma_{3}$. The converse implication is proved in the same way. The equation $\widetilde{\rho} = \widetilde{\rho_{1}}\circ\widetilde{\rho_{2}}$ follows directly from the definition of $\rho$.
\end{proof} A bridge $\rho\subseteq D^{4}$ is called \emph{reflexive} if $(a,a,a,a)\in \rho$ for every $a\in D$. We say that two congruences $\sigma_{1}$ and $\sigma_{2}$ on a set $D$ are \emph{adjacent} if there exists a reflexive bridge from $\sigma_{1}$ to $\sigma_{2}$. \begin{remark} Since we can always put $\rho(x_{1},x_{2},x_{3},x_{4}) = \sigma(x_{1},x_{3})\wedge \sigma (x_{2},x_{4})$, any proper congruence $\sigma$ is adjacent to itself. \end{remark} A reflexive bridge $\rho$ from an irreducible congruence $\sigma_{1}$ to an irreducible congruence $\sigma_{2}$ is called \emph{optimal} if there does not exist a reflexive bridge $\rho'$ from $\sigma_{1}$ to $\sigma_{2}$ such that $\widetilde{\rho'} \supsetneq\widetilde{\rho}$. Suppose $\rho$ is a reflexive bridge from $\sigma_{1}$ to $\sigma_{2}$. Then we can build a new bridge $$\rho'(x_1,x_2,y_1,y_2) =\exists x_1'\exists x_2'\exists y_1'\exists y_2' \left[\rho(x_1,x_2,y_1',y_2')\wedge \rho(x_1',x_2',y_1',y_2')\wedge \rho(x_1',x_2',y_1,y_2)\right] $$ from $\sigma_1$ to $\sigma_{2}$ such that $\widetilde{\rho'} = \widetilde{\rho}\circ \widetilde{\rho}^{-1}\circ\widetilde{\rho}$. Note that because of the reflexivity, $\widetilde \rho$ contains the equality relation. Thus, if $\rho$ is optimal, then $\widetilde{\rho}$ is a congruence. For an irreducible congruence $\sigma$ by $\Opt(\sigma)$ we denote the congruence $\widetilde{\rho}$ for an optimal bridge $\rho$ from $\sigma$ to $\sigma$. Since we can compose two reflexive bridges, $\Opt(\sigma)$ is unique and therefore well-defined. For a set of irreducible congruences $\mathfrak C$ put $\Opt(\mathfrak C) = \{\Opt(\sigma)\mid\sigma\in \mathfrak C\}$. \begin{lem}\label{OptimalForAdjacent} Suppose $\sigma_{1}$ and $\sigma_{2}$ are irreducible adjacent congruences. Then $\Opt(\sigma_{1}) = \Opt(\sigma_{2})$.
\end{lem} \begin{proof} Let $\rho_{1}$ be an optimal bridge from $\sigma_{1}$ to $\sigma_{1}$, $\rho_{2}$ be an optimal bridge from $\sigma_{2}$ to $\sigma_{2}$, and $\rho$ be a reflexive bridge from $\sigma_{1}$ to $\sigma_{2}$. Assume that $\Opt(\sigma_{2})\not\subseteq\Opt(\sigma_{1})$, that is $\widetilde\rho_{2}\not\subseteq\widetilde\rho_{1}$. Using Lemma~\ref{BridgeComposition}, we compose the bridges $\rho_{1}$, $\rho$, $\rho_{2}$, and $\rho^{-1}$ (in this order), where $\rho^{-1}(y_{1},y_{2},x_{1},x_{2}):=\rho(x_{1},x_{2},y_{1},y_{2})$ is the bridge $\rho$ taken in the reverse direction, to obtain a reflexive bridge $\rho_{1}'$ from $\sigma_{1}$ to $\sigma_{1}$. Since $\widetilde\rho_{1}'\supseteq \widetilde\rho_{1}\cup\widetilde\rho_{2}$, we get a contradiction with the fact that $\rho_{1}$ is optimal. Hence $\Opt(\sigma_{2})\subseteq\Opt(\sigma_{1})$, and the inverse inclusion is proved in the same way. \end{proof} We say that two rectangular constraints $C_{1}$ and $C_{2}$ are \emph{adjacent} in a common variable $x$ if $\ConOne(C_{1},x)$ and $\ConOne(C_{2},x)$ are adjacent. A formula is called \emph{connected} if every constraint in the formula is critical and rectangular, and the graph, whose vertices are constraints and whose edges connect adjacent constraints, is connected. Note that this connectedness is not related to the paths from one variable to another connecting two elements. Recall that if for every $a,b$ there exists a path that connects $a$ and $b$, then the instance is called linked (see Section \ref{CSPInstancesDef}). It can be shown (see Corollary~\ref{PathInConnectedComponent}) that every two constraints with a common variable in a connected instance are adjacent. \section{Absorption, Center, PC Congruence, and Linear Congruence}\label{AbsCenterPCLinear} \subsection{Binary Absorption} \begin{lem}\label{AbsImplies}\cite{DecidingAbsorption} Suppose $\rho$ is defined by a pp-formula $\Omega(x_{1},\ldots,x_{n})$ and $\Omega'$ is obtained from $\Omega$ by replacement of some constraint relations $\sigma_{1},\ldots,\sigma_{s}$ by constraint relations $\sigma_{1}',\ldots,\sigma_{s}'$ such that $\sigma_{i}'$ absorbs $\sigma_{i}$ with a term operation $t$ for every $i$.
Then the relation defined by $\Omega'(x_{1},\ldots,x_{n})$ absorbs $\rho$ with the term operation $t$. \end{lem} \begin{conslem}\label{AbsorptionQuotient} Suppose $\theta$ is a congruence of $A$. \begin{enumerate} \item If $B$ is an absorbing subuniverse of $A$, then $\{b/\theta\mid b\in B\}$ is an absorbing subuniverse of $A/\theta$ with the same term. \item If $A$ has no nontrivial (binary) absorbing subuniverse, then neither does $A/\theta$. \end{enumerate} \end{conslem} \begin{conslem}\label{AbsImpliesCons} Suppose $\rho \subseteq A_{1}\times\dots\times A_{n}$ is a relation such that $\proj_1 (\rho) = A_{1}$ and $C = \proj_{1}((C_{1}\times\dots \times C_{n})\cap\rho)$, where $C_{i}$ is an absorbing subuniverse in $A_{i}$ with a term $t$ for every $i$. Then $C$ is an absorbing subuniverse in $A_{1}$ with the term $t$. \end{conslem} \begin{proof} It is not hard to see that the sets $C$ and $A_{1}$ can be defined by the following pp-formulas $$(x_1\in C) = \exists x_{2}\dots\exists x_{n}\; \left[(x_{1}\in C_{1})\wedge \dots\wedge (x_{n}\in C_{n})\wedge \rho(x_{1},\ldots,x_{n})\right],$$ $$(x_1\in A_1)= \exists x_{2}\dots\exists x_{n}\; \left[(x_{1}\in A_{1})\wedge \dots\wedge (x_{n}\in A_{n})\wedge \rho(x_{1},\ldots,x_{n})\right].$$ It remains to apply Lemma~\ref{AbsImplies}. \end{proof} \begin{lem}\label{AbsorbingEquality} Suppose $\kappa_{A}\subseteq A\times A$ is the equality relation, $\sigma\supseteq \kappa_{A}$, and $\omega$ is a nontrivial binary absorbing subuniverse in $\sigma$. Then $\omega\cap\kappa_{A} \neq \varnothing$. \end{lem} \begin{proof} We prove the lemma by induction on the size of $A$. Suppose $\omega$ absorbs $\sigma$ with a binary absorbing term operation $f$. Assume that there exists a nontrivial binary absorbing subuniverse $B\subsetneq A$ with the absorbing operation $f$. For any $(b_{1},b_{2})\in\omega$ and $b\in B$ we have $(f(b_{1},b),f(b_{2},b))\in \omega\cap (B\times B)$. 
Then by Lemma~\ref{AbsImplies}, $\omega\cap (B\times B)$ is a nontrivial absorbing subuniverse in $\sigma\cap (B\times B)$, and we can restrict $\sigma$ and $\omega$ to $B$ and apply the inductive assumption. Thus, we assume that there does not exist a nontrivial binary absorbing subuniverse $B\subsetneq A$ with the absorbing operation $f$. By Lemma~\ref{AbsImplies}, $\proj_{1}(\omega)$ and $\proj_{2}(\omega)$ binary absorb $A$, then $\proj_{1}(\omega)=\proj_{2}(\omega)=A$. Now, the statement of the lemma could be derived from \cite[Theorem 6]{barto2012near} but we will finish the argument because it is simple. For every $b\in A$ we consider $A_{b}= \{a\mid (a,b)\in \sigma\}$ and $C_{b}= \{a\mid (a,b)\in \omega\}$. Since $\proj_{2}(\omega)=A$, $C_{b}\neq\varnothing$ for every $b$. By Lemma~\ref{AbsImplies} $C_{b}$ is a binary absorbing subuniverse in $A_{b}$ with $f$. Therefore $A_{b}\neq A$ or $A_{b}=C_{b} = A$. In the latter case we have $(b,b)\in\omega$, which completes this case. Assume that $A_{b}\neq A$ for some $b$. Since $\sigma\supseteq \kappa_{A}$, we have $b\in A_{b}$ and $(A_{b}\times A_{b})\cap \omega \supseteq (C_{b}\times\{b\})\cap \omega \neq \varnothing$. Then we restrict $\sigma$ and $\omega$ to $A_{b}$ and apply the inductive assumption. \end{proof} \begin{lem}\label{GenBinAbToBinAb} Suppose $\rho$ is a nontrivial absorbing subuniverse of $A_{1}\times \dots\times A_{n}$. Then for some $i$ there exists a nontrivial absorbing subuniverse $B_{i}$ in $A_{i}$ with the same term. \end{lem} \begin{proof} We prove this lemma by induction on the arity of $\rho$. If the projection of $\rho$ onto the first coordinate is not $A_{1}$ then by Lemma~\ref{AbsImplies} this projection is an absorbing subuniverse with the same term. 
Otherwise, we choose any element $a\in A_{1}$ such that $\rho$ does not contain all tuples starting with $a$, and consider $\rho' = \{(a_2,\ldots,a_{n})\mid (a,a_2,\ldots,a_n)\in \rho\}$, which, by Lemma~\ref{AbsImplies}, is a nontrivial absorbing subuniverse in $A_{2}\times\dots \times A_{n}$ with the same term. It remains to apply the inductive assumption. \end{proof} A relation $\rho\subseteq A^{n}$ is called \emph{$C$-essential} if $\rho\cap(C^{i-1}\times A\times C^{n-i})\neq \varnothing$ for every $i$ but $\rho\cap C^{n}=\varnothing$. A relation $\rho\subseteq A_{1}\times\dots\times A_{n}$ is called \emph{$(C_{1},\dots,C_{n})$-essential} if $\rho\cap (C_{1}\times\dots\times C_{i-1}\times A_{i} \times C_{i+1} \times\dots\times C_{n})\neq\varnothing$ for every $i$ but $\rho\cap (C_{1}\times\dots\times C_{n})=\varnothing$. \begin{lem}\label{NoEssential}\cite{DecidingAbsorption} Suppose $C$ is a subuniverse of $A$. Then $C$ absorbs $A$ with an operation of arity $n$ if and only if there does not exist a $C$-essential relation $\rho\subseteq A^{n}$. \end{lem} \begin{lem}\label{AbsLessThanThree} Suppose $D^{(1)}$ is an absorbing reduction of a CSP instance $\Theta$ and a relation $\rho\subseteq D_{i_1}\times\dots\times D_{i_n}$ is subdirect, where $D_{i_1}, \dots,D_{i_n}$ are domains of variables from $\Theta$. Then $\rho^{(1)}$ is not empty. \end{lem} \begin{proof} It is sufficient to apply the binary absorbing term operation $t$ to all the tuples of $\rho$ using term $t(x_1,t(x_2,t(x_3,\dots,t(x_{s-1},x_{s}))))$, where $s=|\rho|$. The resulting tuple will be from $\rho^{(1)}$, which means that $\rho^{(1)}$ is not empty. 
\end{proof} \subsection{Center} \begin{lem}\label{CenterImplies} Suppose $\rho$ is defined by a pp-formula $\Omega(x_{1},\ldots,x_{n})$ and $\Omega'$ is obtained from $\Omega$ by replacement of some constraint relations $\sigma_{1},\ldots,\sigma_{s}$ by constraint relations $\sigma_{1}',\ldots,\sigma_{s}'$ such that $\sigma_{i}'$ is a center of $\sigma_{i}$ for every $i$. Then the relation defined by $\Omega'(x_{1},\ldots,x_{n})$ is a center of~$\rho$. \end{lem} \begin{proof} Suppose $\Omega'(x_{1},\ldots,x_{n})$ defines a relation $\rho'$. Suppose $\mathbf B_{i}$ and $R_{i}$ are the corresponding algebra and binary relation such that $\sigma_{i}' = \{c\mid \forall b\in B_{i}\colon (c,b)\in R_{i}\}$. Let $|B_{i}| = n_{i}$ for every $i$. Let $\Upsilon$ be obtained from $\Omega$ by replacement of every constraint $\sigma_{i}(y_{1},\ldots,y_{t})$ by $$R_{i}((y_{1},\ldots,y_{t}),z_{i,1})\wedge \dots\wedge R_{i}((y_{1},\ldots,y_{t}),z_{i,n_{i}}).$$ Suppose $\Upsilon((x_1,\ldots,x_{n}),(z_{1,1},\dots,z_{s,n_{s}}))$ defines a relation $R$. It is not hard to see that $\rho' = \{c\mid \forall b\in (B_{1}^{n_{1}}\times\dots\times B_{s}^{n_{s}})\colon (c,b)\in R\}$. By Lemma~\ref{GenBinAbToBinAb}, there is no nontrivial binary absorbing subuniverse on $B_{1}^{n_{1}}\times\dots\times B_{s}^{n_{s}}$. This proves that $\rho'$ is a center of $\rho$. \end{proof} \begin{conslem}\label{CenterQuotient} Suppose $\theta$ is a congruence of $A$. \begin{enumerate} \item If $B$ is a center of $A$, then $\{b/\theta\mid b\in B\}$ is a center of $A/\theta$. \item If $A$ has no nontrivial center, then neither does $A/\theta$. \end{enumerate} \end{conslem} \begin{conslem}\label{CenterImpliesCons} Suppose $\rho \subseteq A_{1}\times\dots\times A_{n}$ is a relation such that $\proj_1 (\rho) = A_{1}$ and $C = \proj_{1}((C_{1}\times\dots \times C_{n})\cap\rho)$, where $C_{i}$ is a center in $A_{i}$ for every $i$. Then $C$ is a center in $A_{1}$.
\end{conslem} \begin{conslem}\label{CenterProduct} Suppose $C_{i}$ is a center of $D_{i}$ for every $i$. Then $C_{1}\times\dots\times C_{n}$ is a center of $D_{1}\times\dots\times D_{n}$. \end{conslem} \begin{conslem}\label{CenterIntersection} Suppose $C_{1}$ and $C_{2}$ are centers of $D$. Then $C_{1}\cap C_{2}$ is a center of $D$. \end{conslem} \begin{lem}\label{GenCenterToCenter} Suppose $\rho$ is a nontrivial center of $A_{1}\times \dots\times A_{n}$. Then for some $i$ there exists a nontrivial center $C_{i}$ of $A_{i}$. \end{lem} \begin{proof} We prove the lemma by induction on the arity of $\rho$. If the projection of $\rho$ onto the first coordinate is not $A_{1}$ then by Lemma~\ref{CenterImplies} this projection is a center. Otherwise, we choose any element $a\in A_{1}$ such that $\rho$ does not contain all tuples starting with $a$. Then we consider $\rho' = \{(a_2,\ldots,a_{n})\mid (a,a_2,\ldots,a_n)\in \rho\}$, which, by Lemma~\ref{CenterImplies}, is a nontrivial center of $A_{2}\times\dots \times A_{n}$. It remains to apply the inductive assumption. \end{proof} In the proof of the following two lemmas we assume that a center $C$ is defined by $C = \{a\in A\mid \forall b\in B\colon (a,b)\in R\}$ for a subalgebra $R$ of $\mathbf A\times \mathbf B$. For an element $a\in A$ we put $a^{+}=\{b\mid (a,b)\in R\}$. Also, we introduce a quasi-order on elements of $A$. We say that $y_1\leqslant y_2$ if $y_1^{+}\subseteq y_{2}^{+}$, and $y_1\sim y_2$ if $y_1^{+}= y_{2}^{+}$. Note that if $b_{1},b_{2},\ldots,b_{m}\geqslant c$, then $w(b_{1}^{+},\dots,b_{m}^{+})\supseteq w(c^{+},\dots,c^{+})\supseteq c^{+}$, and therefore $w(b_1,\ldots,b_m)\geqslant c$. \begin{lem}\label{wnuofcentralelements} Suppose $(c_{1},\ldots,c_{m})\in A^{m}$, $c_{i}\in C$ for every $i\neq j$, and $c_{j}\notin C$. Then $w(c_1,\ldots,c_{m})>c_{j}$.
\end{lem} \begin{proof} Assume the contrary; then $w(c_1,\ldots,c_{m})\sim c_{j}$ and $w(\underbrace{B,\ldots,B}_{j-1},c_{j}^{+} ,\underbrace{B,\ldots,B}_{m-j}) \subseteq c_{j}^{+}$. This is enough to imply that $c_{j}^{+}$ is a binary absorbing subuniverse of $B$ with the term $x\circ y = w(x,x,\ldots,x,y)$. In fact, if $b_{1}\in B$ and $b_2 \in c_{j}^{+}$, then we can write $b_1 \circ b_2 = w(b_{1},\ldots,b_{1},b_{2},b_{1},\ldots,b_{1})$ with $b_2$ in the $j$-th spot; if $b_{1}\in c_{j}^{+}$ and $b_2 \in B$, then we can write $b_1 \circ b_2 = w(b_{1},\ldots,b_{1},b_{2},b_{1},\ldots,b_{1})$ with one of the $b_1$'s in the $j$-th spot. In both cases we obtain $b_{1}\circ b_{2}\in c_{j}^{+}$. Contradiction. \end{proof} \begin{lem}\label{AbsorptionFromSequence} Suppose $w$ is a special WNU of arity $m$, $C$ is a nontrivial center in $A$, $\delta\subseteq A^{s}$ is $C$-essential. Then $s<m^{|A|}$. \end{lem} \begin{proof} Choose $\alpha_1,\dots,\alpha_{s}\in\delta$ such that $\alpha_{i}\in C^{i-1}\times A \times C^{s-i}$ for every $i$. We start with the matrix $M_1$ whose columns are tuples $\alpha_{1},\ldots,\alpha_{s}$. Then we build a matrix $M_{2}$ whose columns are tuples $w(\alpha_{1},\ldots,\alpha_{m})$, $w(\alpha_{m+1},\ldots,\alpha_{2m})$, $w(\alpha_{2m+1},\ldots,\alpha_{3m}),\ldots.$ Then we apply the WNU $w$ to the corresponding columns of the previous matrix to define a new matrix $M_{3}$. We continue this way until we get a matrix with fewer than $m$ columns. Note that each next matrix has $m$ times fewer columns than the previous one. It is not hard to see that every row of every matrix has at most one element that is not from the center. Moreover, by Lemma~\ref{wnuofcentralelements}, the noncentral element in the $i$-th row of the $(j+1)$-th matrix is greater than the noncentral element in the $i$-th row of the $j$-th matrix. This means that the $|A|$-th matrix, if it exists, has only central elements, which contradicts our assumptions.
Hence, it does not exist and $s< m^{|A|}$. \end{proof} Combining this result with Lemma~\ref{NoEssential}, we obtain the following corollary. \begin{conslem}\label{centerImpliesAbsorption} Suppose $C$ is a center of $A$. Then $C$ is an absorbing subuniverse of~$A$. \end{conslem} The following lemma is a stronger version of a lemma originally suggested by Marcin Kozik. \begin{lem}\label{IncreaseArity} Suppose $C_{1}\subseteq A_{1}$ and $C_{2}\subseteq A_{2}$ are centers, $B$ is a subuniverse of $D$, and a relation $\rho\subseteq A_{1}\times D^{l}\times A_{2}$ is $(C_{1},B,\dots,B,C_{2})$-essential. Then there exists a relation $\rho'\subseteq A_{1}\times D^{2l}\times A_{1}$ that is $(C_{1},B,\dots,B,C_{1})$-essential. \end{lem} \begin{proof} Assume that $\rho$ is a minimal relation (with respect to inclusion) that is $(C_{1},B,\dots,B,C_{2})$-essential. Put $E = \proj_{l+2}(\rho\cap (C_{1}\times B^{l}\times A_{2}))$. Since $\rho$ is minimal, for any $b\in E$ the algebra generated by $\{b\}\cup C_{2}$ contains $\proj_{l+2}(\rho)$ (otherwise we would restrict the $(l+2)$-th variable of $\rho$ to this algebra). Fix $b\in E$. Let $\sigma$ be the subalgebra of $A_{2}\times A_{2}$ generated by $\{b\}\times C_{2}\cup C_{2} \times C_{2}\cup C_{2}\times \{b\}$. Since our algebras are idempotent, for any $c\in \proj_{l+2}(\rho)$ we have $\{c\}\times C_{2}\subseteq \sigma$. Put $$\rho'(x,y_1,\ldots,y_l,y_{1}',\ldots,y_{l}',x') = \exists z\exists z' \;\rho(x,y_1,\ldots,y_{l},z)\wedge \rho(x',y_{1}',\ldots,y_{l}',z')\wedge\sigma(z,z').$$ Let us show that $\rho'$ is $(C_{1},B,\dots,B,C_{1})$-essential. Since $\rho$ is $(C_{1},B,\dots,B,C_{2})$-essential, for any $i\in\{1,\ldots,l+1\}$ there exists a tuple $(a_{1},\ldots,a_{l+2})$ such that only its $i$-th element is not from the corresponding set of $(C_{1},B,\dots,B,C_{2})$. Since $b\in E$, there exist $c_{1},\ldots,c_{l+1}$ such that $(c_{1},\ldots,c_{l+1},b)\in \rho\cap (C_{1}\times B^{l}\times A_{2})$.
Then $(a_{1},\dots,a_{l+1}, c_{2},\ldots,c_{l+1},c_{1})\in \rho'$ (it is sufficient to put $z= a_{l+2}$ and $z' = b$). Thus, for any $i\in\{1,\ldots,l+1\}$ we build a tuple from $\rho'$ such that only its $i$-th element is not from the corresponding set of $(C_{1},B,\dots,B,C_{1})$. In the same way we can build such a tuple for each $i\in\{l+2,\ldots,2l+2\}$. To prove that $\rho'$ is $(C_{1},B,\dots,B,C_{1})$-essential it remains to show that $(C_{1}\times B^{2l}\times C_{1})\cap \rho' =\varnothing$. Assume the contrary, and let a tuple from the intersection be obtained by sending $z$ to $d$ and $z'$ to $d'$. Clearly, $d,d'\in E$ and $\{e\in A_{2}\mid (e,d')\in \sigma\}\supseteq \{d\}\cup C_{2}$, therefore $\{e\in A_{2}\mid (e,d')\in\sigma\} \supseteq \proj_{l+2}(\rho)$. Hence, $\{e\in A_{2}\mid (b,e)\in\sigma\}\supseteq \{d'\}\cup C_{2}$ and $\{e\in A_{2}\mid (b,e)\in\sigma\} \supseteq \proj_{l+2}(\rho)$. Thus, $(b,b)\in\sigma$ and there exists an $n$-ary term $t$ such that $$t(b,b,\ldots,b,c_{1},\ldots,c_{i}) = b,\;\;\; t(c_{1}',\ldots,c_{j}',b,b,\ldots,b) = b,$$ where $i+j\geqslant n$ and $c_{1},\ldots,c_{i},c_{1}',\ldots,c_{j}'\in C_{2}$. Suppose $R\subseteq A_{2}\times G$ is a binary relation from the definition of the center $C_{2}$, $b^{+} = \{a\mid (b,a)\in R\}$. Since $t$ preserves $R$, we have $$t(b^{+},b^{+},\ldots,b^{+},\underbrace{G,\ldots,G}_{i}) \subseteq b^{+},\;\;\; t(\underbrace{G,\ldots,G}_{j},b^{+},b^{+},\ldots,b^{+}) \subseteq b^{+},$$ and therefore $b^{+}$ absorbs $G$ with the binary term $t(\underbrace{x,\ldots,x}_{j},y,\ldots,y)$. This contradiction completes the proof. \end{proof} \begin{conslem}\label{AlmostEssTuple} Suppose $C_{1}\subseteq A_{1}$ and $C_{2}\subseteq A_{2}$ are centers and $B\subseteq D$ is an absorbing subuniverse. Then there does not exist a $(C_{1},B,C_{2})$-essential relation $\rho\subseteq A_{1}\times D\times A_{2}$. \end{conslem} \begin{proof} Assume that such a relation $\rho$ exists.
Iteratively applying Lemma~\ref{IncreaseArity} to $\rho$ we can obtain a $(C_{1},B,\dots,B,C_{1})$-essential relation $\rho_{l}\subseteq A_{1}\times D^{l}\times A_{1}$ for $l= 2,4,8,\dots$. If we restrict the first and the last variables of $\rho_{l}$ to $C_{1}$ and consider the projection onto the remaining variables we get a $B$-essential relation of arity $l$. Since we can make $l$ as large as we need, we get a contradiction with Lemma~\ref{NoEssential} and the fact that $B$ is an absorbing subuniverse. \end{proof} \begin{conslem}\label{ternaryAbsorption} Suppose $C$ is a center of $A$. Then $C$ is a ternary absorbing subuniverse of~$A$. \end{conslem} \begin{proof} Assume that $C$ is not a ternary absorbing subuniverse. Then, by Lemma~\ref{NoEssential}, there exists a $C$-essential relation of arity 3. By Corollary~\ref{centerImpliesAbsorption}, $C$ is an absorbing subuniverse of $A$, then by Corollary~\ref{AlmostEssTuple} such a relation cannot exist. \end{proof} \begin{conslem}\label{CenterLessThanThree} Suppose $C_{i}$ is a center of $A_{i}$ for $i\in\{1,2,\dots,k\}$ and $k\geqslant 3$. Then there does not exist a $(C_{1},\dots,C_{k})$-essential relation $\rho\subseteq A_{1}\times \dots \times A_{k}$. \end{conslem} \begin{proof} If such a relation $\rho$ exists then restricting all but the first three variables of $\rho$ to the corresponding centers and projecting the result onto the first three variables we obtain a $(C_{1},C_{2},C_{3})$-essential relation, which cannot exist by Corollary~\ref{AlmostEssTuple}. \end{proof} \subsection{PC Subuniverse}\label{PCSubsection} \begin{lem}\label{ReflexivePCRelations} Suppose $A$ is a PC algebra and $\rho\subseteq A^{n}$ is a relation containing all the constant tuples $(a,\dots,a)$.
Then $\rho$ can be represented as a conjunction of binary relations of the form $x_i = x_j$.\end{lem} \begin{proof} All constant operations preserve $\rho$ and, since $A$ is polynomially complete, the algebra $A$ together with the constant operations generates all operations on the set $A$. Then $\rho$ is preserved by all operations on $A$, and therefore, $\rho$ is diagonal (see Theorem 2.9.3 from \cite{lau}) and it can be represented as a conjunction of binary relations of the form $x_i = x_j$. \end{proof} \begin{lem}\label{FindCenter} Suppose $\rho\subseteq A\times B$ is a subdirect relation and $A$ is a PC algebra. Then either for every $b\in B$ there exists a unique $a\in A$ such that $(a,b)\in \rho$, or there exists $b\in B$ such that $(a,b)\in \rho$ for every $a\in A$. \end{lem} \begin{proof} Put $\sigma_{l}(x_1,x_2,\ldots,x_{l}) = \exists y \;\rho(x_{1},y) \wedge\dots\wedge \rho(x_{l},y).$ It is not hard to see that $\sigma_{l}$ contains all constant tuples. Therefore, Lemma~\ref{ReflexivePCRelations} implies that $\sigma_{2}$ is either full or the equality relation. If $\sigma_{2}$ is the equality relation, then for every $b\in B$ there exists a unique $a\in A$ such that $(a,b)\in \rho$. Suppose $\sigma_2$ is full. Then we consider the minimal $l$, if it exists, such that $\sigma_{l}$ is not full. Since $\sigma_{l-1}$ is full, the relation $\sigma_{l}$ contains all tuples in which some two elements coincide. Then Lemma~\ref{ReflexivePCRelations} implies that $\sigma_{l}$ is a full relation, which contradicts the choice of $l$. Hence $\sigma_{l}$ is a full relation for every $l$. Substituting $l = |A|$ and $\{x_1,\dots,x_{l}\} = A$ in the definition of $\sigma_{l}$ we obtain that there exists $b$ such that $(a,b)\in \rho$ for every $a\in A$. 
\end{proof} \begin{lem}\label{PCRelationsLem} Suppose $\rho\subseteq A_1\times\dots \times A_{n}$ is a subdirect relation, $A_{i}$ is a PC algebra for every $i\in\{2,\ldots,n\}$, and there is no nontrivial binary absorbing subuniverse or nontrivial center on $A_{i}$ for every $i\in\{1,\ldots,n\}$. Then $\rho$ can be represented as a conjunction of binary relations $\delta_{1},\ldots,\delta_{k}$ such that $\ConOne(\delta_{l},j)$ is the equality relation whenever the domain of the $j$-th variable of $\delta_{l}$ is a PC algebra. \end{lem} This lemma says that the relation $\rho$ can be represented by constraints from the first coordinate to an $i$-th coordinate such that the $i$-th coordinate is uniquely determined by the first (also we can define the corresponding PC congruence on the first coordinate using this relation) and by bijective binary constraints between pairs of coordinates other than the first. Also, it says that in a subdirect product of PC algebras without a nontrivial binary absorbing subuniverse or center (even if $A_{1}$ is a PC algebra) we can choose some essential coordinates which can take any value, while each of the remaining coordinates is uniquely determined by exactly one of them (in a bijective way). \begin{proof} We proceed by induction on the arity of $\rho$. If $\rho$ is binary, Lemma~\ref{FindCenter} implies that there exists a nontrivial binary absorbing subuniverse on $A_{2}$, or there exists a nontrivial center on $A_{1}$ witnessed by $\rho$, or the second coordinate of $\rho$ is uniquely determined by the first, or $\rho$ is full. The first two conditions contradict our assumptions; the last two are exactly what we need. Assume that $\rho$ is not essential; then it can be represented as a conjunction of essential relations satisfying the same properties. By the inductive assumption, each of them can be represented as a conjunction of binary relations. It remains to join these binary relations to complete the proof for this case. 
Assume that $\rho$ is essential. The projection of $\rho$ onto any proper set of variables gives a relation of a smaller arity satisfying the same properties. By the inductive assumption, the relation of a smaller arity can be represented as a conjunction of binary relations $\delta_{1},\ldots,\delta_{k}$ such that $\ConOne(\delta_{l},j)$ is the equality relation whenever the domain of the $j$-th variable of $\delta_{l}$ is a PC algebra. In each relation $\delta_{i}$ one variable (let it be the $u$-th variable of $\rho$) is uniquely determined by another, and therefore the relation $\rho$ can be represented as a conjunction of $\delta_{i}$ and the projection of $\rho$ onto all variables but $u$-th, which cannot happen with an essential relation. Therefore, each projection of $\rho$ onto any proper set of variables is a full relation. Let us consider the relation $\rho\subseteq (A_{1}\times\dots\times A_{n-1})\times A_{n}$ as a binary relation. By Lemma~\ref{FindCenter} we have one of the following two situations. Case 1: there exist $b_{1},\ldots,b_{n-1}$ such that $(b_{1},\ldots,b_{n-1},a)\in \rho$ for every $a\in A_{n}$. We consider the maximal $s$ such that $\rho(b_{1},\ldots,b_{s},x_{s+1},\ldots,x_{n})$ is not a full relation. It is easy to see that $s\leqslant n-2$ and $s$ exists. Let $R(x_{s+1},\ldots,x_{n}) =\rho(b_{1},\ldots,b_{s},x_{s+1},\ldots,x_{n})$. Since the projection of $\rho$ onto any proper subset of variables is full, $R$ is a subdirect relation. By Lemma~\ref{GenBinAbToBinAb}, there is no nontrivial binary absorbing subuniverse on $A_{s+2}\times\dots\times A_{n}$, then we get a nontrivial center $C$ on $A_{s+1}$ defined by $C = \{a_{s+1}\in A_{s+1}\mid \forall a_{s+2}\dots\forall a_{n} \colon (a_{s+1},a_{s+2},\ldots,a_{n})\in R\}$ and witnessed by $R$. Case 2: for every $a_{1},\ldots,a_{n-1}$ there exists a unique $b$ such that $(a_{1},\ldots,a_{n-1},b)\in \rho$. 
We can show in the same way that for any $(a_{1},a_{3},\ldots,a_{n})$ there exists a unique $b$ such that $(a_{1},b,a_{3},\ldots,a_{n})\in\rho$. Let us consider the relation $\zeta$ defined by \begin{align*}\zeta(z_{1},z_{2},z_{3},z_{4}) = \exists x_{1} \exists x_{2}\dots\exists x_{n-1} \exists x_{1}' \exists x_{2}'\; &\rho(x_{1},x_{2},x_{3},\ldots,x_{n-1},z_{1})\wedge\\ \rho(x_{1},x_{2}',x_{3},\ldots,x_{n-1},z_{2})\wedge &\rho(x_{1}',x_{2},x_{3},\ldots,x_{n-1},z_{3})\wedge \rho(x_{1}',x_{2}',x_{3},\ldots,x_{n-1},z_{4}). \end{align*} Since any projection of $\rho$ onto any proper subset of variables is a full relation, any projection of $\zeta$ onto 3 variables is a full relation. Since $\rho$ is subdirect, $\zeta$ contains all constant tuples. Then Lemma~\ref{ReflexivePCRelations} implies that $\zeta$ is a full relation. Suppose $a\neq b$ and $(a,a,a,b)\in\zeta$ witnessed by $x_{1},\ldots,x_{n-1},x_{1}',x_{2}'$. Since $z_{1}=z_{2}=a$, we have $x_{2} = x_{2}'$ and therefore $z_{3}=z_{4}$, that is $a=b$. Contradiction. \end{proof} \begin{conslem}\label{PCProperties} Suppose $\sigma_{1},\ldots,\sigma_{k}$ are all the PC congruences on $A$. Put $A_{i} = A/\sigma_{i}$, and define $\psi:A\to A_{1}\times \dots\times A_{k}$ by $\psi(a) = (a/\sigma_{1},\dots,a/\sigma_{k})$. Then \begin{enumerate} \item $\psi$ is surjective, hence $A/\PCCon(A)\cong A_{1}\times\dots\times A_{k}$; \item the PC subuniverses are the sets of the form $\psi^{-1}(S)$, where $S\subseteq A_{1}\times \dots\times A_{k}$ is a relation definable by unary constraints of the form $x_{j} = a_{j}$; \item for each nonempty PC subuniverse $B$ of $A$ there is a congruence $\theta$ of $A$ such that $B$ is an equivalence class of $\theta$ and $A/\theta$ is isomorphic to a product of PC algebras having no nontrivial binary absorbing subuniverse or center. \end{enumerate} \end{conslem} \begin{proof} Consider the image $\psi(A)$, which is a subdirect subuniverse of $A_{1}\times\dots\times A_{k}$. 
By Lemma~\ref{PCRelationsLem}, this relation, if it is not full, can be represented as a conjunction of nontrivial binary relations in which one coordinate uniquely determines the other (in a bijective way). This would mean that the congruences $\sigma_{i}$ corresponding to these coordinates are equal, which contradicts the definition. Then $\psi(A)$ is a full relation and $\psi$ is surjective. Claim (2) follows directly from the definition of a PC subuniverse. To prove (3) consider the intersection of all congruences whose equivalence classes we intersected to define the PC subuniverse. Then, in the same way as in (1), we can prove the isomorphism. \end{proof} \begin{conslem}\label{PCImplies} Suppose $\rho \subseteq A_{1}\times\dots\times A_{n}$ is a subdirect relation, there is no nontrivial binary absorbing subuniverse or nontrivial center on $A_{1}$, and $C = \proj_{1}((C_{1}\times\dots \times C_{n})\cap\rho)$, where $C_{i}$ is a PC subuniverse in $A_{i}$ for every $i$. Then $C$ is a PC subuniverse in $A_{1}$. \end{conslem} \begin{proof} By the previous corollary, for every $i$ we choose PC algebras $A_{i,1},\dots,A_{i,k_{i}}$ and a mapping $\psi_{i}:A_{i}\to A_{i,1}\times\dots\times A_{i,k_{i}}$ such that $A_{i}/\PCCon(A_{i})\cong A_{i,1}\times \dots\times A_{i,k_{i}}$. Define $\phi:A_{1}\times\dots\times A_{n}\to A_{1}\times \prod_{i,j} A_{i,j}$ by $\phi(a_1,\dots,a_{n}) = (a_1,\psi_{1}(a_1),\ldots,\psi_{n}(a_n))$. Let $\gamma = C_{1}\times\dots\times C_{n}$, $\rho' = \phi(\rho)$, $\gamma'= \phi(\gamma)$. We can check that $\proj_{1}(\rho\cap\gamma) = \proj_{1}(\rho'\cap\gamma')$; hence it is sufficient to show that $\proj_{1}(\rho'\cap\gamma')$ is a PC subuniverse of $A_{1}$. Since $\rho'$ is subdirect, by Lemma~\ref{PCRelationsLem} it can be represented by binary constraints from the first coordinate to an $i$-th coordinate such that the $i$-th coordinate is uniquely determined by the first, and by bijective binary constraints between pairs of coordinates other than the first. 
The relation $\gamma'$ can be represented by constraints of the form $x_{i,j} = a_{i,j}$ and canonical constraints saying that the $j$-th element of $\psi_{1}(x_{1})$ is equal to $x_{1,j}$. To calculate $\proj_{1}(\rho'\cap \gamma')$ we join constraints of these two representations. Let us explain what any constraint from $x_{1}$ to $x_{i,j}$ in this representation looks like. There exists a congruence $\sigma$ on $A_{1}$ such that $A_{1}/\sigma$ is a PC algebra isomorphic to $A_{i,j}$, then the constraint assigns to all elements of each equivalence class of $\sigma$ the corresponding element of $A_{i,j}$. All other constraints of this representation are of the form $x_{i,j} = a_{i,j}$ or bijective constraints between two coordinates. This implies that $\proj_{1}(\rho'\cap\gamma')$ is an intersection of equivalence classes of PC congruences, that is, $\proj_{1}(\rho'\cap\gamma')$ is a PC subuniverse. \end{proof} \begin{conslem}\label{PCLessThanThree} Suppose $C_{i}$ is a PC subuniverse of $A_{i}$ for $i\in\{1,2,\dots,n\}$ and $n\geqslant 3$. Then there does not exist a subdirect $(C_{1},\dots,C_{n})$-essential relation $\rho\subseteq A_{1}\times \dots \times A_{n}$. \end{conslem} \begin{proof} Assume that such a relation $\rho$ exists. By Corollary~\ref{PCProperties} for every $i$ we choose PC algebras $A_{i,1},\dots,A_{i,k_{i}}$ and a mapping $\psi_{i}:A_{i}\to A_{i,1}\times \dots\times A_{i,k_{i}}$ such that $A_{i}/\PCCon(A_{i})\cong A_{i,1}\times \dots\times A_{i,k_{i}}$. Define $\phi:A_{1}\times\dots\times A_{n}\to \prod_{i,j} A_{i,j}$ by $\phi(a_1,\dots,a_{n}) = (\psi_{1}(a_1),\ldots,\psi_{n}(a_n))$. Let $\gamma_{i} = C_{1}\times\dots\times C_{i-1}\times A_{i}\times C_{i+1}\times\dots\times C_{n}$ for every $i$ and $\gamma = C_{1}\times\dots\times C_{n}$. Put $\rho' = \phi(\rho)$, $\gamma'= \phi(\gamma)$, and $\gamma_{i}'= \phi(\gamma_{i})$ for every $i$. 
Since $\rho'$ is subdirect, by Lemma~\ref{PCRelationsLem} $\rho'$ can be represented by bijective binary constraints between pairs of coordinates. By Corollary~\ref{PCProperties} $\gamma'$ can be represented by constraints of the form $x_{i,j} = a_{i,j}$. If $\rho\cap\gamma=\varnothing$, then $\rho'\cap\gamma'=\varnothing$, which can only happen if two unary constraints defining $\gamma'$ assign contradictory values to variables with respect to the binary constraints defining $\rho'$. Since $n\geqslant 3$, we can choose $l$ such that neither of the two contradictory unary constraints corresponds to the $l$-th block, hence $\gamma_{l}'$ still includes both of them. Then $\rho'\cap \gamma_{l}'=\varnothing$ and $\rho\cap \gamma_{l}=\varnothing$, which gives a contradiction. \end{proof} \begin{lem}\label{BiggerThanPC} Suppose $\sigma\supseteq \sigma_{1}\cap \dots\cap \sigma_{n}$, where $\sigma_{1},\dots,\sigma_{n}$ are PC congruences on $D$ and $\sigma$ is a proper congruence on $D$. Then there exists $I\subseteq \{1,2,\dots,n\}$ such that $\sigma = \bigcap_{i\in I}\sigma_{i}$. \end{lem} \begin{proof} Consider a $2n$-ary relation $R\subseteq D/\sigma_{1}\times\dots\times D/\sigma_{n} \times D/\sigma_{1}\times\dots\times D/\sigma_{n}$ consisting of all tuples $(a/\sigma_{1},\dots,a/\sigma_{n},b/\sigma_{1},\dots,b/\sigma_{n})$, where $(a,b)\in \sigma$. By Lemma~\ref{PCRelationsLem}, $R$ can be represented as a conjunction of binary bijective relations. Since $(a/\sigma_{1},\dots,a/\sigma_{n},a/\sigma_{1},\dots,a/\sigma_{n})\in R$ for every $a\in D$, we conclude that all these binary relations are equalities. This implies that $\sigma = \bigcap_{i\in I}\sigma_{i}$ for some $I\subseteq \{1,2,\dots,n\}$. \end{proof} \begin{lem}\label{NoAbsCenterInPCAlgebra} For every $D$ the algebra $D/\PCCon(D)$ has no nontrivial binary absorbing subuniverse or center. \end{lem} \begin{proof} By Corollary~\ref{PCProperties}, $D/\PCCon(D)\cong A_{1}\times\dots\times A_{k}$, where $A_{i}$ is a PC algebra without a nontrivial binary absorbing subuniverse or center. 
Then Lemmas~\ref{GenBinAbToBinAb} and \ref{GenCenterToCenter} imply that there cannot be a nontrivial binary absorbing subuniverse or center on $D/\PCCon(D)$. \end{proof} \begin{lem}\label{PCCongruenceOnProduct} Suppose $\sigma$ is a PC congruence on $A_{1}\times A_{2}$, and there is no nontrivial binary absorbing subuniverse or center on $A_{1}$ or $A_{2}$. Then there exist $i\in\{1,2\}$ and a PC congruence $\sigma_{i}$ on $A_{i}$ such that $\sigma = \{(\alpha,\beta)\mid (\proj_{i}(\alpha), \proj_{i}(\beta))\in\sigma_{i}\}$. \end{lem} \begin{proof} First, consider $S\subseteq A_{1}\times (A_{1}\times A_{2})/\sigma$ consisting of all pairs $(a_1,(a_1,a_2)/\sigma)$ such that $a_1\in A_{1}$, $a_{2}\in A_{2}$. Since there is no nontrivial binary absorbing subuniverse or center on $A_{1}$, Lemma~\ref{PCRelationsLem} implies that either $S$ is a full relation, or $\ConOne(S,2)$ is the equality relation. In the latter case the congruence $\sigma$ depends only on the first coordinate, that is, there exists a PC congruence $\sigma_{1}$ on $A_{1}$ such that $\sigma = \{(\alpha,\beta)\mid (\proj_{1}(\alpha), \proj_{1}(\beta))\in\sigma_{1}\}$, which completes this case. Thus, we assume that $S$ is a full relation and for every $a_1\in A_1$ and every equivalence class $E$ of $\sigma$ there exists $a_{2}\in A_{2}$ such that $(a_{1},a_{2})\in E$. In the same way we assume that for every $a_2\in A_2$ and every equivalence class $E$ of $\sigma$ there exists $a_{1}\in A_{1}$ such that $(a_{1},a_{2})\in E$. Choose an element $c_1\in A_{1}$. By $\sigma_{2}$ we denote the congruence $\{(a_{2},a_{2}')\mid ((c_{1},a_{2}),(c_{1},a_{2}'))\in\sigma\}$. As it follows from the above assumptions, $A_{2}/\sigma_{2}\cong (A_{1}\times A_{2})/\sigma$. Consider the ternary relation $\rho\subseteq A_{1}\times (A_{1}\times A_{2})/\sigma\times A_{2}/\sigma_{2}$ consisting of all the tuples $(a_1,(a_{1},a_{2})/\sigma,a_{2}/\sigma_{2})$, where $a_{1}\in A_{1}$ and $a_{2}\in A_{2}$. 
As we already know, the projection of $\rho$ onto any two coordinates is a full relation. Then Lemma~\ref{PCRelationsLem} implies that $\rho$ is a full relation, which contradicts the fact that $(c_{1},(c_{1},a_{2})/\sigma,b_{2}/\sigma_{2})\notin\rho$ for any $(a_{2},b_{2})\notin \sigma_{2}$. \end{proof} \begin{conslem}\label{PCCongruenceOnProductGen} Suppose $\sigma$ is a PC congruence on $A_{1}\times A_{2}\times\dots\times A_{n}$, and there is no nontrivial binary absorbing subuniverse or center on $A_{i}$ for every $i$. Then there exist $i\in\{1,2,\dots,n\}$ and a PC congruence $\sigma_{i}$ on $A_{i}$ such that $\sigma = \{(\alpha,\beta)\mid (\proj_{i}(\alpha), \proj_{i}(\beta))\in\sigma_{i}\}$. \end{conslem} \begin{proof} We prove this corollary by induction on $n$. For $n=2$ it follows from Lemma~\ref{PCCongruenceOnProduct}. By Lemmas~\ref{GenBinAbToBinAb} and \ref{GenCenterToCenter}, there is no nontrivial binary absorbing subuniverse or center on $A_{2}\times\dots\times A_{n}$. We apply Lemma~\ref{PCCongruenceOnProduct} to $A_{1}\times(A_{2}\times\dots\times A_{n})$ to get a PC congruence on $A_{1}$ or on $A_{2}\times\dots\times A_{n}$. In the latter case we apply the inductive assumption to complete the proof. \end{proof} \begin{lem}\label{PCSubuniverseOnProduct} Suppose $B$ is a PC subuniverse of $A_{1}\times\dots\times A_{n}$, and there is no nontrivial binary absorbing subuniverse or center on $A_{i}$ for every $i$. Then there exists a PC subuniverse $B_{i}$ of $A_{i}$ for every $i$ such that $B = B_{1}\times\dots\times B_{n}$. \end{lem} \begin{proof} Assume that $B=E_{1}\cap\dots\cap E_{t}$, where $E_{i}$ is an equivalence class of a PC congruence $\sigma_{i}$ on $A_{1}\times\dots\times A_{n}$ for every $i$. By Corollary~\ref{PCCongruenceOnProductGen}, for every $i$ there exists $s_{i}$ and a PC congruence $\sigma_{i}'$ on $A_{s_{i}}$ such that $\sigma_{i} = \{(\alpha,\beta)\mid (\proj_{s_{i}}(\alpha), \proj_{s_{i}}(\beta))\in\sigma_{i}'\}$. 
Then there exists an equivalence class $E_{i}'$ of $\sigma_{i}'$ such that $E_{i} = A_{1}\times\dots\times A_{s_{i}-1}\times E_{i}'\times A_{s_{i}+1}\times\dots\times A_{n}$. Hence, the intersection $E_{1}\cap\dots\cap E_{t}$ is equal to $B_{1}\times\dots\times B_{n}$ for PC subuniverses $B_{1},\dots,B_{n}$. \end{proof} \begin{lem}\label{CenterProvidesBinaryAbsorptionForPC} Suppose $\rho\subseteq A\times B$ is a subdirect relation, $A$ is a PC algebra without nontrivial binary absorbing subuniverse or center, and $C = \{b\in B\mid \forall a\in A\colon (a,b)\in \rho\}$. Then $C$ binary absorbs $B$. \end{lem} \begin{proof} Suppose $A = \{a_{1},\ldots,a_{k}\}$. Let us consider the matrix $M$ whose rows are the tuples $(\underbrace{a,a,\ldots,a}_{k+1},b,a_{1},\ldots,a_{k})$ and $(b,a_{1},\ldots,a_{k},\underbrace{a,a,\ldots,a}_{k+1})$ for all $a,b\in A$. The $2k+2$ columns of this matrix we denote by $\alpha_{1},\ldots,\alpha_{2k+2}$. By $\beta$ we denote the tuple of length $2k^2$ such that the $i$-th element of $\beta$ equals $b$ from the corresponding row. By Lemma~\ref{PCRelationsLem}, the relation generated by $\alpha_{1},\ldots,\alpha_{2k+2}$ is a full relation. Hence, there exists a term operation $f$ such that $f(\alpha_{1},\ldots,\alpha_{2k+2})=\beta$. Let us show that $C$ absorbs $B$ with the term operation defined by $h(x,y)=f(\underbrace{x,\ldots,x}_{k+1},y,\ldots,y)$. Suppose $d\in B$, $c\in C$. Assume that $h(d,c)=e\notin C$. Choose elements $a,a'\in A$ such that $(a,e)\notin\rho$ and $(a',d)\in\rho$. Consider the row $(a',\ldots,a',a,a_{1},\ldots,a_{k})$ from the matrix. We know that $f$ returns $a$ on this tuple and $f(\underbrace{d,\ldots,d}_{k+1},c,\ldots,c) = e$, which contradicts the fact that $f$ preserves $\rho$. Thus, $h(d,c)\in C$. In the same way we can prove that $h(c,d)\in C$ for every $d\in B$, $c\in C$. 
\end{proof} \begin{lem}\label{IdentificationDoesNotReducePC} Suppose $\rho\subseteq A\times B\times B$ is a subdirect relation, $A$ is a PC algebra without a nontrivial binary absorbing subuniverse or center, and for every $b\in B$ there exists $a\in A$ such that $(a,b,b)\in \rho$. Then for every $a\in A$ there exists $b\in B$ such that $(a,b,b)\in\rho$. \end{lem} \begin{proof} We prove the lemma by induction on the size of $B$. By Lemma~\ref{FindCenter}, only two situations are possible: either there exist $c_{1},c_{2}\in B$ such that $(a,c_{1},c_{2})\in \rho$ for every $a\in A$, or for each $(b_{1},b_{2})\in \proj_{2,3}(\rho)$ there exists a unique $a\in A$ such that $(a,b_{1},b_{2})\in \rho$. Case 1. There exist $c_{1},c_{2}\in B$ such that $(a,c_{1},c_{2})\in \rho$ for every $a\in A$. Put $D = \{(b,c)\mid \forall a\in A\colon (a,b,c)\in\rho\}$. By Lemma~\ref{CenterProvidesBinaryAbsorptionForPC}, $D$ is a binary absorbing subuniverse in the projection of $\rho$ onto the last two variables. By Lemma~\ref{AbsorbingEquality}, there exists $(b,b)\in D$. This completes this case. Case 2. For each $(b_{1},b_{2})\in \proj_{2,3}(\rho)$ there exists a unique $a\in A$ such that $(a,b_{1},b_{2})\in \rho$. Let $\delta_{1}$ be the projection of $\rho$ onto the first two variables. By Lemma~\ref{FindCenter} we have one of two situations. Case 2A. For every $b\in B$ there exists a unique $a$ such that $(a,b)\in\delta_{1}$. Since $\rho$ is subdirect, for every $a$ there exists $(a,b,b')\in\rho$, which implies that $(a,b,b)\in\rho$ and completes this case. Case 2B. There exists an element $b$ such that $(a,b)\in\delta_{1}$ for every $a\in A$. Consider the relation $\delta_{2}(x,y_{2}) = \rho(x,b,y_{2})$. If $\proj_{2}(\delta_{2})\neq B$, then we restrict the last two variables of $\rho$ to $\proj_{2}(\delta_{2})$ and apply the inductive assumption. Assume that $\proj_{2}(\delta_{2}) = B$. 
By the definition of the second case we know that for every $c\in B$ there exists a unique $a$ such that $(a,c)\in\delta_{2}$. Then $\sigma=\ConOne(\delta_2,2)$ is a proper congruence such that $B/\sigma\cong A$. If $\sigma$ is the equality relation, then $B\cong A$, and, by Lemma~\ref{PCRelationsLem}, $\rho$ can be represented by binary bijective constraints. If the first coordinate of $\rho$ is uniquely determined by the second or the third, then this is equivalent to Case 2A, which we have already considered. If the first coordinate of $\rho$ does not depend on the others, then the claim is trivial. If $\sigma$ is not the equality relation, then we consider the relation $\rho'$ obtained from $\rho$ by factorization of the last two variables by $\sigma$, that is, $\rho'\subseteq A\times B/\sigma\times B/\sigma$ contains all tuples $(a,b/\sigma,b'/\sigma)$ such that $(a,b,b')\in\rho$. By the inductive assumption, for any $a\in A$ there exists $E\in B/\sigma$ such that $(a,E,E)\in\rho'$. By Lemma~\ref{FindCenter}, we have one of the following situations. Case 1. There exists $E\in B/\sigma$ such that for every $a\in A$ we have $(a,E,E)\in\rho'$. Then we restrict the last two variables of $\rho$ to $E$ and apply the inductive assumption. Case 2. For every $E\in B/\sigma$ there exists a unique $a\in A$ such that $(a,E,E)\in\rho'$. In this case for any $a\in A$ we choose $E$ such that $(a,E,E) \in \rho'$. By the uniqueness of $a$ we have $(a,b,b)\in \rho$ for any $b\in E$, which completes the proof. \end{proof} \subsection{Linear Subuniverse} We have the following well-known fact from linear algebra \cite{greub2012linear}. \begin{lem}\label{LinearAlgebrasFact} Suppose $\rho\subseteq (\mathbb Z_{p_{1}})^{n_{1}}\times\dots\times (\mathbb Z_{p_{k}})^{n_{k}}$ is a relation preserved by $x_{1}+\dots+x_{m}$, where $p_{1},\ldots,p_{k}$ are distinct prime numbers dividing $m-1$ and $\mathbb Z_{p_{i}} = (\mathbb Z_{p_{i}};x_{1}+\dots+x_{m})$ for every $i$. 
Then $\rho = L_{1}\times\dots\times L_{k}$ where each $L_{i}$ is an affine subspace of ${(\mathbb Z_{p_{i}})}^{n_{i}}$. \end{lem} \begin{conslem}\label{LinearAlgebrasAreClosed} The set of linear algebras is closed under taking subalgebras, quotients, and finite products. \end{conslem} \begin{lem}\label{NoAbsCenterPCInLinearAlgebra} A linear algebra has no nontrivial absorbing subuniverse, nontrivial center, or nontrivial PC subuniverse. \end{lem} \begin{proof} Let us prove that a linear algebra $A$ has no nontrivial absorbing subuniverse, which by Corollary~\ref{centerImpliesAbsorption} implies that $A$ has no nontrivial center. By Lemma~\ref{GenBinAbToBinAb}, it is sufficient to show that $\mathbb Z_{p}$ has no nontrivial absorbing subuniverse. Every term operation in $\mathbb Z_{p}$ can be represented as $a_{1}x_{1}+\dots+a_{l}x_{l}$, and whenever $a_{i}\neq 0$, fixing all variables but $x_{i}$ to some values gives a bijective mapping, which means that such a term cannot witness an absorption. Since linear algebras are closed under quotients (Corollary~\ref{LinearAlgebrasAreClosed}), to prove that a linear algebra has no nontrivial PC subuniverse it is sufficient to prove that a linear algebra $A$ cannot be a PC algebra. Suppose $A$ is isomorphic to $\mathbb Z_{p_{1}}\times\dots \times\mathbb Z_{p_{k}}$ for prime numbers $p_{1},\dots,p_{k}$, and let $\psi:A\to \mathbb Z_{p_{1}}$ be the canonical mapping. Let $\rho$ be the set of all tuples $(a,b,c,d)$ such that $\psi(a)+\psi(b) = \psi(c) +\psi(d)$. We can check that $\rho$ is preserved by $w$ and by all constant operations, but not by all operations on $A$; therefore $A$ cannot be polynomially complete. \end{proof} \begin{lem}\label{EqualNumberOfElements} Suppose $\rho\subseteq A_{1}\times A_2$ is a subdirect relation, $A_{2}$ is a linear algebra, and there is no nontrivial binary absorbing subuniverse on $A_1$. 
Then for all $a,b\in A_1$ we have $$|\{c\mid (a,c)\in \rho\}| = |\{c\mid (b,c)\in \rho\}|.$$ \end{lem} \begin{proof} Assume the contrary and choose all elements $a$ with the maximal $|\{c\mid (a,c)\in \rho\}|$. Denote the set of such elements by $C$; by our assumption, $C$ is a proper nonempty subset of $A_{1}$. Since $w(a_{1},\ldots,a_{i-1},x,a_{i+1},\ldots,a_{m})$ is a bijection on $A_{2}$ for every $a_{1},\ldots,a_{m}\in A_{2}$, we have $w(A_{1},\ldots,A_{1},C,A_{1},\ldots,A_{1})\subseteq C$. Hence $w(x,\ldots,x,y)$ is a binary absorbing operation and $C$ is a nontrivial binary absorbing subuniverse of $A_{1}$, which contradicts our assumptions. \end{proof} \begin{lem}\label{LinearSpecialWNU} Suppose $A$ is a linear algebra. Then $w(a,b,\dots,b) = a$ for every $a,b\in A$. \end{lem} \begin{proof} Suppose $A\cong \mathbb Z_{p_{1}}\times\dots\times \mathbb Z_{p_{k}}$. Since the WNU $w$ is special and idempotent, each $p_{i}$ divides $m-1$. Therefore, in each coordinate $w(a,b,\ldots,b) = a + (m-1)b \equiv a \pmod{p_{i}}$, hence $w(a,b,\ldots,b) = a$ for every $a,b\in A$. \end{proof} \begin{lem}\label{RelWithLinearPart} Suppose $\rho\subseteq A_{1}\times A_2$ is a subdirect relation, $A_{2}$ is a linear algebra, and there is no nontrivial binary absorbing subuniverse on $A_1$. Then $\rho$ has the parallelogram property. \end{lem} \begin{proof} First, we define a relation $\sigma_{k}$ for every $k\geqslant 2$ by $$\sigma_{k}(y_1,\ldots,y_k) = \exists x\; \rho(x,y_1)\wedge\dots\wedge\rho(x,y_k).$$ By Lemma~\ref{LinearSpecialWNU}, $w(a,b,\ldots,b,b) = a$ and $w(b,b,\ldots,b,c) = c$ for any $a,b,c\in A_{2}$, therefore $(a,b),(b,c),(b,b)\in\sigma_{2}$ implies $(a,c)\in\sigma_{2}$. Since $\sigma_2$ is also reflexive and symmetric, it is a congruence. Let us show by induction on $k$ that $\sigma_{k}(y_{1},\ldots,y_{k}) = \bigwedge_{i=2}^{k}\sigma_{2}(y_{1},y_{i})$. For $k=2$ it is obvious. Consider a tuple $(a_{1},\ldots,a_{k})$ such that $(a_{i},a_{j})\in\sigma_{2}$ for any $i,j$. By the inductive assumption for $k-1$ we have $(a_{1},a_{1},a_{3},\ldots,a_{k}), (a_{1},a_{2},a_{1},a_{4},\ldots,a_{k}), (a_{1},a_{1},a_{1},a_{4},\ldots,a_{k}) \in\sigma_{k}$. 
If we apply the term operation $g(x,y,z) = w(x,y,z,\ldots,z)$ to these three tuples (in the same order) we obtain $(a_{1},\ldots,a_{k})$, which means that $(a_{1},\ldots,a_{k})\in \sigma_{k}$. Thus $\sigma_{k}(y_{1},\ldots,y_{k}) = \bigwedge_{i=2}^{k}\sigma_{2}(y_{1},y_{i})$ for every $k$. Substituting $\{y_{1},\dots,y_{k}\} = E$ in the definition of $\sigma_{k}$ for an equivalence class $E$ of $\sigma_{2}$ we derive that there exists $c\in A_{1}$ such that $(c,d)\in\rho$ for any $d\in E$. Then it follows from Lemma~\ref{EqualNumberOfElements} that $\rho$ has the parallelogram property. \end{proof} \begin{conslem}\label{LinearImplies} Suppose $\rho \subseteq A_{1}\times\dots\times A_{n}$ is a relation such that $\proj_1 (\rho) = A_{1}$, there is no nontrivial binary absorbing subuniverse on $A_{1}$, and $C = \proj_{1}((C_{1}\times\dots \times C_{n})\cap\rho)$, where $C_{i}$ is a linear subuniverse of $A_{i}$ for every $i$. Then $C$ is a linear subuniverse of $A_{1}$. \end{conslem} \begin{proof} Let $\psi:A_{1}\times \dots\times A_{n} \to A_{1}\times A_{2}/\ConLin(A_{2})\times \dots\times A_{n}/\ConLin(A_{n})$ be a natural homomorphism. Put $\gamma = C_{1}\times\dots\times C_{n}$, $\rho' = \psi(\rho)$, $\gamma' = \psi(\gamma)$. We can check that $\proj_1 (\rho\cap \gamma) = \proj_1 (\rho'\cap \gamma')$. The relation $\rho'$ can be viewed as a subdirect subalgebra of $A_{1}\times B$, where $B = \proj_{2,\ldots,n}(\rho')$ is a linear algebra by Corollary~\ref{LinearAlgebrasAreClosed}. Then $D = \proj_{2,\ldots,n}(\gamma')\cap B$ can be viewed as a subalgebra of $B$. We need to show that $\proj_{1}(\rho'\cap (C_{1}\times D))$ is a linear subuniverse of $A_{1}$. By Lemma~\ref{RelWithLinearPart}, the binary relation $\rho'$ has the parallelogram property, then $\rho'$ induces an isomorphism $A_{1}/\sigma_{1}\cong B/\sigma_{2}$, where $\sigma_{1} = \ConOne(\rho',1)$, $\sigma_{2} = \ConOne(\rho',2)$. 
Note that $\sigma_{1}$ is a linear congruence since $B$ is a linear algebra. Hence, $D_1 = \{a\in A_{1}\mid \exists d\in D\colon (a,d)\in \rho'\}$ is stable under $\sigma_{1}$ (and under $\ConLin(A_{1})$). Then $\proj_1 (\rho\cap \gamma) = \proj_1 (\rho'\cap \gamma') = C_{1}\cap D_1$ is stable under $\ConLin(A_{1})$, which completes the proof. \end{proof} \subsection{Common properties} In this subsection we list some properties that are common to all types of one-of-four subuniverses. \begin{lem}\label{PCBrel} Suppose $R\subseteq D_{1}\times\dots\times D_{n}$ is a subdirect relation, $B_{i}$ is a one-of-four subuniverse of $D_{i}$ of type $\mathcal T$ for every $i\in\{1,\dots,n\}$; if $\mathcal T$ is the absorbing type then the absorbing subuniverses are witnessed by the same term operation. Then $R\cap (B_{1}\times\dots\times B_{n})$ is a one-of-four subuniverse of $R$ of type $\mathcal T$. \end{lem} \begin{proof} If $\mathcal T$ is the absorbing type, then the statement follows from Lemma~\ref{AbsImplies}; if $\mathcal T$ is the central type, then the statement follows from Lemma~\ref{CenterImplies}. Suppose $\mathcal T$ is the linear type; then put $\sigma_{i}=\ConLin(D_{i})$ for each $i\in\{1,\dots,n\}$. First, extend every $\sigma_{i}$ naturally to $D = D_{1}\times\dots\times D_{n}$ and denote the obtained congruence by $\sigma_{i}'$ so that $D/\sigma_{i}'\cong D_{i}/\sigma_{i}$. Since linear algebras are closed under taking subalgebras and quotients (Corollary~\ref{LinearAlgebrasAreClosed}), $\sigma = \sigma_{1}'\cap\dots\cap \sigma_{n}'$ is a linear congruence and $B_{1}\times\dots\times B_{n}$ is stable under this congruence. Therefore, $\sigma\cap (R\times R)$ is a linear congruence and $R\cap (B_{1}\times\dots\times B_{n})$ is stable under it. This completes this case. It remains to consider the case when $\mathcal T$ is the PC type. 
Let $\delta_{1},\dots,\delta_{t}$ be all the PC congruences on $D_{1},\ldots,D_{n}$ that we need to define $B_{1},\dots,B_{n}$. For every $i\in\{1,2,\dots,t\}$ by $\delta_{i}'$ we denote $\delta_{i}$ naturally extended to $D = D_{1}\times\dots\times D_{n}$, and by $E_{i}$ we denote the equivalence class of $\delta_{i}'$ containing $B_{1}\times\dots\times B_{n}$. Since $R$ is subdirect, $R/\delta_{i}'\cong D/\delta_{i}'$ and $R/\delta_{i}'$ is a PC algebra without a nontrivial binary absorbing subuniverse or center. Since $R\cap (B_{1}\times\dots\times B_{n}) = R\cap (E_{1}\cap \dots\cap E_{t})$, the set $R\cap (B_{1}\times\dots\times B_{n})$ is a PC subuniverse of $R$. \end{proof} \begin{lem}\label{FactorByStableCongruence} Suppose $\sigma$ is a congruence on $D$ and $B$ is a one-of-four subuniverse of $D$ stable under $\sigma$. Then $\{b/\sigma\mid b\in B\}$ is a one-of-four subuniverse of $D/\sigma$ of the same type as $B$. \end{lem} \begin{proof} For a binary absorbing subuniverse and a center this follows from Corollaries~\ref{AbsorptionQuotient} and \ref{CenterQuotient}, respectively. Suppose $B$ is a linear subuniverse. Let $\delta$ be the minimal congruence containing both $\sigma$ and $\ConLin(D)$. By Corollary~\ref{LinearAlgebrasAreClosed}, $D/\delta$ is a linear algebra. Since $B$ is stable under $\delta$, $\{b/\sigma\mid b\in B\}$ is a linear subuniverse of $D/\sigma$. It remains to consider the case when $B$ is a PC subuniverse of $D$, that is, $B=E_{1}\cap \dots \cap E_{s}$, where $E_{i}$ is an equivalence class of a PC congruence $\sigma_{i}$ for every $i$. Let $\delta$ be the minimal congruence containing $\sigma$ and $\sigma_{1}\cap\dots\cap\sigma_{s}$. By Lemma~\ref{BiggerThanPC}, $\delta$ is an intersection of PC congruences $\delta_{1},\ldots,\delta_{t}$. Since $B$ is stable under $\delta$ and $B$ is an equivalence class of $\sigma_{1}\cap\dots\cap\sigma_{s}$, $B$ is an equivalence class of $\delta$. 
Hence, $\{b/\sigma\mid b\in B\}$ is an intersection of the equivalence classes of congruences on $D/\sigma$ corresponding to $\delta_{1},\dots,\delta_{t}$. \end{proof} The following corollaries (proved earlier) state that if we restrict all coordinates of a relation to one-of-four subuniverses of type $\mathcal T$ then we restrict its projection onto the first coordinate to a subuniverse of type $\mathcal T$. The only difference is that for a PC subuniverse we require the relation to be subdirect and without a nontrivial binary absorbing subuniverse or center on every coordinate, and for a linear subuniverse the first coordinate should be without a nontrivial binary absorbing subuniverse. \begin{AbsImpliesConsCorollary} Suppose $\rho \subseteq A_{1}\times\dots\times A_{n}$ is a relation such that $\proj_1 (\rho) = A_{1}$ and $C = \proj_{1}((C_{1}\times\dots \times C_{n})\cap\rho)$, where $C_{i}$ is an absorbing subuniverse in $A_{i}$ with a term $t$ for every $i$. Then $C$ is an absorbing subuniverse in $A_{1}$ with the term $t$. \end{AbsImpliesConsCorollary} \begin{CenterImpliesConsCorollary} Suppose $\rho \subseteq A_{1}\times\dots\times A_{n}$ is a relation such that $\proj_1 (\rho) = A_{1}$ and $C = \proj_{1}((C_{1}\times\dots \times C_{n})\cap\rho)$, where $C_{i}$ is a center in $A_{i}$ for every $i$. Then $C$ is a center in $A_{1}$. \end{CenterImpliesConsCorollary} \begin{PCImpliesCorollary} Suppose $\rho \subseteq A_{1}\times\dots\times A_{n}$ is a subdirect relation, there is no nontrivial binary absorbing subuniverse or nontrivial center on $A_{1}$, and $C = \proj_{1}((C_{1}\times\dots \times C_{n})\cap\rho)$, where $C_{i}$ is a PC subuniverse in $A_{i}$ for every $i$. Then $C$ is a PC subuniverse in $A_{1}$.
\end{PCImpliesCorollary} \begin{LinearImpliesCorollary} Suppose $\rho \subseteq A_{1}\times\dots\times A_{n}$ is a relation such that $\proj_1 (\rho) = A_{1}$, there is no nontrivial binary absorbing subuniverse on $A_{1}$, and $C = \proj_{1}((C_{1}\times\dots \times C_{n})\cap\rho)$, where $C_{i}$ is a linear subuniverse of $A_{i}$ for every $i$. Then $C$ is a linear subuniverse of $A_{1}$. \end{LinearImpliesCorollary} Another common property is that we cannot have a $(C_{1},\dots,C_{k})$-essential relation of arity greater than 2 if $C_{1},\ldots,C_{k}$ are subuniverses of a fixed type (any but linear). Note that for PC subuniverses we additionally require the relation to be subdirect. From these claims it can be derived that in the nonlinear case (see Corollary~\ref{BoundedWidthCase}) it is sufficient to check cycle consistency (all calculations are on binary relations) to guarantee a solution. \begin{lem}\label{BinAbsLessThanTwoCorollary} Suppose $C_{i}$ is a nontrivial binary absorbing subuniverse of $A_{i}$ with a term $t$ for $i\in\{1,2,\dots,k\}$, $k\geqslant 2$. Then there does not exist a $(C_{1},\dots,C_{k})$-essential relation $\rho\subseteq A_{1}\times \dots \times A_{k}$. \end{lem} \begin{proof} Assume that such a relation exists. To get a contradiction it is sufficient to apply the term $t$ to a tuple from $A_{1}\times C_{2}\times\dots\times C_{k}$ and a tuple from $C_{1}\times \dots\times C_{k-1}\times A_{k}$. \end{proof} The following two corollaries were proved earlier. \begin{CenterLessThanThreeCorollary} Suppose $C_{i}$ is a center of $A_{i}$ for $i\in\{1,2,\dots,k\}$, $k\geqslant 3$. Then there does not exist a $(C_{1},\dots,C_{k})$-essential relation $\rho\subseteq A_{1}\times \dots \times A_{k}$. \end{CenterLessThanThreeCorollary} \begin{PCLessThanThreeCorollary} Suppose $\rho\subseteq A_{1}\times \dots \times A_{n}$ is a subdirect relation, $n\geqslant 3$, $C_{i}$ is a PC subuniverse in $A_{i}$.
There does not exist a $(C_{1},\dots,C_{n})$-essential relation. \end{PCLessThanThreeCorollary} \subsection{Interaction} Here we explain how one-of-four subuniverses of different types interact with each other. \begin{lem}\label{PCBsubNonPC} Suppose $B_1$ is a binary absorbing, central, or linear subuniverse of $D$, $B_{2}$ is a subuniverse of $D$. Then $B_{1}\cap B_{2}$ is a binary absorbing, central, or linear subuniverse of $B_{2}$, respectively. \end{lem} \begin{proof} If $B_{1}$ is a binary absorbing subuniverse or a center, then the claim follows from Lemmas~\ref{AbsImplies} and \ref{CenterImplies}, respectively. If $B_{1}$ is a linear subuniverse, then by Corollary~\ref{LinearAlgebrasAreClosed} $B_{2}/\ConLin(D)$ is a linear algebra, hence $B_{1}\cap B_{2}$ is a linear subuniverse of $B_{2}$. \end{proof} \begin{lem}\label{IntersectionOfTwoSubuniverses} Suppose $B_{1}$ and $B_{2}$ are nonempty one-of-four subuniverses of $D$, $B_{1}\cap B_{2} = \varnothing$. Then $B_{1}$ and $B_{2}$ are subuniverses of the same type. \end{lem} \begin{proof} Assume the contrary and consider all possible cases. Case 1. $B_{1}$ is a linear subuniverse, $B_{2}$ is a binary absorbing subuniverse. By Corollary~\ref{AbsorptionQuotient} $\{b/\ConLin(D)\mid b\in B_{2}\}$ is a binary absorbing subuniverse on $D/\ConLin(D)$. By Lemma~\ref{NoAbsCenterPCInLinearAlgebra} this subuniverse should be trivial, which contradicts the fact that $B_{1}\cap B_{2}=\varnothing$ and $B_{1}$ is stable under $\ConLin(D)$. Case 2. $B_{1}$ is a linear subuniverse, $B_{2}$ is a center. By Corollary~\ref{CenterQuotient} $\{b/\ConLin(D)\mid b\in B_{2}\}$ is a center of $D/\ConLin(D)$. By Lemma~\ref{NoAbsCenterPCInLinearAlgebra} this subuniverse should be trivial, which contradicts the fact that $B_{1}\cap B_{2}=\varnothing$ and $B_{1}$ is stable under $\ConLin(D)$. Case 3. $B_{1}$ is a linear subuniverse, $B_{2}$ is a PC subuniverse.
Let $S\subseteq (D/\ConLin(D))\times D$ consist of all the tuples $(c/\ConLin(D),c)$, where $c\in D$. By Lemma~\ref{NoAbsCenterPCInLinearAlgebra}, there is no nontrivial binary absorbing subuniverse or center on $D/\ConLin(D)$. Hence, by Corollary~\ref{PCImplies}, the restriction of the second variable to $B_{2}$ implies the restriction of the first variable to a PC subuniverse. Since $B_{1}\cap B_{2}=\varnothing$, this restriction is nontrivial. Thus, there exists a nontrivial PC subuniverse on $D/\ConLin(D)$, which contradicts Lemma~\ref{NoAbsCenterPCInLinearAlgebra}. Case 4. $B_{1}$ is a PC subuniverse, $B_{2}$ is a binary absorbing subuniverse. By Corollary~\ref{AbsorptionQuotient} the set $\{b/\ConPC(D)\mid b\in B_{2}\}$ is a binary absorbing subuniverse of $D/\ConPC(D)$. By Lemma~\ref{NoAbsCenterInPCAlgebra} this subuniverse should be trivial, which contradicts the fact that $B_{1}\cap B_{2}=\varnothing$ and $B_{1}$ is a PC subuniverse. Case 5. $B_{1}$ is a PC subuniverse, $B_{2}$ is a center. By Corollary~\ref{CenterQuotient} $\{b/\ConPC(D)\mid b\in B_{2}\}$ is a center of $D/\ConPC(D)$. By Lemma~\ref{NoAbsCenterInPCAlgebra} this subuniverse should be trivial, which contradicts the fact that $B_{1}\cap B_{2}=\varnothing$ and $B_{1}$ is a PC subuniverse. Case 6. $B_{1}$ is a binary absorbing subuniverse, $B_{2}$ is a center. Suppose $R\subseteq D\times G$ is the binary relation from the definition of the center $B_{2}$, and denote $b^{+} = \{a\mid (b,a)\in R\}$ for every $b\in D$. We prove this case by induction on the size of $D$. Assume that $b_1^{+}\neq b_2^{+}$ for some $b_{1},b_{2}\in B_{1}$. Choose an element $c\in b_1^{+}\setminus b_{2}^{+}$ (or in $b_2^{+}\setminus b_{1}^{+}$). Put $D' = \{a\mid (a,c)\in R\}$. Note that $D'\subsetneq D$, $D'\cap B_{1}\neq \varnothing$, $D'\cap B_{2} = B_{2}$. 
Thus, we obtain subuniverses $B_{1}\cap D'$ and $B_{2}$ of a smaller set $D'$ that are a binary absorbing subuniverse and a center (by Lemma~\ref{PCBsubNonPC}), respectively. It remains to apply the inductive assumption to $B_{1}\cap D'$ and $B_{2}$. Let us consider the case when $b_{1}^{+} = b_{2}^{+}$ for any $b_{1},b_{2}\in B_{1}$. Since $B_{1}\cap B_{2}=\varnothing$, $b^{+}\neq G$ for every $b\in B_{1}$. Let $f$ be the binary absorbing operation. Choose $b\in B_{1}$ and $e\in B_{2}$. Then $f(b,e)= b_{1}\in B_{1}$ and $f(e, b)= b_{2}\in B_{1}$, which means that $f(b^{+},G)\subseteq b_{1}^{+} = b^{+}$, $f(G,b^{+})\subseteq b_{2}^{+} = b^{+}$. This contradicts the definition of a center, saying that there is no nontrivial binary absorbing subuniverse on $G$. \end{proof} \begin{thm}\label{PCBsub} Suppose $B_1$ and $B_{2}$ are one-of-four subuniverses of $D$ of types $\mathcal T_{1}$ and $\mathcal T_{2}$, respectively. Then $B_{1}\cap B_{2}$ is a one-of-four subuniverse of $B_{2}$ of type $\mathcal T_{1}$. \end{thm} \begin{proof} If $B_{1}$ is not a PC subuniverse, then the claim follows from Lemma~$\ref{PCBsubNonPC}$. Assume that $B_{1}$ is a PC subuniverse of $D$. Let $\sigma_{1},\ldots,\sigma_{t}$ be the set of all PC congruences on $D$. Assume that $B_{2}$ is not a PC subuniverse. Every equivalence class $E$ of $\sigma_{i}$ is a PC subuniverse. Then Lemma~\ref{IntersectionOfTwoSubuniverses} implies that $E$ has a nonempty intersection with $B_{2}$. Therefore $B_{2}/\sigma_{i}\cong D/\sigma_{i}$ and $\sigma_{i}\cap (B_{2}\times B_{2})$ is a PC congruence on $B_{2}$ for every $i$. Hence, $B_{1}\cap B_{2}$ is a PC subuniverse of $B_{2}$, which completes this case. If $B_{2}$ is also a PC subuniverse, then by Corollary~\ref{PCProperties}, $B_{1}\cap B_{2}$ is a PC subuniverse of $B_{2}$. 
\end{proof} \begin{lem}\label{SequencesOfSubuniverses} Suppose $D = A_{0} = B_{0}$, $s\geqslant 1$, $t\geqslant 0$, $A_{i}$ is a one-of-four subuniverse of $A_{i-1}$ for every $i\in\{1,\dots,s\}$, and $B_{i}$ is a one-of-four subuniverse of $B_{i-1}$ for every $i\in\{1,\dots,t\}$. Then $A_{s}\cap B_{t}$ is a one-of-four subuniverse of $A_{s-1}\cap B_{t}$ of the same type as $A_{s}$. \end{lem} \begin{proof} We prove this lemma by induction on $s+t$. Let $A_{s}$ be a one-of-four subuniverse of $A_{s-1}$ of type $\mathcal T$. For $t=0$ the claim coincides with this assumption. Assume that $t\geqslant 1$. By the inductive assumption, $A_{s-1}\cap B_{t}$ and $A_{s}\cap B_{t-1}$ are one-of-four subuniverses of $A_{s-1}\cap B_{t-1}$, and the second of them is of type $\mathcal T$. Then by Theorem~\ref{PCBsub}, their intersection $A_{s}\cap B_{t}$ is a one-of-four subuniverse of $A_{s-1}\cap B_{t}$ of type $\mathcal T$. \end{proof} \begin{lem}\label{ReductionAndProjectionGivesOneOfFour} Suppose $R\subseteq A_0\times B_{0}$ is a subdirect relation, $B_{i}$ is a one-of-four subuniverse of $B_{i-1}$ for every $i\in\{1,\dots,t\}$, $A_{1}$ is a one-of-four subuniverse of $A_{0}$. Then $\proj_{2}(R\cap (A_{1}\times B_{t}))$ is a one-of-four subuniverse of $\proj_{2}(R\cap (A_{1}\times B_{t-1}))$ of the same type as $B_{t}$. \end{lem} \begin{proof} By Lemma~\ref{PCBrel}, $R\cap (A_{0}\times B_{i})$ is a one-of-four subuniverse of $R\cap (A_{0}\times B_{i-1})$ of the same type as $B_{i}$, and $R\cap (A_{1}\times B_{0})$ is a one-of-four subuniverse of $R$. By Lemma~\ref{SequencesOfSubuniverses}, $R\cap (A_{1}\times B_{t})$ is a one-of-four subuniverse of $R\cap (A_{1}\times B_{t-1})$ of the same type as $B_{t}$. Let $\sigma$ be the congruence on $R\cap (A_{1}\times B_{0})$ such that two elements are equivalent whenever their projections onto the second coordinate are equal. Then $R\cap (A_{1}\times B_{i})$ is stable under $\sigma$ for every $i$.
By Lemma~\ref{FactorByStableCongruence}, $\proj_{2}(R\cap (A_{1}\times B_{t}))$ is a one-of-four subuniverse of $\proj_{2}(R\cap (A_{1}\times B_{t-1}))$ of the same type as $B_{t}$. \end{proof} \begin{thm}\label{PCBint} Suppose $B_{1},\dots,B_{n}$ are one-of-four subuniverses of $D$, and $B_{1}\cap\dots\cap B_{n} = \varnothing$. Then there exists $I\subseteq\{1,\dots,n\}$ with $\bigcap_{i\in I}B_{i} = \varnothing$ satisfying one of the following conditions: \begin{enumerate} \item $|I|\leqslant 2$ and all subuniverses $B_{i}$, where $i\in I$, are of the same type; \item $B_{i}$ is a linear subuniverse for every $i\in I$; \item $B_{i}$ is a binary absorbing subuniverse for every $i\in I$. \end{enumerate} \end{thm} \begin{proof} We prove the theorem by induction on $n$. For $n=1$ it is trivial. For $n=2$ it follows from Lemma~\ref{IntersectionOfTwoSubuniverses}. If $\bigcap_{i\in I}B_{i} = \varnothing$ for some $I\subsetneq\{1,2,\dots,n\}$, then applying the inductive assumption to $\bigcap_{i\in I}B_{i}$ we obtain the required property. Thus, we may assume that if we remove one one-of-four subuniverse from the intersection $B_{1}\cap\dots\cap B_{n}$ we get a nonempty set. Let us show that all the subuniverses should be of the same type. Put $C_{i} = B_{i}\cap B_{n}$ for every $i\in\{1,2,\dots,n-1\}$. By Theorem~\ref{PCBsub}, $C_{i}$ is a one-of-four subuniverse of $B_{n}$ of the same type as $B_{i}$. Applying the inductive assumption to $C_{1}\cap \dots\cap C_{n-1}=\varnothing$, we derive that $C_{1},\dots,C_{n-1}$ are of the same type (since no proper subintersection is empty, the corresponding set $I$ must be the whole $\{1,\dots,n-1\}$), hence $B_{1},\dots,B_{n-1}$ are of the same type. Similarly we can show that $B_{2},\dots,B_{n}$ are of the same type, and therefore, since $n\geqslant 3$, all of them are of the same type. Assume that all subuniverses $B_{1},\dots,B_{n}$ are centers or PC subuniverses. Let $R$ be the $n$-ary relation consisting of all tuples $(a,a,\dots,a)$.
Then $R$ is a $(B_{1},\dots,B_{n})$-essential relation, which contradicts Corollary~\ref{CenterLessThanThree} for centers and Corollary~\ref{PCLessThanThree} for PC subuniverses. \end{proof} Recall that we sometimes write $\Theta(x_{1},\dots,x_{n})$ to denote the relation defined by the corresponding pp-formula. We also recall the following definitions. A subuniverse $A'$ of $\mathbf A$ is called \emph{a PC subuniverse} if $A' = E_{1}\cap\dots\cap E_{s}$, where $E_{i}$ is an equivalence class of a congruence $\sigma_{i}$ such that $\mathbf A/\sigma_{i}$ is a PC algebra {\bf without binary absorption or center}. $B\subseteq A$ is called \emph{a one-of-four subuniverse} if it is a binary absorbing subuniverse, a center, a linear subuniverse, or a PC subuniverse. We say that $B$ is a one-of-four subuniverse of \emph{absorbing type}, \emph{central type}, \emph{linear type}, or \emph{PC type}. \section{Proof of the Auxiliary Statements}\label{AuxStatements} \subsection{One-of-four reductions} \begin{lem}\label{nonPCReductionImpliesSubuniverse} Suppose $D^{(1)}$ is a one-of-four reduction for an instance $\Theta$ of type $\mathcal T$, which is not the PC type. Then $\Theta^{(1)}(z)$ is a one-of-four subuniverse of $\Theta(z)$ of type $\mathcal T$ for every variable $z$. \end{lem} \begin{proof} Let $\Var(\Theta) = \{x_{1},\dots,x_{t}\}$ and $\Theta(x_{1},\dots,x_{t})$ define the relation $R$. By Lemma~\ref{PCBsubNonPC}, $D_{x_{i}}^{(1)}\cap \proj_{i}(R)$ is a one-of-four subuniverse of $\proj_{i}(R)$ of type $\mathcal T$ for every $i$. Considering $R$ as a subdirect relation on smaller domains and applying Corollaries~\ref{AbsImpliesCons}, \ref{CenterImpliesCons}, and \ref{LinearImplies} we conclude that $\Theta^{(1)}(z)$ is a one-of-four subuniverse of $\Theta(z)$ of type $\mathcal T$.
\end{proof} \begin{lem}\label{PCReductionImpliesSubuniverse} Suppose $D^{(1)}$ is a PC reduction for a 1-consistent instance $\Theta$, for every variable $y$ appearing at least twice in $\Theta$ the pp-formula $\Theta(y)$ defines $D_{y}$, and $\Theta(z)$ defines $D_{z}$ for a variable $z$. Then $\Theta^{(1)}(z)$ is a PC subuniverse of $D_{z}$. \end{lem} \begin{proof} First, we rename the variables in $\Theta$ so that every variable occurs just once and denote the obtained instance by $\Theta_{0}$. Then we identify variables back to obtain the original instance step by step. Thus, we get a sequence $\Theta_{0},\Theta_{1},\Theta_{2},\dots, \Theta_{s}$ such that $\Theta_{i+1}$ is obtained from $\Theta_{i}$ by identifying two variables and $\Theta_{s} = \Theta$. Let us show by induction on $i$ that for every variable $z$ the set $\Theta_{i}(z)\cap D_{z}^{(1)}$ is a PC subuniverse of $\Theta_{i}(z)$. For $i=0$ it follows from the fact that $\Theta$ is 1-consistent, and therefore $\Theta_{0}(z)$ defines the whole $D_{z}$. Assume that $\Theta_{i+1}$ is obtained from $\Theta_{i}$ by identifying $y$ and $y'$, and the variable in $\Theta$ corresponding to $y$ and $y'$ is $y$. We know that for every variable $z$ appearing at least twice in $\Theta$, $\Theta(z)$ defines $D_{z}$. Hence $\Theta_{i+1}(y)$ also defines $D_{y}$. Thus, we just need to show that for any variable $z$ different from $y$ and $y'$ the set $\Theta_{i+1}(z)\cap D_{z}^{(1)}$ is a PC subuniverse of $\Theta_{i+1}(z)$. By the inductive assumption $\Theta_{i}(z)\cap D_{z}^{(1)}$ is a PC subuniverse of $\Theta_{i}(z)$. Then $\Theta_{i}(z)\cap D_{z}^{(1)} = E_{1}\cap\dots\cap E_{t}$, where $E_{j}$ is an equivalence class of a PC congruence $\sigma_{j}$ on $\Theta_{i}(z)$ for every $j$. Let $S\subseteq \Theta_{i}(z)/\sigma_{j}\times D_{y}\times D_{y}$ be the relation consisting of all tuples $(a/\sigma_{j},b,b')$ such that $\Theta_{i}$ has a solution with $z=a$, $y=b$, $y'=b'$.
Since the variable $y$ appears at least twice in $\Theta$, $\Theta(y)$ defines a full relation. Hence, the relation $S$ is subdirect and for every $b\in D_{y}$ there exists $E$ such that $(E,b,b)\in S$. Lemma~\ref{IdentificationDoesNotReducePC} implies that for every equivalence class $E$ of $\sigma_{j}$ there exists $b$ such that $\Theta_{i}$ has a solution with $z\in E$ and $y=y'=b$, which means that there exists a solution of $\Theta_{i+1}$ with $z\in E$. Therefore, $\Theta_{i}(z)/\sigma_{j}\cong \Theta_{i+1}(z)/\sigma_{j}$, which implies that $\Theta_{i+1}(z)\cap D_{z}^{(1)}$ is a PC subuniverse of $\Theta_{i+1}(z)$. This completes the inductive step. Since $\Theta=\Theta_{s}$, we proved that $\Theta(z)\cap D_{z}^{(1)}$ is a PC subuniverse of $\Theta(z)$ for every variable $z$ of $\Theta$. Suppose $\Var(\Theta) = \{x_{1},\dots,x_{t}\}$, $\Theta(x_{1},\dots,x_{t})$ defines a relation $R$. Then $R$ can be viewed as a subdirect relation if we reduce the domain of every variable $x_{i}$ to $\Theta(x_{i})$. By Corollary~\ref{PCImplies}, for any variable $z$ with $\Theta(z) = D_{z}$ we obtain that $\Theta^{(1)}(z)$ is a PC subuniverse of $D_{z}$. \end{proof} \begin{lem}\label{nonPCReductionForFormulas} Suppose $D^{(1)}$ is a minimal absorbing, central, or linear reduction for an instance $\Theta$, and $\Theta(x_{1},\ldots,x_{n})$ defines a full relation. Then $\Theta^{(1)}(x_{1},\ldots,x_{n})$ defines a full relation or an empty relation. \end{lem} \begin{proof} If $\Theta^{(1)}(x_{1},\ldots,x_{n})$ defines an empty relation, then there is nothing to prove. Assume that $\Theta^{(1)}(x_{1},\ldots,x_{n})$ is not empty. We prove by induction on $n$. For $n=1$ by Lemma~\ref{nonPCReductionImpliesSubuniverse} $\Theta^{(1)}(x_{1})$ is a subuniverse of $\Theta(x_{1})$ of the corresponding type. By the minimality of the reduction $D^{(1)}$ the pp-formula $\Theta^{(1)}(x_{1})$ defines $D_{x_{1}}^{(1)}$. Let us prove the induction step. 
For each $i\in\{1,\dots,n-1\}$ choose $a_{i}\in D_{x_{i}}^{(1)}$. By the inductive assumption, $\Theta^{(1)}(x_{1},\dots,x_{n-1})$ defines a full relation, hence there exists a solution of $\Theta^{(1)}$ having $x_{i} = a_{i}$ for every $i\in\{1,\dots,n-1\}$. Add the constraint $x_{i}= a_{i}$ to $\Theta$ for every $i\in\{1,\dots,n-1\}$ and denote the obtained instance by $\Omega$. By the condition of this lemma $\Omega(x_{n})$ defines $D_{x_{n}}$. By Lemma~\ref{nonPCReductionImpliesSubuniverse}, $\Omega^{(1)}(x_{n})$ defines a one-of-four subuniverse of $D_{x_{n}}$ of the corresponding type, which by the minimality of the reduction $D^{(1)}$ implies that $\Omega^{(1)}(x_{n})$ defines $D_{x_{n}}^{(1)}$. Since $a_{1},\dots,a_{n-1}$ were chosen arbitrarily, this means that $\Theta^{(1)}(x_{1},\ldots,x_{n})$ defines a full relation. \end{proof} \begin{lem}\label{PCReductionForFormulas} Suppose $D^{(1)}$ is a minimal PC reduction for a 1-consistent instance $\Theta$, for every variable $y$ appearing at least twice in $\Theta$ the pp-formula $\Theta(y)$ defines $D_{y}$, and $\Theta(x_{1},\ldots,x_{n})$ defines a full relation. Then $\Theta^{(1)}(x_{1},\ldots,x_{n})$ defines a full relation or an empty relation. \end{lem} \begin{proof} If $\Theta^{(1)}(x_{1},\ldots,x_{n})$ defines an empty relation, then there is nothing to prove. Assume that $\Theta^{(1)}(x_{1},\ldots,x_{n})$ is not empty. First, we join the variables $x_{1},\dots,x_{n}$ into one variable $X$ with domain $D_{x_{1}}\times\dots\times D_{x_{n}}$. We replace $x_{1},\dots,x_{n}$ by $X$ and change all constraints containing one of the variables $x_{1},\ldots,x_{n}$ correspondingly. We denote the obtained instance by $\Omega$. Since $\Theta(x_{1},\ldots,x_{n})$ defines a full relation, the instance $\Omega$ is 1-consistent. Second, we define a reduction $D^{(1)}$ on the domain of the new variable $X$ by $D^{(1)}_{X} = D^{(1)}_{x_{1}}\times\dots\times D^{(1)}_{x_{n}}$. Let us show that this is a PC reduction.
By Lemma~\ref{PCBrel}, $D^{(1)}_{X}$ is a PC subuniverse of $D_{X}$. By Lemmas~\ref{GenBinAbToBinAb} and \ref{GenCenterToCenter}, there is no nontrivial binary absorbing subuniverse or center on $D_{X}$. Thus, $D^{(1)}$ is a PC reduction for $\Omega$. By Lemma~\ref{PCReductionImpliesSubuniverse}, $\Omega^{(1)}(X)$ is a PC subuniverse of $D_{X}$. By Lemma~\ref{PCSubuniverseOnProduct}, $\Omega^{(1)}(X) = B_{1}\times\dots\times B_{n}$, where $B_{i}$ is a PC subuniverse of $D_{x_{i}}$ for every $i$. By the minimality of $D^{(1)}$ on $\Theta$ we obtain that $B_{i} = D^{(1)}_{x_{i}}$. Hence, $\Theta^{(1)}(x_{1},\ldots,x_{n})$ defines a full relation. \end{proof} \begin{lem}\label{ProperReductionPreservesSubdirectness} Suppose $D^{(1)}$ is a one-of-four minimal reduction of an instance $\Theta$, $\rho(x_{1},\ldots,x_{n})$ is a subdirect constraint of $\Theta$, and $\rho^{(1)}$ is not empty. Then $\rho^{(1)}$ is subdirect. \end{lem} \begin{proof} We need to show that $\proj_{i}(\rho\cap (D_{x_{1}}^{(1)}\times \dots\times D_{x_{n}}^{(1)})) = D_{x_{i}}^{(1)}$. By Corollaries~\ref{AbsImpliesCons}, \ref{CenterImpliesCons}, \ref{PCImplies}, \ref{LinearImplies}, $B_{i} = \proj_{i}(\rho\cap (D_{x_{1}}^{(1)}\times \dots\times D_{x_{n}}^{(1)}))$ is a one-of-four subuniverse of $D_{x_{i}}$ of the same type. Since $\rho^{(1)}$ is not empty, $B_{i}$ is not empty. Since $D_{x_{i}}^{(1)}$ is a minimal subuniverse of this type, we have $B_{i} =D_{x_{i}}^{(1)}$. \end{proof} \begin{lem}\label{ProperReductionPreservesCycleConAndIrreducability} Suppose $D^{(1)}$ is a one-of-four minimal reduction for a cycle-consistent irreducible CSP instance $\Theta$, and $\Theta^{(1)}$ has a solution. Then $\Theta^{(1)}$ is cycle-consistent and irreducible. \end{lem} \begin{proof} Consider a path $P$ in $\Theta$ starting and ending at the same variable $x$.
By $\Omega$ we denote its covering $z_{1}-Q_{1}-z_{2}-\dots-Q_{l-1}-z_{l}$ (which is also a covering of $\Theta$) that is obtained from $P$ by renaming the variables so that every variable except for $z_{2},\ldots,z_{l-1}$ occurs just once, while $z_{2},\ldots,z_{l-1}$ occur twice. Thus, $z_{1}$ and $z_{l}$ are different but $S(z_{1}) = S(z_{l}) = x$ in the definition of the covering. By $\Omega'$ we denote the formula obtained from $\Omega$ by substituting $z_{1}$ for $z_{l}$. First, we prove that $P$ connects $a$ with $a$ in $\Theta^{(1)}$ for every $a\in D_{x}^{(1)}$. Since $\Theta$ is cycle-consistent, $\Omega'(z_{1})$ defines $D_{x}$. Since $\Theta^{(1)}$ has a solution, $\Omega'^{(1)}(z_{1})$ defines a nonempty relation. By Lemmas~\ref{nonPCReductionForFormulas} and \ref{PCReductionForFormulas}, $\Omega'^{(1)}(z_{1})$ defines $D_{x}^{(1)}$, which means that $P$ connects $a$ with $a$ in $\Theta^{(1)}$ for every $a\in D_{x}^{(1)}$. Hence, $\Theta^{(1)}$ is cycle-consistent. Assume that $P$ connects any two elements of $D_{x}$, which means that $\Omega(z_{1},z_{l})$ defines a full relation. Since $\Theta^{(1)}$ has a solution, $\Omega^{(1)}(z_{1},z_{l})$ defines a nonempty relation. By Lemmas~\ref{nonPCReductionForFormulas} and \ref{PCReductionForFormulas}, $\Omega^{(1)}(z_{1},z_{l})$ also defines a full relation, which means that $P$ connects any two elements of $D_{x}^{(1)}$ in $\Theta^{(1)}$. Let us prove that $\Theta^{(1)}$ is irreducible. Consider an instance $\Upsilon_{1}=\{C_{1}',\ldots,C_{s}'\}$ consisting of projections of constraints from $\Theta^{(1)}$ such that it is not fragmented and not linked. Let $\Var(\Upsilon_{1}) = \{x_{1},\ldots,x_{n}\}$. By the definition for each constraint $C_{i}'$ we can find a constraint $C_{i}\in\Theta$ such that $C_{i}'$ is a projection of $C_{i}^{(1)}$ onto some variables.
Let $\Upsilon_{2}$ consist of the projections of $C_{1},\dots,C_{s}$ onto the same variables as in $\Upsilon_{1}$, and let $\Upsilon\in\ExpShort(\Theta)$ be obtained from $\{C_{1},\ldots,C_{s}\}$ by renaming variables so that each variable except for $x_{1},\ldots,x_n$ appears just once. Then the pp-formulas $\Upsilon_{1}(x_{1},\ldots,x_{n})$ and $\Upsilon^{(1)}(x_{1},\ldots,x_{n})$ define the same relation, and $\Upsilon_{2}(x_{1},\ldots,x_{n})$ and $\Upsilon(x_{1},\ldots,x_{n})$ define the same relation. Since $\Upsilon_{1}$ is not fragmented, both $\Upsilon$ and $\Upsilon_{2}$ are not fragmented. Also, by Lemma~\ref{ExpandedConsistencyLemma}, both $\Upsilon$ and $\Upsilon_{2}$ are cycle-consistent and irreducible. Assume that $\Upsilon_{2}$ is linked. By Lemma~\ref{LinkedConIsCon} there exists a path that connects any two elements of $D_{x_{1}}$ in $\Upsilon_{2}$. Then there exists a corresponding path within the variables $x_{1},\ldots,x_{n}$ of $\Upsilon$ connecting any two elements of $D_{x_{1}}$. As we showed earlier, this path, reduced to $D^{(1)}$, also connects any two elements of $D_{x_{1}}^{(1)}$ in $\Upsilon^{(1)}$. The same path can be used to connect any two elements of $D_{x_{1}}^{(1)}$ in $\Upsilon_{1}$, which contradicts our assumption that $\Upsilon_{1}$ is not linked. Suppose $\Upsilon_{2}$ is not linked. Since $\Upsilon$ is irreducible, the solution set of $\Upsilon_{2}$ is subdirect. Thus, for each variable $x_{i}$ (these are the only variables appearing more than once in $\Upsilon$) we have $\Upsilon(x_{i}) = \Upsilon_{2}(x_{i}) = D_{x_{i}}$. Then by Lemmas~\ref{nonPCReductionForFormulas} and \ref{PCReductionForFormulas}, $\Upsilon^{(1)}(x_{i})$ defines $D_{x_{i}}^{(1)}$ or an empty set. It cannot be empty because $\Theta^{(1)}$ has a solution, therefore we have $\Upsilon_{1}(x_{i})=\Upsilon^{(1)}(x_{i})=D_{x_{i}}^{(1)}$ for every $i$, and the solution set of $\Upsilon_{1}$ is subdirect, which completes the proof.
\end{proof} \begin{lem}\label{LinkedStayLinkedForAC} Suppose $D^{(1)}$ is a minimal absorbing or central reduction for $\Theta$, the solution set of $\Theta$ is subdirect, $D_{x_{1}} = D_{x_{2}}$, $D^{(1)}_{x_{1}} = D^{(1)}_{x_{2}}$, both $\Theta(x_1,x_2)$ and $\Theta^{(1)}(x_{1},x_{2})$ define reflexive symmetric relations, and $\Theta(x_1,x_2)$ contains $(a,b)\in D_{x_1}^{(1)}\times D_{x_2}^{(1)}$. Then $a$ and $b$ are linked in the relation defined by $\Theta^{(1)}(x_{1},x_{2})$. \end{lem} \begin{proof} Let $\Var(\Theta) = \{x_{1},x_{2},y_{1},\dots,y_{t}\}$, $\Theta(x_{1},x_{2},y_{1},\dots,y_{t})$ define a relation $R$. The relation $R$ can be viewed as a ternary relation $R\subseteq D_{x_{1}}\times D_{x_{2}}\times (D_{y_{1}}\times\dots\times D_{y_{t}})$. By Lemmas~\ref{AbsImplies} and \ref{CenterImplies}, $G:= D^{(1)}_{y_{1}}\times\dots\times D^{(1)}_{y_{t}}$ is a one-of-four subuniverse of $D_{y_{1}}\times\dots\times D_{y_{t}}$ of the same type as the reduction $D^{(1)}$. Let $$R'(Y,Y',Y'') =\exists x_{1}\exists x_{2}\;R(a,x_{1},Y) \wedge R(x_{1},x_{2},Y') \wedge R(x_{2},b,Y'')\wedge x_{1}\in D_{x_{1}}^{(1)}\wedge x_{2}\in D_{x_{2}}^{(1)}.$$ Since $\Theta(x_{1},x_{2})$ contains $(a,b)$ and $\Theta^{(1)}(x_{1},x_{2})$ defines a reflexive relation, there exist $B_{1},B_{1}'\in G$ and some $B_{1}''$ such that $(B_1,B_{1}',B_{1}'')\in R'$ (put $x_{1}=x_{2}= a$). Similarly, there exist $B_{2}',B_{2}''\in G$ and some $B_{2}$ such that $(B_2,B_{2}',B_{2}'')\in R'$ (put $x_{1}=x_{2}= b$), and $B_{3},B_{3}''\in G$ and some $B_{3}'$ such that $(B_3,B_{3}',B_{3}'')\in R'$ (put $x_{1}=a$, $x_{2}= b$). By Lemma~\ref{BinAbsLessThanTwoCorollary} and Corollary~\ref{CenterLessThanThree} $R'$ cannot be $(G,G,G)$-essential, which means that $R'\cap (G\times G\times G)\neq \varnothing$. Hence $a$ and $b$ are linked (by a path of length 3) in $\Theta^{(1)}(x_{1},x_{2})$.
\end{proof} \begin{lem}\label{LinkedStayLinkedForPC} Suppose $D^{(1)}$ is a minimal PC reduction for $\Theta$, the solution set of $\Theta$ is subdirect, $D_{x_{1}} = D_{x_{2}}$, $D^{(1)}_{x_{1}} = D^{(1)}_{x_{2}}$, both $\Theta(x_1,x_2)$ and $\Theta^{(1)}(x_{1},x_{2})$ define reflexive symmetric relations, and $\Theta(x_1,x_2)$ contains $(a,b)\in D_{x_1}^{(1)}\times D_{x_2}^{(1)}$. Then $a$ and $b$ are linked in the relation defined by $\Theta^{(1)}(x_{1},x_{2})$. \end{lem} \begin{proof} Suppose $y$ is a variable of $\Theta$ and $\sigma$ is a PC congruence on $D_{y}$. Consider a relation $\rho\subseteq D_{x_{1}}\times D_{y}/\sigma$ consisting of all the tuples $(c,C)$ such that there exists a solution of $\Theta$ with $x_{1} = c$ and $y\in C$. Since there is no nontrivial binary absorbing subuniverse or center on $D_{x_{1}}$, by Lemma~\ref{PCRelationsLem}, either $\rho$ is a full relation, or $\ConOne(\rho,2)$ is the equality relation. In the first case, no matter which value we substitute for the variable $x_{1}$, the variable $y$ can be in any equivalence class of $\sigma$. In the second case the equivalence class is uniquely determined by the value of $x_{1}$. Moreover, since $D_{x_{1}}^{(1)}$ is a minimal PC subuniverse, the equivalence class is the same for all elements of $D_{x_{1}}^{(1)}$. Since $\Theta^{(1)}$ has a solution, this equivalence class is the class containing $D_{y}^{(1)}$. Later, we will specify whether a PC congruence is of the \emph{first type} (from the first case) or of the \emph{second type} (from the second case). Let $\Var(\Theta) = \{x_1,x_2,y_1,\dots,y_t\}$. Let $R$ be the relation defined by $\Theta(x_{1},x_{2},y_{1},\ldots,y_{t})$.
By $\Upsilon$ we denote the following formula $$R(a,x_2,y_1,\dots,y_{t}) \wedge R(x_{1},x_2,y_1',\dots,y_{t}') \wedge R(x_{1},x_{2}',z_1,\dots,z_{t}) \wedge R(b,x_2',z_{1}',\dots,z_{t}') \wedge x_{1}\in D^{(1)}_{x_{1}}.$$ Consider a congruence $\sigma$ of the first type on the domain of any variable $y$ of $\Upsilon$. Assume that $y\in \{x_{2},y_{1},\dots,y_{t},y_1',\dots,y_{t}'\}$. It follows from the definition of the first type that for any equivalence class $E$ of $\sigma$ there exists a solution of $\Upsilon$ such that $x_{1} = a$, $x_{2}' = b$, $y_{i} = y_{i}'$ for every $i$, and $y\in E$. Similarly, assume that $y\in \{x_{2}',z_{1},\dots,z_{t},z_1',\dots,z_{t}'\}$. For any equivalence class $E$ of $\sigma$ there exists a solution of $\Upsilon$ such that $x_{1} = x_{2} = b$, $z_{i} = z_{i}'$ for every $i$, and $y\in E$. Thus, we showed that in both cases $\Upsilon(y)/\sigma\cong D_{y}/\sigma$. Let $E$ be the equivalence class of $\sigma$ containing $D_{y}^{(1)}$. By $\delta$ we denote the extension of $\sigma$ onto the solution set of $\Upsilon$, and by $E_{\sigma}$ we denote the equivalence class of $\delta$ corresponding to $E$. Since $\Upsilon(y)/\sigma\cong D_{y}/\sigma$ and $D_{y}/\sigma$ is a PC algebra without a nontrivial binary absorbing subuniverse or center, $E_{\sigma}$ is a PC subuniverse of the solution set of $\Upsilon$. Consider the intersection of $E_{\sigma}$ for all PC congruences $\sigma$ of the first type. If this intersection is not empty, then there exists a solution of $\Upsilon$ such that any element of this solution is in the equivalence class containing $D_{y}^{(1)}$ for any PC congruence of the first type. Since $a,b\in D_{x_{1}}^{(1)}$ and $x_{1}\in D^{(1)}_{x_{1}}$ in the definition of $\Upsilon$, the same is true for any PC congruence of the second type.
Since $D_{y}^{(1)}$ is the intersection of all equivalence classes containing $D^{(1)}_{y}$ of all PC congruences for any variable $y$, the solution is in $D^{(1)}$, which means that $a$ and $b$ are linked in $\Theta^{(1)}(x_{1},x_{2})$. Assume that the intersection of $E_{\sigma}$ for all PC congruences $\sigma$ of the first type is empty. By Theorem~\ref{PCBint} there should be two congruences $\sigma$ and $\sigma'$ such that $E_{\sigma}\cap E_{\sigma'} = \varnothing.$ Let $y$ and $y'$ be the variables of $\Upsilon$ corresponding to $\sigma$ and $\sigma'$. Consider several cases. Case 1. $y,y'\in\{x_{2},x_{2}',y_{1}',\dots,y_{t}',z_{1},\dots,z_{t},z_{1}',\dots,z_{t}'\}$. Since $\Theta^{(1)}(x_{1},x_{2})$ defines a reflexive relation, $\Upsilon$ has a solution with $x_{2} = x_{1} = x_{2}' = b$ and all the variables $y_{1}',\dots,y_{t}',z_{1},\dots,z_{t},z_{1}',\dots,z_{t}'$ are from $D^{(1)}$. This contradicts the fact that $E_{\sigma}\cap E_{\sigma'} = \varnothing.$ Case 2. $y,y'\in\{x_{2},x_{2}',y_{1},\dots,y_{t},y_{1}',\dots,y_{t}',z_{1},\dots,z_{t}\}$. Similarly, $\Upsilon$ has a solution with $x_{1} = x_{2} = x_{2}' = a$ and all the variables $y_{1},\dots,y_{t},y_{1}',\dots,y_{t}',z_{1},\dots,z_{t}$ are from $D^{(1)}$. This contradicts the fact that $E_{\sigma}\cap E_{\sigma'} = \varnothing.$ Case 3. $y\in \{y_{1},\dots,y_{t}\}$, $y'\in\{z_{1}',\dots,z_{t}'\}$. Similarly, $\Upsilon$ has a solution with $x_{1} = x_{2} =a$, $x_{2}' = b$ and all the variables $y_{1},\dots,y_{t},y_{1}',\dots,y_{t}',z_{1}',\dots,z_{t}'$ are from $D^{(1)}$. Again, this contradicts the fact that $E_{\sigma}\cap E_{\sigma'} = \varnothing.$ \end{proof} \begin{lem}\label{LinkedStayLinked} Suppose $D^{(1)}$ is a minimal nonlinear reduction for $\Theta$, the solution set of $\Theta$ is subdirect, $\Theta^{(1)}$ is not empty, $\Theta(x_1,x_2)$ defines a relation containing $(a,b)\in D_{x_1}^{(1)}\times D_{x_2}^{(1)}$. 
Then $a$ and $b$ are linked in the relation defined by $\Theta^{(1)}(x_{1},x_{2})$. \end{lem} \begin{proof} By Lemma~\ref{ProperReductionPreservesSubdirectness}, the solution set of $\Theta^{(1)}$ is subdirect. Let $\Var(\Theta) = \{x_{1},x_{2},y_{1},\dots,y_{t}\}$ and let $R$ be the relation defined by $\Theta(x_{1},x_{2},y_{1},\dots,y_{t})$. Let $\Omega$ be the following instance $$R(x_{1},x_{2},y_{1},\dots,y_{t}) \wedge R(x_{1}',x_{2},y_{1}',\dots,y_{t}').$$ Since the solution sets of $\Theta$ and $\Theta^{(1)}$ are subdirect, the solution sets of $\Omega$ and $\Omega^{(1)}$ are also subdirect. Also, there should be a solution of $\Theta^{(1)}$ with $x_{2} = b$. Let $x_{1} = b'$ in this solution. Then $\Omega(x_{1},x_{1}')$ contains $(a,b')$. Since both $\Omega(x_{1},x_{1}')$ and $\Omega^{(1)}(x_{1},x_{1}')$ define symmetric reflexive relations, Lemmas~\ref{LinkedStayLinkedForAC} and \ref{LinkedStayLinkedForPC} imply that $a$ and $b'$ are linked in $\Omega^{(1)}(x_{1},x_{1}')$. Since $(b',b)$ is in $\Theta^{(1)}(x_{1},x_{2})$, we derive that $a$ and $b$ are linked in $\Theta^{(1)}(x_{1},x_{2})$, which completes the proof. \end{proof} \subsection{Properties of $\ConOne(\rho,x)$} \begin{lem}\label{RectangularCriticalArityTwo} Suppose $\rho$ is a critical rectangular relation of arity $n\geqslant 2$ and $\rho'$ is the cover of $\rho$. Then $\ConOne(\rho',1)\supsetneq\ConOne(\rho,1)$; for $n>2$ we also have $\ConOne(\proj_{1,2}(\rho),1)\supsetneq\ConOne(\rho,1)$. \end{lem} \begin{proof} For every $i\in\{1,2,\dots,n\}$ we define $\rho_{i}(x_{1},\ldots,x_{n}) = \exists x_{i}' \rho(x_{1},\ldots,x_{i-1},x_{i}',x_{i+1},\dots,x_{n})$. Since $\rho$ is critical, it has no dummy variables; therefore $\rho\subsetneq\rho_{i}$ for every $i$. Also $\rho\subsetneq \bigcap_{i} \rho_{i}$. Choose a tuple $(a_{1},\ldots,a_{n})\in \rho'\setminus \rho$. Since $\rho'$ is the cover of $\rho$, we have $\rho'\subseteq\bigcap_{i} \rho_{i}$.
Since this tuple is in $\rho_{i}$ for every $i$, there is $b_{i}$ such that $(a_{1},\ldots,a_{i-1},b_{i},a_{i+1},\dots,a_{n})\in\rho$. Then $(a_{1},b_{1})\in\ConOne(\rho',1)$, which, by the rectangularity of $\rho$, means that $(a_{1},b_{1})\notin\ConOne(\rho,1)$ (otherwise $(b_{1},a_{2},\dots,a_{n})\in\rho$ would give $(a_{1},\ldots,a_{n})\in\rho$), and hence $\ConOne(\rho',1)\supsetneq\ConOne(\rho,1)$. For $n>2$ we have $(b_{1},a_2,\dots,a_{n}), (a_{1},\dots,a_{n-1},b_{n})\in \rho$, hence $(a_{1},b_{1})\in\ConOne(\proj_{1,2}(\rho),1)$ and therefore $\ConOne(\proj_{1,2}(\rho),1)\supsetneq\ConOne(\rho,1)$. \end{proof} \begin{lem}\label{CriticalMeansIrreducible} Suppose $\rho$ is a critical subdirect relation and the $i$-th variable of $\rho$ is rectangular. Then $\ConOne(\rho,i)$ is an irreducible congruence. \end{lem} \begin{proof} To simplify notation assume that $i=1$. Put $\sigma=\ConOne(\rho,1)$. As we mentioned in Section~\ref{DefinitionRectangularitySubsection}, $\sigma$ should be a congruence. Assume that it is not an irreducible congruence. Consider binary relations $\delta_{1},\ldots,\delta_{s} \supsetneq\sigma$ stable under $\sigma$ such that $\delta_{1}\cap\dots\cap\delta_{s} = \sigma$. Put $$\rho_{j}(x_{1},\ldots,x_{n})=\exists x_{1}' \;\rho(x_{1}',x_{2},\ldots,x_{n})\wedge \delta_{j}(x_{1},x_{1}').$$ Consider a tuple $(x_1,\dots,x_{n})$ in the intersection of $\rho_{1},\ldots,\rho_{s}$. Since $\delta_{j}$ is stable under $\sigma=\ConOne(\rho,1)$, we may assume that $x_{1}'$ takes the same value in the definition of every $\rho_{j}$. Then $(x_{1},x_{1}')$ should be in $\delta_{j}$ for every $j$, which implies that $(x_{1},x_{1}')\in\sigma$ and $(x_{1},\dots,x_{n})\in\rho$. Hence, the intersection of $\rho_{1},\dots,\rho_{s}$ equals $\rho$. Since $\rho\subsetneq \rho_{j}$ for every $j$, this contradicts the fact that $\rho$ is critical. \end{proof} For a relation $\rho$ of arity $n$, by $\VPol(\rho)$ we denote the set of all unary vector-functions preserving the relation $\rho$.
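To illustrate the definition in the smallest case (spelling out the coordinatewise action of a unary vector-function $f=(f_{1},\dots,f_{n})$ with $f_{i}\colon D_{x_{i}}\to D_{x_{i}}$), for a binary relation $\rho\subseteq D_{x_{1}}\times D_{x_{2}}$ we have
$$f = (f_{1},f_{2})\in\VPol(\rho) \;\Leftrightarrow\; (f_{1}(a_{1}),f_{2}(a_{2}))\in\rho \text{ for every } (a_{1},a_{2})\in\rho.$$
In particular, the identity vector-function and, for every tuple $\gamma\in\rho$, the constant vector-function sending everything to $\gamma$ belong to $\VPol(\rho)$.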
\begin{lem}\label{KeyDerivation} Suppose a pp-formula $\Omega(x_{1},\ldots,x_{n})$ defines a relation $\rho$, $\alpha\in D_{x_{1}}\times\dots\times D_{x_{n}}$, and $\rho' = \{f(\alpha)\mid f\in \VPol(\rho)\}$. Then there exists $\Omega'\in\ExpShort(\Omega)$ such that $\Omega'(x_{1},\ldots,x_{n})$ defines~$\rho'$. \end{lem} \begin{proof} Suppose $\alpha = (a_{1},\ldots,a_{n})$. We introduce new variables $x_{i}^{a}$ for every $i\in\{1,2,\ldots,n\}$ and $a\in D_{x_{i}}$. By $\Upsilon$ we denote the following formula $\bigwedge\limits_{(b_{1},\ldots,b_{n})\in\rho} \rho(x_{1}^{b_{1}},\ldots,x_{n}^{b_{n}}).$ This formula can be understood in the following way. If we encode a unary vector-function by variables so that $f(b_{1},\ldots,b_{n}) = (x_1^{b_{1}},\dots,x_n^{b_{n}})$ for every $b_{1},\ldots,b_{n}$, then the formula says that the vector-function preserves $\rho$. Then $\rho'$ can be defined by a pp-formula $\Upsilon(x_{1}^{a_{1}},\ldots,x_{n}^{a_{n}})$. To obtain the formula $\Omega'$ it is sufficient to replace each $\rho(x_{1}^{b_{1}},\ldots,x_{n}^{b_{n}})$ by a copy of $\Omega$ (replacing $x_{1},\dots,x_{n}$ with $x_{1}^{b_{1}},\dots,x_{n}^{b_{n}}$) and then replace $x_{1}^{a_{1}},\dots,x_{n}^{a_{n}}$ with $x_{1},\dots,x_{n}$. \end{proof} \begin{conslem}\label{MaximalMeansKey} Suppose a pp-formula $\Omega(x_{1},\ldots,x_{n})$ defines a relation that does not contain a tuple $\alpha\in D_{x_{1}}\times\dots\times D_{x_{n}}$, $\Sigma$ is the set of all relations defined by $\Upsilon(x_{1},\ldots,x_{n})$ where $\Upsilon\in\ExpShort(\Omega)$, and $\rho$ is an inclusion-maximal relation in $\Sigma$ without the tuple $\alpha$. Then $\alpha$ is a key tuple for $\rho$. \end{conslem} \begin{proof} For every tuple $\beta\notin\rho$ we consider $\rho_{\beta} := \{f(\beta)\mid f\in \VPol(\rho)\}$. Since $f$ can be a constant mapping to any tuple from $\rho$, we have $\rho\subseteq\rho_{\beta}$; since $f$ can also be the identity, $\beta\in\rho_{\beta}$, and hence $\rho_{\beta}\supsetneq \rho$ for every $\beta$.
By Lemma~\ref{KeyDerivation}, $\rho_{\beta}$ should be in $\Sigma$. Since $\rho$ is inclusion-maximal, $\alpha\in\rho_{\beta}$. Therefore, any $\beta$ can be mapped to $\alpha$ by a unary vector-function preserving $\rho$, which means that $\alpha$ is a key tuple for $\rho$. \end{proof} The next lemma shows that we can apply the operation $\ConOne$ and a nonlinear reduction $D^{(1)}$ to a pp-formula $\Upsilon(x_{1},\dots,x_{n})$ in either order; the result will be the same. For the linear reduction a slight modification of the statement is required (see Lemma~\ref{AddLinearVariables}). \begin{lem}\label{SameConOneForNonlinear} Suppose $D^{(1)}$ is a minimal nonlinear reduction for an instance $\Upsilon$, the solution set of $\Upsilon$ is subdirect, and $\Upsilon^{(1)}(x_{1},\ldots,x_{n})$ defines a subdirect rectangular relation. Then for every $i$ $$ (\ConOne(\Upsilon(x_{1},\ldots,x_{n}),i))^{(1)}= \ConOne(\Upsilon^{(1)}(x_{1},\ldots,x_{n}),i). $$ \end{lem} \begin{proof} Put $\sigma_{0} = \ConOne(\Upsilon(x_{1},\ldots,x_{n}),i)$, $\sigma_{1} = \ConOne(\Upsilon^{(1)}(x_{1},\ldots,x_{n}),i)$. Let $\{x_{1},\ldots,x_{n},y_{1},\ldots,y_{s}\}$ be the set of all variables of $\Upsilon$. Let $\Xi = \Upsilon\wedge \Upsilon_{x_{i},y_{1},\ldots,y_{s}}^{x_{i}',y_{1}',\ldots,y_{s}'}$. We can check that $\sigma_{0}$ is defined by $\Xi(x_{i},x_{i}')$, and $\sigma_{1}$ is defined by $\Xi^{(1)}(x_{i},x_{i}')$. Since $\Upsilon^{(1)}(x_{1},\ldots,x_{n})$ defines a rectangular relation, $\sigma_{1}$ is a congruence. It follows from the definition that $\sigma_{0}^{(1)}\supseteq\sigma_{1}$. Let us prove the backward inclusion. Choose a pair $(a,b)\in \sigma_{0}^{(1)}$. Since $\sigma_{0}$ is defined by $\Xi(x_{i},x_{i}')$, by Lemma~\ref{LinkedStayLinked}, $a$ and $b$ should be linked in $\Xi^{(1)}(x_{i},x_{i}')$. Since $\sigma_{1}$ is a congruence, $a$ and $b$ can be linked only if $(a,b)\in\sigma_{1}$, which means that $\sigma_{0}^{(1)} = \sigma_{1}$.
\end{proof} \begin{lem}\label{AddLinearVariables} Suppose $D^{(1)}$ is a minimal linear reduction for $\Upsilon$, $\Upsilon^{(1)}(x_{1},\ldots,x_{n})$ defines a subdirect rectangular relation, $\Var(\Upsilon) = \{x_{1},\ldots,x_{n},v_{1},\ldots,v_{r}\}$, and $\Omega = \Upsilon \wedge \bigwedge_{i=1}^{r} \sigma_{i}(v_{i},u_{i})$, where $\sigma_{i}=\ConLin(D_{v_{i}})$. Then $(\ConOne(\Omega(x_{1},\ldots,x_{n},u_{1},\ldots,u_{r}), j))^{(1)} = \ConOne(\Upsilon^{(1)}(x_{1},\ldots,x_{n}),j)$ for every~$j$. \end{lem} \begin{proof} Without loss of generality assume that $j=1$. Since the reduction $D^{(1)}$ is minimal, we have the following inclusion $$(\ConOne(\Omega(x_{1},\ldots,x_{n},u_{1},\ldots,u_{r}), 1))^{(1)} \supseteq \ConOne(\Upsilon^{(1)}(x_{1},\ldots,x_{n}), 1). $$ Let us prove the backward inclusion. Suppose $\Omega(x_{1},\ldots,x_{n},u_{1},\ldots,u_{r})$ and $\Upsilon^{(1)}(x_{1},\ldots,x_{n})$ define the relations $\rho'$ and $\rho$, respectively. Choose $a,b\in D_{x_{1}}^{(1)}$ such that $(a,b)\in \ConOne(\rho',1)$. For some $\beta$ we have $a\beta,b\beta\in\rho'$. Since $\rho$ is subdirect, there exist $\alpha_{a}$ and $\alpha_{b}$ in $D^{(1)}$ such that $a\alpha_{a},b\alpha_{b}\in\rho'$. Since $w$ preserves $\rho'$, \begin{align*}w(a,a,\ldots,a) w(\alpha_{a},\beta,\ldots,\beta) \in\rho',\\ w(a,b,\ldots,b) w(\alpha_{a},\beta,\ldots,\beta) \in\rho',\\ w(b,\ldots,b,a) w(\alpha_{b},\beta,\ldots,\beta) \in\rho',\\ w(b,b,\ldots,b) w(\alpha_{b},\beta,\ldots,\beta) \in\rho'. \end{align*} By Lemma~\ref{LinearSpecialWNU}, $w(\alpha_{a},\beta,\ldots,\beta)$ and $w(\alpha_{b},\beta,\ldots,\beta)$ belong to $D^{(1)}$. Then, for $c= w(a,b,\ldots,b)=w(b,\ldots,b,a)$ we have $(a,c),(c,b)\in \ConOne(\rho,1)$. Since $\rho$ is rectangular, we have $(a,b)\in \ConOne(\rho,1)$. \end{proof} \subsection{Adding a linear variable} Below we formulate a few statements from \cite{KeyRelations} that will help us to prove the main property of a bridge.
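To fix ideas, here is a prototypical example of the objects involved (it is exactly the situation that emerges in the proof of Theorem~\ref{LinkedBridgeThm} below): let $A=\mathbb Z_{p}$, let $\sigma$ be the equality relation, and put
$$\rho = \{(a_1,a_2,b_1,b_2)\mid a_{1}-a_{2}-b_{1}+b_{2} = 0\},\qquad \zeta = \{(b_1,b_2,b_{3})\mid b_{1}-b_{2}+b_{3}=0\}.$$
For every tuple of $\rho$ we have $(a_{1},a_{2})\in\sigma\Leftrightarrow a_{1}=a_{2}\Leftrightarrow b_{1}=b_{2}\Leftrightarrow(b_{1},b_{2})\in\sigma$, and $\rho(x,x,y,y)$ holds for all $x,y$, so $\rho$ is a bridge from $\sigma$ to $\sigma$ with $\widetilde{\rho}$ full; the relation $\zeta$ witnesses the conclusion of Theorem~\ref{LinkedBridgeThm}, since $\proj_{1,2}\zeta = A\times A$ and $(b_{1},b_{2},b_{3})\in\zeta$ implies $(b_{1},b_{2})\in \sigma\Leftrightarrow (b_{3}=0)$.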
This property will be the main ingredient of the proof of the fact that $A'$ from the informal description of the algorithm should be of codimension 1. A relation $\rho\subseteq A^{n}$ is called \emph{strongly rich} if for every tuple $(a_{1},\ldots,a_{n})$ and every $j\in \{1,\ldots,n\}$ there exists a unique $b\in A$ such that $(a_{1},\ldots,a_{j-1},b,a_{j+1},\ldots,a_n)\in\rho.$ We will need two statements from \cite{KeyRelations}. Recall that for any bridge $\rho$ by $\widetilde{\rho}$ we denote the binary relation defined by $\widetilde{\rho}(x,y) = \rho(x,x,y,y)$. \begin{thm}\label{StronglyRichRelationTHM}\cite{KeyRelations} Suppose $\rho\subseteq A^{n}$ is a strongly rich relation preserved by an idempotent WNU. Then there exists an abelian group $(A;+)$ and bijective mappings $\phi_1$, $\phi_2$, \ldots,$\phi_n: A\to A$ such that \[\rho = \{(x_1,\ldots,x_n)\mid \phi_1(x_1)+\phi_2(x_2) + \ldots +\phi_n(x_n) = 0\}.\] \end{thm} \begin{lem}\label{LinearWNU}\cite{KeyRelations} Suppose $(G;+)$ is a finite abelian group, the relation $\sigma\subseteq G^{4}$ is defined by $\sigma = \{(a_1,a_2,a_3,a_4)\mid a_1+a_2=a_3+a_4\}$, and $\sigma$ is preserved by an idempotent WNU $f$. Then $f(x_{1},\ldots,x_{n}) = t\cdot x_{1}+t\cdot x_2 + \ldots + t\cdot x_{n}$ for some $t\in \{1,2,3,\ldots\}$. \end{lem} \begin{thm}\label{LinkedBridgeThm} Suppose $\sigma\subseteq A^{2}$ is a congruence, $\rho$ is a bridge from $\sigma$ to $\sigma$ such that $\widetilde{\rho}$ is a full relation, $\proj_{1,2}(\rho) = \omega$, $\omega$ is a minimal relation stable under $\sigma$ such that $\omega\supsetneq \sigma$. Then there exists a prime number $p$ and a relation $\zeta\subseteq A\times A\times \mathbb Z_{p}$ such that $\proj_{1,2}\zeta = \omega$ and $(a_{1},a_{2},b)\in\zeta$ implies that $(a_{1},a_{2})\in \sigma\Leftrightarrow (b=0)$. 
\end{thm} \begin{proof} Since the relations $\rho$ and $\omega$ are stable under $\sigma$, we consider $A/\sigma$ instead of $A$ and assume that $\sigma$ is the equality relation. Without loss of generality we assume that $\rho(x_{1},x_{2},y_{1},y_{2}) = \rho(y_{1},y_{2},x_{1},x_{2})$ and $(a,b,a,b)\in\rho$ for any $(a,b)\in\omega$. Otherwise, we consider the relation $\rho'$ instead of $\rho$, where $$\rho'(x_{1},x_{2},y_{1},y_{2}) = \exists z_{1}\exists z_{2}\; \rho(x_{1},x_{2},z_{1},z_{2}) \wedge \rho(y_{1},y_{2},z_{1},z_{2}).$$ We proceed by induction on the size of $A$. Assume that for some subuniverse $A'\subsetneq A$ we have $(A'\times A')\cap (\omega\setminus \sigma) \neq \varnothing$. By $\sigma'$ we denote the equality relation on $A'$. By $\omega'$ we denote a minimal relation such that $\sigma'\subsetneq\omega'\subseteq (A'\times A')\cap \omega$. Since $\proj_{1,2}(\rho\cap (\omega'\times\omega')) =\omega'\supsetneq \sigma'$, the relation $\rho\cap (\omega'\times\omega')$ is a bridge from $\sigma'$ to $\sigma'$. The inductive assumption for $\rho\cap (\omega'\times\omega')$ implies that there exists a relation $\zeta'\subseteq A'\times A'\times \mathbb Z_{p}$ such that $(x_{1},x_{2},0)\in \zeta'\Leftrightarrow (x_{1},x_{2})\in\sigma'$ and $\proj_{1,2}(\zeta') = \omega'$. Put $$\zeta(x_{1},x_{2},z) = \exists y_{1} \exists y_{2} \;\rho(x_{1},x_{2},y_{1},y_{2})\wedge \zeta'(y_1,y_2,z).$$ By the minimality of $\omega$, we have $\proj_{1,2}(\zeta)=\omega$. The remaining property of $\zeta$ follows from the fact that $\rho$ is a bridge and the properties of $\zeta'$. Thus, we may assume that for any subuniverse $A'\subsetneq A$ we have $(A'\times A')\cap (\omega\setminus\sigma)= \varnothing$. Consider a pair $(a_{1},a_{2})\in \omega\setminus\sigma$. Let $A' = \{a\mid (a_{1},a)\in\omega\}$. Since $\omega\supsetneq \sigma$, we have $a_{1}\in A'$, and therefore $(a_{1},a_{2})\in(A'\times A')\cap (\omega\setminus\sigma) \neq\varnothing$ and $A'=A$.
Thus, $\{a\mid (a_{1},a)\in\omega\} =\{a \mid (a,a_{2})\in\omega\} = A$. Hence, any element connected in $\omega$ to some other element is connected to all elements. Therefore, $(a_{1},a),(a,a_{2})\in\omega$ for every $a\in A\setminus\{a_{1},a_{2}\}$, which for $|A|>2$ implies that $\omega = A\times A$. If $|A|=2$ and $\omega \neq A\times A$ then $\omega = \{(a,a),(a,b),(b,b)\}$ and $\rho$ is uniquely defined. We know \cite{Post} that any clone on a 2-element domain containing an idempotent WNU operation contains a majority operation, conjunction, disjunction, or a minority operation. None of them preserve $\rho$, which contradicts our assumptions. Thus, we proved that $\omega = A\times A$ and $A$ has no proper subuniverses of size at least 2. Note that the remaining part of the proof could also be derived from known facts of commutator theory. In fact, it follows from the properties of $\rho$ that $\sigma$ (the equality) is an equivalence block of a congruence on $A^{2}$, which means that $A$ is Abelian. Using Abelianness for Taylor varieties (since we have a WNU), we could also define the required ternary relation $\zeta$ (see \cite{bergman2011universal} for more details). Nevertheless, we do not want to introduce new algebraic notions, and we give a proof based on two claims from \cite{KeyRelations}. Let us show that for any $a_{1},a_{2},a_{3}\in A$ there exists a unique $a_{4}$ such that $(a_{1},a_{2},a_{3},a_{4})\in \rho$. For every $a\in A$ put $\lambda_{a}(x_1,x_2) = \exists y_2 \rho(x_1,x_2,a,y_{2})$. It is easy to see that $\sigma\subsetneq\lambda_{a}\subseteq\omega$. Therefore $\lambda_{a}=\omega = A\times A$ for every $a$. We consider the unary relation defined by $\delta(x) = \rho(a_{1},a_{2},a_{3},x)$. By the above fact, $\delta$ is not empty. Since $\rho$ is a bridge, $\delta$ is not full. If $\delta$ contains more than one element, then we get a contradiction with the fact that there are no proper subuniverses of size at least 2.
Then $\rho$ is a strongly rich relation. By Theorem~\ref{StronglyRichRelationTHM}, there exist an Abelian group $(A;+)$ and bijective mappings $\phi_1, \phi_2, \phi_3,\phi_4\colon A\to A$ such that $$\rho = \{(a_1,a_2,b_1,b_2)\mid \phi_1(a_1)+\phi_2(a_2) + \phi_{3}(b_{1}) +\phi_4(b_2) = 0\}.$$ Without loss of generality we can assume that $\phi_{1}(x) = x$. We know that $(a,a,b,b)\in\rho$ for any $a,b\in A$, hence $\phi_{1}(x)+\phi_{2}(x)+\phi_{3}(0)+\phi_{4}(0)=0$, which means that $\phi_{2}(x) = -x - \phi_{3}(0)-\phi_{4}(0)$. Since $(a,b,a,b)\in\rho$ for any $a,b\in A$, we have $\phi_{1}(x)+\phi_{2}(0)+\phi_{3}(x)+\phi_{4}(0) = 0$, which means that $\phi_{3}(x) = -\phi_{1}(x) - \phi_{2}(0)-\phi_{4}(0) =-x+\phi_{3}(0)$. Similarly, since $\phi_{1}(0)+\phi_{2}(0)+\phi_{3}(x)+\phi_{4}(x)=0$, we have $\phi_{4}(x) = x-\phi_{3}(0)-\phi_{2}(0)-\phi_{1}(0) = x+\phi_{4}(0)$. Substituting this into the definition of $\rho$ we obtain $$\rho = \{(a_1,a_2,b_1,b_2)\mid a_{1}-a_{2}-b_{1}+b_{2} = 0\}.$$ It follows from Lemma~\ref{LinearWNU} that $w$ on $A$ is defined by $t(x_{1}+\ldots+x_{m})$. Since $w$ is special, $t\cdot (t-1)$ must be divisible by the order of any element of $A$. By the idempotency, $t$ and the order of any element are coprime. Hence, $t-1$ must be divisible by the order of any element, and we may put $t=1$. Therefore, the relation $\zeta\subseteq A\times A\times A$ defined by $\zeta = \{(b_1,b_2,b_{3})\mid b_{1}-b_{2}+b_{3}=0\}$ is preserved by $w$. If $(A;+)$ is not simple, then any equivalence class of a congruence is a proper subuniverse of size at least 2, which contradicts our assumption. Therefore, $(A;+)$ is a simple Abelian group. \end{proof} \begin{cons}\label{LinkedLink} Suppose $\sigma\subseteq A^{2}$ is an irreducible congruence and $\rho$ is a bridge from $\sigma$ to $\sigma$ such that $\widetilde{\rho}$ is a full relation.
Then there exists a prime number $p$ and a relation $\zeta\subseteq A\times A\times \mathbb Z_{p}$ such that $\proj_{1,2}\zeta = \sigma^{*}$ and $(a_{1},a_{2},b)\in\zeta$ implies that $(a_{1},a_{2})\in \sigma\Leftrightarrow (b=0)$. \end{cons} \begin{lem}\label{ReflexiveBridgeProperty} Suppose $\rho\subseteq A^{4}$ is an optimal bridge from $\sigma_{1}$ to $\sigma_{2}$, and $\sigma_{1}$ and $\sigma_{2}$ are different irreducible congruences. Then $\widetilde \rho\supsetneq\sigma_{2}$. \end{lem} \begin{proof} Since the first two variables are stable under $\sigma_{1}$ and the last two variables are stable under $\sigma_2$, we have $\sigma_{1}\subseteq \widetilde\rho$ and $\sigma_{2}\subseteq \widetilde\rho$. Assume that the lemma does not hold; then $\widetilde{\rho}=\sigma_{2}$. Since $\sigma_{1}$ and $\sigma_{2}$ are different, we obtain $\sigma_{1}\subsetneq\sigma_{2}$. First, we want $(a,d)$ to be from $\sigma_{2}$ for every $(a,b,c,d)\in \rho$. Put $\rho_{1}(x_{1},x_{2},y_{1},y_{2}) = \rho(x_{1},x_{2},y_{1},y_{2})\wedge \sigma_{2}(x_{1},y_{2})$. If $\rho_{1}$ is a bridge then we replace $\rho$ by $\rho_{1}$. Assume that $\rho_{1}$ is not a bridge; then for every $(a,b,c,d)\in \rho$ with $(a,d)\in \sigma_{2}$ we have $(a,b)\in\sigma_{1}$. Put $\rho_{2}(x_{1},x_{2},y_{1},y_{2}) = \exists z\; \rho(x_{1},x_{2},z,y_{1})\wedge \sigma_{2}(x_{1},y_{2})$. Let us show that $\rho_{2}$ is a bridge. Suppose $(x_{1},x_{2})\in\sigma_{1}$ and $(x_1,x_2,y_1,y_2)\in\rho_{2}$. Then $(x_{1},x_{2},z,y_{1})\in\rho$ for some $z$. Since $\rho$ is a bridge from $\sigma_1$ to $\sigma_2$, this implies that $(z,y_{1})\in\sigma_2$ and $(x_1,y_1)\in\widetilde\rho$. Since $\widetilde \rho=\sigma_{2}$, we have $(x_{1},y_{1})\in\sigma_{2}$ and therefore $(y_1,y_{2})\in\sigma_{2}$. If $(y_{1},y_2)\in\sigma_{2}$ and $(x_1,x_2,y_1,y_2)\in\rho_{2}$, then $(x_{1},y_{1})\in\sigma_{2}$ and by the above assumption we have $(x_{1},x_{2})\in \sigma_{1}$.
It remains to show that $\proj_{1,2}(\rho_{2})\supsetneq \sigma_{1}$. Consider any tuple $(a_{1},a_{2},a_{3},a_{4})\in\rho$ such that $(a_{1},a_{2})\notin\sigma_{1}$; then $(a_{1},a_{2},a_{4},a_{1})\in\rho_{2}$. Thus, $\rho_{2}$ is a bridge with the required property, so we replace $\rho$ by $\rho_{2}$. Second, we want $\proj_{1,2}(\rho)$ to be equal to $\sigma_{1}^{*}$, and $\proj_{3,4}(\rho)$ to be equal to $\sigma_{2}^{*}$. To achieve this we replace $\rho$ by the relation defined by $\rho(x_{1},x_{2},y_{1},y_{2})\wedge \sigma_{1}^{*}(x_{1},x_{2})\wedge \sigma_{2}^{*}(y_{1},y_{2})$, which has the same properties. Recall that \emph{a polynomial} is an operation that can be defined by a term over the basic operations of an algebra and constant operations. In our case, a polynomial is an operation defined by a term over the WNU $w$ and constants. Let $D$ be a minimal subset (not necessarily a subuniverse) of $A$ such that \begin{enumerate} \item there exists a unary polynomial $h$ such that $h(h(x)) = h(x)$ and $h(A) = D$, and \item $(\sigma_{2}^{*}\setminus \sigma_{2})\cap D^{2}\neq \varnothing.$ \end{enumerate} Since constants preserve a reflexive bridge $\rho$ and congruences $\sigma_{1}$ and $\sigma_{2}$, the unary polynomial $h$ also preserves $\rho$, $\sigma_{1}$ and $\sigma_{2}$. It is not hard to see that $h(w(x_{1},\ldots,x_{m}))$ is an idempotent WNU on $D$; by $w^{D}$ we then denote a special WNU on $D$ that can be derived from the idempotent WNU on $D$. For any relation $\delta$, by $\delta^{D}$ we denote its restriction to $D$ (that is, $h(\delta)$). It is not hard to see that $\rho^{D}$ is a bridge from $\sigma_{1}^{D}$ to $\sigma_{2}^{D}$. The idea of the proof is to define a bridge $\epsilon^{D}$ from $\sigma_{2}^{D}$ to $\sigma_{2}^{D}$ such that $\widetilde{\epsilon^{D}}\not\subseteq \sigma_{2}$.
Then we define a bridge $\epsilon$ from $\sigma_{2}$ to $\sigma_2$ having the same property and use this bridge to make $\widetilde{\rho}$ bigger, which gives us a contradiction because $\rho$ is optimal. Consider $(b_{1},b_{2})\in(\sigma_{2}^{*})^{D}\setminus\sigma_{2}^{D}$ and the unary operation $g_{b_1}(x) = w^D(b_{1},\ldots,b_{1},x)$. Since $w^D$ is a special WNU, $g_{b_1}(g_{b_1}(x)) = g_{b_1}(x)$. Let us show that $(\sigma_{2}^{*}\setminus \sigma_{2})\cap (D')^{2}\neq \varnothing$, where $D' = g_{b_1}(D)$. Since $\proj_{3,4}(\rho)= \sigma_{2}^{*}$, there are $a_1,a_2$ such that $(a_1,a_2,b_1,b_2)\in\rho$. Since $(a_{1},b_{2})\in\sigma_{2}$ and $(b_{1},b_2)\notin\sigma_{2}$, we have $(a_1,b_1)\notin\sigma_2$. Consider the relation $\delta(x,y)$ defined by $\exists x_1\exists x_2 \exists y_2 \sigma_{2}(x,x_1)\wedge \rho(x_1,x_2,y,y_2)$. It follows from the definition that $\delta$ is stable under $\sigma_{2}$. Also $(a_1,b_1)\in\delta$, therefore $\delta\supsetneq\sigma_{2}$. From the irreducibility of $\sigma_2$ we obtain that $(b_1,b_2)\in \delta$. Then by the definition of $\delta$ there exists a tuple $(c_1,c_2,b_2,c_3)\in\rho$ such that $(b_1,c_{1})\in\sigma_{2}$. Put $d_{i} = h(c_{i})$ for $i=1,2,3$. Then $(d_1,d_2,b_2,d_3)\in\rho$ (we have $h(b_2)=b_2$). Since $h$ preserves $\sigma_{2}$ and $h(b_1) = b_1$, we have $(d_1,b_1)\in\sigma_2$. Therefore, $(d_1,b_2)\notin\sigma_2$. Since $\widetilde{\rho} = \sigma_2$, we have $(d_1,d_2)\notin\sigma_1$ and $(b_2,d_3)\notin\sigma_2$. Let $E$ be the equivalence class of $\sigma_{2}^{D}$ containing $d_1$ and $d_2$ (they are in one class because $\proj_{1,2}(\rho)=\sigma_{1}^{*}\subseteq \sigma_2$). By $w'$ we denote $w^{D}$ restricted to $E$, and we put $\rho' = \rho\cap (E^{2}\times D^{2})$ and $\sigma_{1}' = \sigma_{1}\cap E^{2}$.
Since $(d_1,d_2)\in (\sigma_{1}^{*}\setminus \sigma_{1})\cap E^{2}$, we can find a minimal relation $\omega\subseteq\sigma_{1}^{*}\cap E^{2}$ stable under $\sigma_{1}'$ such that $\omega\supsetneq\sigma_{1}'$. It is not hard to check that the formula $$\rho''(x_{1},x_{2},x_{1}',x_{2}') = \exists y_{1}\exists y_{2}\; \rho'(x_{1},x_{2},y_{1},y_{2})\wedge\rho'(x_{1}',x_{2}',y_{1},y_{2})\wedge \omega(x_{1},x_{2})\wedge \omega(x_{1}',x_{2}')$$ defines a reflexive bridge $\rho''$ from $\sigma_{1}'$ to $\sigma_{1}'$. Since $\widetilde{\rho}=\sigma_{2}$ and $E$ is an equivalence class of $\sigma_{2}^{D}$, we have $\widetilde{\rho''}=E^{2}$. Then by Theorem~\ref{LinkedBridgeThm}, there exists a prime number $p$ and a relation $\zeta\subseteq E\times E\times \mathbb Z_{p}$ such that $\proj_{1,2}(\zeta)=\omega$ and $(a_1,a_2,b)\in\zeta$ implies $(a_1,a_2)\in\sigma_{1}'\Leftrightarrow (b=0)$. We want to show that for each $(e_{1}, e_{2}) \in \omega\setminus\sigma_{1}'$ we have $(w^{D}(d_1,\ldots,d_{1},e_{1}),w^{D}(d_{1},\ldots,d_{1},e_{2}))\notin\sigma_{1}$. In fact, choose $b\in \mathbb Z_{p}$ such that $(e_1,e_2,b)\in\zeta$. Note that $b\neq 0$ and $(d_1,d_1,0)\in\zeta$. Applying $w'$ to $m-1$ tuples $(d_1,d_1,0)$ and one tuple $(e_1,e_2,b)$ we get, by Lemma~\ref{LinearSpecialWNU}, a tuple $(w^{D}(d_1,\ldots,d_{1},e_{1}),w^{D}(d_{1},\ldots,d_{1},e_{2}),b)\in\zeta$. Since $b\neq 0$, we have $$(w^{D}(d_1,\ldots,d_{1},e_{1}),w^{D}(d_{1},\ldots,d_{1},e_{2}))\notin\sigma_{1}.$$ We can find $(e_3,e_4)\in\sigma_{2}^{*}\setminus \sigma_{2}$ such that $(e_1,e_2,e_3,e_4)\in\rho$. Since $h$ preserves $\rho$, $w^{D}$ preserves $\rho^{D}$, and $\rho^{D}$ is a bridge, we can derive that $(w^{D}(d_1,\ldots,d_1,h(e_3)),w^{D}(d_1,\ldots,d_1,h(e_4)))\notin\sigma_{2}$. Since $(d_1,b_1)\in\sigma_{2}$, we also have $(w^{D}(b_1,\ldots,b_1,h(e_3)),w^{D}(b_1,\ldots,b_1,h(e_4)))\notin\sigma_{2}$.
Thus, we proved that $(\sigma_{2}^{*}\setminus \sigma_{2})\cap (D')^{2}\neq \varnothing$, where $D' = g_{b_1}(D)$ and $g_{b_1}(x) = w^D(b_{1},\ldots,b_{1},x)$. If $D'\neq D$, then we found a smaller set $D'$ and the corresponding polynomial $g_{b_{1}}(h(x))$, which contradicts the minimality of $D$. Thus, for any $(b_{1},b_{2})\in(\sigma_{2}^{*})^{D}\setminus\sigma_{2}^{D}$ we have $w^{D}(b_1,\ldots,b_1,x) = x$. Let us show that $\ConOne(\rho^{D},1) = \sigma_{1}^{D}$ and $\ConOne(\rho^{D},3) = \sigma_{2}^{D}$. Choose $(a_{1},a_{2},a_{3},a_{4})\in\rho^{D}$. We consider two cases. Case 1: $(a_{1},a_{2})\in\sigma_{1}$ and $(a_{3},a_{4})\in\sigma_{2}$. Then for any tuple $(a_{1}',a_{2},a_{3},a_{4})\in\rho^{D}$ we have $(a_{1},a_{1}')\in\sigma_{1}$. Similarly, for any tuple $(a_{1},a_{2},a_{3}',a_{4})\in\rho^{D}$ we have $(a_{3},a_{3}')\in\sigma_{2}$. Case 2: $(a_{1},a_{2})\notin\sigma_{1}$ and $(a_{3},a_{4})\notin\sigma_{2}$. Since $(a_{1},a_{4})\in\sigma_{2}$ and $(a_{1},a_{2})\in \sigma_{1}^{*}\subseteq \sigma_{2}$, we have $(a_{1},a_{3}),(a_{2},a_{3}), (a_{3},a_{4})\in (\sigma_{2}^{*})^{D}\setminus\sigma_{2}^{D}$. Also notice that $\sigma_{2}^{*}$ is symmetric. Assume that $(a_{1}',a_{2},a_{3},a_{4})\in\rho^{D}$. Since $w^{D}$ preserves $\rho^{D}$, we have $$(w^{D}(a_{1}',a_{1},\ldots,a_{1}), w^{D}(a_{2},\ldots,a_{2},a_{1}), w^{D}(a_{3},\ldots,a_{3},a_{1}), w^{D}(a_{4},\ldots,a_{4},a_{1}))\in\rho^{D}.$$ As we showed before, this tuple equals $(a_{1}',a_{1},a_{1},a_{1})$, which means that $(a_{1},a_{1}')\in\sigma_{1}$. Similarly, if $(a_{1},a_{2},a_{3}',a_{4})\in\rho^{D}$ we consider a tuple $$(w^{D}(a_{1},\ldots,a_{1},a_{3}), w^{D}(a_{2},\ldots,a_{2},a_{3}), w^{D}(a_{3}',a_{3},\ldots,a_{3},a_{3}), w^{D}(a_{4},\ldots,a_{4},a_{3}))\in\rho^{D},$$ that equals $(a_{3},a_{3},a_{3}',a_{3})$, which means that $(a_{3},a_{3}')\in\sigma_{2}$. Thus we proved that $\ConOne(\rho^{D},1) = \sigma_{1}^{D}$ and $\ConOne(\rho^{D},3) = \sigma_{2}^{D}$.
Consider $(a_{1},a_{2},b_{1},b_{2})\in\rho^{D}$ with $(b_{1},b_{2})\notin\sigma_{2}$ and the formula $$\Theta = \rho(z,x_{1},x_{2},x_{3}) \wedge \rho(z',x_{1},x_{2}',x_{3}') \wedge \rho(z,x_{4},x_{5},x_{6}) \wedge \rho(z',x_{4},x_{5}',x_{6}').$$ Let $\epsilon$ be the relation defined by $\Theta(x_{2},x_{2}',x_{5},x_{5}')$. Since $h(h(x)) = h(x)$ and $h$ preserves $\rho$, $\epsilon^{D}$ is defined by the same formula but with $\rho^{D}$ instead of $\rho$ everywhere. Let us prove that $\epsilon^{D}$ is a bridge from $\sigma_{2}^{D}$ to $\sigma_{2}^{D}$. Assume that $(x_{2},x_{2}')\in\sigma_{2}^{D}$. Since $(x_{1},z),(x_{1},z')\in \sigma_{1}^{*}\subseteq \sigma_{2}$, we have $(z,z')\in\sigma_{2}^{D}$. Recall that $(a,d)\in\sigma_{2}$ whenever $(a,b,c,d)\in\rho$, hence $(x_{3},x_{3}'),(x_{6},x_{6}')\in\sigma_{2}^{D}$. Since $\ConOne(\rho^{D},1) = \sigma_{1}^{D}$, we have $(z,z')\in\sigma_{1}$. Since $\ConOne(\rho^{D},3) = \sigma_{2}^{D}$, we have $(x_{5},x_{5}')\in\sigma_{2}$. In the same way we can show that if $(x_{5},x_{5}')\in\sigma_{2}^{D}$, then $(x_{2},x_{2}')\in\sigma_{2}^{D}$. Since all the variables of $\epsilon$ are stable under $\sigma_{2}$ and $\epsilon^{D}$ is reflexive, we have $\proj_{1,2}(\epsilon^{D})\supseteq \sigma_{2}^{D}$. By sending $(z,x_{1},x_{2},x_{3})$ to $(a_{1},a_{2},b_{1},b_{2})$, $(z',x_{1},x_{2}',x_{3}')$ to $(a_{2},a_{2},a_{2},a_{2})$, $(z,x_{4},x_{5},x_{6})$ to $(a_{1},a_{2},b_{1},b_{2})$, $(z',x_{4},x_{5}',x_{6}')$ to $(a_{2},a_{2},a_{2},a_{2})$, we show that $(b_{1},a_{2},b_{1},a_{2})\in \epsilon$. Since $(a_1,a_2),(a_1,b_2)\in\sigma_2$ and $(b_{1},b_{2})\notin\sigma_2$, we have $(b_1,a_2)\notin \sigma_2$, and therefore $\proj_{1,2}(\epsilon^{D})\supsetneq \sigma_{2}^{D}$. In the same way we can show that $\proj_{3,4}(\epsilon^{D})\supsetneq \sigma_{2}^{D}$. Thus $\epsilon^{D}$ is a bridge. Let us show that $\epsilon$ is a bridge from $\sigma_{2}$ to $\sigma_{2}$. Assume the contrary. 
Then without loss of generality we assume that there exists $(d_{0},d_{0},d_{1},d_{2})\in \epsilon$ such that $(d_{1},d_{2})\notin\sigma_{2}$. Put $\delta_{0}(y,z) = \exists x\;\epsilon(x,x,y,z)$. The relation $\delta_{0}$ is stable under $\sigma_2$ and strictly larger than $\sigma_2$, hence $\delta_{0}\supseteq \sigma_{2}^{*}$ and $(b_{1},b_{2})\in\delta_{0}$. Then there exists $d$ such that $(d,d,b_{1},b_{2})\in\epsilon$, which means that $(h(d),h(d),b_{1},b_{2})\in\epsilon^{D}$. This contradicts the fact that $\epsilon^{D}$ is a bridge. Hence, $\epsilon$ is also a bridge. By sending $(z,x_{1},x_{2},x_{2}',x_{3},x_{3}')$ to $(a_{1},a_{2},b_{1},b_{1},b_{2},b_{2})$ and $(z',x_{4},x_{5},x_{5}',x_{6},x_{6}')$ to $(a_{1},a_{1},a_{1},a_{1},a_{1},a_{1})$ we can show that $(b_{1},a_{1})\in\widetilde{\epsilon}$. If we compose bridges $\rho$ and $\epsilon$, then we get a bridge $\epsilon'$ from $\sigma_{1}$ to $\sigma_{2}$ containing $(b_{1},b_1,a_{1},a_1)$. Hence $\widetilde{\epsilon'}\supsetneq \widetilde{\rho}$, which contradicts the fact that $\rho$ is optimal. \end{proof} \subsection{Existence of a bridge} In this subsection we show four ways to build a bridge: from congruences with an additional property, from a rectangular relation, by composing bridges appearing in the instance, and from a pp-formula. \begin{lem}\label{MinimalsAdjacent} Suppose $\sigma, \sigma_{1}$, and $\sigma_{2}$ are congruences on $A$, $\sigma\cap\sigma_{1} = \sigma\cap\sigma_{2}$, and $\sigma\setminus\sigma_{1}\neq\varnothing$. Then $\sigma_{1}$ and $\sigma_{2}$ are adjacent. \end{lem} \begin{proof} Let us define a relation $\rho$ by $$\rho(x_{1},x_{2},y_{1},y_{2}) = \exists z_{1}\exists z_{2}\; \sigma_{1}(x_{1},z_{1})\wedge \sigma_{2}(z_{1},y_{1})\wedge \sigma_{1}(x_{2},z_{2})\wedge \sigma_{2}(z_{2},y_{2})\wedge \sigma(z_{1},z_{2}).$$ It is clear that the first two variables of $\rho$ are stable under $\sigma_{1}$ and the last two variables are stable under $\sigma_{2}$. 
Let us show that for any $(a_{1},a_{2},a_{3},a_{4})\in\rho$ we have $(a_{1},a_{2})\in\sigma_{1}\Leftrightarrow(a_{3},a_{4})\in\sigma_{2}$. In fact, if $(x_{1},x_{2})\in\sigma_{1}$, then $(z_{1},z_{2})\in\sigma_{1}$. Since $\sigma\cap\sigma_{1} = \sigma\cap\sigma_{2}$, we have $(z_{1},z_{2})\in\sigma_{2}$. Therefore, $(y_{1},y_{2})\in\sigma_{2}$. Also $(a,a,a,a)\in\rho$ for any $a\in A$. Choose $(a,b)\in\sigma\setminus\sigma_{1}$. Then $(a,b,a,b)\in\rho$ (put $z_{1} = a$, $z_{2} = b$), which proves that $\rho$ is a reflexive bridge. \end{proof} \begin{lem}\label{OneLink} Suppose $\rho\subseteq A_{1}\times\dots\times A_{n}$ is a subdirect relation, the first and the last variables of $\rho$ are rectangular, and there exist $(b_{1},a_{2},\ldots,a_n),(a_{1},\ldots,a_{n-1},b_{n})\in\rho$ such that $(a_{1},a_{2},\ldots,a_n)\notin\rho$. Then there exists a bridge $\delta$ from $\ConOne(\rho,1)$ to $\ConOne(\rho,n)$ such that $\widetilde{\delta} = \proj_{1,n}(\rho)$. \end{lem} \begin{proof} The required bridge can be defined by $$\delta(x_{1},x_{2},y_{1},y_{2}) = \exists z_{2}\dots\exists z_{n-1}\; \rho(x_{1},z_{2},\ldots,z_{n-1},y_{1})\wedge \rho(x_{2},z_{2},\ldots,z_{n-1},y_{2}).$$ In fact, since the first and the last variables of $\rho$ are rectangular, we have $(x_{1},x_{2})\in\ConOne(\rho,1)$ if and only if $(y_{1},y_{2})\in\ConOne(\rho,n)$. It remains to notice that $(b_{1},a_{1},a_{n},b_{n})\in\delta$ and $(b_{1},a_{1})\notin\ConOne(\rho,1)$. \end{proof} Recall that, by Lemma~\ref{CriticalMeansIrreducible}, $\ConOne(\rho,i)$ is an irreducible congruence for every critical subdirect rectangular relation $\rho$ and every coordinate $i$. \begin{lem}\label{OneLinkCritical} Suppose $\rho\subseteq A_{1}\times\dots\times A_{n}$ is a critical subdirect rectangular relation. Then \begin{enumerate} \item there exists a bridge $\delta$ from $\ConOne(\rho,1)$ to $\ConOne(\rho,n)$ such that $\widetilde{\delta} = \proj_{1,n}(\rho)$.
Moreover, if $n=2$ then $\ConOne(\widetilde{\delta},1) = \ConOne(\rho,1)$ and $\ConOne(\widetilde{\delta},2) = \ConOne(\rho,n)$; if $n>2$ then $\ConOne(\widetilde{\delta},1) \supsetneq \ConOne(\rho,1)$ and $\ConOne(\widetilde{\delta},2) \supsetneq \ConOne(\rho,n)$. \item if $\Opt(\ConOne(\rho,n))\neq \ConOne(\rho,n)$, then there exists a bridge $\delta$ from $\ConOne(\rho,1)$ to $\ConOne(\rho,n)$ such that $\widetilde{\delta}$ contains the projection of the cover of $\rho$ onto the first and the last coordinates. \end{enumerate} \end{lem} \begin{proof} Using the argument from Lemma~\ref{RectangularCriticalArityTwo} we find tuples $(b_{1},a_{2},\dots,a_{n})$ and $(a_{1},\dots,a_{n-1},b_{n})$ satisfying the conditions of Lemma~\ref{OneLink}. Then, to prove the first part it is sufficient to use the formula from Lemma~\ref{OneLink} to define a bridge $\delta$. If $n=2$ then $\widetilde{\delta} = \rho$ and we have the required property. If $n>2$ then by Lemma~\ref{RectangularCriticalArityTwo} we have $\ConOne(\widetilde{\delta},1) = \ConOne(\proj_{1,n}(\rho),1)\supsetneq\ConOne(\rho,1)$ and $\ConOne(\widetilde{\delta},2) = \ConOne(\proj_{1,n}(\rho),2)\supsetneq\ConOne(\rho,n)$. Let us prove the second part of the claim. Let $\xi$ be an optimal bridge from $\ConOne(\rho,n)$ to $\ConOne(\rho,n)$. Define a bridge $\delta(x_{1},x_{2},y_{1},y_{2})$ by $$ \exists z_{2}\dots\exists z_{n-1}\exists u_1\exists u_2\; \rho(x_{1},z_{2},\ldots,z_{n-1},u_{1})\wedge \rho(x_{2},z_{2},\ldots,z_{n-1},u_{2}) \wedge \xi(u_1,u_2,y_1,y_2) .$$ Note that $\delta$ is just a composition of the bridge constructed in Lemma~\ref{OneLink} and $\xi$. Then we have $\widetilde{\delta}(x,y)= \exists z_{2}\dots\exists z_{n-1}\exists u\; \rho(x,z_{2},\ldots,z_{n-1},u) \wedge \widetilde{\xi}(u,y).$ Put $\rho'(x_{1},\ldots,x_{n}) = \exists x_{n}'\; \rho(x_{1},\dots,x_{n-1},x_{n}')\wedge \widetilde{\xi}(x_{n}',x_{n})$.
Since $\widetilde{\xi}\supsetneq\ConOne(\rho,n)$, the relation $\rho'$ contains the cover of $\rho$. Since $\proj_{1,n}(\rho')=\widetilde{\delta}$, $\widetilde{\delta}$ contains the projection of the cover of $\rho$ onto the first and the last coordinates, which completes the proof. \end{proof} \begin{thm}\label{PathInConnectedComponentThm} Suppose $\Theta$ is a cycle-consistent connected instance. Then for any two constraints $C$ and $C'$ with variables $x$ and $x'$, respectively, there exists a bridge $\delta$ from $\ConOne(C,x)$ to $\ConOne(C',x')$ such that $\widetilde{\delta}$ contains all pairs of elements linked in $\Theta$. Moreover, if $\ConOne(C'',x'')\neq \LinkedCon(\Theta,x'')$ for some constraint $C''\in\Theta$ and a variable $x''$, then $\delta$ can be chosen so that $\widetilde{\delta}$ contains all pairs of elements linked in $\Theta'$, where $\Theta'$ is obtained from $\Theta$ by replacing every constraint relation by its cover. \end{thm} \begin{proof} Since $C$ and $C'$ are connected, there exists a path $z_{0} C_{1} z_{1}C_{2} z_{2}\dots C_{t-1} z_{t-1} C_{t} z_{t}$, where $z_{0}= x$, $z_{t} = x'$, $C_{1} = C$, $C_{t} = C'$, $z_{i-1}\neq z_{i}$, and $C_{i}$ and $C_{i+1}$ are adjacent in $z_{i}$ for every $i$. By Lemma~\ref{CriticalMeansIrreducible}, every relation defined by $\ConOne(C_{0},x_{0})$ for some $C_{0}$ and $x_{0}$ is an irreducible congruence. Suppose $\zeta_{i}$ is an optimal bridge from $\ConOne(C_{i},z_{i})$ to $\ConOne(C_{i+1},z_{i})$, and $\delta_{i}$ is a bridge from $\ConOne(C_{i},z_{i-1})$ to $\ConOne(C_{i},z_{i})$ from the first item of Lemma~\ref{OneLinkCritical} for every $i$.
Then we compose all bridges together and define a new bridge $\delta(u_{0},u_{0}',v_{t},v_{t}')$ from $\ConOne(C,x)$ to $\ConOne(C',x')$ by \begin{multline}\label{BridgeBuilding} \exists u_{1}\exists u_{1}'\exists v_{1}\exists v_{1}' \dots \exists u_{t-1}\exists u_{t-1}'\exists v_{t-1}\exists v_{t-1}'\; \delta_{1}(u_{0},u_{0}',v_{1},v_{1}') \wedge \\ \bigwedge_{i=1}^{t-1} \left( \zeta_{i}(v_{i},v_{i}',u_{i},u_{i}') \wedge \delta_{i+1}(u_{i},u_{i}',v_{i+1},v_{i+1}') \right).\end{multline} Since $\widetilde{\delta}$ can be defined as a composition of the $\widetilde{\zeta_{i}}$'s and $\widetilde{\delta_{i}}$'s, and the $\widetilde{\zeta_{i}}$'s are reflexive, it follows that $\widetilde{\delta}$ contains all pairs of elements linked by this path. Since $\Theta$ is cycle-consistent, if $x=x'$ then $\delta$ is a reflexive bridge from $\ConOne(C,x)$ to $\ConOne(C',x)$. Thus we have proved that any two constraints with a common variable are adjacent. Using Lemma~\ref{LinkedConIsCon}, we can show that there exists a path in $\Theta$ starting at $x$ and ending at $x'$ that connects any pair of elements linked in $\Theta$. Since any two constraints with a common variable are adjacent, we can assume that the above path $z_{0} C_{1} z_{1}C_{2} z_{2}\dots C_{t-1} z_{t-1} C_{t} z_{t}$ connects any pair of elements linked in $\Theta$. Again, it follows that $\widetilde{\delta}$ contains all pairs of elements linked in $\Theta$. To prove the remaining part of the theorem, assume that $\ConOne(C'',x'')\neq \LinkedCon(\Theta,x'')$ for some constraint $C''\in\Theta$ and a variable $x''$. First, observe that any bridge $\rho$ from $\sigma_{1}$ to $\sigma_{2}$ defined by the first item of Lemma~\ref{OneLinkCritical} satisfies one of the following properties: \begin{enumerate} \item $\ConOne(\widetilde{\rho},1) = \sigma_{1}$ and $\ConOne(\widetilde{\rho},2) = \sigma_{2}$, \item $\ConOne(\widetilde{\rho},1) \supsetneq \sigma_{1}$ and $\ConOne(\widetilde{\rho},2) \supsetneq \sigma_{2}$.
\end{enumerate} If $\sigma_1\neq\sigma_2$, by Lemma~\ref{ReflexiveBridgeProperty} an optimal bridge from $\sigma_1$ to $\sigma_{2}$ satisfies property (2). If $\sigma_1=\sigma_2$, an optimal bridge from $\sigma_1$ to $\sigma_2$ obviously satisfies one of the two properties. Thus, every bridge in (\ref{BridgeBuilding}) satisfies one of the above properties. Let us show that if a bridge $\rho$ from $\sigma_1$ to $\sigma_2$ satisfies property (1), then for all $(a_1,b_1),(a_2,b_2)\in\widetilde\rho$ we have $(a_1,a_2)\in\sigma_1\Leftrightarrow(b_1,b_2)\in\sigma_2$. In fact, if $(a_1,a_2)\in\sigma_1$ then, since the first two variables of $\rho$ are stable under $\sigma_1$, we have $(a_1,b_2)\in\widetilde \rho$, hence $(b_1,b_2)\in\ConOne(\widetilde \rho,2)=\sigma_2$. Now we want to show that if we compose bridges $\rho_{1},\dots,\rho_{s}$ together as in (\ref{BridgeBuilding}) and at least one of the bridges satisfies property (2), then the obtained bridge satisfies property (2). Let $\rho_{j}$ be a bridge from $\sigma_{j-1}$ to $\sigma_j$ for every $j$; then the composition $\rho$ is a bridge from $\sigma_{0}$ to $\sigma_{s}$. Consider the first bridge $\rho_{i}$ in the sequence having property (2). Then $(a_{i-1},a_{i}),(b_{i-1},b_{i})\in\widetilde\rho_{i}$ for some $(a_{i-1},b_{i-1})\notin\sigma_{i-1}$ and $a_{i} = b_{i}$. Choose $a_{j}$ and $b_{j}$ so that $(a_{j-1},a_{j}),(b_{j-1},b_{j})\in\widetilde\rho_{j}$ for every $j$, and $a_{j} = b_{j}$ for every $j\geqslant i$. Then $(a_0,b_{0})\in \ConOne(\widetilde \rho_1,1)\setminus \sigma_{0}$ and $(a_{0},a_{s}), (b_{0},b_{s})\in \widetilde\rho$. Since $a_{s} = b_{s}$, we get $(a_0,b_0)\in\ConOne(\widetilde \rho,1)$ and $\ConOne(\widetilde \rho,1)\supsetneq \sigma_{0}$. To prove that $\ConOne(\widetilde \rho,2)\supsetneq \sigma_{s}$ we consider the last bridge in the sequence having property (2) and do exactly the same.
By the first part of the theorem $\Opt(\ConOne(C'',x''))\supsetneq \ConOne(C'',x'')$, hence an optimal bridge from $\ConOne(C'',x'')$ to $\ConOne(C'',x'')$ satisfies property (2). We may assume that any path goes through the variable $x''$ and through the optimal bridge from $\ConOne(C'',x'')$ to $\ConOne(C'',x'')$, which guarantees that every bridge we obtain satisfies property (2). Consider a constraint $C_{0}\in \Theta$ and a variable $x_{0}$ in it. Considering a path from $x_{0}$ to $x_{0}$ going through $x''$ we can build a reflexive bridge having property (2), which means that $\Opt(\ConOne(C_{0},x_{0}))\supsetneq \ConOne(C_{0},x_{0})$ for any constraint $C_{0}\in \Theta$ and any variable $x_{0}$ in it. To complete the proof, we replace every $\delta_{i}$ in (\ref{BridgeBuilding}) by the corresponding bridge $\delta_{i}'$ obtained using the second item of Lemma~\ref{OneLinkCritical}, and replace the path by the corresponding path connecting any pair of linked elements in $\Theta'$. Since $\widetilde{\delta_{i}'}$ contains the projection of the cover of $C_{i}$ onto the variables $z_{i-1}$ and $z_{i}$, $\widetilde{\delta}$ contains all pairs of elements linked in $\Theta'$. \end{proof} \begin{cons}\label{PathInConnectedComponent} Suppose $\Theta$ is a cycle-consistent connected instance. Then for any two constraints $C$ and $C'$ with a common variable $x$ there exists a bridge $\delta$ from $\ConOne(C,x)$ to $\ConOne(C',x)$ such that $\widetilde{\delta}$ contains the relation $\LinkedCon(\Theta,x)$. \end{cons} \begin{lem}\label{SubconstraintConnectivity} Suppose $D^{(1)}$ is a minimal one-of-four reduction for an instance $\Upsilon$, the solution set of $\Upsilon$ is subdirect, and $\Upsilon^{(1)}(x_{1},\ldots,x_{n})$ defines a subdirect key rectangular relation $\rho$.
For $i=1,2$ the variable $x_{i}$ of every constraint of $\Upsilon$ containing $x_{i}$ is stable under an irreducible congruence $\sigma_{i}$, and there exist tuples $(a_{1},a_2,\dots,a_n),(b_{1},b_{2},b_{3},\dots,b_n)\in \rho$, $(a_{1}',a_2,\dots,a_n),(b_{1},b_2',b_{3},\dots,b_n)\notin \rho$ such that $(a_1,a_1')\in\sigma_{1}^{*}\setminus \sigma_{1}$, $(b_2,b_2')\in\sigma_{2}^{*}\setminus \sigma_{2}$. Then there exists a bridge $\delta$ from $\sigma_{1}$ to $\sigma_{2}$ such that $\widetilde \delta$ contains the relation defined by $\Upsilon(x_{1},x_{2})$. \end{lem} \begin{proof} If $D^{(1)}$ is a nonlinear reduction then by $\omega$ we denote the relation defined by $\Upsilon(x_{1},\ldots,x_{n})$. If $D^{(1)}$ is a linear reduction, then by $\omega$ we denote the relation defined by $\Omega(x_{1},\dots,x_n,u_{1},\ldots,u_{r})$, where $\Var(\Upsilon) = \{x_{1},\ldots,x_{n},v_{1},\ldots,v_{r}\}$, $\Omega=\Upsilon \wedge \bigwedge_{i=1}^{r} \delta_{i}(v_{i},u_{i})$ and $\delta_{i}=\ConLin(D_{v_{i}})$ (see Lemma~\ref{AddLinearVariables}). We know from Lemmas \ref{SameConOneForNonlinear} and \ref{AddLinearVariables} that $\ConOne(\omega,j)^{(1)} = \ConOne(\rho,j)$ for every $j\in\{1,2\}$. Since $\rho$ is rectangular, we have $(a_{1},a_{1}')\notin\ConOne(\rho,1)$, and therefore $(a_{1},a_{1}')\notin\ConOne(\omega,1)^{(1)}$. Since $x_{1}$ in every constraint containing $x_{1}$ is stable under $\sigma_{1}$, the relation $\ConOne(\omega,1)$ is stable under $\sigma_{1}$. Therefore, $\ConOne(\omega,1)$ must be equal to $\sigma_{1}$, since otherwise $\ConOne(\omega,1)\supseteq \sigma_{1}^{*}$, which contradicts $(a_{1},a_{1}')\notin\ConOne(\omega,1)^{(1)}$. In the same way we can show that $\ConOne(\omega,2)=\sigma_2$.
Since $\rho$ is a key relation, there should be a key tuple $\beta\in (D_{x_{1}}^{(1)}\times\dots\times D_{x_{n}}^{(1)})\setminus \rho$ such that for every $\alpha\in (D_{x_{1}}^{(1)}\times\dots\times D_{x_{n}}^{(1)})\setminus \rho$ there exists a vector-function $\Psi$ which preserves $\rho$ and gives $\Psi(\alpha) = \beta$. First, we put $\alpha_a = (a_{1}',a_2,\dots,a_n)$ and apply the corresponding unary vector-function $\Psi_{a}$ to $(a_{1},a_2,\dots,a_n)$ to get a tuple $\beta_{a}$. Second, we put $\alpha_{b} = (b_{1},b_2',b_{3},\dots,b_n)$ and apply the corresponding unary vector-function $\Psi_{b}$ to $(b_{1},b_2,b_{3},\dots,b_n)$ to get a tuple $\beta_{b}$. As a result we get two tuples $\beta_{a}$ and $\beta_{b}$ from $\rho$ that differ from the key tuple $\beta$ just in the first and second coordinates, respectively. Since $\ConOne(\omega,j)^{(1)} = \ConOne(\rho,j)$ for every $j\in\{1,2\}$, we have $\beta\notin\omega$ if $D^{(1)}$ is nonlinear, and $\beta\notin\proj_{1,\dots,n}(\omega^{(1)})$ if $D^{(1)}$ is linear. Then by applying Lemma~\ref{OneLink} to $\omega$ and $\beta_{a},\beta_{b},\beta$ (if $D^{(1)}$ is a linear reduction we extend these tuples), we get a bridge $\delta$ from $\sigma_1$ to $\sigma_2$ such that $\widetilde\delta$ contains $\proj_{1,2}(\omega)$, which is equal to the relation defined by $\Upsilon(x_{1},x_{2})$. \end{proof} \subsection{Expanded coverings of crucial instances} In this subsection we prove two properties of expanded coverings of crucial instances. \begin{lem}\label{KeepCrucialConstraint} Suppose $\Theta$ is a crucial instance in $D^{(1)}$, $\Theta'\in\Expanded(\Theta)$ via the map $S\colon \Var(\Theta') \to\Var(\Theta)$, and $\Theta'$ has no solution in $D^{(1)}$. Then for every constraint $C = \rho(x_1,\dots,x_n)$ in $\Theta$ there exists a constraint $C'$ in $\Theta'$ whose image in $\Theta$ is $C$ (i.e., $C' = \rho(y_1,\dots,y_n)$ and $S(y_i) = x_i$ for $i = 1,2,\dots,n$). 
\end{lem} \begin{proof} Let $\Theta''$ be obtained from $\Theta'$ by replacing every variable $y$ by $S(y)$. Obviously, $\Theta''$ still does not have a solution in $D^{(1)}$. By the definition of expanded coverings every relation in the obtained instance is either unary (and full), or weaker or equivalent to a constraint from $\Theta$. Since $\Theta$ is crucial in $D^{(1)}$ and $\Theta''$ has no solutions in $D^{(1)}$, there should be a constraint $C'$ in $\Theta'$ such that its image $C''$ in $\Theta''$ is weaker or equivalent to $C$ but not weaker than $C$. Since $\Theta$ is crucial, no variable of $C$ is dummy. Since $C''$ cannot have more variables than $C$ we obtain that $C'' = C$, which means that $C' = \rho(y_{1},\dots,y_{n})$ and $S(y_{i}) = x_{i}$ for every $i\in\{1,2,\dots,n\}$. \end{proof} \begin{lem}\label{StayNotConnected} Suppose $\Theta$ is a crucial instance in $D^{(1)}$, $\Theta'\in\Expanded(\Theta)$ has no solutions in $D^{(1)}$, every constraint relation of $\Theta$ is a critical rectangular relation, and $\Theta'$ is connected. Then $\Theta$ is connected. \end{lem} \begin{proof} Let $\Theta''$ be obtained from $\Theta'$ by replacing every variable $y$ by $S(y)$ from the definition of the expanded covering. Let us show that any two constraints $C_{1}$ and $C_{2}$ with a common variable $x$ of $\Theta$ are adjacent. By Lemma~\ref{KeepCrucialConstraint}, there exist constraints $C_{1}'$ and $C_{2}'$ of $\Theta'$ whose images in $\Theta$ are $C_{1}$ and $C_{2}$. Since $\Theta'$ is connected, the instance $\Theta''$ is also connected. By Corollary~\ref{PathInConnectedComponent} constraints $C_{1}$ and $C_{2}$ of $\Theta''$ are adjacent in $x$. Therefore, $C_{1}$ and $C_{2}$ are adjacent in $\Theta$. Thus, we proved that any two constraints of $\Theta$ with a common variable are adjacent. Since $\Theta$ is crucial in $D^{(1)}$, it is not fragmented, which implies that $\Theta$ is connected.
\end{proof} \subsection{Strategies} \begin{thm}\label{PreviousReductions} Suppose $D^{(0)},D^{(1)},\dots,D^{(s)}$ is a strategy for $\Omega$, the solution set of $\Omega^{(i)}$ is subdirect for every $i\in\{0,1,\ldots,s\}$, $j<s$, $D^{(s+1)}$ is a one-of-four reduction, at least one of the two reductions $D^{(j+1)}$, $D^{(s+1)}$ is nonlinear, and $(\Omega^{(j)}(x_{1},\ldots,x_{n}))^{(s+1)}$ defines a nonempty relation. Then $(\Omega^{(j+1)}(x_{1},\ldots,x_{n}))^{(s+1)}$ defines a nonempty relation. \end{thm} \begin{proof} Let $\Var(\Omega) = \{x_{1},\dots,x_{n},y_{1},\dots,y_{t}\}$ and let $\Omega^{(j)}(x_{1},\dots,x_{n},y_{1},\dots,y_{t})$ define a relation $R$. Let the reduction $D^{(j+1)}$ be of type $\mathcal T_{1}$ and the reduction $D^{(s+1)}$ be of type $\mathcal T_{2}$. Assume that $D^{(s+1)}$ is an absorbing reduction. Since $\Omega^{(s)}(x_{1},\ldots,x_{n},y_{1},\ldots,y_{t})$ defines a subdirect relation, Lemma~\ref{AbsLessThanThree} implies that $\Omega^{(s+1)}(x_{1},\ldots,x_{n},y_{1},\ldots,y_{t})$ defines a nonempty relation. From now on we assume that $\mathcal T_{2}$ is not the absorbing type. For $i\in\{j,\dots,s\}$ and $k\in\{1,\dots,n\}$ put \begin{align*} B_{i}= R\cap& (D_{x_{1}}^{(i)}\times\dots\times D_{x_{n}}^{(i)}\times D_{y_{1}}^{(j)}\times\dots\times D_{y_{t}}^{(j)}),\\ B_{i}'= R\cap& (D_{x_{1}}^{(i)}\times\dots\times D_{x_{n}}^{(i)}\times D_{y_{1}}^{(j+1)}\times\dots\times D_{y_{t}}^{(j+1)}),\\ B^{k} = R\cap& (D_{x_{1}}^{(s)}\times\dots\times D_{x_{k-1}}^{(s)} \times D_{x_{k}}^{(s+1)}\times D_{x_{k+1}}^{(s)}\times \dots\times D_{x_{n}}^{(s)}\times D_{y_{1}}^{(j)}\times\dots\times D_{y_{t}}^{(j)}). \end{align*} By Lemma~\ref{PCBrel} $B_{j+1}$ and $B_{j}'$ are one-of-four subuniverses of $R=B_{j}$ of type $\mathcal T_{1}$.
Similarly, $B_{i+1}$ is a one-of-four subuniverse of $B_{i}$ for every $i$ and $B^{k}$ is a one-of-four subuniverse of $B_{s}$ of type $\mathcal T_{2}$ for every $k$ (here we may need to reduce the domain of the last $t$ variables to achieve the subdirectness of $B_{i}$). Let us show by induction on $i$ that $B_{i}'$ is a one-of-four subuniverse of $B_{i}$ of type $\mathcal T_{1}$. For $i=j$ we already know this. Assume that $B_{i}'$ is a one-of-four subuniverse of $B_{i}$. By Lemma~\ref{PCBsub}, $B_{i+1}\cap B_{i}' = B_{i+1}'$ is a one-of-four subuniverse of $B_{i+1}$ of type $\mathcal T_{1}$. Therefore, $B_{s}'$ is a one-of-four subuniverse of $B_{s}$ of type $\mathcal T_{1}$. We need to prove that $B^{1}\cap\dots\cap B^{n}\cap B_{s}'\neq\varnothing$. Since $(\Omega^{(j)}(x_{1},\ldots,x_{n}))^{(s+1)}$ defines a nonempty relation, $B^{1}\cap\dots\cap B^{n}\neq \varnothing$. Since the solution set of $\Omega^{(s)}$ is subdirect, $B^{k}\cap B_{s}'\neq\varnothing$ for every $k\in\{1,\dots,n\}$. Note that $B^{k}$ is of type $\mathcal T_{2}$ and $B_{s}'$ is of type $\mathcal T_{1}$, and they cannot both be linear. Since $B^{k}$ is not a binary absorbing subuniverse, Lemma~\ref{PCBint} implies that $B^{1}\cap\dots\cap B^{n}\cap B_{s}'\neq\varnothing$. \end{proof} \begin{cons}\label{PathStability} Suppose $\Theta$ is a cycle-consistent CSP instance, $D^{(0)},D^{(1)},\dots,D^{(s)}$ is a strategy for $\Theta$, $\Upsilon\in\Expanded(\Theta)$ is a tree-formula, $x$ is a parent of $x_{1}$ and $x_{2}$, and either (i) $B$ is a center of $D_{x}^{(s)}$, or (ii) $B$ is a PC subuniverse of $D_{x}^{(s)}$ and $D_{y}^{(s)}$ has no nontrivial binary absorbing subuniverse or center for every $y$. Then the pp-formula $\Upsilon^{(s)}(x_{1},x_{2})$ defines a binary relation with a nonempty intersection with $B\times B$.
\end{cons} \begin{proof} Since every reduction in a strategy is 1-consistent and $\Upsilon$ is a tree-formula, the solution set of $\Upsilon^{(i)}$ is subdirect for every $i$. If $B = D_{x}^{(s)}$ then the claim follows from the definition of a strategy (every reduction is 1-consistent). Otherwise, let us define a reduction $D^{(s+1)}$ by $D_{x}^{(s+1)} = D_{x_1}^{(s+1)} = D_{x_2}^{(s+1)} = B$, $D_{y}^{(s+1)}=D_{y}^{(s)}$ for the remaining variables. Thus, we have a nonlinear reduction $D^{(s+1)}$. Since the instance $\Theta$ is cycle-consistent, and $x$ is a parent of $x_{1}$ and $x_{2}$, $\Upsilon(x_{1},x_{2})$ defines a reflexive relation. Hence, $(\Upsilon(x_{1},x_{2}))^{(s+1)}$ defines a nonempty relation. By Theorem~\ref{PreviousReductions}, we obtain that $(\Upsilon^{(1)}(x_{1},x_{2}))^{(s+1)}$ defines a nonempty relation. Repeatedly applying Theorem~\ref{PreviousReductions}, we show that $(\Upsilon^{(2)}(x_{1},x_{2}))^{(s+1)}, (\Upsilon^{(3)}(x_{1},x_{2}))^{(s+1)},\dots, (\Upsilon^{(s)}(x_{1},x_{2}))^{(s+1)}$ define nonempty relations, which means that $\Upsilon^{(s)}(x_{1},x_{2})$ has a nonempty intersection with $B\times B$. \end{proof} \begin{lem}\label{FitInLinearSubuniverses} Suppose $R\subseteq A_{0}\times B_{0}$ is a subdirect relation, and \begin{enumerate} \item $A_{0}\supseteq A_{1}\supseteq\dots\supseteq A_{s+1}$ and $A_{i+1}$ is a one-of-four subuniverse of $A_{i}$ for $i\in\{0,1,2,\dots,s\}$; \item $B_{0}\supseteq B_{1}\supseteq\dots\supseteq B_{t+1}$ and $B_{i+1}$ is a one-of-four subuniverse of $B_{i}$ for $i\in\{0,1,2,\dots,t\}$; \item $A_{s+1}$ and $B_{t+1}$ are linear subuniverses of $A_{s}$ and $B_{t}$, respectively; \item there exist $a\in A_{s+1}$, $b\in B_{t+1}$, $a'\in A_{0}$, $b'\in B_{0}$ such that $(a,b'),(a',b),(a',b')\in R$; \item $R\cap (A_{s}\times B_{t})\neq \varnothing$. \end{enumerate} Then $R\cap (A_{s+1}\times B_{t+1})\neq \varnothing$.
\end{lem} \begin{proof} Denote $a'' = w(a,a',\dots,a')$, $b'' = w(b,b',\dots,b') = w(b',\dots,b',b)$. We proceed by induction on $s+t$. Assume that $s+t=0$, which implies $s=t=0$. By Lemma~\ref{LinearSpecialWNU}, we have $a''\in A_{1}$ and $b''\in B_{1}$. Since $w$ preserves $R$, $(a'',b'')\in R$, which completes this case. Let us prove the induction step. Assume that $s+t>0$. Without loss of generality we assume that $s>0$. Put $$R'(x_{1},x_2,y_{1},y_{2}) = R(x_{1},y_{2})\wedge R(y_{1},x_{2})\wedge R(y_{1},y_{2}).$$ Put $P_{i} = R'\cap (A_{i}\times B_{0}\times A_{0}\times B_{0})$, $Q_{i} = R'\cap (A_{0}\times B_{i}\times A_{0}\times B_{0})$, $T = R'\cap (A_{0}\times B_{0}\times A_{1}\times B_{0})$. Since $R'$ is subdirect, by Lemma~\ref{PCBrel} $P_{i+1}$ is a one-of-four subuniverse of $P_{i}$, $Q_{i+1}$ is a one-of-four subuniverse of $Q_{i}$ for every $i$, and $T$ is a one-of-four subuniverse of $R' = P_{0}=Q_{0}$. We want to prove that $P_{s+1}\cap Q_{t+1}\cap T\neq \varnothing$. Since $(a,b,a',b')\in P_{s+1}\cap Q_{t+1}$, Lemma~\ref{SequencesOfSubuniverses} implies that $P_{s+1}\cap Q_{t}$ and $P_{s}\cap Q_{t+1}$ are one-of-four subuniverses of $P_{s}\cap Q_{t}$. Since $R\cap (A_{s}\times B_{t})\neq \varnothing$, we have $T\cap P_{s}\cap Q_{t}\neq\varnothing$. Lemma~\ref{SequencesOfSubuniverses} implies that $P_{0}\supseteq P_{1}\supseteq\dots\supseteq P_{s}\supseteq P_{s}\cap Q_{1}\supseteq \dots\supseteq P_{s}\cap Q_{t}$ (here all inclusions mean one-of-four subuniverses), which by the same lemma implies that $T\cap P_{s}\cap Q_{t}$ is a one-of-four subuniverse of $P_{s}\cap Q_{t}$. If $A_{1}$ is not a linear subuniverse of $A_{0}$, then by Theorem~\ref{PCBint} the intersection of one-of-four subuniverses $P_{s+1}\cap Q_{t}$, $P_{s}\cap Q_{t+1}$, and $T\cap P_{s}\cap Q_{t}$ of different types cannot be empty, that is, $P_{s+1}\cap Q_{t+1}\cap T\neq \varnothing$, which completes this case. Assume that $A_{1}$ is linear.
Since $(a,b,a',b'), (a',b',a',b'), (a',b',a,b')\in R'$ and $w$ preserves $R'$, we obtain $(a,b,a',b'), (a'',b'',a',b'), (a'',b'',a'',b')\in R'$. Note that by Lemma~\ref{LinearSpecialWNU} $a''\in A_{1}$ and $b''\in B_{1}$. We look at $R'$ as a binary relation $R'\subseteq (A_{0}\times B_{0})\times (A_{0}\times B_{0})$ to apply the inductive assumption. Put $\mathcal A_{i} = \proj_{1,2}(R')\cap (A_{i+1}\times B_{1})$ for $0\leqslant i\leqslant s$ and $\mathcal A_{i} = \proj_{1,2}(R')\cap (A_{s+1}\times B_{i-s+1})$ for $s+1\leqslant i\leqslant s+t$. Combining Lemma~\ref{PCBrel} and Lemma~\ref{SequencesOfSubuniverses} we derive that $\mathcal A_{i+1}$ is a one-of-four subuniverse of $\mathcal A_{i}$ for $i=0,1,\dots,s+t-1$. Put $\mathcal B_{i} = \proj_{3,4}(R'\cap (A_{1}\times B_{1}\times A_{i}\times B_{0}))$ for $i=0,1$. Combining Lemmas~\ref{PCBrel}, \ref{SequencesOfSubuniverses}, and \ref{ReductionAndProjectionGivesOneOfFour}, we derive that $\mathcal B_{1}$ is a linear subuniverse of $\mathcal B_{0}$. Then we apply the inductive assumption to $R'\cap (\mathcal A_{0}\times \mathcal B_{0})$, $\mathcal A_{0}\supseteq \mathcal A_{1}\supseteq\dots\supseteq \mathcal A_{s+t}$ and $\mathcal B_{0}\supseteq \mathcal B_{1}$, and show that $R'\cap (A_{s+1}\times B_{t+1}\times A_{1}\times B_{0})= P_{s+1}\cap Q_{t+1}\cap T\neq \varnothing$. Put $B_{i}' = \proj_{2}(R\cap (A_{1}\times B_{i}))$ for $i=0,1,\dots,t+1$. By Lemma~\ref{ReductionAndProjectionGivesOneOfFour}, $B_{i+1}'$ is a one-of-four subuniverse of $B_{i}'$ for every $i$. Applying the inductive assumption to $R\cap (A_{1}\times B_{0})$, $A_{1}\supseteq A_{2}\supseteq\dots \supseteq A_{s+1}$, and $B_{0}'\supseteq B_{1}'\supseteq\dots \supseteq B_{t+1}'$, we obtain that $R\cap (A_{s+1}\times B_{t+1}')\neq\varnothing$, and therefore $R\cap (A_{s+1}\times B_{t+1})\neq \varnothing$.
\end{proof} \begin{lem}\label{ParProperty} Suppose $D^{(0)}, D^{(1)},\ldots,D^{(s)}$ is a strategy for a subdirect constraint $\rho(x_{1},\ldots,x_{n})$, $D^{(s+1)}$ is a linear reduction, and \begin{align*} (b_1,\ldots,b_{t},a_{t+1},\ldots,a_{n})&\in\rho,\\ (a_1,\ldots,a_{t},b_{t+1},\ldots,b_{n})&\in\rho,\\ (b_1,\ldots,b_{t},b_{t+1},\ldots,b_{n})&\in \rho,\\ (a_1,\ldots,a_{t},a_{t+1},\ldots,a_{n})&\in D_{x_{1}}^{(s+1)}\times\dots\times D_{x_{n}}^{(s+1)}. \end{align*} Then there exists $(d_1,d_{2},\ldots,d_{n})\in \rho^{(s+1)}$. \end{lem} \begin{proof} For $i=0,1,2,\dots,s+1$ put $$A_{i} = \proj_{1,2,\dots,t} (\rho \cap (D_{x_{1}}^{(i)}\times\dots\times D_{x_{t}}^{(i)} \times D_{x_{t+1}}\times\dots\times D_{x_{n}})),$$ $$B_{i} = \proj_{t+1,\dots,n} (\rho \cap (D_{x_{1}}\times\dots\times D_{x_{t}} \times D_{x_{t+1}}^{(i)}\times\dots\times D_{x_{n}}^{(i)})).$$ Since $\rho^{(i)}$ is subdirect for every $i\in\{0,1,\dots,s\}$, Lemma~\ref{PCBrel} implies that $A_{i+1}$ is a one-of-four subuniverse of $A_{i}$ and $B_{i+1}$ is a one-of-four subuniverse of $B_{i}$ for every $i\in\{0,1,\dots,s\}$. Since $(a_{1},\ldots,a_{t})\in A_{s+1}$ and $(a_{t+1},\ldots,a_{n})\in B_{s+1}$, and the three tuples from the statement witness the remaining conditions, Lemma~\ref{FitInLinearSubuniverses}, applied to $\rho$ viewed as a subdirect binary relation in $A_{0}\times B_{0}$, implies that $\rho\cap (A_{s+1}\times B_{s+1})\neq \varnothing$, which completes the proof. \end{proof} \subsection{Growing population divides into colonies} In this subsection we prove a theorem that clarifies the inductive strategy used in the proof of Theorem \ref{FindPerfectConstraint}. To simplify the explanation we decided to avoid our usual terminology. Instead, we argue in terms of organisms, reproduction, and friendship. We consider a set $X$ whose elements we call \emph{organisms}. At the moment 1 we had a set of organisms $X_{1}$. At every moment some organisms give birth to new organisms; as a result we get a sequence of organisms $X_{1}\subseteq X_{2}\subseteq X_{3}\subseteq \dots,$ where $\bigcup_{i} X_{i} = X$, $X_{i}\subseteq X$, and $|X_{i}|<\infty$ for every $i$.
We assume that each organism from $X\setminus X_{1}$ has exactly one parent. Every organism has a characteristic that we call \emph{strength}. Thus we have a mapping $\xi:X\to \{1,2,\ldots, S\}$ that assigns a strength to every organism. Also we have a binary reflexive symmetric relation $F$ on the set $X$, which we call \emph{friendship}. For an organism $x$ by $\BD(x)$ we denote the minimal $i$ such that $x\in X_{i}$. A sequence of organisms $x_{1},\ldots,x_{n}$ such that $x_{i}$ is a friend of $x_{i+1}$ for every $i$ is called \emph{a path}. \begin{thm}\label{newcolonies} Suppose $X_{1},X_{2},X_{3},\ldots$, $\xi$, and $F$ satisfy the following conditions: \begin{enumerate} \item \textbf{A child is always weaker than its parent.} If $y$ is the parent of $x$, then $\xi(y)>\xi(x)$. \item \textbf{Older friends are parents' friends.} If $\BD(y)<\BD(x)$ and $x$ is a friend of $y$, then the parent of $x$ is a friend of $y$ (or the parent of $x$ is $y$). \item \textbf{Only friends' kids can be friends.} If $\BD(x) = \BD(y)$ and $x$ is a friend of $y$, then the parents of $x$ and $y$ are friends. \item \textbf{No one can have infinitely many friends.} $|\{y\in X\mid (x,y)\in F\}|<\infty$ for every $x\in X$. \item \textbf{Reproduction never stops.} $|\bigcup_{i}X_{i}|=\infty$. \end{enumerate} Then there exists $N$ such that $X_{N}$ can be divided into two nonempty disjoint sets $X_{N}'$ and $X_{N}''$ such that there is no friendship between $X_{N}'$ and $X_{N}''$. \end{thm} \begin{proof} Assume the contrary. Then there exists a path between any two organisms. For every moment $t$ and every organism $x$ by $x^{t}$ we denote the predecessor of $x$ from $X_{t}$ with the maximal $\BD$, that is, the closest predecessor that was already alive at the moment $t$. For example, $x^{t} = x$ for $t\geqslant \BD(x)$, and $x^{\BD(x)-1}$ is the parent of $x$. Suppose we have a path of organisms $x_{1},\ldots,x_{n}$. We claim that $x_{1}^{t},\ldots,x_{n}^{t}$ is also a path for any $t$.
We prove this by backward induction on $t$, starting with a sufficiently large $t$ such that $x_{1},\ldots,x_{n}\in X_{t}$ and therefore $(x_{1}^{t},\ldots,x_{n}^{t}) = (x_{1},\ldots,x_{n})$. As the inductive step, we assume that this is a path for $t=t_{0}$ and show that this is a path for $t = t_{0}-1$. The induction step follows from hypotheses (2) and (3). The path $x_{1}^{t},\ldots,x_{n}^{t}$ will be called \emph{a path at the moment $t$}. Note that organisms of the path at the moment $t$ are not weaker than the corresponding organisms of the original path. Choose the maximal strength $s$ such that we have infinitely many organisms of strength $s$. Since, by the maximality of $s$, there are only finitely many organisms stronger than $s$, infinitely many of them have the same parent; hence there exists a parent reproducing infinitely many times. For every organism $x$ and every strength $s$ by $\Kids(x,s)$ we denote the set of all children $y$ of $x$ such that there exists a path from $x$ to $y$ with all the organisms in the path stronger than $s$. We consider the maximal $s_{0}$ such that $\Kids(x,s_{0})$ is infinite for some organism $x$. Since we can always put $s_{0}=0$ for a parent reproducing infinitely many times, $s_0$ exists. Note that this implies that $x$ is stronger than $s_{0}+1$. By $Y$ we denote the set of all organisms $y$ such that there exists a path from $x$ to $y$ with all the organisms in the path stronger than $s_{0}+1$. Note that $Y$ includes $x$. Let us show that $Y$ is finite. Assume the opposite. Let $s$ be the maximal strength such that we have infinitely many organisms of this strength in $Y$. Consider an organism $v$ from $Y$ with strength $s$ such that $\BD(v)>\BD(x)$ (we still have infinitely many of them). Considering the path from $x$ to $v$ at the moment $\BD(v)-1$, we get a path from $x$ to the parent of $v$, which means that the parent of $v$ is also in $Y$. Since parents are stronger than children and we may have only finitely many organisms stronger than $s$ in $Y$, we have only finitely many such parents in $Y$.
Therefore, there exists a parent $z\in Y$ with infinitely many children from $Y$. Since we can glue the path from $z$ to $x$ with the path from $x$ to each such child, this implies that $\Kids(z,s_{0}+1)$ is infinite. This contradicts the maximality of $s_{0}$ and proves that $Y$ is finite. Let $t$ be the first moment such that $X_t$ contains all friends of friends of organisms from $Y$ (such $t$ exists because $Y$ is finite and, by hypothesis (4), every organism has only finitely many friends). Consider an organism $y$ from $\Kids(x,s_{0})$ with $\BD(y)>t$. Choose a path from $x$ to $y$ with all organisms stronger than $s_{0}$. We consider the last organism $u$ in the path such that $\BD(u)<\BD(y)$. Taking the fragment of this path from $y$ to $u$ at the moment $\BD(y)-1$, and recalling that $y^{\BD(y)-1}$ is the parent of $y$, that is, $x$, we obtain a path from $x$ to $u$ with all organisms but $u$ stronger than $s_{0}+1$. This means that all the organisms but $u$ in this path are from $Y$. Thus $u$ has a friend from $Y$, which means that all friends of $u$ belong to $X_{t}$. This contradicts the fact that the organism next to $u$ in the original path from $x$ to $y$ was born after the moment $\BD(y)-1\geqslant t$. \end{proof} \section{Proof of the Main Theorems}\label{MainProofs} \subsection{Existence of a next reduction} The next lemma has its roots in Theorem 20 from \cite{FederVardi}, where the authors proved that bounded width 1 is equivalent to tree duality. \begin{lem}\label{ConstraintPropagation} Suppose $D^{(0)},D^{(1)},\dots,D^{(s)}$ is a strategy for a 1-consistent CSP instance $\Theta$, and $D^{(\top)}$ is a reduction of $\Theta^{(s)}$. \begin{enumerate} \item If there exists a 1-consistent reduction contained in $D^{(\top)}$ and $D^{(s+1)}$ is maximal among such reductions, then for every variable $y$ of $\Theta$ there exists a tree-formula $\Upsilon_{y}\in \ExpShort(\Theta)$ such that $\Upsilon_{y}^{(\top)}(y)$ defines $D_{y}^{(s+1)}$. \item Otherwise, there exists a tree-formula $\Upsilon\in \ExpShort(\Theta)$ such that $\Upsilon^{(\top)}$ has no solutions.
\end{enumerate} \end{lem} \begin{proof} The proof is based on the constraint propagation procedure. We consider the instance $\Theta^{(s)}$. We start with an empty tree-formula $\Upsilon_{y}$ for every $y$; these tree-formulas define the reduction $D^{(\top)}$. Then we introduce a recursive algorithm that gives a correct tree-formula $\Upsilon_{y}$ for every variable $y$. If at some step the reduction defined by these tree-formulas is 1-consistent, then we are done. Otherwise, we consider a constraint $C$ that breaks 1-consistency. Then the current restrictions of the variables $z_{1},\ldots,z_{l}$ in the constraint $C= \rho(z_{1},\ldots,z_{l})$ imply a stronger restriction of some variable $z_{i}$, that is, of the corresponding domain $D_{z_{i}}^{(s)}$. We change the tree-formula $\Upsilon_{z_{i}}$ describing the reduction of the variable $z_{i}$ in the following way: $\Upsilon_{z_{i}}:= C\wedge \Upsilon_{z_{1}}\wedge\dots\wedge\Upsilon_{z_{l}}$. Note that we have to be careful with all the variables appearing in different $\Upsilon_{y}$ to avoid collisions. Every time we join $\Upsilon_{u}$ and $\Upsilon_{v}$ we rename the variables so that they do not have common variables. Obviously, this procedure eventually stops, since every step properly shrinks one of the finitely many finite domains. If $\Upsilon_{y}^{(\top)}(y)$ defines an empty set for some $y$, then $\Upsilon_{y}$ can be taken as $\Upsilon$ to witness condition (2). Otherwise, these tree-formulas define a 1-consistent reduction, which is a maximal 1-consistent reduction since it is defined by tree-formulas. \end{proof} \begin{thm}\label{NextReductionOne} Suppose $D^{(0)},D^{(1)},\dots,D^{(s)}$ is a strategy for a cycle-consistent CSP instance $\Theta$. \begin{itemize} \item If $D_{x}^{(s)}$ has a nontrivial binary absorbing subuniverse $B$ then there exists a 1-consistent absorbing reduction $D^{(s+1)}$ of $\Theta^{(s)}$ with $D_{x}^{(s+1)}\subseteq B$.
\item If $D_{x}^{(s)}$ has a nontrivial center $B$ then there exists a 1-consistent central reduction $D^{(s+1)}$ of $\Theta^{(s)}$ with $D_{x}^{(s+1)}\subseteq B$. \item If $D_{y}^{(s)}$ has no nontrivial binary absorbing subuniverse or center for every $y$ but there exists a nontrivial PC subuniverse $B$ in $D_{x}^{(s)}$ for some $x$, then there exists a 1-consistent PC reduction $D^{(s+1)}$ of $\Theta^{(s)}$ with $D_{x}^{(s+1)}\subseteq B$. \end{itemize} \end{thm} \begin{proof} Without loss of generality we assume that $B$ is a minimal one-of-four subuniverse of this type. Let us define a reduction $D^{(\top)}$ by $D_{x}^{(\top)} = B$ and $D_{y}^{(\top)}=D_{y}^{(s)}$ for $y\neq x$, and apply Lemma~\ref{ConstraintPropagation}. We consider two cases corresponding to the two cases of Lemma~\ref{ConstraintPropagation}. Case 1. There exists a 1-consistent reduction $D^{(s+1)}$ of $\Theta^{(s)}$ such that $D_{y}^{(s+1)}$ is defined by $\Upsilon_{y}(y)$ for a tree-formula $\Upsilon_{y}$ for every variable $y$. Let $R$ be the solution set of $\Upsilon_{y}^{(s)}$. Since $\Upsilon_{y}$ is a tree-formula and $\Theta^{(s)}$ is 1-consistent, the solution set $R$ is subdirect. Applying Corollaries~\ref{AbsImpliesCons}, \ref{CenterImpliesCons}, \ref{PCImplies} to $R$ we derive that $D_{y}^{(s+1)}$ is a one-of-four subuniverse of the corresponding type. Case 2. There exists a tree-formula $\Upsilon\in\ExpShort(\Theta)$ such that $\Upsilon^{(\top)}$ has no solutions. We consider the minimal set of variables $\{x_{1},\ldots,x_{k}\}$ from $\Upsilon$ whose parent is $x$ such that $\Upsilon^{(s)}(x_{1},\ldots,x_{k})$ does not have any tuple in $B^{k}$. Since $\Theta^{(s)}$ is 1-consistent and $\Upsilon$ is a tree-formula, $k\geqslant 2$. If $B$ is a binary absorbing subuniverse, then we get a contradiction with Lemma~\ref{AbsLessThanThree}. For other cases with $k=2$ we get a contradiction from Corollary~\ref{PathStability}.
If $k\geqslant 3$ and $B$ is a center then we get a contradiction with Corollary~\ref{CenterLessThanThree}. If $k\geqslant 3$ and $B$ is a PC subuniverse then we get a contradiction with Corollary~\ref{PCLessThanThree}. \end{proof} As a corollary we can derive that cycle-consistency is a sufficient condition to guarantee the existence of a solution of an instance whose domains avoid linear algebras (the so-called bounded width case). Note that this corollary follows from the result by Marcin Kozik in \cite{kozik2016weak}. \begin{conslem}\label{BoundedWidthCase} Suppose $\Theta$ is a cycle-consistent CSP instance such that for every domain $D_{x}$ there is no $B\subseteq D_{x}$ and a congruence $\sigma$ on $B$ such that $B/\sigma$ is a nontrivial linear algebra. Then $\Theta$ has a solution. \end{conslem} \begin{proof} We recursively build a strategy $D^{(0)},D^{(1)},\dots,D^{(s)}$. We start with $s=0$. If every domain $D_{x}^{(s)}$ is of size 1, then we already have a solution because $\Theta^{(s)}$ is 1-consistent. Otherwise, by Theorem~\ref{NextReduction}, on every domain $D_{x}^{(s)}$ of size greater than 1 there exists a nontrivial one-of-four subuniverse. Note that this subuniverse cannot be linear because this contradicts the assumption that there is no $B\subseteq D_{x}$ and a congruence $\sigma$ on $B$ such that $B/\sigma$ is linear. If we find a binary absorbing subuniverse or a center, then by Theorem~\ref{NextReductionOne} we can always find a next 1-consistent absorbing or central reduction $D^{(s+1)}$. Otherwise, by the same theorem we can find a 1-consistent PC reduction. Since the strategy cannot be infinite, we eventually stop with an instance all of whose variable domains are of size 1. \end{proof} \begin{thm}\label{NextReductionTwo} Suppose $D^{(0)},D^{(1)},\dots,D^{(s)}$ is a strategy for a cycle-consistent CSP instance $\Theta$, and $D^{(\top)}$ is a nonlinear 1-consistent reduction of $\Theta^{(s)}$.
Then there exists a 1-consistent minimal reduction $D^{(s+1)}$ of $\Theta^{(s)}$ of the same type such that $D_{x}^{(s+1)}\subseteq D_{x}^{(\top)}$ for every variable $x$. \end{thm} \begin{proof} Let the reduction $D^{(\top)}$ be of type $\mathcal T$. Let us consider a minimal by inclusion 1-consistent reduction $D^{(s+1)}$ of $\Theta^{(s)}$ of type $\mathcal T$ such that $D_{x}^{(s+1)}\subseteq D_{x}^{(\top)}$ for every variable $x$. Assume that for some $z$ the domain $D_{z}^{(s+1)}$ is not a minimal one-of-four subuniverse of type $\mathcal T$. Then choose a minimal one-of-four subuniverse $B$ of $D_{z}^{(s)}$ of this type contained in $D_{z}^{(s+1)}$. We define a reduction $D^{(\bot)}$ of $\Theta^{(s)}$ by $D^{(\bot)}_{z} = B$, $D_{y}^{(\bot)} = D_{y}^{(s+1)}$ if $y\neq z$, and apply Lemma~\ref{ConstraintPropagation}. Since $D^{(s+1)}$ is a minimal by inclusion reduction, there exists a tree-formula $\Upsilon\in\ExpShort(\Theta)$ such that $\Upsilon^{(\bot)}$ has no solutions. Again, we consider a minimal set of variables $\{z_{1},\ldots,z_{k}\}$ from $\Upsilon$ whose parent is $z$ such that $\Upsilon^{(s+1)}(z_{1},\ldots,z_{k})$ does not have any tuple in $B^{k}$. Since the reduction $D^{(s+1)}$ is 1-consistent, $B\subsetneq D_{z}^{(s+1)}$, and $\Upsilon$ is a tree-formula, we have $k\geqslant 2$. If $D^{(\top)}$ is an absorbing or central reduction of $\Theta^{(s)}$, then it is also an absorbing or central reduction of $\Theta^{(s+1)}$. Then we can get a contradiction just as we did in the proof of Theorem~\ref{NextReductionOne} using Lemma~\ref{AbsLessThanThree}, Corollary~\ref{PathStability}, or Corollary~\ref{CenterLessThanThree}. It remains to consider the case when $B$ is a PC subuniverse. Choose a minimal set of variables $y_{1},\ldots,y_{t}$ of $\Upsilon$ different from $z_{1},\dots,z_{k}$ such that $(\Upsilon^{(s)}(z_{1},\ldots,z_{k},y_{1},\ldots,y_{t}))^{(s+1)}$ does not have tuples with the first $k$ elements from $B$.
If $t=0$ and $k=2$ then $\Upsilon^{(s)}(z_{1},z_{2})$ has an empty intersection with $B\times B$, which contradicts Corollary~\ref{PathStability}. If $t+k\geqslant 3$ then the relation defined by $\Upsilon^{(s)}(z_{1},\ldots,z_{k},y_{1},\ldots,y_{t})$ is a $(B,\dots,B,D_{y_{1}}^{(s+1)},\dots,D_{y_{t}}^{(s+1)})$-essential relation, which contradicts Corollary~\ref{PCLessThanThree}. \end{proof} \begin{thm}\label{NextReductionThree} Suppose $D^{(\top)}$ is a 1-consistent PC reduction for a cycle-consistent irreducible CSP instance $\Theta$, and $\Theta$ is not linked and not fragmented. Then there exist a reduction $D^{(1)}$ of $\Theta$ and a minimal strategy $D^{(1)},\ldots, D^{(s)}$ for $\Theta^{(1)}$ such that the solution set of $\Theta^{(1)}$ is subdirect, the reductions $D^{(2)}, \ldots, D^{(s)}$ are nonlinear, and $D_{x}^{(s)}\subseteq D_{x}^{(\top)}$ for every variable $x$. \end{thm} \begin{proof} Since $\Theta$ is not linked, there exists a maximal congruence $\sigma_{x}$ on $D_{x}$ for a variable $x$ of $\Theta$ such that $\LinkedCon(\Theta,x)\subseteq \sigma_{x}$. Choose an equivalence class $D_{x}^{(1)}$ of $\sigma_{x}$ with a nonempty intersection with $D_{x}^{(\top)}$. For every variable $y$ by $D_{y}^{(1)}$ we denote the set of all elements of $D_{y}$ linked to an element of $D_{x}^{(1)}$. Note that for every $y$ there is a congruence $\sigma_{y}$ on $D_y$ such that $D_{x}/\sigma_x\cong D_{y}/\sigma_{y}$. Then $D_{y}^{(1)}$ is an equivalence class of $\sigma_{y}$. By Corollaries~\ref{AbsorptionQuotient} and \ref{CenterQuotient}, there is no nontrivial binary absorbing subuniverse or center on $D_{x}/\sigma_{x}$. Then by Theorem~\ref{NextReduction}, $\sigma_{x}$ is either a PC congruence or a linear congruence, which means that $D^{(1)}$ is a PC reduction or a linear reduction. Let us show that $D_{y}^{(1)}\cap D_{y}^{(\top)}\neq\varnothing$ for every $y$. Since $\Theta$ is not fragmented, we may consider a path starting at $x$ and ending at $y$.
Since the reduction $D^{(\top)}$ is 1-consistent, this path connects an element of $D_{x}^{(1)}\cap D_{x}^{(\top)}$ with some element of $D_{y}^{(\top)}$, which is also in $D_{y}^{(1)}$. Since $\Theta$ is irreducible, the solution set of $\Theta^{(1)}$ is subdirect. We build the remaining part of the strategy in the following way. Suppose we already have $D^{(0)}, D^{(1)},\ldots, D^{(t)}$, where the reductions $D^{(2)},\ldots,D^{(t)}$ are absorbing or central. If there exists a nontrivial binary absorbing subuniverse or a nontrivial center on $D_{y}^{(t)}$ for some $y$, then by Theorems~\ref{NextReductionOne}, \ref{NextReductionTwo} we can find the next minimal 1-consistent absorbing or central reduction $D^{(t+1)}$. Suppose there is no binary absorbing subuniverse or center on $D_{y}^{(t)}$ for every $y$. Put $D_{y}^{(\bot)}= D_{y}^{(\top)}\cap D_{y}^{(t)}$ for every variable $y$. By Lemma~\ref{SequencesOfSubuniverses} $D_{y}^{(\bot)}$ is a PC subuniverse of $D_{y}^{(t)}$ for every variable $y$. Hence, $D^{(\bot)}$ is a PC reduction of $\Theta^{(t)}$. Then we apply Lemma~\ref{ConstraintPropagation} to find a 1-consistent reduction of $\Theta^{(t)}$ smaller than $D^{(\bot)}$. If we cannot find it, then there exists a tree-formula $\Upsilon$ such that $\Upsilon^{(\bot)}$ has no solutions. Let $R$ be the solution set of $\Upsilon$. Note that $R^{(i)}$ is a subdirect relation for $i=0,1,\dots,t$ because $\Upsilon$ is a tree-formula and $D^{(i)}$ is a 1-consistent reduction. By Lemma~\ref{PCBrel}, $R^{(\top)}$ is a PC subuniverse of $R$. Since $D^{(\top)}$ is 1-consistent, the intersection $R^{(1)}\cap R^{(\top)}$ is not empty. Let us prove by induction on $i$ that $R^{(i)}\cap R^{(\top)}$ is a nonempty PC subuniverse of $R^{(i)}$ for $i=1,2,\dots,t$. By the inductive assumption, we assume that $R^{(i-1)}\cap R^{(\top)}$ is a nonempty PC subuniverse of $R^{(i-1)}$ (for $i=1$ it follows from the definition). 
By Lemma~\ref{PCBrel}, $R^{(i)}$ is a one-of-four subuniverse of $R^{(i-1)}$. For $i\geqslant 2$ it is not a PC subuniverse; hence, by Theorem~\ref{PCBint}, the intersection of $R^{(i-1)}\cap R^{(\top)}$ and $R^{(i)}$, that is $R^{(i)}\cap R^{(\top)}$, cannot be empty. For $i=1$ we already know that $R^{(1)}\cap R^{(\top)}\neq\varnothing$. Applying Theorem~\ref{PCBsub} to $R^{(i-1)}\cap R^{(\top)}\subseteq R^{(i-1)}$ and $R^{(i)}\subseteq R^{(i-1)}$ we derive that $R^{(i)}\cap R^{(\top)}$ is a nonempty PC subuniverse of $R^{(i)}$. Thus, we proved that $R^{(t)}\cap R^{(\top)}$ is not empty, which contradicts the assumption about the tree-formula $\Upsilon$. Hence, there exists a 1-consistent reduction $D^{(\triangle)}$ of $\Theta^{(t)}$ smaller than $D^{(\bot)}$ such that for every variable $y$ the new domain $D_{y}^{(\triangle)}$ can be defined by a tree-formula $\Upsilon_{y}^{(\bot)}$. Since the solution set of $\Upsilon_{y}^{(t)}$ is subdirect, by Corollary~\ref{PCImplies}, the domain $D_{y}^{(\triangle)}$ is a PC subuniverse of $D_{y}^{(t)}$. Hence $D^{(\triangle)}$ is a PC reduction of $\Theta^{(t)}$. It remains to apply Theorem~\ref{NextReductionTwo} to find a minimal PC reduction $D^{(t+1)}$ smaller than $D^{(\triangle)}$, put $s= t+1$, and finish the strategy. \end{proof} \subsection{Existence of a linked connected component} In this subsection we prove that all constraints in a crucial instance have the parallelogram property, show that we can always find a linked connected component with the required properties, and prove that we cannot pass from an instance having solutions to an instance having no solutions while applying a nonlinear reduction.
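Before turning to the theorems of this subsection, it may help to see the constraint-propagation procedure behind Lemma~\ref{ConstraintPropagation} in executable form. The following Python sketch is purely illustrative and not part of the formal development: domains are finite sets, each constraint is an explicit list of tuples over a scope of variables, and the tree-formulas witnessing the propagation are not recorded. The function shrinks the domains to the maximal 1-consistent reduction; if some domain becomes empty, no 1-consistent reduction exists, which corresponds to case (2) of the lemma.

```python
def maximal_one_consistent(domains, constraints):
    """Shrink domains until every constraint is 1-consistent.

    domains:     dict mapping a variable to its (finite) domain, a set.
    constraints: list of pairs (scope, relation), where scope is a tuple
                 of variables and relation is a list of value tuples.
    Returns the largest 1-consistent reduction of the given domains
    (possibly with empty domains, meaning no 1-consistent reduction).
    """
    domains = {v: set(d) for v, d in domains.items()}  # work on a copy
    changed = True
    while changed:
        changed = False
        for scope, relation in constraints:
            # tuples of the relation compatible with the current domains
            live = [t for t in relation
                    if all(t[i] in domains[v] for i, v in enumerate(scope))]
            for i, v in enumerate(scope):
                # values of v supported by at least one compatible tuple
                supported = {t[i] for t in live}
                if domains[v] - supported:      # some value lost its support
                    domains[v] &= supported
                    changed = True
    return domains

# Example: the constraint (x, y) in {(0, 1), (1, 2)} prunes x to {0, 1}
# and y to {1, 2}.
result = maximal_one_consistent({'x': {0, 1, 2}, 'y': {0, 1, 2}},
                                [(('x', 'y'), [(0, 1), (1, 2)])])
```

Two incompatible constraints on the same scope propagate to empty domains, the algorithmic counterpart of a tree-formula with no solutions.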
\begin{thm}\label{KeyConjunctionMain} Suppose $D^{(1)}$ is a minimal 1-consistent one-of-four reduction of a cycle-consistent irreducible CSP instance $\Theta$, $\Omega(x_{1},\ldots,x_{n})$ is a subconstraint of $\Theta$, the solution set of $\Omega^{(1)}$ is subdirect, $\Theta\setminus\Omega$ has a solution in $D^{(1)}$, and $\Theta$ has no solutions in $D^{(1)}$. Then there exist instances $\Upsilon_{1},\ldots,\Upsilon_{t}\in \ExpShort(\Omega)$ such that $\Phi=(\Theta\setminus\Omega)\cup \Upsilon_{1}\cup\dots\cup\Upsilon_{t}$ has no solutions in $D^{(1)}$, each $\Upsilon_{i}(x_{1},\ldots,x_{n})$ is a subconstraint of $\Phi$, and $\Upsilon_{i}^{(1)}(x_{1},\ldots,x_{n})$ defines a subdirect key relation with the parallelogram property for every $i$. \end{thm} \begin{thm}\label{FindPerfectConstraint} Suppose $D^{(1)}$ is a minimal 1-consistent one-of-four reduction of a cycle-consistent irreducible CSP instance $\Theta$, $\Theta$ is crucial in $D^{(1)}$ and not connected. Then there exists an instance $\Theta'\in\Expanded(\Theta)$ that is crucial in $D^{(1)}$ and contains a linked connected component whose solution set is not subdirect. \end{thm} \begin{thm}\label{CannotLooseSolution} Suppose $D^{(1)}$ is a 1-consistent nonlinear reduction of a cycle-consistent irreducible CSP instance $\Theta$. If $\Theta$ has a solution then it has a solution in $D^{(1)}$. \end{thm} \begin{thm}\label{ParPropertyMain} Suppose $D^{(0)},\ldots,D^{(s)}$ is a minimal strategy for a cycle-consistent irreducible CSP instance $\Theta$, and a constraint $\rho(x_{1},\ldots,x_{n})$ of $\Theta$ is crucial in $D^{(s)}$. Then $\rho$ is a critical relation with the parallelogram property. 
\end{thm} \begin{thm}\label{ParPropertyForSubcontraint} Suppose $D^{(0)},\ldots,D^{(s)}$ is a minimal strategy for a cycle-consistent irreducible CSP instance $\Theta$, $\Upsilon(x_{1},\ldots,x_{n})$ is a subconstraint of $\Theta$, the solution set of $\Upsilon^{(s)}$ is subdirect, $k\in\{1,2,\dots,n-1\}$, $\Var(\Upsilon) = \{x_{1},\ldots,x_{n},u_{1},\ldots,u_{t}\}$, $$\Omega = \Upsilon_{x_{1},\ldots,x_{k},u_{1},\ldots,u_{t}}^{y_{1},\ldots,y_{k},v_{1},\ldots,v_{t}} \wedge \Upsilon_{x_{k+1},\ldots,x_{n},u_{1},\ldots,u_{t}}^{y_{k+1},\ldots,y_{n},v_{t+1},\ldots,v_{2t}} \wedge \Upsilon_{x_{1},\ldots,x_{n},u_{1},\ldots,u_{t}}^{y_{1},\ldots,y_{n},v_{2t+1},\ldots,v_{3t}},$$ and $\Theta^{(s)}$ has no solutions. Then $(\Theta\setminus\Upsilon)\cup\Omega$ has no solutions in $D^{(s)}$. \end{thm} To prove these theorems we need to introduce a partial order on domain sets. To every domain set $D^{(\top)}$ we assign a tuple of integers $\size(D^{(\top)}) = (|D_{1}|,|D_{2}|,\dots,|D_{s}|)$, where $D_{1},D_{2},\ldots,D_{s}$ is the set of all different domains of $D^{(\top)}$ ordered by their size starting from the largest one. Then the lexicographic order on tuples of integers induces a partial order on domain sets; that is, we say that $(a_{1},\ldots,a_{k})< (b_{1},\ldots,b_{l})$ if there exists $j\in\{1,2,\dots,\min(k+1,l)\}$ such that $a_{i} = b_{i}$ for every $i<j$, and $a_{j}<b_{j}$ or $j=k+1$. It follows from the definition that $\leqslant$ is transitive and there does not exist an infinite descending chain of reductions. Note that duplicating domains does not affect this partial order, which is why passing to an expanded covering does not increase the size of a domain set. At the same time, for every minimal (proper) one-of-four reduction $D^{(1)}$ of the instance with a domain set $D^{(0)}$ we have $\size(D^{(1)})<\size(D^{(0)})$. Let us show this for a central reduction.
We replace every domain having a nontrivial center by a smaller domain and we do not change other domains. Let $D_{y}^{(0)}$ be a domain of the maximal size having a nontrivial center. Then $|D_{y}^{(0)}|$ will be replaced by smaller numbers in the sequence $\size(D^{(0)})$, making the sequence smaller. We prove the theorems of this subsection simultaneously by induction on the size of the domain sets. Let $D^{(\bot)}$ be a domain set. Assume that Theorems~\ref{KeyConjunctionMain}, \ref{FindPerfectConstraint}, and \ref{CannotLooseSolution} hold for instances $\Theta$ with a domain set $D^{(0)}$ if $\size(D^{(0)})< \size(D^{(\bot)})$, and Theorems~\ref{ParPropertyMain} and \ref{ParPropertyForSubcontraint} hold if $\size(D^{(s)})< \size(D^{(\bot)})$. Let us prove Theorems~\ref{KeyConjunctionMain}, \ref{FindPerfectConstraint}, and \ref{CannotLooseSolution} for instances $\Theta$ with a domain set $D^{(0)}$ if $\size(D^{(0)})=\size(D^{(\bot)})$, and Theorems~\ref{ParPropertyMain} and \ref{ParPropertyForSubcontraint} for $\size(D^{(s)})=\size(D^{(\bot)})$. \begin{THMKeyConjunctionMain} Suppose $D^{(1)}$ is a minimal 1-consistent one-of-four reduction of a cycle-consistent irreducible CSP instance $\Theta$, $\Omega(x_{1},\ldots,x_{n})$ is a subconstraint of $\Theta$, the solution set of $\Omega^{(1)}$ is subdirect, $\Theta\setminus\Omega$ has a solution in $D^{(1)}$, and $\Theta$ has no solutions in $D^{(1)}$. Then there exist instances $\Upsilon_{1},\ldots,\Upsilon_{t}\in \ExpShort(\Omega)$ such that $\Phi=(\Theta\setminus\Omega)\cup \Upsilon_{1}\cup\dots\cup\Upsilon_{t}$ has no solutions in $D^{(1)}$, each $\Upsilon_{i}(x_{1},\ldots,x_{n})$ is a subconstraint of $\Phi$, and $\Upsilon_{i}^{(1)}(x_{1},\ldots,x_{n})$ defines a subdirect key relation with the parallelogram property for every $i$. \end{THMKeyConjunctionMain} \begin{proof} Let $\Sigma$ be the set of all relations defined by $\Upsilon^{(1)}(x_{1},\ldots,x_{n})$, where $\Upsilon\in\ExpShort(\Omega)$.
To every relation $\rho\in\Sigma$ we assign a constraint $((x_{1},\ldots,x_{n});\rho)$, which we denote by $C(\rho)$. We can find $\Sigma_0\subseteq \Sigma$ such that the instance $(\Theta^{(1)}\setminus\Omega^{(1)})\cup C(\Sigma_{0})$ has no solutions, but if we replace any relation of $\Sigma_{0}$ by all bigger relations from $\Sigma$ (weaker in terms of constraints) then we get an instance with a solution. Let $\Sigma_{0} = \{\rho_{1},\ldots,\rho_{t}\}$. For each $\rho_{i}$ and each $\alpha\notin\rho_{i}$ we consider an inclusion-maximal relation $\rho_{i,\alpha}\supseteq \rho_{i}$ from $\Sigma$ such that $\alpha\notin \rho_{i,\alpha}$. Since $\rho_{i} = \bigcap_{\alpha\notin \rho_{i}} \rho_{i,\alpha}$, if $\rho_{i}\neq \rho_{i,\alpha}$ for each $\alpha$ then $\rho_{i}$ could be replaced by bigger relations that are still in $\Sigma$, which contradicts our assumptions. Then for each $\rho_{i}$ there exists a tuple $\alpha_{i}$ such that $\rho_{i}$ is an inclusion-maximal relation without $\alpha_{i}$ in $\Sigma$. By Corollary~\ref{MaximalMeansKey}, $\rho_{i}$ is a key relation for every $i$. Therefore, we get a sequence of instances $\Upsilon_{1},\ldots,\Upsilon_{t}\in\ExpShort(\Omega)$ such that $\Upsilon_{i}^{(1)}$ defines $\rho_{i}$ for every $i$. Put $\Phi = (\Theta\setminus\Omega)\cup \Upsilon_{1}\cup\dots\cup\Upsilon_{t}$. We choose variables in the instance so that the only common variables of $\Upsilon_{1},\ldots,\Upsilon_{t}$ are $x_{1},\ldots,x_{n}$, which guarantees that $\Upsilon_{i}(x_{1},\ldots,x_{n})$ is a subconstraint of $\Phi$. Since $\Phi$ is a covering of $\Theta$, by Lemma~\ref{ExpandedConsistencyLemma}, $\Phi$ is cycle-consistent and irreducible. Assume that $\rho_{i}$ does not have the parallelogram property. Without loss of generality we assume that the failing partition is $\{x_{1},\dots,x_{k}\}$, $\{x_{k+1},\dots,x_{n}\}$.
Define the instance $\Omega_{i}$ from $\Upsilon_{i}$ using the construction from Theorem~\ref{ParPropertyForSubcontraint}. Then the relation defined by $\Omega_{i}^{(1)}(x_{1},\dots,x_{n})$ is bigger than $\rho_{i}$ and $\Omega_{i}\in\ExpShort(\Omega)$, which means that $(\Phi\setminus\Upsilon_{i})\cup\Omega_{i}$ has a solution in $D^{(1)}$ and contradicts the inductive assumption for Theorem~\ref{ParPropertyForSubcontraint}. Hence, $\rho_{i}$ has the parallelogram property for every $i$. \end{proof} \input{Transformations.tex} \begin{THMCannotLooseSolution} Suppose $D^{(1)}$ is a 1-consistent nonlinear reduction of a cycle-consistent irreducible CSP instance $\Theta$. If $\Theta$ has a solution then it has a solution in $D^{(1)}$. \end{THMCannotLooseSolution} \begin{proof} Assume the contrary, that is, $\Theta$ has a solution but $\Theta^{(1)}$ has no solutions. By Theorem~\ref{NextReductionTwo}, there exists a minimal 1-consistent nonlinear reduction such that $\Theta$ has no solutions in it. First, we consider the set of all minimal 1-consistent nonlinear reductions of $\Theta$, which we denote by $\mathfrak{R}$. Then we consider an instance $\Theta'\in\Expanded(\Theta)$ with the minimal positive number of reductions $D^{(\vartriangle)}\in \mathfrak{R}$ such that $\Theta'$ has no solutions in $D^{(\vartriangle)}$. Note that this transformation of $\Theta$ to $\Theta'$ can be omitted if $D^{(1)}$ is not a PC reduction. Then we weaken the instance $\Theta'$ (replace any constraint by all weaker constraints) while we still have a reduction $D^{(\vartriangle)}\in \mathfrak{R}$ such that $\Theta'$ has no solutions in $D^{(\vartriangle)}$. After that we remove all dummy variables from constraints and denote the obtained instance by $\Theta''$. 
Note that $\Theta''$ is not fragmented (since it is crucial in some $D^{(\vartriangle)}$), $\Theta''\in\Expanded(\Theta)$, and for any reduction $D^{(\vartriangle)}\in \mathfrak{R}$ the instance $\Theta''$ is either crucial in $D^{(\vartriangle)}$, or has a solution in $D^{(\vartriangle)}$. The last property also holds for any expanded covering of $\Theta''$ that is crucial in some reduction $D^{(\vartriangle)}$. Choose a reduction $D^{(\vartriangle)}$ from $\mathfrak{R}$ such that $\Theta''$ is crucial in it. Assume that $\Theta''$ is not linked. If $D^{(\vartriangle)}$ is a PC reduction, then we apply Theorem~\ref{NextReductionThree} to find a reduction $D^{(1)}$ (a different reduction, which we again denote by $D^{(1)}$) and a strategy $D^{(1)},\dots,D^{(s)}$ for $\Theta''^{(1)}$ such that the solution set of $\Theta''^{(1)}$ is subdirect, the strategy has only nonlinear reductions, and $D_{y}^{(s)}\subseteq D_{y}^{(\vartriangle)}$ for every $y$. Then $\Theta''^{(1)}$ is cycle-consistent and irreducible. By the inductive assumption $\Theta''^{(2)}$ has a solution; then by Lemma~\ref{ProperReductionPreservesCycleConAndIrreducability} $\Theta''^{(2)}$ is cycle-consistent and irreducible; by the inductive assumption $\Theta''^{(3)}$ has a solution; and so on. Thus we can prove that $\Theta''^{(s)}$ has a solution, which means that $\Theta''^{(\vartriangle)}$ has a solution and contradicts our assumption. If $D^{(\vartriangle)}$ is an absorbing or central reduction, then we choose a variable $x$ of $\Theta''$ and an element $c\in D_{x}^{(\vartriangle)}$, and for every variable $y$ by $D_{y}^{(\top)}$ we denote the set of all elements of $D_{y}$ linked to $c$. Since $\Theta''$ is irreducible, the solution set of $\Theta''^{(\top)}$ is subdirect. Therefore, $\Theta''^{(\top)}$ is irreducible and cycle-consistent.
By Lemmas~\ref{AbsImplies} and \ref{CenterImplies}, the reduction $D^{(\bot)}$, defined by $D_{y}^{(\bot)} = D_{y}^{(\top)}\cap D_{y}^{(\vartriangle)}$ for every variable $y$, is an absorbing or central reduction for $\Theta''^{(\top)}$. Since $D^{(\vartriangle)}$ is a 1-consistent reduction and $D^{(\top)}$ is just a linked component, the reduction $D^{(\bot)}$ is also 1-consistent. By the inductive assumption, $\Theta''^{(\bot)}$ has a solution, which gives a contradiction. Thus, we may assume that $\Theta''$ is linked. Recall that by the inductive assumption for Theorem~\ref{ParPropertyMain}, every constraint of $\Theta''$ is critical and has the parallelogram property. If $\Theta''$ is not connected, then by Theorem~\ref{FindPerfectConstraint}, there exists an instance $\Upsilon\in\Expanded(\Theta'')$ that is crucial in $D^{(\vartriangle)}$ and contains a linked connected subinstance $\Omega$. If $\Theta''$ is connected, then $\Theta''$ is a linked connected component itself and we put $\Upsilon = \Omega = \Theta''$. At the moment we have $\Upsilon\in\Expanded(\Theta'')$ that is crucial in $D^{(\vartriangle)}$ and a linked connected subinstance $\Omega$. Let $x_{1}$ be the first variable in a constraint $C\in\Omega$. By Lemma~\ref{CriticalMeansIrreducible}, $\ConOne(C,x_1)$ is irreducible. By Corollary~\ref{PathInConnectedComponent}, there exists a bridge $\delta$ from $\ConOne(C,x_1)$ to $\ConOne(C,x_1)$ such that $\delta(x,x,y,y)$ is a full relation. By Corollary~\ref{LinkedLink}, there exists a relation $\zeta\subseteq D_{x_1}\times D_{x_1}\times \mathbb Z_{p}$ such that $(y_{1},y_{2},0)\in \zeta\Leftrightarrow (y_{1},y_{2})\in\ConOne(C,x_1)$ and $\proj_{1,2}(\zeta) = \cover{\ConOne(C,x_1)}$. Let us replace the variable $x_1$ of $C$ in $\Upsilon$ by $x_1'$ and add the constraint $\zeta(x_1,x_1',z)$. We denote the obtained instance by $\Upsilon'$.
Let $\Var(\Upsilon) = \{x_{1},\ldots,x_{n}\}$ and let $\Upsilon'(x_{1},\ldots,x_{n},z)$ define the relation $S$, which is the projection of the solution set of $\Upsilon'$ onto all variables but $x_1'$. Let $C = R(x_1, x_{i_1},\ldots,x_{i_s})$ and put $R'(x_1, x_{i_1},\ldots,x_{i_s}) = \exists x_1' R(x_1', x_{i_1},\ldots,x_{i_s})\wedge (x_1,x_1')\in \cover{\ConOne(C,x_{1})}$. The projection of $S$ onto the first $n$ variables is the solution set of the instance $\Upsilon$ whose constraint $C$ is replaced by the weaker constraint $R'(x_1, x_{i_1},\ldots,x_{i_s})$. Since $\Upsilon$ is crucial in $D^{(\vartriangle)}$, the solution set $S$ contains a tuple whose first $n$ elements are from $D^{(\vartriangle)}$. Moreover, the last element of all such tuples is not equal to 0, since otherwise this would imply that $\Upsilon$ has a solution in $D^{(\vartriangle)}$. By the assumption, $\Theta$ has a solution, and therefore $\Upsilon$ has a solution, which means that $\Upsilon'$ has a solution with $z=0$ and, equivalently, $S$ has a tuple whose last element is $0$. Since $\mathbb Z_{p}$ does not have proper subalgebras of size greater than 1, we have $\proj_{n+1}(S) = \mathbb Z_{p}$. Let us show for $i\in\{1,2,\dots,n\}$ that $(\proj_{i}(S))^{(\vartriangle)}$ is a one-of-four subuniverse of $\proj_{i}(S)$ of the same type as $D^{(\vartriangle)}$. For absorbing and central reductions it follows from Lemma~\ref{PCBsubNonPC}. For the PC type we consider a PC congruence $\sigma$ on $D_{x_{i}}$. By Theorems~\ref{NextReductionOne}, \ref{NextReductionTwo}, for every equivalence class $U$ of $\sigma$ there exists a minimal 1-consistent PC reduction $D^{(\triangledown)}\in \mathfrak{R}$ such that $D_{x_{i}}^{(\triangledown)} \subseteq U$. As we assumed earlier, for any reduction from $\mathfrak{R}$ the instance $\Upsilon$ is either crucial in it, or has a solution in it. Therefore, $\Upsilon'$ has a solution in any reduction from $\mathfrak{R}$, and $\Upsilon'$ has a solution with $x_{i}\in U$.
Hence, $\sigma$ restricted to $\proj_{i}(S)$ is still a PC congruence. Moreover, $(\proj_{i}(S))^{(\vartriangle)}$ is an intersection of equivalence classes of the corresponding PC congruences on $\proj_{i}(S)$. Thus, we showed that $(\proj_{i}(S))^{(\vartriangle)}$ is a one-of-four subuniverse of $\proj_{i}(S)$ of the same type as $D^{(\vartriangle)}$. By Lemma~\ref{PCBrel}, $S^{(\vartriangle)}$ is a nonlinear one-of-four subuniverse of $S$ (here we do not reduce the last variable). Also, by Lemma~\ref{PCBrel}, the set of all tuples from $S$ whose last element is 0 is a linear subuniverse of $S$, we denote this subuniverse by $S_{0}$. By Lemma~\ref{IntersectionOfTwoSubuniverses}, the intersection $S^{(\vartriangle)}\cap S_{0}$ is not empty, which means that $\Upsilon$ has a solution in $D^{(\vartriangle)}$ and contradicts our assumptions. \end{proof} Note that Theorem~\ref{ParPropertyMain} could be derived from Theorem~\ref{ParPropertyForSubcontraint}, but we decided to keep the original proof of Theorem~\ref{ParPropertyMain} because it demonstrates the idea for both theorems in an easier way. \begin{THMParPropertyMain} Suppose $D^{(0)},\ldots,D^{(s)}$ is a minimal strategy for a cycle-consistent irreducible CSP instance $\Theta$, and a constraint $\rho(x_{1},\ldots,x_{n})$ of $\Theta$ is crucial in $D^{(s)}$. Then $\rho$ is a critical relation with the parallelogram property. \end{THMParPropertyMain} \begin{proof} Since $\rho(x_{1},\ldots,x_{n})$ is crucial, $\rho$ is a critical relation. Let $\Theta'$ be obtained from $\Theta$ by replacement of $\rho(x_{1},\ldots,x_{n})$ by all weaker constraints. Since $\Theta$ is crucial in $D^{(s)}$, $\Theta'$ has a solution in $D^{(s)}$. By Lemma~\ref{ExpandedConsistencyLemma}, $\Theta'$ is cycle-consistent and irreducible. Assume that $|D^{(s)}_{x}|=1$ for every variable $x$. Since the reduction $D^{(s)}$ is 1-consistent, we get a solution, which contradicts the fact that $\Theta$ has no solutions in $D^{(s)}$. 
If we have a nontrivial binary absorbing subuniverse, or a nontrivial center, or a nontrivial PC subuniverse on some domain $D_{x}^{(s)}$, then by Theorems~\ref{NextReductionOne},~\ref{NextReductionTwo}, there exists a minimal nonlinear 1-consistent reduction $D^{(s+1)}$ for $\Theta$. As we explained before, $\size(D^{(s+1)})<\size(D^{(s)})$. Then, by Lemma~\ref{ProperReductionPreservesCycleConAndIrreducability}, $\Theta'^{(s)}$ is cycle-consistent and irreducible. By Theorem~\ref{CannotLooseSolution}, $\Theta'$ has a solution in $D^{(s+1)}$. Hence, $\rho(x_{1},\ldots,x_{n})$ is crucial in $D^{(s+1)}$. By the inductive assumption $\rho$ has the parallelogram property. It remains to consider the case when $\ConLin(D_{x}^{(s)})$ is proper for every $x$ such that $|D_{x}^{(s)}|>1$. Let $\alpha$ be a solution of $\Theta'$ in $D^{(s)}$. Let the projection of $\alpha$ onto the variables $x_{1},\ldots,x_n$ be $(a_{1},\ldots,a_{n})$. Assume that $\rho$ does not have the parallelogram property. Without loss of generality we can assume that there exist $c_{1},\ldots,c_{n}$ and $d_{1},\ldots,d_{n}$ such that \begin{align*} (c_{1},\ldots,c_{k},c_{k+1},\ldots,c_{n})&\notin\rho,\\ (c_{1},\ldots,c_{k},d_{k+1},\ldots,d_{n})&\in\rho,\\ (d_{1},\ldots,d_{k},c_{k+1},\ldots,c_{n})&\in\rho,\\ (d_{1},\ldots,d_{k},d_{k+1},\ldots,d_{n})&\in\rho. \end{align*} Put \begin{align*} \rho'(x_{1},\ldots,x_{n}) = \exists y_{1}\dots\exists y_{n}\; \rho(x_{1},\ldots,x_{k},y_{k+1},\ldots,y_{n}) \wedge&\\ \rho(y_{1},\ldots,y_{k},x_{k+1},\ldots,x_{n}) \wedge \rho(y_{1},\ldots,y_{k},y_{k+1},\ldots,y_{n}). \end{align*} Obviously, $\rho\subsetneq\rho'$ and $\rho'\in\Gamma$, therefore $(a_{1},\ldots,a_{n})\in\rho'.$ Hence, there exist $b_{1},\ldots,b_{n}$ such that \begin{align*} (a_{1},\ldots,a_{k},b_{k+1},\ldots,b_{n})&\in\rho,\\ (b_{1},\ldots,b_{k},a_{k+1},\ldots,a_{n})&\in\rho,\\ (b_{1},\ldots,b_{k},b_{k+1},\ldots,b_{n})&\in\rho. 
\end{align*} By Lemma~\ref{ParProperty}, there exists a tuple $(e_{1},\ldots,e_{n})\in \rho$ such that $(a_{i},e_{i})\in\LinCon(D_{x_{i}}^{(s)})$ for every $i$. Consider the minimal linear reduction $D^{(s+1)}$ of $\Theta^{(s)}$ such that $\alpha\in D^{(s+1)}$. Then we have $(e_{1},\ldots,e_{n})\in \rho^{(s+1)}$, and by Lemma~\ref{ProperReductionPreservesSubdirectness}, $D^{(s+1)}$ is a 1-consistent reduction of $\Theta^{(s)}$. Since $\Theta'$ has a solution in $D^{(s+1)}$, $\rho(x_{1},\ldots,x_{n})$ is crucial in $D^{(s+1)}$. We get a longer minimal strategy with smaller $\size(D^{(s+1)})$, hence by the inductive assumption the relation $\rho$ is a critical relation with the parallelogram property. \end{proof} \begin{THMParPropertyForSubcontraint} Suppose $D^{(0)},\ldots,D^{(s)}$ is a minimal strategy for a cycle-consistent irreducible CSP instance $\Theta$, $\Upsilon(x_{1},\ldots,x_{n})$ is a subconstraint of $\Theta$, the solution set of $\Upsilon^{(s)}$ is subdirect, $k\in\{1,2,\dots,n-1\}$, $\Var(\Upsilon) = \{x_{1},\ldots,x_{n},u_{1},\ldots,u_{t}\}$, $$\Omega = \Upsilon_{x_{1},\ldots,x_{k},u_{1},\ldots,u_{t}}^{y_{1},\ldots,y_{k},v_{1},\ldots,v_{t}} \wedge \Upsilon_{x_{k+1},\ldots,x_{n},u_{1},\ldots,u_{t}}^{y_{k+1},\ldots,y_{n},v_{t+1},\ldots,v_{2t}} \wedge \Upsilon_{x_{1},\ldots,x_{n},u_{1},\ldots,u_{t}}^{y_{1},\ldots,y_{n},v_{2t+1},\ldots,v_{3t}},$$ and $\Theta^{(s)}$ has no solutions. Then $(\Theta\setminus\Upsilon)\cup\Omega$ has no solutions in $D^{(s)}$. \end{THMParPropertyForSubcontraint} \begin{proof} Put $\Theta' = (\Theta\setminus\Upsilon)\cup\Omega$. Since $\Omega$ is a covering of $\Upsilon$, Lemma~\ref{ExpandedConsistencyLemma} implies that $\Theta\cup\Omega$ is cycle-consistent and irreducible. Assume that $\Theta'$ has a solution in $D^{(s)}$. 
We recursively build a strategy $D^{(s)},D^{(s+1)},\ldots,D^{(q)}$ for $\Theta\cup\Omega = \Theta'\cup\Upsilon$ satisfying the following conditions: \begin{enumerate} \item $D^{(s)},D^{(s+1)},\dots,D^{(q)}$ is a minimal strategy for $\Theta'^{(s)}$; \item if $s\leqslant j<q$ and $D^{(j+1)}$ is a linear reduction, then for each $i\in\{1,2,\dots,t\}$ $$D_{u_{i}}^{(j+1)} = \proj_{n+i}(\rho'\cap (D_{x_{1}}^{(j+1)}\times\dots\times D_{x_{n}}^{(j+1)} \times D_{u_{1}}^{(j)}\times\dots\times D_{u_{t}}^{(j)})), $$ where $\rho'$ is the relation defined by $\Upsilon(x_{1},\ldots,x_{n},u_1,\ldots,u_t)$; \item the solution set of $\Upsilon^{(j)}$ is subdirect for $s\leqslant j\leqslant q$; \item $\Theta'$ has a solution in $D^{(q)}$. \end{enumerate} Note that here we allow $D^{(j)}$ to be equal to $D^{(j+1)}$ in a strategy, which can happen if $D^{(j+1)}$ is a proper reduction for $\Upsilon^{(j)}$ but not proper for $\Theta'^{(j)}$. We will prove that we can make this sequence longer while $|D^{(q)}_{x_{i}}|>1$ for some $i$. By Theorem~\ref{NextReduction}, there exists a nontrivial one-of-four subuniverse on $D^{(q)}_{x}$ if $|D^{(q)}_{x}|>1$. We consider two cases: Case 1. There exists a nontrivial binary absorbing subuniverse, or a nontrivial center, or a nontrivial PC congruence on some domain $D_{x}^{(q)}$. Then applying Theorems~\ref{NextReductionOne}, \ref{NextReductionTwo} to the strategy $D^{(0)},D^{(1)},\dots,D^{(q)}$ of $\Theta\cup\Omega$, we conclude that there exists a minimal 1-consistent nonlinear reduction $D^{(q+1)}$ for $(\Theta\cup\Omega)^{(q)}$. By Lemma~\ref{ProperReductionPreservesCycleConAndIrreducability}, $\Theta'^{(q)}$ and $\Upsilon^{(q)}$ are cycle-consistent and irreducible. By Theorem~\ref{CannotLooseSolution}, $\Theta'$ has a solution in $D^{(q+1)}$ and $\Upsilon$ has a solution in $D^{(q+1)}$. By Lemma~\ref{ProperReductionPreservesSubdirectness}, the solution set of $\Upsilon^{(q+1)}$ is subdirect. Thus, we made the sequence longer. Case 2. 
$\ConLin(D_{x}^{(q)})$ is proper for every $x$ such that $|D_{x}^{(q)}|>1$. Let $\alpha$ be a solution of $\Theta'$ in $D^{(q)}$. We define the new linear reduction $D^{(q+1)}$ as follows. For all variables but $u_{1},\ldots,u_{t}$, we choose an equivalence class of $\ConLin(D_{x}^{(q)})$ containing the corresponding element of the solution $\alpha$. For the variable $u_{i}$ we define $D_{u_{i}}^{(q+1)}$ by the formula in (2) from the above list for $j=q$. By Lemma~\ref{ProperReductionPreservesSubdirectness}, $D^{(q+1)}$ is 1-consistent for $\Theta'$. Note that it does not follow from the definition that $D_{u_{i}}^{(q+1)}$ is not empty and we will prove this later. Let the projection of $\alpha$ onto the variables $x_{1},\ldots,x_n$ be $(a_{1},\ldots,a_{n})$. Suppose $\Upsilon^{(s)}(x_{1},\ldots,x_{n})$ defines a relation $\rho$. Since $\alpha$ is a solution of $\Theta'^{(s)}$, there exist $b_{1},\ldots,b_{n}$ such that \begin{align*} (a_{1},\ldots,a_{k},b_{k+1},\ldots,b_{n})&\in\rho,\\ (b_{1},\ldots,b_{k},a_{k+1},\ldots,a_{n})&\in\rho,\\ (b_{1},\ldots,b_{k},b_{k+1},\ldots,b_{n})&\in\rho. \end{align*} Since the solution set of $\Upsilon^{(j)}$ is subdirect for $s\leqslant j\leqslant q$, we can apply Lemma~\ref{ParProperty} to $\rho$ and the strategy $D^{(s)},\dots,D^{(q)}$. Hence, there exists a tuple $(d_{1},\ldots,d_{n})\in \rho$ such that $(a_{i},d_{i})\in\LinCon(D_{x_{i}}^{(q)})$ for every $i$. Therefore, $(\Upsilon^{(s)}(x_{1},\ldots,x_{n}))^{(q+1)}$ is not empty. Let us show by induction on $j=s,s+1,\dots,q$ that $(\Upsilon^{(j)}(x_{1},\ldots,x_{n}))^{(q+1)}$ is not empty. For $j=s$ we already know this. Assume that $(\Upsilon^{(j)}(x_{1},\ldots,x_{n}))^{(q+1)}$ is not empty. If the reduction $D^{(j+1)}$ is not linear then we apply Theorem~\ref{PreviousReductions} to $(\Upsilon^{(j)}(x_{1},\ldots,x_{n}))^{(q+1)}$ and the strategy $D^{(s)},\dots,D^{(q)}$, and obtain that $(\Upsilon^{(j+1)}(x_{1},\ldots,x_{n}))^{(q+1)}$ is not empty. 
If the reduction $D^{(j+1)}$ is linear then it follows from the definition of $D_{u_i}^{(j+1)}$ that $(\Upsilon^{(j+1)}(x_{1},\ldots,x_{n}))^{(q+1)}$ is not empty. Thus, we can prove that $(\Upsilon^{(q)}(x_{1},\ldots,x_{n}))^{(q+1)}$ is not empty, and therefore $D_{u_i}^{(q+1)}$ is not empty for every $i$. Considering the solution set of $\Upsilon$ and applying Corollary~\ref{LinearImplies}, we derive that $D_{u_{i}}^{(q+1)}$ is a linear subuniverse of $D_{u_{i}}^{(q)}$. Hence, the reduction $D^{(q+1)}$ is a 1-consistent linear reduction for $\Upsilon^{(q)}$. By Lemma~\ref{ProperReductionPreservesSubdirectness}, $(\Upsilon^{(q)}(x_{1},\ldots,x_{n}))^{(q+1)}$ is subdirect. From the definition of $D_{u_{i}}^{(q+1)}$ we derive that the projection of the solution set of $\Upsilon^{(q+1)}$ onto $u_{i}$ is $D_{u_{i}}^{(q+1)}$ for every $i$, which means that the solution set of $\Upsilon^{(q+1)}$ is subdirect. Hence, we get a longer strategy having all the necessary properties. Thus, we showed that we can make the sequence longer until $|D^{(q)}_{x_{i}}|=1$ for every $i$. Assume that we reached this final state. Since both $\Upsilon$ and $\Theta'$ have a solution in $D^{(q)}$ and $x_{1},\dots,x_{n}$ are their only common variables, $\Theta$ has a solution in $D^{(q)}$, which contradicts the fact that $\Theta$ has no solutions in~$D^{(s)}$. \end{proof} \subsection{Theorems from Section~\ref{CorretnessSection}} In this subsection we assume that the variables of the instance $\Theta$ are $x_{1},\ldots,x_{n}$, and the domain of $x_{i}$ is $D_{i}$ for every $i$. The first two theorems are proved together. \begin{thmAbsorptionCenterStep} Suppose $\Theta$ is a cycle-consistent irreducible CSP instance, and $B$ is a nontrivial binary absorbing subuniverse or a nontrivial center of $D_{i}$. Then $\Theta$ has a solution if and only if $\Theta$ has a solution with $x_{i}\in B$. 
\end{thmAbsorptionCenterStep} \begin{thmPCStepThm} Suppose $\Theta$ is a cycle-consistent irreducible CSP instance, there does not exist a nontrivial binary absorbing subuniverse or a nontrivial center on $D_{j}$ for every $j$, $(D_{i};w)/\sigma$ is a polynomially complete algebra, and $E$ is an equivalence class of $\sigma$. Then $\Theta$ has a solution if and only if $\Theta$ has a solution with $x_{i}\in E$. \end{thmPCStepThm} \begin{proof} By Theorems~\ref{NextReductionOne}, \ref{NextReductionTwo}, there exists a minimal 1-consistent nonlinear reduction $D^{(1)}$ such that $D_{x_{i}}^{(1)}\subseteq B$ for Theorem~\ref{AbsorptionCenterStep}, and $D_{x_{i}}^{(1)}\subseteq E$ for Theorem~\ref{PCStepThm}. By Theorem~\ref{CannotLooseSolution}, there exists a solution in $D^{(1)}$. \end{proof} The next theorem will be used in the proof of Theorem~\ref{LinearStep} from Section~\ref{CorretnessSection}. \begin{thm}\label{LinearStepHelp} Suppose the following conditions hold: \begin{enumerate} \item $\Theta$ is a linked cycle-consistent irreducible CSP instance; \item there does not exist a nontrivial binary absorbing subuniverse or a nontrivial center on $D_{j}$ for every $j$; \item if we replace every constraint of $\Theta$ by all weaker constraints then the obtained instance has a solution with $x_{i} = b$ for every $i$ and $b\in D_{i}$ (the obtained instance has a subdirect solution set); \item $D^{(1)}$ is a minimal linear reduction for $\Theta$; \item $\Theta$ is crucial in $D^{(1)}$. \end{enumerate} Then there exists a constraint $\rho(x_{i_1},\ldots,x_{i_s})$ in $\Theta$ and a subuniverse $\zeta$ of $\mathbf{D_{i_1}}\times\dots\times \mathbf{D_{i_s}}\times \mathbf{\mathbb Z_{p}}$ such that the projection of $\zeta$ onto the first $s$ coordinates is bigger than $\rho$ but the projection of $\zeta\cap (D_{i_1}\times\dots\times D_{i_s}\times \{0\})$ onto the first $s$ coordinates is equal to $\rho$. \end{thm} \begin{proof} We consider two cases. Case 1. 
Assume that $\Theta$ contains just one constraint $\rho(x_1,\ldots,x_{n})$. By Corollary~\ref{LinearImplies}, $D_{n}'=\proj_{n}(\rho\cap (D_{1}^{(1)}\times\dots\times D_{n-1}^{(1)} \times D_{n}))$ is a linear subuniverse of $D_{n}$. By Lemma~\ref{LinearAlgebrasFact}, $D_{n}^{(1)}$ and $D_{n}'$ can be viewed as products of affine subspaces and can be defined by linear equations. Since $D_{n}^{(1)}\cap D_{n}'=\varnothing$ and $D_{n}^{(1)}$ is a minimal reduction, we can take an equation defining $D_{n}'$ that does not hold on $D_{n}^{(1)}$ to get a maximal linear congruence $\sigma$ on $D_{n}$ such that $D_{n}^{(1)}$ and $D_{n}'$ are in different equivalence classes of $\sigma$. Note that $D_{n}/\sigma\cong\mathbb Z_{p}$ for some $p$. Let $\psi$ be the corresponding homomorphism from $D_{n}$ to $\mathbb Z_{p}$. Put $$\zeta(x_{1},\ldots,x_{n},z) = \exists x_{n}'\; \rho(x_{1},\ldots,x_{n-1},x_{n}') \wedge (\psi(x_{n})=\psi(x_{n}') + z),$$ where the expression $(\psi(x_{n})=\psi(x_{n}') + z)$ defines a ternary subalgebra of $D_{n}\times D_{n}\times \mathbb Z_{p}$. Thus, we have $\rho$ and $\zeta$ with the required properties. Case 2. $\Theta$ contains more than one constraint. Then by condition (5), every constraint $C^{(1)}$ is not empty, which by Lemma~\ref{ProperReductionPreservesSubdirectness} implies that $C^{(1)}$ is subdirect. Then $D^{(1)}$ is a minimal 1-consistent linear reduction. By Theorem~\ref{ParPropertyMain}, every constraint in $\Theta$ is critical and has the parallelogram property. If $\Theta$ is not connected, then by Theorem~\ref{FindPerfectConstraint} there exists an instance $\Theta'\in\Expanded(\Theta)$ that is crucial in $D^{(1)}$ and contains a linked connected component $\Omega$ such that the solution set of $\Omega$ is not subdirect. By condition (3), since the solution set of $\Omega$ is not subdirect, $\Omega$ should contain a constraint relation from the original instance $\Theta$.
If $\Theta$ is connected, then $\Theta$ is a linked connected component itself and we put $\Omega = \Theta$. Thus, in both cases we have a linked connected instance $\Omega$ having a constraint relation $\rho$ from $\Theta$. Let $\rho(x_{i_1},\ldots,x_{i_s})$ be a constraint of $\Theta$. By Lemma~\ref{CriticalMeansIrreducible}, $\ConOne(\rho,1)$ is an irreducible congruence. By Corollary~\ref{PathInConnectedComponent}, there exists a bridge $\delta$ from $\ConOne(\rho,1)$ to $\ConOne(\rho,1)$ such that $\widetilde\delta$ is a full relation. By Corollary~\ref{LinkedLink}, there exists a relation $\xi\subseteq D_{i_{1}}\times D_{i_{1}}\times \mathbb Z_{p}$ such that $(x_{1},x_{2},0)\in \xi\Leftrightarrow (x_{1},x_{2})\in\ConOne(\rho,1)$ and $\proj_{1,2}(\xi) = \cover{\ConOne(\rho,1)}$. It remains to put $\zeta(x_{i_1},\ldots,x_{i_s},z) = \exists x_{i_{1}}'\; \rho(x_{i_1}',x_{i_2},\ldots,x_{i_s})\wedge \xi(x_{i_{1}},x_{i_{1}}',z)$. \end{proof} \begin{THMmainLinearStep} Suppose the following conditions hold: \begin{enumerate} \item $\Theta$ is a linked cycle-consistent irreducible CSP instance with domain set $(D_{1},\ldots,D_{n})$; \item there does not exist a nontrivial binary absorbing subuniverse or a nontrivial center on $D_{j}$ for every $j$; \item if we replace every constraint of $\Theta$ by all weaker constraints then the obtained instance has a solution with $x_{i} = b$ for every $i$ and $b\in D_{i}$ (the obtained instance has a subdirect solution set); \item $L_{i} = D_{i}/\sigma_{i}$ for every $i$, where $\sigma_{i}$ is the minimal linear congruence on $D_{i}$; \item $\phi:\mathbb Z_{q_{1}}\times \dots \times \mathbb Z_{q_{k}} \to L_{1}\times\dots\times L_{n}$ is a homomorphism, where $q_{1},\dots,q_{k}$ are prime numbers; \item if we replace any constraint of $\Theta$ by all weaker constraints then for every $(a_{1},\ldots,a_{k})\in \mathbb Z_{q_{1}}\times \dots \times \mathbb Z_{q_{k}}$ there exists a solution of the obtained instance in 
$\phi(a_{1},\ldots,a_{k})$. \end{enumerate} Then $\{(a_{1},\dots,a_{k})\mid \Theta \text{ has a solution in }\phi(a_1,\dots,a_{k})\}$ is either empty, or is full, or is an affine subspace of $\mathbb Z_{q_{1}}\times \dots \times \mathbb Z_{q_{k}}$ of codimension 1 (the solution set of a single linear equation). \end{THMmainLinearStep} \begin{proof} Put $B=\{(a_{1},\dots,a_{k})\mid \Theta \text{ has a solution in }\phi(a_1,\dots,a_{k})\}$. If $B$ is full then there is nothing to prove. Assume that $B$ is not full, then consider $(b_1,\ldots,b_{k})\notin B$. It follows from condition (6) that $\Theta$ is crucial in $\phi(b_{1},\ldots,b_{k})$. Note that $\phi(b_{1},\ldots,b_{k})$ defines a minimal linear reduction for $\Theta$. By Theorem~\ref{LinearStepHelp} there exists a constraint $\rho(x_{i_1},\ldots,x_{i_s})$ in $\Theta$ and a subuniverse $\zeta$ of $\mathbf{D_{i_1}}\times\dots\times \mathbf{D_{i_s}}\times \mathbf{\mathbb Z_{p}}$ such that the projection of $\zeta$ onto the first $s$ coordinates is bigger than $\rho$ but the projection of $\zeta\cap (D_{i_1}\times\dots\times D_{i_s}\times \{0\})$ onto the first $s$ coordinates is equal to $\rho$. Then we add a new variable $z$ with domain $\mathbb Z_{p}$ and replace $\rho(x_{i_1},\ldots,x_{i_s})$ by $\zeta(x_{i_1},\ldots,x_{i_s},z)$. We denote the obtained instance by $\Upsilon$. Let $L$ be the set of all tuples $(a_{1},\ldots,a_{k},b)\in \mathbb Z_{q_{1}}\times \dots \times \mathbb Z_{q_{k}} \times \mathbb Z_{p}$ such that $\Upsilon$ has a solution with $z=b$ in $\phi(a_{1},\ldots,a_{k})$. We know that the projection of $L$ onto the first $k$ coordinates is a full relation and $(b_{1},\dots,b_{k},0)\notin L$. Therefore $L$ is defined by one linear equation. If this equation is $z = b$ for some $b\neq 0$, then $B$ is empty. Otherwise, we put $z=0$ in this equation and get an equation describing all $(a_{1},\ldots,a_{k})$ such that $\Theta$ has a solution in $\phi(a_{1},\ldots,a_{k})$. 
\end{proof} \section{Conclusions}\label{ConclusionsSection} Even though the main problem has been resolved, there are many important questions that are still open. In this section we will discuss some consequences of this result, as well as some open questions and generalizations of the CSP. \subsection{A general algorithm for the CSP} The algorithm presented in the paper, as well as the algorithm of Andrei Bulatov \cite{BulatovProofCSP,BulatovProofCSPFOCS}, uses detailed knowledge of the algebra and depends exponentially on the size of the domain. Is there a ``truly polynomial algorithm''? By $\CSPWNU$ we denote the following decision problem: given a formula $$\rho_{1}(v_{1,1},\ldots,v_{1,n_{1}}) \wedge \dots \wedge \rho_{s}(v_{s,1},\ldots,v_{s,n_{s}}),$$ where all relations $\rho_{1},\dots,\rho_{s}$ are preserved by a WNU (we just know it exists); decide whether this formula is satisfiable. \begin{problem} Does there exist a polynomial algorithm for $\CSPWNU$? \end{problem} If the domain is fixed then $\CSPWNU$ can be solved by the algorithm presented in this paper. In fact, we know from \cite[Theorem 4.2]{cyclicterms} that from a WNU on a domain of size $k$ we can always derive a WNU (and also a cyclic operation) of any prime arity greater than $k$. Thus, we can find finitely many WNU operations on a domain of size $k$ such that any constraint language preserved by a WNU is preserved by one of them. It remains to apply the algorithm for each WNU and return a solution if one of them finds a solution. \subsection{A simplification of the algorithm.} We believe that the algorithm presented in the paper can be simplified. For instance, we strongly believe that the function $\mbox{\textsc{WeakenEveryConstraint}}$ can be removed from the main function $\mbox{\textsc{Solve}}$ without any consequences. \begin{problem} Would the algorithm still work if the function $\mbox{\textsc{WeakenEveryConstraint}}$ was removed from the function $\mbox{\textsc{Solve}}$?
\end{problem} This would reduce the complexity of the algorithm significantly (the depth of the recursion would be $|A|$ instead of $|A|+|\Gamma|$, see Lemma~\ref{RecursionDepth}). \subsection{A generalization for the non-WNU case.} Another important question is whether some results and ideas introduced in this paper can be applied to constraint languages not preserved by a WNU. For example, it is not clear what assumptions are sufficient to safely reduce a domain to a binary absorbing subuniverse. \begin{problem} What are the weakest assumptions for Theorems~\ref{AbsorptionCenterStep} and \ref{PCStepThm} to hold? \end{problem} \subsection{Infinite domain CSP} If we allow the domain to be infinite, the situation changes significantly. As was shown in \cite{bodirskyInfiniteHell}, every computational problem is equivalent (under polynomial-time Turing reductions) to a problem of the form $\CSP(\Gamma)$. In \cite{InfiniteDomainSurvey} the authors gave a nice example of a constraint language $\Gamma$ such that $\CSP(\Gamma)$ is undecidable. Let $\Gamma$ consist of three relations (predicates) $x+y=z$, $x\cdot y=z$ and $x = 1$ over the set of all integers $\mathbb Z$. Then Hilbert's 10th problem can be expressed as $\CSP(\Gamma)$, which proves undecidability of $\CSP(\Gamma)$. A reasonable assumption on $\Gamma$ which sends the CSP back to the class NP is that $\Gamma$ is a reduct of a finitely bounded homogeneous structure. A nice result for such constraint languages is the full complexity classification of the CSPs over the reducts of $(\mathbb Q;<)$ \cite{bodirskyforrationals}. This additional assumption allows one to formulate a statement of the algebraic dichotomy conjecture for the complexity of the infinite domain CSP \cite{barto2016algebraic}. For more information about the infinite domain CSP and the algebraic approach see \cite{bodirsky2012complexity, InfiniteDomainSurvey}.
For a method of reducing an infinite domain CSP to CSPs over finite domains see \cite{bodirsky2016dichotomy}. \subsection{Valued CSP} A natural generalization of the Constraint Satisfaction Problem is the \emph{Valued Constraint Satisfaction Problem} ($\VCSP$), where constraint relations are replaced by mappings to the set of rational numbers, and conjunctions are replaced by sums \cite{VCSPIntroduction}. For a finite set $A$ and a set $\Gamma$ of mappings $A\to \mathbb Q\cup \{\infty\}$, by $\VCSP(\Gamma)$ we denote the following problem: given a formula $$f(x_{1},\dots,x_{n}) = f_{1}(v_{1,1},\ldots,v_{1,n_{1}}) + \dots + f_{s}(v_{s,1},\ldots,v_{s,n_{s}}),$$ where all the mappings $f_{1},\dots,f_{s}$ are from $\Gamma$ and $v_{i,j}\in \{x_{1},\ldots,x_{n}\}$ for every $i,j$; find an assignment $(a_{1},\dots,a_{n})$ that minimizes $f(x_{1},\dots,x_{n})$. In \cite[Theorem 21]{VCSPDichotomy}, the authors proved that the dichotomy conjecture for CSP would imply the dichotomy conjecture for the Valued CSP, and described all sets of mappings $\Gamma$ such that $\VCSP(\Gamma)$ is tractable (modulo the CSP Dichotomy Conjecture). Thus, the result obtained in this paper implies the characterization of the complexity of $\VCSP(\Gamma)$ for all $\Gamma$. \subsection{Quantified CSP} An equivalent definition of $\CSP(\Gamma)$ is to evaluate a sentence $\exists x_1 \dots \exists x_n \ (\rho_{1}(\dots)\wedge\dots \wedge \rho_{s}(\dots))$, where $\rho_1,\dots,\rho_s$ are from the constraint language $\Gamma$. Then a natural generalization of CSP is the \emph{Quantified Constraint Satisfaction Problem} ($\QCSP$), where both existential and universal quantifiers are allowed.
For a constraint language $\Gamma$, $\QCSP(\Gamma)$ is the problem of evaluating a sentence of the form $\forall x_1 \exists y_1 \dots \forall x_n \exists y_n \ (\rho_{1}(\dots)\wedge\dots \wedge \rho_{s}(\dots))$, where $\rho_1,\dots,\rho_s$ are relations from the constraint language $\Gamma$ (see \cite{BBCJK,hubie-sicomp,Meditations,QC2017}). It was conjectured by Hubie Chen \cite{Meditations,MFCS2017} that for any constraint language $\Gamma$ the problem $\QCSP(\Gamma)$ is either solvable in polynomial time, or NP-complete, or PSpace-complete. Recently, this conjecture was disproved in \cite{QCSPMonsters}, where the authors found constraint languages $\Gamma$ such that $\QCSP(\Gamma)$ is coNP-complete (on 3-element domain), DP-complete (on 4-element domain), $\Theta_{2}^{P}$-complete (on 10-element domain). The authors also classified the complexity of the Quantified Constraint Satisfaction Problem for constraint languages on 3-element domain containing all unary singleton relations (the so-called idempotent case), that is, they showed that for such languages $\QCSP(\Gamma)$ is either tractable, or NP-complete, or coNP-complete, or PSpace-complete. Nevertheless, for larger domains, as well as for the non-idempotent case, the complexity is not known. \begin{problem} What can be the complexity of $\QCSP(\Gamma)$? \end{problem} Now it is hard to believe that there will be a simple answer to this question, that is why it is interesting to start with the 3-element domain (non-idempotent case) and the 4-element domain. Another natural question is how many complexity classes can be expressed by $\QCSP(\Gamma)$ up to polynomial equivalence. Probably a more important problem is to describe all tractable cases. \begin{problem} Describe all constraint languages $\Gamma$ such that $\QCSP(\Gamma)$ is tractable.
\end{problem} \subsection{Promise CSP} Another natural generalization of the CSP is \emph{the Promise Constraint Satisfaction Problem}, where a promise about the input is given (see \cite{brakensiek2018promise,PCSPAlgebraicApproach}). Let $\Gamma = \{(\rho_{1},\sigma_{1}), \dots ,(\rho_{t},\sigma_{t})\}$, where $\rho_{i}$ and $\sigma_{i}$ are relations of the same arity over the domains $A$ and $B$, respectively. Then $\PCSP(\Gamma)$ is the following decision problem: given two formulas \begin{align*}&\rho_{i_1}(v_{1,1},\ldots,v_{1,n_{1}}) \wedge\dots\wedge \rho_{i_s}(v_{s,1},\ldots,v_{s,n_{s}}),\\ &\sigma_{i_1}(v_{1,1},\ldots,v_{1,n_{1}}) \wedge\dots\wedge \sigma_{i_s}(v_{s,1},\ldots,v_{s,n_{s}}), \end{align*} where $(\rho_{i_j},\sigma_{i_j})$ are from $\Gamma$ for every $j$, and $v_{i,j}\in \{x_{1},\ldots,x_{n}\}$ for every $i,j$; distinguish between the case when both of them are satisfiable, and when both of them are not satisfiable. Thus, we are given two CSP instances and a promise that if one has a solution then the other does. Usually it is also assumed that there exists a mapping (homomorphism) $h\colon A\to B$ such that $h(\rho_{i})\subseteq \sigma_{i}$ for every $i$. In this case, the satisfiability of the first formula implies the satisfiability of the second one. To see that the promise can actually make an NP-hard problem tractable, see Example 2.8 in \cite{PCSPAlgebraicApproach}. The most popular example of the Promise CSP is graph $(k,l)$-colorability, where we need to distinguish between graphs that are $k$-colorable and graphs that are not even $l$-colorable, where $k\leqslant l$. This problem can be written as follows. \begin{problem} Let $|A|=k$, $|B|=l$, $\Gamma = \{(\neq_{A},\neq_{B})\}$. What is the complexity of $\PCSP(\Gamma)$? \end{problem} Recently, it was proved \cite{PCSPAlgebraicApproach} that $(k,l)$-colorability is NP-hard for $l = 2k-1$ and $k\geqslant 3$, but even the complexity of $(3,6)$-colorability is still not known.
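To make the two sides of the $(k,l)$-promise concrete, here is a brute-force check of ordinary $k$-colorability (exponential and purely illustrative; the encoding and function name are ours, not from the literature). A yes-instance of $(k,l)$-colorability is a $k$-colorable graph; a no-instance is a graph that is not even $l$-colorable.

```python
from itertools import product

def k_colorable(edges, n, k):
    """Brute-force test: does the graph on vertices 0..n-1 admit a proper k-coloring?"""
    return any(all(col[u] != col[v] for u, v in edges)
               for col in product(range(k), repeat=n))

# K4, the complete graph on 4 vertices, is 4-colorable but not 3-colorable,
# so it can serve as a no-instance of (k, l)-colorability only when l <= 3.
k4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]
print(k_colorable(k4, 4, 3))  # False
print(k_colorable(k4, 4, 4))  # True
```

A PCSP algorithm, of course, is only required to answer correctly on inputs satisfying the promise, which is exactly what can make the problem easier than the exact version.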
Even for a two-element domain the problem is wide open, but recently a dichotomy for symmetric Boolean PCSP was proved \cite{ficak2019dichotomy}. \begin{problem} Let $A= B = \{0,1\}$. Describe the complexity of $\PCSP(\Gamma)$ for all $\Gamma$. \end{problem} \subsection{Surjective CSP} Another modification of the CSP is \emph{the Surjective Constraint Satisfaction Problem}. For a constraint language $\Gamma$ over a domain $A$, $\SurjCSP(\Gamma)$ is the following decision problem: given a formula $$\rho_{1}(\dots) \wedge \dots \wedge \rho_{s}(\dots),$$ where all relations $\rho_{1},\dots,\rho_{s}$ are from $\Gamma$; decide whether there exists a surjective solution, that is, a solution in which the values of $x_{1},\dots,x_{n}$ cover all of $A$. Only a few results are known about the complexity of the Surjective CSP \cite{chen2014algebraic}. That is why we suggest starting the study of this question with a very concrete constraint language on a 3-element domain. \begin{problem} Suppose $A = \{a,b,c\}$, $R = \{(x,y,z)\mid \{x,y,z\}\neq A\}$. What is the complexity of $\SurjCSP(\{R\})$? \end{problem} After this problem (called the \emph{no-rainbow problem}) we can move to the general question. \begin{problem} Describe the complexity of $\SurjCSP(\Gamma)$ for all constraint languages $\Gamma$. \end{problem} \end{document}
\begin{definition}[Definition:Vector (Linear Algebra)] Let $V$ be a vector space. Any element $v$ of $V$ is called a '''vector'''. \end{definition}
© 2015 - 2021 Math3ma

What is a Natural Transformation? Definition and Examples, Part 2

Continuing our list of examples of natural transformations, here is... Example #2: double dual space This is really the archetypical example of a natural transformation. You'll recall (or let's observe) that every finite dimensional vector space $V$ over a field $\mathbb{k}$ is isomorphic to both its dual space $V^*$ and to its double dual $V^{**}$. In the first case, if $\{v_1,\ldots,v_n\}$ is a basis for $V$, then $\{v_1^*,\ldots,v_n^*\}$ is a basis for $V^*$ where for each $i$, the map $v_i^*:V\to\mathbb{k}$ is given by $$v_i^*(v_j)= \begin{cases} 1, &\text{if $i=j$};\\ 0, &\text{if $i\neq j$}. \end{cases}$$ Unfortunately, this isomorphism $V\overset{\cong}{\longrightarrow} V^*$ is not canonical. That is, a different choice of basis yields a different isomorphism. What's more, the isomorphism can't even materialize until we pick a basis.* On the other hand, there is an isomorphism $V\overset{\cong}{\longrightarrow}V^{**}$ that requires no choice of basis: for each $v\in V$, let $\text{eval}_v:V^*\to\mathbb{k}$ be the evaluation map. That is, whenever $f:V\to \mathbb{k}$ is an element in $V^*$, define $\text{eval}_v(f):=f(v)$. Folks often refer to this isomorphism as natural. It's natural in the sense that it's there for the taking---it's patiently waiting to be acknowledged, irrespective of how we choose to "view" $V$ (i.e. irrespective of our choice of basis). This is evidenced in the fact that $\text{eval}$ does the same job on each vector space throughout the entire category. One map to rule them all.** For this reason, the totality of all the evaluation maps assembles into a natural transformation (a natural isomorphism, in fact) between two functors!
To see this, let $(-)^{**}:\mathsf{Vect}_{\mathbb{k}}\to\mathsf{Vect}_{\mathbb{k}}$ be the double dual functor that sends a vector space $V$ to $V^{**}$ and that sends a linear map $V\overset{\phi}{\longrightarrow}W$ to $V^{**}\overset{\phi^{**}}{\longrightarrow} W^{**}$, where $\phi^{**}$ is precomposition with $\phi^{*}$ (which we've defined before). And let $\text{id}:\mathsf{Vect}_{\mathbb{k}}\to\mathsf{Vect}_{\mathbb{k}} $ be the identity functor. Now let's check that $\text{eval}:\text{id}\Longrightarrow (-)^{**}$ is indeed a natural transformation. By picking a $v\in V$ and chasing it around the diagram below, notice that the square commutes if and only if $\text{eval}_v\circ \phi^*=\text{eval}_{\phi(v)}$. (Here I'm using the fact that $\phi^{**}(\text{eval}_v)=\text{eval}_v\circ \phi^*$.) Does this equality hold? Let's check! Suppose $f:W\to\mathbb{k}$ is an element of $W^{*}$. Then $$ \begin{align*} \text{eval}_v(\phi^*(f))&=\text{eval}_v(f\circ \phi)\\ &=(f\circ\phi)(v)\\ &= f(\phi(v))\\ &=\text{eval}_{\phi(v)}(f). \end{align*} $$ Voila! And because each $V\overset{\text{eval}}{\longrightarrow} V^{**}$ is an isomorphism, we've got ourselves a natural isomorphism $\text{id}\Longrightarrow (-)^{**}$. As per our discussion last time, this suggests that $\text{id}$ and $(-)^{**}$ are really the same functor up to a change in perspective. Indeed, this interpretation pairs nicely with the observation that any vector $v\in V$ can either be viewed as, well, a vector, or it can be viewed as an assignment that sends a linear function $f$ to the value $f(v)$. In short, $V$ is genuinely and authentically just like its double dual. They are - quite naturally - isomorphic.
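Since everything in the naturality square is just function application, the chain of equalities above can even be spot-checked numerically. In the sketch below (illustrative only; the particular $\phi$ and $f$ are made up), functionals are plain Python functions, $\text{eval}_v$ is $f\mapsto f(v)$, and $\phi^*$ is precomposition:

```python
# V = R^2, W = R^3, phi: V -> W an (arbitrary, made-up) linear map.
phi = lambda v: (v[0] + v[1], 2 * v[0], 3 * v[1])   # phi: R^2 -> R^3
phi_star = lambda f: (lambda v: f(phi(v)))          # pullback f |-> f ∘ phi
eval_at = lambda v: (lambda f: f(v))                # eval_v, an element of the double dual

v = (2, 5)
f = lambda w: w[0] - 4 * w[1] + w[2]                # an element of W*

# Naturality square: eval_v(phi*(f)) == eval_{phi(v)}(f)
print(eval_at(v)(phi_star(f)) == eval_at(phi(v))(f))  # True
```

Of course a single numerical check proves nothing; the point is only that both routes around the square compute the same number, exactly as the pencil-and-paper calculation says they must.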
Example #3: representability and Yoneda In our earlier discussion on functors we noted that a functor $F:\mathsf{C}\to\mathsf{Set}$ is representable if, loosely speaking, there is an object $c\in\mathsf{C}$ so that for all objects $x$ in $\mathsf{C}$, the elements of $F(x)$ are "really" just maps $c\to x$ (or maps $x\to c$, if $F$ is contravariant). As an illustration, we noted that the functor $\mathscr{O}:\mathsf{Top}^{op}\to\mathsf{Set}$ that sends a topological space $X$ to its set $\mathscr{O}(X)$ of open subsets is represented by the Sierpinski space $S$ since $$\mathscr{O}(X)\cong \text{hom}_{\mathsf{Top}}(X,S)$$ where I'm using $\cong$ to denote a set bijection/isomorphism. So in other words, an open subset of $X$ is essentially the same thing as a continuous function $X\to S.$ (We discussed this at length here.) Now it turns out that this $\cong$ is not just a typical, plain-vanilla isomorphism. It's natural! That is, the ensemble of isomorphisms $\mathscr{O}(X)\overset{\cong}{\longrightarrow}\text{hom}_{\mathsf{Top}}(X,S)$ (one for each $X$) assembles to form a natural isomorphism between the two functors $\mathscr{O}$ and $\text{hom}_{\mathsf{Top}}(-,S)$.*** In general, then, we say a functor $F:\mathsf{C}\to\mathsf{Set}$ is representable if there is an object $c\in\mathsf{C}$ so that $F$ is naturally isomorphic to the hom functor $\text{hom}_{\mathsf{C}}(c,-)$, i.e. if $$F(x)\cong\text{hom}_{\mathsf{C}}(c,x) \qquad \text{naturally, for all $x\in \mathsf{C}$}$$ (or if $F$ is contravariant, $F(x)\cong\text{hom}_{\mathsf{C}}(x,c)$). Here's a very simple example. Suppose $A$ is any set and let $*$ denote the set with one element. Notice that a function from $*$ to $A$ has exactly one element in its image, i.e. the range of $*\to A$ is $\{a\}$ for some $a\in A$. This suggests that a map $*\to A$ is really just a choice of element in $A$!
Intuitively then, the elements of $A$ are in bijection with functions $*\to A$, $$A\cong\text{hom}_{\mathsf{Set}}(*,A).$$ But more is true! The isomorphism $A\to \text{hom}_{\mathsf{Set}}(*,A)$ which sends $a\in A$ to the function, say, $\bar{a}:*\to A$, where $\bar{a}(*)=a$, is natural. That is, for any $A\overset{f}{\longrightarrow}B$, the following square commutes Commutativity just says that given an element $a\in A$, we can think of the element $f(a)$ as a map $*\to B$ in one of two equivalent ways: either send $a$ to $f(a)$ via $f$ and then think of $f(a)$ as a map $*\to B$. OR first think of $a$ as a map $*\to A$, and then postcompose it with $f$. In short, the identity functor $\text{id}:\mathsf{Set}\to\mathsf{Set}$ is represented by the one-point set $*$ since every function $*\to A$ is really just a choice of an element $a\in A$. Representability is really the launching point for the Yoneda Lemma which is "arguably the most important result in category theory." We'll certainly chat about Yoneda in a future post. To whet your appetite, I'll quickly say that one consequence of the Lemma is that we are prompted to think of an object $x$ -- no longer as an object, but now -- as a (representable) functor $\text{hom}(x,-)$, similar to how we may think of a point $a\in A$ as a map $*\to A$. This perspective -- coupled with the idea that morphisms out of $x$ (i.e. the elements of $\text{hom}(x,-)$) are simply "the relationships of $x$ with other objects" -- motivates the categorical mantra that an object is completely determined by its relationships to other objects. As a wise person once said, "You tell me who your friends are, and I'll tell you who YOU are." The upshot is that this proverb holds in life as well as in category theory. And that is The Most Obvious Secret of Mathematics! *One can show that there is no "natural" isomorphism of a vector space with its dual. For instance, see p. 
234 of Eilenberg and Mac Lane's 1945 paper, "The General Theory of Natural Equivalences." ** This sort of reminds me of the difference between pointwise and uniform convergence. A sequence of functions $\{f_n:X\to\mathbb{R}\}$ converges to a function $f$ pointwise if, from the vantage point of some $x\in X$, the $f_n$ are eventually within some $\epsilon$ of $f$. But that value of $\epsilon$ might be different at a different vantage point, i.e. at a different $x'\in X$. On the other hand, the sequence converges uniformly if there's an $\epsilon$ that does the job no matter where you stand, i.e. for all $x\in X$. ***Here, $\text{hom}_{\mathsf{Top}}(-,S):\mathsf{Top}^{op}\to\mathsf{Set}$ is the contravariant functor that sends a topological space $X$ to the set $\text{hom}_{\mathsf{Top}}(X,S)$ of continuous functions $X\to S$ and that sends a continuous function $X\overset{f}{\longrightarrow} Y$ to its pullback $\text{hom}_{\mathsf{Top}}(Y,S)\overset{f^*}{\longrightarrow}\text{hom}_{\mathsf{Top}}(X,S).$
Edmonds–Karp algorithm

In computer science, the Edmonds–Karp algorithm is an implementation of the Ford–Fulkerson method for computing the maximum flow in a flow network in $O(|V||E|^{2})$ time. The algorithm was first published by Yefim Dinitz (whose name is also transliterated "E. A. Dinic", notably as author of his early papers) in 1970[1][2] and independently published by Jack Edmonds and Richard Karp in 1972.[3] Dinic's algorithm includes additional techniques that reduce the running time to $O(|V|^{2}|E|)$.[2]

The algorithm is identical to the Ford–Fulkerson algorithm, except that the search order when finding the augmenting path is defined: the path found must be a shortest path that has available capacity. Such a path can be found by a breadth-first search, where we apply a weight of 1 to each edge. The running time of $O(|V||E|^{2})$ follows from three facts: each augmenting path can be found in $O(|E|)$ time; every augmentation saturates at least one of the $|E|$ edges (an edge whose flow reaches its maximum possible value); and each time an edge becomes saturated, its distance from the source along the augmenting path is longer than the last time it was saturated, a distance that is at most $|V|$. In particular, the length of the shortest augmenting path increases monotonically. There is an accessible proof in Introduction to Algorithms.[4]

Pseudocode

algorithm EdmondsKarp is
    input: graph   (graph[v] should be the list of edges coming out of vertex v in the
                    original graph and their corresponding constructed reverse edges
                    which are used for push-back flow. Each edge should have a capacity,
                    flow, source and sink as parameters, as well as a pointer to the
                    reverse edge.)
           s       (Source vertex)
           t       (Sink vertex)
    output: flow   (Value of maximum flow)

    flow := 0   (Initialize flow to zero)
    repeat
        (Run a breadth-first search (bfs) to find the shortest s-t path.
         We use 'pred' to store the edge taken to get to each vertex,
         so we can recover the path afterwards)
        q := queue()
        q.push(s)
        pred := array(graph.length)
        while not empty(q)
            cur := q.pop()
            for Edge e in graph[cur] do
                if pred[e.t] = null and e.t ≠ s and e.cap > e.flow then
                    pred[e.t] := e
                    q.push(e.t)

        if not (pred[t] = null) then
            (We found an augmenting path. See how much flow we can send)
            df := ∞
            for (e := pred[t]; e ≠ null; e := pred[e.s]) do
                df := min(df, e.cap - e.flow)
            (And update edges by that amount)
            for (e := pred[t]; e ≠ null; e := pred[e.s]) do
                e.flow := e.flow + df
                e.rev.flow := e.rev.flow - df
            flow := flow + df

    until pred[t] = null   (i.e., until no augmenting path was found)
    return flow

Example

Given a network of seven nodes, source A, sink G, and capacities as shown below. In the pairs $f/c$ written on the edges, $f$ is the current flow, and $c$ is the capacity. The residual capacity from $u$ to $v$ is $c_{f}(u,v)=c(u,v)-f(u,v)$: the total capacity minus the flow that is already used. If the net flow from $u$ to $v$ is negative, it contributes to the residual capacity.

The augmenting paths found, in order (the "resulting network" figures are omitted here):

Path $A,D,E,G$ with capacity $\min(c_{f}(A,D),c_{f}(D,E),c_{f}(E,G)) = \min(3-0,2-0,1-0) = \min(3,2,1) = 1$.

Path $A,D,F,G$ with capacity $\min(c_{f}(A,D),c_{f}(D,F),c_{f}(F,G)) = \min(3-1,6-0,9-0) = \min(2,6,9) = 2$.

Path $A,B,C,D,F,G$ with capacity $\min(c_{f}(A,B),c_{f}(B,C),c_{f}(C,D),c_{f}(D,F),c_{f}(F,G)) = \min(3-0,4-0,1-0,6-2,9-2) = \min(3,4,1,4,7) = 1$.

Path $A,B,C,E,D,F,G$ with capacity $\min(c_{f}(A,B),c_{f}(B,C),c_{f}(C,E),c_{f}(E,D),c_{f}(D,F),c_{f}(F,G)) = \min(3-1,4-1,2-0,0-(-1),6-3,9-3) = \min(2,3,2,1,3,6) = 1$.

Notice how the length of the augmenting path found by the algorithm never decreases. The paths found are the shortest possible. The flow found is equal to the capacity across the minimum cut in the graph separating the source and the sink.
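The pseudocode above can be condensed into a short runnable sketch. This version (an illustrative simplification, not taken from the article) stores residual capacities in an $n \times n$ matrix instead of edge objects with reverse-edge pointers, so pushing flow along $(u,v)$ simply decreases capacity[u][v] and increases capacity[v][u]:

```python
from collections import deque

def edmonds_karp(capacity, s, t):
    """Maximum flow via shortest augmenting paths, O(|V||E|^2).

    capacity: n x n matrix (list of lists) of nonnegative residual
    capacities, modified in place. Augmenting along (u, v) decreases
    capacity[u][v] and increases capacity[v][u], which is equivalent
    to the flow / reverse-edge bookkeeping in the pseudocode.
    """
    n = len(capacity)
    flow = 0
    while True:
        # Breadth-first search for the shortest s-t path with
        # available capacity; pred[v] stores the previous vertex.
        pred = [None] * n
        pred[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if pred[v] is None and capacity[u][v] > 0:
                    pred[v] = u
                    q.append(v)
        if pred[t] is None:          # no augmenting path remains
            return flow
        # Bottleneck residual capacity along the path found.
        df = float('inf')
        v = t
        while v != s:
            df = min(df, capacity[pred[v]][v])
            v = pred[v]
        # Augment: update residual capacities in both directions.
        v = t
        while v != s:
            capacity[pred[v]][v] -= df
            capacity[v][pred[v]] += df
            v = pred[v]
        flow += df
```

On the seven-node example above, with the vertices A through G numbered 0 through 6, this returns the maximum flow 5, matching the capacity of the minimum cut.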
There is only one minimal cut in this graph, partitioning the nodes into the sets $\{A,B,C,E\}$ and $\{D,F,G\}$, with the capacity $c(A,D)+c(C,D)+c(E,G)=3+1+1=5$.

Notes

1. Dinic, E. A. (1970). "Algorithm for solution of a problem of maximum flow in a network with power estimation". Soviet Mathematics – Doklady. 11: 1277–1280.
2. Yefim Dinitz (2006). "Dinitz' Algorithm: The Original Version and Even's Version" (PDF). In Oded Goldreich; Arnold L. Rosenberg; Alan L. Selman (eds.). Theoretical Computer Science: Essays in Memory of Shimon Even. Springer. pp. 218–240. ISBN 978-3-540-32880-3.
3. Edmonds, Jack; Karp, Richard M. (1972). "Theoretical improvements in algorithmic efficiency for network flow problems" (PDF). Journal of the ACM. 19 (2): 248–264. doi:10.1145/321694.321699. S2CID 6375478.
4. Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest and Clifford Stein (2009). "26.2". Introduction to Algorithms (third ed.). MIT Press. pp. 727–730. ISBN 978-0-262-03384-8.

References

1. Algorithms and Complexity (see pages 63–69).
https://web.archive.org/web/20061005083406/http://www.cis.upenn.edu/~wilf/AlgComp3.html
\begin{document} \begin{frontmatter} \title{Variance asymptotics and central limit theorems for generalized growth processes with applications to convex hulls\\ and maximal points} \runtitle{Variance asymptotics and central limit theorems} \pdftitle{Variance asymptotics and central limit theorems for generalized growth processes with applications to convex hulls and maximal points} \begin{aug} \author[A]{\fnms{T.} \snm{Schreiber}\ead[label=e1]{[email protected]}\thanksref{t1}} and \author[B]{\fnms{J. E.} \snm{Yukich}\corref{}\ead[label=e2]{[email protected]}\thanksref{t2}} \thankstext{t1}{Supported in part by Polish Minister of Scientific Research and Information Technology Grant 1 P03A 018 28 (2005--2007).} \thankstext{t2}{Supported by NSF Grant DMS-02-03720.} \runauthor{T. Schreiber and J. E. Yukich} \affiliation{Nicholas Copernicus University and Lehigh University} \address[A]{Faculty of Mathematics and Computer Science\\ Nicholas Copernicus University\\ Toru\'n\\ Poland\\ \printead{e1}} \address[B]{Department of Mathematics\\ Lehigh University\\ Bethlehem, Pennsylvania 18015\\ USA\\ \printead{e2}} \end{aug} \received{\smonth{5} \syear{2006}} \revised{\smonth{2} \syear{2007}} \begin{abstract} We show that the random point measures induced by vertices in the convex hull of a Poisson sample on the unit ball, when properly scaled and centered, converge to those of a mean zero Gaussian field. We establish limiting variance and covariance asymptotics in terms of the density of the Poisson sample. Similar results hold for the point measures induced by the maximal points in a Poisson sample. The approach involves introducing a generalized spatial birth growth process allowing for cell overlap. \end{abstract} \begin{keyword}[class=AMS] \kwd[Primary ]{60F05} \kwd[; secondary ]{60D05}. \end{keyword} \begin{keyword} \kwd{Convex hulls} \kwd{maximal points} \kwd{spatial birth growth processes} \kwd{Gaussian limits}. 
\end{keyword} \end{frontmatter} \section{Introduction, main results}\label{GenRes} Given $X_i, i \geq1$, i.i.d. random variables with values in a $d$-dimensional convex set $S$, $d \geq2,$ a classic problem in convex geometry involves determining the distribution of the number of points in the set of extreme points $\mathcal{V} (\{X_i\}_{i=1}^n)$, defined as the vertices in the convex hull of $\{X_i\}_{i=1}^n$. This problem was first considered by R\'enyi and Sulanke \cite{RS}, with recent notable progress by Reitzner \cite{Re1,Re2,Re3,Re4} and Vu \cite{VV}. A closely related problem involves determining, for a given $K \subset\mathbb{R}^d$, the distribution of the number of points in the set $\mathcal{M}_K(\{X_i\}_{i=1}^n)$ of $K$-maximal points, where a point $X_j$ belongs to $\mathcal{M}_K(\{X_i\}_{i=1}^n)$ iff $(X_j \oplus K) \cap\{X_i\}_{i=1}^n = \{X_j\}$, where here and henceforth, for all $B \subset\mathbb{R}^d$ and $x \in \mathbb{R}^d$ we write $x \oplus B := \{x + y\dvtx y \in B\}$. When $K$ is $(\mathbb{R}_+)^d$, then $\mathcal{M}_K(\{X_i\}_{i=1}^n)$ is simply the set of maximal points, that is, those points $X_j$ in $\{X_i\}_{i=1}^n$ having the property that no point $X_i$, $i \neq j$, exceeds it in all coordinates. The limit theory for the number of maximal points in $\mathcal{M}_K(\{X_i\}_{i=1}^n)$ was first considered by R\'enyi \cite{Re} and Barndorff-Nielsen and Sobel \cite{BS}. Chen, Hwang and Tsai \cite{Ch} survey the vast literature, which includes books by Ehrgott \cite{Eh}, Pomerol and Barba-Romero \cite{PB}, and recent papers of \cite{BCHL,BHLT,BX,BY4,De}.
In this paper we establish convergence of the finite-dimensional distributions of the re-scaled point measures induced by the random point sets $\mathcal{V} ({\mathcal{P}_{{\lambda}\rho} })$, where ${\mathcal{P}_{{\lambda}\rho} }$ denotes a Poisson point process of intensity ${\lambda}{\rho}$ on $B_d$, the unit radius \mbox{$d$-dimensional} ball centered at the origin and where ${\rho}$ is a continuous density on $B_d$. For sets $K := \{(w_1,\ldots,w_d) \dvtx w_d \geq(w_1^2 + \cdots+ w_{d-1}^2)^{\alpha/2}\}$, where $\alpha\in(0,1]$ is fixed, we also establish convergence of the finite-dimensional distributions of the point measures induced by $\mathcal{M}_K ({\mathcal {P}_{{\lambda}\rho} })$, where ${\mathcal{P}_{{\lambda}\rho} }$ denotes the Poisson point process of intensity ${\lambda}{\rho}$ on $A \times\mathbb{R}_+$, where $A \subset\mathbb{R}^{d-1}$ is compact and convex and where ${\rho}\dvtx A \times\mathbb{R}_+ \to\mathbb{R}_+$ is continuous. These results are facilitated by introducing a generalized spatial birth--growth process as a means toward obtaining explicit variance asymptotics and central limit theorems for random measures arising in convex geometry. The relevant spatial birth--growth process, possibly of independent interest, modifies the classical spatial birth--growth process introduced by Kolmogorov \cite{Ko} as a model for crystal growth by allowing the possibility of \textit{cell overlap}. As in \cite{Ko}, cells may grow at nonconstant growth rates. In the context of the set of extreme points $\mathcal{V} ({\mathcal {P}_{{\lambda} \rho} })$, the approach taken here adds to the work of Reitzner \cite{Re1,Re2,Re3,Re4} and Vu \cite{VV} in the following ways. First, the present set-up establishes convergence of the finite-dimensional distributions of the canonical point measures induced by $\mathcal{V} ({\mathcal{P}_{{\lambda}\rho} })$, whereas \cite {Re1,Re2,Re3,Re4} and \cite{VV} deal with one-dimensional central limit theorems.
Second, we establish a formula for variance and covariance asymptotics. Third, the present paper concerns the limit theory for nonuniform samples, whereas \cite{Re1,Re2,Re3,Re4} and \cite{VV} treat uniform random samples. In the context of the set of maximal points $\mathcal{M}_K({\mathcal {P}_{{\lambda} \rho} })$, the present set-up establishes convergence of the finite-dimensional distributions of the canonical point measures induced by $\mathcal {M}_K({\mathcal{P}_{{\lambda}\rho} })$, with covariances, whereas previous work \cite{BX,De} is concerned with one dimensional central limit theorems without a formula for covariance asymptotics and/or is limited to the case when $K$ is a cone \cite{BY4}. \subsection{Terminology\textup{,} $\psi$-growth processes}\label {Terminology} Let the function $\psi\dvtx\mathbb{R}_+ \to\mathbb{R}_+$ satisfy the following conditions: \begin{longlist} \item[($\Psi$1)] $\psi$ is monotone and $\lim_{l\to\infty} \psi(l) = \infty$, and \item[($\Psi$2)] there exists $\alpha> 0$ such that $\psi(l) = l^{\alpha} (1+o(1))$ for $l$ small enough. \end{longlist} Let $\mathbf{0}$ denote the origin of $\mathbb{R}^{d-1}$, $d \geq2$, and let $|y|$ denote the Euclidean norm of $y \in\mathbb{R}^d$. We define $K[\mathbf{0}]$ to be the $\psi$-epigraph $\{ (y,h) \in \mathbb{R}^{d-1} \times\mathbb{R}_+ \dvtx \ h \geq\psi(|y|) \}$ and, more generally, for $\bar{x} := (x,h_x) \in\mathbb{R}^{d-1} \times\mathbb{R}_+$, we define its \mbox{$\psi$-epigraph} (or upward cone) by \begin{equation}\label{upcone} K[\bar{x}]:= \bar{x} \oplus K[\mathbf{0}]:= \{ (y,h) \in\mathbb{R}^{d-1} \times\mathbb{R}_+ \dvtx h \geq h_x + \psi(|y-x|) \}. 
\end{equation} Given a point set $\mathcal{X} \subseteq\mathbb{R}^{d-1} \times\mathbb{R}_+,$ a point $\bar{x} \in\mathcal{X}$ is called \textit{$\psi$-extremal} in $\mathcal{X}$ iff $K[\bar{x}] \not \subseteq \bigcup_{\bar{y} \in\mathcal{X} \setminus\{ \bar{x} \}} K[\bar{y}]$, that is to say the $\psi$-epigraph of $\bar{x}$ is not completely covered by the union of the $\psi$-epigraphs of points in $\mathcal{X} \setminus\{\bar{x}\}$. Define the functional \begin{equation}\label{XiPsi} \xi(\bar{x},\mathcal{X}):= \xi(\psi; \bar{x},\mathcal{X}):= \cases{ 1,&\quad if $\bar{x}$ is $\psi$-extremal in $\mathcal{X}$, \cr 0,&\quad otherwise. } \end{equation} With $D$ standing for some bounded domain in $\mathbb{R}^{d-1} \times\mathbb{R}_+,$ we consider the version $\xi_D(\cdot,\cdot)$ of $\xi(\cdot,\cdot)$ restricted to $D,$ by setting $\xi_D(\bar {x},\mathcal{X})$ to be $1$ iff $K[\bar{x}] \cap D \not\subseteq \bigcup_{\bar{y} \in (\mathcal{X}\setminus\{ \bar{x}\}) \cap D} K[\bar{y}],$ in which case we declare $\bar{x}$ to be $\psi$-extremal in $D \cap\mathcal{X},$ and otherwise we set $\xi_D(\bar{x},\mathcal{X})$ to be zero. In case $\bar{x} \notin\mathcal{X}$ we abbreviate notation and write $\xi(\bar{x},\mathcal{X})$ for $\xi(\bar{x},\mathcal{X} \cup\bar{x})$ and similarly for $\xi_D(\bar{x},\mathcal{X})$. To provide a physical interpretation of these functionals, we regard $\mathbb{R}^{d-1} \times\mathbb{R}_+$ as $d$-dimensional space time, with $\mathbb{R}_+$ standing for the time coordinate, and we interpret the graph $\partial(K[\bar{x}]),\; \bar{x}:= (x,t)$, as the boundary of a $(d-1)$-dimensional spherical particle born at $x$ at time $t$ (at which time it has initial radius zero) and growing thereupon with radial speed $v(t):= \frac{d}{dt}[\psi^{-1}(t)]$, provided the derivative exists. The particles (spheres) grow independently and do not exhibit exclusion, that is, they may overlap or penetrate one another.
A particle is \textit{extreme} iff at some time it is not completely covered by other particles. When $\psi$ is the identity, so that the $\psi$-graph gives a cone, we see that $\psi$-extremal points coincide with maximal points \cite{BY4}. In the context of this representation, it should be noted that, unlike the one stated here, the classic growth process (see, e.g., \cite{BY2,CQ,Ko,PY2}) assumes that particles, upon being born at random locations $x \in\mathbb{R}^{d-1}$ at random times $h_x \in\mathbb{R}^+$, form a cell by growing radially in all directions with a possibly nonconstant speed, that is, with $\psi$ possibly nonlinear. When one growing cell touches another, it stops growing in that direction, that is, no overlap is allowed. Furthermore, a particle born inside an existing cell is \textit{discarded}, otherwise it is \textit{accepted}. Letting $\hat{\xi}(\bar{x},\mathcal{X})$ be one or zero according to whether $\bar{x}$ is accepted or discarded, this paper also considers such functionals $\hat{\xi}$. The growth process giving rise to the functional $\xi$ will henceforth be called the \textit{$\psi$-growth process with overlap}, while the process corresponding to $\hat{\xi}$ will be referred to as the \textit{$\psi$-growth process without overlap}. This paper will mainly concentrate on applications of the first concept and the corresponding functional $\xi,$ but the subsequently developed general theory also treats the latter concept in the special case of linear $\psi$. Throughout, let $A$ be a compact convex subset of $\mathbb{R}^{d-1}.$ We shall also admit the case $A := \mathbb{R}^{d-1}$ in the sequel, in which case we assume that $\rho$ is uniformly bounded.
Consider a density function $\rho$ on $A_+ := A \times\mathbb{R}_+,$ not necessarily integrable, such that \begin{longlist} \item[(R1)] $\rho$ is continuous on $A_+,$ \item[(R2)] there exists a constant $\delta\geq0$ and a continuous function $\rho_0\dvtx A \to\mathbb{R}_+$ bounded away from zero such that \[ \rho(x,h) = \rho_0(x) h^{\delta} \bigl(1+o(1)\bigr) \] for $h$ small enough and $\rho(x,h) = O (h^{\delta})$ for large $h$ uniformly in $x \in A$. \end{longlist} For $\lambda> 0$, we recall that ${\mathcal{P}}_{{\lambda}\rho}$ denotes the Poisson point process on $A_+$ with intensity measure ${\lambda}\rho(x,h) \,dx\, dh$. The ``extreme point'' empirical measures $\mu^{\xi}_{{\lambda}\rho}$ and $\mu^{\hat{\xi}}_{{\lambda}\rho}$ generated by ${\mathcal{P}}_{{\lambda}\rho}$ are \begin{equation}\label{Poiss} \mu^{\xi}_{{\lambda}\rho} := \sum_{\bar{x} \in {\mathcal{P}}_{{\lambda}\rho}} \xi(\bar{x},{\mathcal{P}}_{{\lambda}\rho}) \delta_{\bar{x}} \end{equation} and \begin{equation}\label{Poisshat} \mu^{\hat\xi}_{{\lambda}\rho} := \sum_{\bar{x} \in{\mathcal {P}}_{{\lambda}\rho}} \hat\xi(\bar{x},{\mathcal{P}}_{{\lambda}\rho}) \delta_{\bar {x}}, \end{equation} with $\delta_{x}$ standing for the unit point mass at $x \in\mathbb{R}^d$. For any random measure $\sigma$ on $\mathbb{R}^d$, we write $\bar{\sigma}$ for its centered version $\sigma- \mathbb{E}[\sigma]$, so that, for example, $ \bar\mu^{\xi}_{{\lambda}\rho} := \mu^{\xi}_{{\lambda}\rho} - \mathbb{E}[ \mu^{\xi}_{{\lambda}\rho}]$. Notice that for small $\alpha$ the upward cones $K[\bar{x}]$ have relatively narrow apertures, making it less likely that cones having apexes with a small temporal coordinate get covered by $\psi$-epigraphs, that is, one expects more $\psi$-extreme points as $\alpha$ gets smaller. Also, roughly speaking, for small $\delta$, one expects more points in ${\mathcal{P}}_{{\lambda}\rho}$ with small temporal coordinate and thus more $\psi$-extreme points in this case as well.
One of the goals of this paper is to show (see Theorem \ref{LLN}) that the expected total mass of the extreme point empirical measures (\ref{Poiss})--(\ref{Poisshat}) is asymptotically proportional to ${\lambda}^{\tau}$, where \begin{equation}\label{TAUU} \tau:= \tau(d, \alpha, \delta) := {d-1 \over d -1 + \alpha(1 + {\delta}) }. \end{equation} More general goals include establishing the variance asymptotics and the convergence of the finite-dimensional distributions of the appropriately scaled measures (\ref{Poiss})--(\ref{Poisshat}) to Gaussian distributions (see Theorems \ref{VAR} and \ref{CLT}) and to treat the applications to extreme and maximal points described at the outset. \textit{Notation.} Given $\alpha> 0$, put \begin{equation}\label{SCALINGLIMIT1} \psi^{(\infty)}(l) := l^{\alpha}. \end{equation} Recalling the definition of $\xi$, we define the functional $\xi^{(\infty)}$ by $\xi^{(\infty)}(\cdot, \cdot) := \xi(\psi^{(\infty)}; \cdot, \cdot)$ and similarly for $\hat\xi^{(\infty)}.$ We also let $\mathcal{P}_*$ stand for the Poisson point process in $\mathbb{R}^{d-1} \times\mathbb{R}_+$ with intensity measure $h^{\delta}\, dx\, dh$. For all $\bar{x}:=(x, h_x)$ and $\bar{y}: = (y, h_y)$, let \[ m^{(\infty)}(\bar{x}):= \mathbb{E}\bigl[\xi^{(\infty)} (\bar {x},\mathcal{P}_* )\bigr] \] and \begin{eqnarray*} c_*^{(\infty)}(\bar{x}, \bar{y})&:=& \mathbb{E}\bigl[ \xi ^{(\infty)} (\bar {x},\mathcal{P}_* \cup\bar{y}) \xi^{(\infty)} (\bar{y},\mathcal {P}_* \cup \bar{x})\bigr]\\ &&{} - \mathbb{E}\bigl[ \xi^{(\infty)} (\bar{x},\mathcal {P}_*)\bigr] \mathbb{E} [\xi^{(\infty)} (\bar{y},\mathcal{P}_* )] \end{eqnarray*} respectively denote the one and two point correlation functions for the $\psi^{(\infty)}$ growth process with overlap. For sets $A$ and $B \subset\mathbb{R}^d$, let $d(A,B):= \inf\{|x-y|\dvtx x \in A,\ y \in B\}$. Let $B_d(y,r)$ denote the $d$-dimensional Euclidean ball centered at $y \in\mathbb{R}^d$ with radius $r \in(0, \infty)$. 
Given a subset $B$ of $\mathbb{R}^d$, let $\mathcal{C}_b(B)$ denote the bounded continuous functions on~$B$. For any signed measure $\mu$ on $A_+$ and $f \in\mathcal{C}_b(A_+)$, let $\langle f, \mu\rangle := \int f\, d\mu.$ Unless otherwise specified, $C$ denotes a generic positive constant whose value may change from line to line. \subsection{Limit theory for $\Psi$-growth functionals} For all $f \in\mathcal{C}_b(A_+)$ with $A \subset\mathbb{R}^{d-1}$ compact and convex, we define the average of the product of $f$ and the one and two point correlation functions as follows: \begin{equation}\label{onept} I(f):= \int_{A} \int_{0}^{\infty} f(x,0) m^{(\infty)} (\mathbf{0},h') \rho_0^{\tau}(x) (h')^{\delta}\, dh'\, dx \end{equation} and \begin{eqnarray}\label{twoptt} J(f)&:=& \int_A \int_{0}^{\infty} \int_{\mathbb{R}^{d-1}} \int_0^{\infty} f(x,0) c_*^{(\infty)}((\mathbf{0},h'), (y',h_y'))\nonumber\\[-8pt]\\[-8pt] &&\phantom{\int_A \int_{0}^{\infty} \int_{\mathbb{R}^{d-1}} \int_0^{\infty}} {}\times\rho_0^{\tau}(x) (h_y')^{\delta} (h')^{\delta} \,d h_y' \,dy' \,dh' \,dx.\nonumber \end{eqnarray} The finiteness of $I(f)$ follows by Lemmas \ref{expbds} and \ref{L1conv} [see the bound (\ref{IFF})], whereas the finiteness of $J(f)$ follows from Lemmas \ref{corlimit} and \ref{corbds} [see the bound (\ref{JFF})] which imply rapid enough decay of two-point correlation functions. The following are our main results. We state the results for $\mu^{\xi}_{{\lambda}\rho}$ and note that analogous results hold for $\mu^{\hat\xi}_{{\lambda}\rho}$ when $\psi$ is linear. The first result specifies first-order behavior, whereas the second provides second-order asymptotics. \begin{thm}\label{LLN} We have for all $f \in\mathcal{C}_b(A_+)$ \begin{equation}\label{LLN1} \lim_{{\lambda}\to\infty} {\lambda }^{-\tau} \mathbb{E}[\langle f, \mu^{\xi}_{{\lambda}\rho} \rangle] = I(f).
\end{equation} \end{thm} \begin{thm}\label{VAR} We have for all $f \in\mathcal{C}_b(A_+)$ \begin{equation}\label{varlimit} \lim_{{\lambda}\to\infty} {\lambda}^{-\tau} \operatorname{Var}[\langle f, \mu^{\xi }_{{\lambda}\rho} \rangle] = I(f^2) + J(f^2). \end{equation} \end{thm} The next result establishes the convergence of the finite-dimensional distributions of $({\lambda}^{-\tau/2} \overline{\mu}^{\xi}_{{\lambda}\rho})$. \begin{thm} \label{CLT} The finite-dimensional distributions ${\lambda}^{-\tau/2}(\langle f_1, \bar{\mu}^{\xi}_{{\lambda}} \rangle,\ldots,\break\langle f_k, \bar{\mu}^{\xi}_{{\lambda}} \rangle), f_1,\ldots,f_k \in \mathcal{C}_b(A_+),$ of $({\lambda}^{-\tau/2} \bar{\mu}^{\xi}_{{\lambda}\rho})$ converge as ${\lambda}\to\infty$ to those of a mean zero Gaussian field with covariance kernel \begin{equation}\label{CLT1} (f,g) \mapsto I(fg) + J(fg), \qquad f, g \in \mathcal{C}_b(A_+). \end{equation} \end{thm} Section \ref{s2} describes applications of $\psi$-growth processes with overlap, as given by the general limits of Theorems \ref{LLN}--\ref{CLT}, to convex hulls and maximal points of i.i.d. samples. \begin{remarks*} (i) \textit{Applications to the $\psi$-growth process $\hat\xi$ without overlap}. The results of Theorems \ref{LLN}--\ref{CLT} for the functional $\hat\xi$ provide variance asymptotics and central limit theorems for the classic spatial birth--growth model in $\mathbb{R}^{d-1}$, whereby seeds are born at random locations in $ \mathbb{R}^{d-1}$ and times in $\mathbb{R}_+$ according to the Poisson point process ${\lambda}{\mathcal{P}}_{{\lambda}\rho}$ on ${\lambda}^{1/d}A \times \mathbb{R}_+$ and grow linearly in time. Theorems \ref{LLN}--\ref{CLT} for $\hat\xi$ provide a central limit theorem for the number of seeds accepted in such models. 
This generalizes and extends \cite{BY2,PY2}, which build on work of Chiu and Quine \cite{CQ,CQa}, Chiu \cite{Chiu} and Chiu and Lee \cite{CL}, which do not consider convergence of finite-dimensional distributions and which often restrict to models with homogeneous temporal input. \begin{longlist}[(iii)] \item[(ii)] \textit{Scaling.} The scaling ${\lambda}^{-\tau}$ arises in the following way. From a conceptual and analytic point of view, it is convenient to re-scale the $\psi$-growth process in time and space so as to obtain an equivalent growth process on Poisson points of approximately unit intensity density on a region of volume ${\lambda}$. The scaling is designed to asymptotically preserve the $\psi$-epigraphs and the behavior of the density locally close to $h = 0$. To achieve this, we scale $A_+$ in the $d-1$ spatial directions by ${\lambda }^{{\beta}}$ and in the temporal direction by ${\lambda}^{\gamma}$. Under this temporal scaling and under (R2), the density $\rho$ exhibits growth $(h {\lambda}^{\gamma})^{\delta}$ for small temporal $h$, and we thus require ${\lambda}^{{\beta}(d-1) + \gamma(1 + \delta)} = {\lambda}$. This scaling maps $|x|$ and $h_x$ to ${\lambda}^{{\beta}}|x|$ and ${\lambda}^{\gamma} h_x$, respectively, and therefore, it asymptotically preserves the $\psi$-epigraphs and condition ($\Psi$2), provided $({\lambda}^{{\beta}}|x|)^ \alpha= {\lambda}^{\gamma}h_x (1+o(1))$ for $(x,h_x)$ lying on the graph of $\psi$, that is, $h_x = \psi(x).$ Since $h_x = |x|^{\alpha} (1+o(1))$ for such $(x,h_x),$ we require ${\lambda}^{{\beta}\alpha} = {\lambda}^{\gamma}$. We thus require the relations \[ \beta(d-1) + \gamma(1 + \delta) = 1 \quad\mbox{and} \quad \beta\alpha= \gamma, \] which yields these values for the scaling exponents \begin{equation} \label{twodef} \beta= \frac{\gamma} {\alpha} \quad\mbox{and} \quad \gamma= \frac{\alpha}{(d-1) + \alpha(1 + \delta)}.
\end{equation} Given the re-scaled $\psi$-growth process on ${\lambda}^{\beta} A \times\mathbb{R}_+$, we expect that a point is \mbox{$\psi$-extremal} (i.e., $\xi= 1$) iff its time coordinate is small. Thus, the functional $\mu_{{\lambda}\rho}^{\xi}(A_+)$ should exhibit growth proportional to the Lebesgue measure of ${\lambda }^{{\beta}} A$, that is, proportional to ${\lambda}^{{\beta}(d-1)} = {\lambda }^{\tau}.$ In the special case when $\delta= 0$ and the growth is linear ($\alpha= 1$) the $\psi$-epigraphs are preserved by time and space scaling by ${\lambda}^{1/d}$, that is, $\gamma= 1/d = \beta$. Thus, $\tau= (d-1)/d$ in this case. \item[(iii)] \textit{de-Poissonization.} In Section \ref {ApplSection} we de-Poissonize Theorems \ref{LLN}--\ref{CLT} when $\alpha\in(0,1]$. In other words, we obtain the identical limit theory when ${\mathcal {P}}_{{\lambda }\rho}$ is replaced by i.i.d. random variables $X_1,\ldots,X_n$, chosen in $A_+$ according to the density $\rho,$ assumed to integrate to $1$. We expect similar de-Poissonization results for $\alpha > 1$, but are unable to prove this. \item[(iv)] We have not tried to establish a.s. convergence in (\ref{LLN1}), but expect that concentration inequalities should be useful in this context. \end{longlist} \end{remarks*} \subsection{Notation and scaling relations}\label{ScaRe} Motivated by remark (ii) above, we place the $\psi$-growth process on its proper scale by re-scaling as follows. With $\beta$ and $\gamma$ as in (\ref{twodef}), for a \textit{fixed} $x \in A$ and any generic point $\bar{y} := (y,h_y) \in A_+$, we put $\bar{y}^{(\lambda)}:= \bar{y}':= (y',h'_y)$ with \begin{equation}\label{RESCALING} y':= y^{({\lambda})} := \lambda^{\beta} (y-x) \quad\mbox{and} \quad h_y':= h_y^{({\lambda})} := \lambda^{\gamma} h_y.
\end{equation} Also, for readability, in our notation \textit{we will not explicitly indicate the dependency of the scaling in \textup{(\ref {RESCALING})} on $x.$} The versions of $\psi, \rho,{\mathcal{P}}_{{\lambda}\rho}$ and $\xi$ under this re-scaling are determined by the relations \begin{eqnarray}\label{RESCALING3} \psi^{({\lambda})}(l) &:=& {\lambda}^{\gamma} \psi({\lambda }^{-\beta} l), \\ \label{rdef} \rho^{({\lambda})} (y',h_y') &:=& {\lambda}^{\delta\gamma} \rho(y,h_y), \\ \label{RESCALING4} {\mathcal{P}}^{({\lambda})}_{{\lambda}\rho} &:=& {\mathcal {P}}^{({\lambda})}_{{\lambda}\rho}[x] := \{ (y',h_y')\dvtx (y,h_y) \in{\mathcal{P}}_{{\lambda}\rho} \} \end{eqnarray} and \begin{equation}\label{RESCALING5} \xi^{({\lambda})} ((y',h'_y), \{ (y_i',h'_{y_i}) \}_{i \geq 1} ) := \xi( (y,h_y),\{(y_i,h_{y_i})\}_{i \geq1} ) \end{equation} and likewise for $\hat{\xi}.$ Since $dy' = {\lambda}^{{\beta }(d-1)}\,dy$ and $dh'_y = {\lambda}^{\gamma}\,dh_y$, it follows that \[ \rho^{({\lambda})}(y',h_y') \,dy'\, dh_y' = {\lambda}\rho(y,h_y) \, dy \,dh_y. \] Note also that \begin{equation}\label{PPequiv} \mathcal{P}_{{\lambda}\rho }^{({\lambda})} \stackrel{\mathcal{D}}{=} {\mathcal{P}}_{\rho^{({\lambda})}}. \end{equation} Moreover, by (\ref{RESCALING}) and (\ref{rdef}), $ \rho^{({\lambda})}(y',h_y') (h_y')^{-\delta}= {\lambda}^{\delta \gamma} \rho(y,h_y)({\lambda}^{\gamma}h_y)^{-\delta}$, where $y = {\lambda }^{-\beta} y' + x$. Under the above re-scaling for each fixed $x \in A$ and for each $(y', h'_y)$, we have the crucial limit \begin{equation}\label{SCALINGH} \lim_{{\lambda}\to\infty} \rho^{({\lambda})}(y',h_y') (h_y')^{-\delta}= \lim_{{\lambda}\to\infty} \rho(y,h_y) (h_y)^{-\delta}= \rho_0(x) \end{equation} and by ($\Psi$2) and (\ref{RESCALING3}), for all $ l \in\mathbb{R}_+$, \begin{equation}\label{SCALINGPSI} \ \lim_{{\lambda}\to\infty} \psi^{({\lambda})}(l) = l^{\alpha}. 
\end{equation} It is also worth noting that $\xi^{({\lambda})}$ could alternatively be defined by following the original definition of $\xi$ with $\psi$ replaced there by $\psi^{({\lambda})}$; the same applies for $\hat{\xi}^{({\lambda})}$. Observe that (\ref{SCALINGPSI}) in fact states approximate self-similarity of $\psi$-growth processes under the re-scaling given by (\ref{RESCALING}) and (\ref {RESCALING3}). Motivated by this observation, we have already put $\psi^{(\infty )}(l) := l^{\alpha}$ and now we define, for all $x \in A$ and for all $(y', h'_y) \in\mathbb{R}^{d-1} \times\mathbb{R}_+$, \begin{equation}\label{SCALINGLIMIT} \rho^{(\infty)}(y',h_y') := \rho_x^{(\infty)}(y',h_y') := \rho_0(x) (h_y')^{\delta}. \end{equation} \section{Applications}\label{s2} We describe here applications of the main results. We limit the discussion to the following: \begin{longlist} \item the number of vertices in the convex hull of a Poisson sample, and \item the number of maximal points in a Poisson or i.i.d. sample, \end{longlist} but it should be emphasized that the techniques could potentially be applied to a broader scope of examples. These include, for instance, the variance asymptotics for Johnson--Mehl growth processes \cite{Mo1} with nonlinear growth rates (see, e.g., Section 3.2.2 in \cite{BY2} for the description of the model and the corresponding central limit theorem). Also, as observed in Section 2.3 of \cite{Ba}, the case $\psi(l) = l^2$ (paraboloids) may figure in the limit behavior of some point processes associated with the asymptotic solutions of Burgers equation \[ \frac{\partial v} {\partial t} + v \frac{\partial v} {\partial x} = \varepsilon\Delta v \] in the inviscid limit $\varepsilon\to0$. We will likewise not treat this example. \subsection{Number of vertices in the convex hull of an i.i.d. sample} Recall that $B_d$ denotes the unit radius ball centered at the origin of $\mathbb{R}^d$ and let $\partial B_d$ denote its boundary.
Let $\rho\dvtx B_d \to\mathbb{R}_+$ be a continuous density on $B_d$. We shall assume that $\rho(x) = \rho_0(x/|x|) (1-|x|)^{\delta} (1+o(1))$ for some $\delta\geq0$ and that $\rho_0 \dvtx\partial B_d \to \mathbb{R}_+$ is continuous and bounded away from $0$. Let ${\mathcal{P}}_{{\lambda }\rho}$ be a Poisson point process on $B_d$ with intensity measure ${\lambda}\rho(x)\, dx$ and let $\operatorname{conv}({\mathcal{P}}_{{\lambda}\rho})$ be the random polytope given by the convex hull of ${\mathcal{P}}_{{\lambda}\rho}$. Recalling that $\mathcal{V}( {\mathcal{P}_{{\lambda} \rho} })$ denotes the vertices of $\operatorname{conv}( {\mathcal{P}_{{\lambda}\rho} })$, consider the \textit{vertex empirical point measure} \begin{equation}\label{vepm} \mu_{{\lambda}{\rho}}:= \sum_{ x \in \mathcal{V}( {\mathcal{P}_{{\lambda}\rho} }) } \delta_{x}. \end{equation} As will be shown in Section \ref{ApplSection}, Theorems \ref {LLN}--\ref{CLT} yield the following limit theory for ${\mu}_{ {\lambda}\rho}$. Let $N(0,1)$ denote the standard normal random variable. 
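The scaling ${\lambda}^{(d-1)/(d-1+2(1+\delta))}$ appearing in the next theorem can be probed by simulation. The following self-contained sketch (our own code, not part of the formal development; all helper names are ours) counts hull vertices of an i.i.d. uniform sample in the unit disk, that is, the binomial analog of ${\mathcal{P}}_{{\lambda}\rho}$ in the case $d = 2$, $\delta = 0$, where the exponent is $1/3$:

```python
import math
import random

def convex_hull(points):
    """Andrew's monotone chain: the extreme vertices of the hull of a
    finite planar point set, returned in counterclockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    hull = []
    for chain in (pts, pts[::-1]):     # lower chain, then upper chain
        base = len(hull)
        for p in chain:
            while len(hull) - base >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()             # drop points making a non-left turn
            hull.append(p)
        hull.pop()                     # endpoint repeats as start of next chain
    return hull

def vertex_count(n, rng):
    """Number of hull vertices of n i.i.d. uniform points in the unit disk."""
    pts = []
    for _ in range(n):
        r, theta = math.sqrt(rng.random()), 2.0 * math.pi * rng.random()
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return len(convex_hull(pts))

rng = random.Random(0)
counts = {n: vertex_count(n, rng) for n in (250, 2000, 16000)}
```

With the fixed seed, the counts should grow on the order of $n^{1/3}$ while remaining a vanishing fraction of the sample size, consistent with the limit theory below.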
\begin{thm}\label{convexhullthm} There are constants $M:=M(d,\delta)$ and $V:=V(d,\delta)$ such that for all $f \in\mathcal{C}_b(B_d)$ \begin{eqnarray}\label{explim} &&\lim_{\lambda\to\infty} \lambda^{-(d-1)/(d-1+2(1+\delta))} \mathbb{E}[\langle f, \mu_{\lambda\rho}\rangle]\nonumber\\[-8pt]\\[-8pt] &&\qquad = M \int_{\partial B_d } f(s) \rho_0^{(d-1)/(d-1+2(1+\delta))}(s) \,ds\nonumber \end{eqnarray} and \begin{eqnarray}\label{varlim} &&\lim_{\lambda\to\infty} \lambda^{-(d-1)/(d-1+2(1+\delta))} \operatorname{Var}[\langle f, \mu_{\lambda\rho}\rangle]\nonumber\\[-8pt]\\[-8pt] &&\qquad = V \int_{\partial B_d} f^2(s) \rho _0^{(d-1)/(d-1+2(1+\delta))}(s) \,ds.\nonumber \end{eqnarray} Moreover, the finite-dimensional distributions $\lambda^{-(d-1)/2(d-1+2(1+\delta))}(\langle f_1,\bar\mu_{\lambda \rho} \rangle,\break \ldots, \langle f_k,\bar\mu_{\lambda\rho} \rangle),$ $f_i \in\mathcal{C}_b(B_d),$ of $(\lambda^{-(d-1)/2(d-1+2(1+\delta))} \bar\mu_{\lambda\rho})$ converge as $\lambda\to\infty$ to those of a mean zero Gaussian field with covariance kernel \[ (f,g) \mapsto V \int_{\partial B_d} f(s) g(s) \rho _0^{(d-1)/(d-1+2(1+\delta))}(s) \,ds,\qquad f,g \in\mathcal{C}_b(B_d). \] Additionally, if $\delta= 0$, then for all $f \in\mathcal{C}_b(B_d)$, \begin{eqnarray} \label{convrate} &&\sup_t \biggl| P \biggl[ \frac{ \langle f,\bar\mu_{{\lambda }\rho }\rangle}{\sqrt{\operatorname{Var}\langle f, \bar\mu_{{\lambda }\rho}\rangle } } \leq t \biggr] - P[N(0,1) \leq t] \biggr|\nonumber\\[-8pt]\\[-8pt] &&\qquad= O \bigl({\lambda}^{-(d-1)/2(d + 1)} (\log {\lambda})^{3 + 2(d-1)} \bigr).\nonumber \end{eqnarray} \end{thm} \begin{remarks*} (i) Taking $f_1 \equiv1$ (and all other $f_i \equiv0, i = 2,\ldots,k$) provides a central limit theorem for the cardinality of $\mathcal{V}( {\mathcal{P}_{{\lambda}\rho} })$. 
\begin{longlist}[(iii)] \item[(ii)] Theorem \ref{convexhullthm} adds to the work of the following authors: (a) Groeneboom \cite{Gr} and Cabo and Groeneboom \cite{CG}, who prove a central limit theorem for the cardinality of $\mathcal{V}({\mathcal{P}}_{{\lambda}\rho})$ when $\rho$ is uniform and when $d = 2$, (b) Reitzner \cite{Re4}, who considers the one-dimensional central limit theorem and who establishes a rate of convergence $O ({\lambda}^{-(d-1)/2(d + 1)} (\log{\lambda})^{2 + 2/(d+1)})$ to the normal for $\rho$ uniform (whence $\delta= 0$ in our setting), without giving asymptotics for the limiting variance and covariance, and (c) Vu \cite{VV}, who proves a central limit theorem for the cardinality of $\mathcal{V}(\{X_i\}_{i=1}^n)$, $X_i$ i.i.d. uniform, but who also does not consider limiting covariances. Concerning rates, we believe that the power on the logarithm, namely, $3 + 2(d-1)$, can be reduced to $2(d-1)$, but we have not tried for this sharper rate. \item[(iii)] As shown by Reitzner (Lemma 7 of \cite{Re4}), when $\delta=0$, the right-hand side of (\ref{varlim}) is strictly positive and finite whenever $f$ is not identically zero. \end{longlist} \end{remarks*} \subsection{Number of maximal points in an i.i.d. sample}\label{NMAX} For all $\bar{w}:=(w,h_w)$, we define the downward cone \begin{equation}\label{downarrow} K^{\downarrow}[\bar{w}]:=\{(z,h_z) \in\mathbb{R}^{d-1} \times \mathbb{R}_+\dvtx h_z \leq h_w - \psi(|z - w|)\}.
\end{equation} Consider $\psi(l) := l^{\alpha}$, $\alpha\in(0,1],$ in Section \ref{Terminology} so that $K[\mathbf{0}] := \{ (w_1,\ldots,w_d)\dvtx\break w_d \geq (w_1^2+\cdots+w_{d-1}^2)^{\alpha/2} \}.$ Given a locally finite set $\mathcal{X}\subset\mathbb{R}^d$, a point $\bar{w} \in\mathcal{X}$ is called \textit{K-maximal} iff $\bar{w}$ does not belong to any $u \oplus K[\mathbf{0}]$ for $u \in\mathcal{X}\setminus\{\bar{w}\}.$ When $\alpha\in(0,1]$ we have the equivalence $\bar y \in K[\bar x]$ iff $K[\bar y] \subseteq K[\bar x]$ and $\bar x \in K^{\downarrow}[\bar y]$ iff $K^{\downarrow}[\bar x] \subseteq K^{\downarrow}[\bar y]$. It thus follows that for such $\psi$ the present notion of maximality is just a rephrasing of the maximality notion as discussed in Section \ref{GenRes}. Indeed, we see that $\bar{w}$ is $K$-maximal or $\psi$-extremal in $\mathcal{X}$ iff $\bar{w} \oplus K^{\downarrow}[\mathbf{0}]$ contains no other points in $\mathcal{X}$. This is not the case for $\alpha> 1$, where the equivalence $\bar y \in K[\bar x]$ iff $K[\bar y] \subseteq K[\bar x]$ does not hold. Recalling that $\mathcal{M}_K({\mathcal{P}_{{\lambda}\rho} })$ denotes the collection of $K$-maximal points in ${\mathcal{P}_{{\lambda}\rho} }$, and with $\rho$ and $A$ as in Section 1.1, consider the induced \textit{maximal point measure} \[ \mu_{{\lambda}\rho}:= \sum_{x \in\mathcal{M_K}( {\mathcal{P}_{{\lambda}\rho} }) } \delta_{x}. \] Recalling the definitions of $I(f)$ and $J(f)$ at (\ref{onept}) and (\ref{twoptt}), respectively, we have the following: \begin{thm}\label{maximalthm} With $\tau$ as given by \textup{(\ref{TAUU})} and $\alpha\in(0,1]$, for all $f \in\mathcal{C}_b(A_+)$, \begin{equation} \label{explimmax} \lim_{{\lambda}\to\infty} {\lambda}^{-\tau} \mathbb{E}[\langle f, \mu_{{\lambda}\rho}\rangle] = I(f) \end{equation} and \begin{equation}\label{varlimmax} \lim_{{\lambda}\to\infty} {\lambda}^{-\tau} \operatorname{Var} [\langle f, \bar\mu_{{\lambda}\rho}\rangle] = I(f^2) + J(f^2).
\end{equation} Moreover, the finite-dimensional distributions $(\langle f_1, {\lambda}^{-\tau/2} \bar\mu_{{\lambda}\rho} \rangle, \ldots,\break \langle f_k, {\lambda}^{-\tau/2} \bar\mu_{{\lambda}\rho} \rangle),$ $f_1,\ldots,f_k \in\mathcal{C}_b(A_+),$ of $ {\lambda}^{-\tau/2} \bar\mu_{{\lambda}\rho}$ converge as ${\lambda}\to\infty$ to those of a mean zero Gaussian field with covariance kernel \[ (f,g) \mapsto I(fg) + J(fg), \qquad f, g \in\mathcal{C}_b(A_+). \] Additionally, if $\delta= 0$, then for all $f \in\mathcal{C}_b(A_+)$, \begin{eqnarray}\label{maxptrate} &&\sup_t \biggl| P \biggl[ { \langle f,\bar\mu_{{\lambda}\rho}\rangle\over\sqrt{\operatorname {Var}\langle f, \bar\mu_{{\lambda}\rho}\rangle} } \leq t \biggr] - P[N(0,1) \leq t] \biggr| \nonumber\\[-8pt]\\[-8pt] &&\qquad= O \bigl({\lambda}^{-(d-1)/2d} (\log{\lambda})^{3 + 2(d-1)} \bigr).\nonumber \end{eqnarray} \end{thm} Theorem \ref{maximalthm} admits de-Poissonization as follows. Let $X_1,\ldots,X_n$ be i.i.d. chosen in $A_+$ according to the density $\rho,$ assumed to integrate to $1,$ and consider the associated maximal point measure \[ \nu_{n}^{\xi} := \sum_{x \in\mathcal{M_K}(\{ X_i \}_{i=1}^n)} \delta_x. \] We then have the following analog of Theorem \ref{maximalthm} for binomial samples. \begin{thm}\label{maximalthmBin} With $\tau$ as given by \textup{(\ref{TAUU})} and $\alpha\in(0,1],$ for all $f \in\mathcal{C}_b(A_+)$, \begin{equation} \label{explimmaxBin} \lim_{n \to\infty} n^{-\tau} \mathbb{E}[\langle f, \nu_{n}^{\xi} \rangle] = I(f) \end{equation} and \begin{equation}\label{varlimmaxBin} \lim_{n \to\infty} n^{-\tau}\operatorname{Var} [\langle f,\bar\nu_{n}^{\xi} \rangle] = I(f^2) + J(f^2).
\end{equation} Moreover, the finite-dimensional distributions $(\langle f_1, n^{-\tau/2} \bar\nu_{n}^{\xi} \rangle, \ldots, \langle f_k, n^{-\tau/2} \bar\nu_{n}^{\xi} \rangle),$ $f_1,\ldots,f_k \in\mathcal{C}_b(A_+),$ of $ n^{-\tau/2} \bar\nu_{n}^{\xi}$ converge as $n \to\infty$ to those of a mean zero Gaussian field with covariance kernel \[ (f,g) \mapsto I(fg) + J(fg), \qquad f, g \in\mathcal{C}_b(A_+). \] \end{thm} \begin{remark*} Theorems \ref{maximalthm} and \ref{maximalthmBin} extend and generalize the work of (a) Barbour and Xia \cite{BX}, who establish central limit theorems for the case of homogeneous spatial temporal input, with $K$ the positive octant in $\mathbb{R}^d$, and who consider neither convergence of finite-dimensional distributions nor convergence of variances, (b)~Baryshnikov and Yukich \cite{BY2}, who establish convergence of finite-dimensional distributions but who restrict to homogeneous temporal input (${\delta}= 0$) as well as to the case $\psi(l) = l$ (i.e., $\alpha= 1$), and (c) Baryshnikov \cite{Ba}, who also restricts to homogeneous temporal input and does not consider convergence of finite-dimensional distributions. \end{remark*} \section{Proof of main results}\label{s3} In this section we prove Theorems \ref{LLN}--\ref{CLT}. An essential component of the proofs involves introducing a notion of \textit{ localization}, which quantifies the decoupling property of the considered functional $\xi$ over distant regions. It is straightforward to check that the proofs hold for $\psi$-growth without overlap when $\psi$ is linear. \subsection{Stabilization for $\Psi$-growth functionals}\label{PSISTA} With $B_{d-1}(y,r)$ standing as usual for the $(d -1)$-dimensional ball centered at $y \in\mathbb{R}^{d-1}$ with radius $r \in(0, \infty)$, we denote by $C_{d-1}(y,r)$ the cylinder $B_{d-1}(y,r) \times\mathbb{R}_+$. 
Recalling $\bar{y}:= (y,h_y)$, consider for all $r > 0$ the finite range version of $\xi(\bar{y},\mathcal{X})$, namely, \[ \xi_{[r]}(\bar{y},\mathcal{X}):= \xi_{C_{d-1}(y,r)} (\bar{y},\mathcal{X}), \] that is, $\xi_{[r]}(\bar{y},\mathcal{X})$ depends only on the local behavior of $\mathcal{X}$ with \textit{spatial} coordinates restricted to the $r$-neighborhood of $y.$ For a point process $\mathcal{P}$ (usually chosen to be Poisson in the sequel) in $\mathbb{R}^{d-1} \times\mathbb{R}_+,$ the \textit{localization radius} of $\xi$ at $\bar{y} \in\mathbb{R}^{d-1} \times\mathbb{R}_+$ is defined by \begin{equation}\label{RoS} R^{\xi} := R^{\xi}[\bar{y};\mathcal{P}] := \inf\bigl\{ r \in\mathbb{R}_+\dvtx\forall s \geq r\ \xi(\bar{y},\mathcal{P}) = \xi_{[s]}(\bar{y},\mathcal{P}) \bigr\}. \end{equation} In full analogy with $\xi^{({\lambda})}$ given by (\ref{RESCALING5}), we define for all ${\lambda}> 0$ the localization radius $R^{\xi^{({\lambda})}}[ \cdot; \cdot]$ by \[ R^{\xi^{({\lambda})}} := R^{\xi^{({\lambda})}} [\bar{y}';\mathcal{P}'] := \inf\bigl\{ r \in\mathbb{R}_+\dvtx\forall s \geq r\ \xi^{({\lambda})}(\bar{y}',\mathcal{P}') = \xi^{({\lambda})}_{[s]}(\bar{y}',\mathcal{P}') \bigr\}. \] Observe that the localization radius considered here formally differs from the stabilization radii considered in \cite{BY2}, \cite{Pe1,PY2,PY4,PY5}, essentially defined for all $\bar{y}:=(y,h)$ to be the smallest positive real $r$ such that $\xi(\bar{y}, (\mathcal{P} \cap C_{d-1}(y,r)) \cup{\mathcal{A}}) = \xi(\bar{y}, \mathcal{P} \cap C_{d-1}(y,r))$ for all finite ${\mathcal{A}}\subset C^c_{d-1}(y,r)$. However, the $\psi$-extremal functional is in general extremely sensitive to the choice of the ``outside'' configuration ${\mathcal{A}}\subset C^c_{d-1}(y,r)$, rendering the existence and use of standard stabilization radii a bit difficult. The benefit of the localization radius is that it considers only the outside configurations involving points from $\mathcal{P}$.
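To make the finite range idea concrete, here is a small sketch (our own simplification, not the full functional $\xi$ of Section \ref{Terminology}: we take $\xi(\bar y,\mathcal{X}) = 1$ iff the downward cone below $\bar y$ contains no other point of $\mathcal{X}$, that is, the no-overlap maximality of Section \ref{NMAX}, in spatial dimension $d-1 = 1$). For this simplified functional the localization radius is deterministically at most $\psi^{-1}(h_y)$, since every point of $K^{\downarrow}[\bar y]$ lies within that spatial distance of $y$:

```python
def psi(l, alpha=1.0):
    return l ** alpha

def xi_r(ybar, config, alpha=1.0, r=None):
    """Finite range extremality xi_[r]: 1 iff no other configuration point
    within spatial distance r of y (no restriction when r is None) lies in
    the downward cone {(z, h): h <= h_y - psi(|z - y|)} below ybar."""
    y, hy = ybar
    for z, hz in config:
        if (z, hz) == ybar:
            continue
        if r is not None and abs(z - y) > r:
            continue  # the finite range version ignores far-away points
        if hz <= hy - psi(abs(z - y), alpha):
            return 0  # ybar lies in the upward cone of (z, hz): not extremal
    return 1

def localization_radius(ybar, config, alpha=1.0, step=0.01):
    """Grid approximation of R^xi = inf{r: xi_[s] = xi for all s >= r}.
    For this simplified xi, r = h_y**(1/alpha) always suffices."""
    full = xi_r(ybar, config, alpha)
    r = ybar[1] ** (1.0 / alpha)
    while r - step >= 0 and xi_r(ybar, config, alpha, r - step) == full:
        r -= step
    return r

# A tiny configuration: only the lowest point (1.0, 0.5) is extremal.
cfg = [(0.0, 2.0), (0.5, 1.9), (1.0, 0.5)]
```

For $\bar y = (0.0, 2.0)$ the point $(1.0, 0.5)$ sits in the downward cone, so $\xi = 0$ while $\xi_{[r]} = 1$ for $r < 1$; the localization radius is therefore close to $1$.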
However, since the localization radius shares many of the same properties as the stabilization radii in \cite{BY2}, \cite{Pe1,PY2,PY4,PY5}, \textit{we will abuse terminology and henceforth refer to the localization radius $R^{\xi}$ as a stabilization radius}. The following lemma shows that $\xi^{({\lambda})}$ given by (\ref{RESCALING5}) has a stabilization radius whose tail decays exponentially uniformly in large enough ${\lambda}$ when $\mathcal{P}$ is ${\mathcal{P}}_{{\lambda}\rho}^{({\lambda})}$ given by (\ref{RESCALING4}) or when $\mathcal{P}$ is given by ${\mathcal{P}}_{{\lambda}\rho}^{({\lambda})} \cup\{ \bar z'_1,\ldots,\bar z'_k\}$, $k \geq1,$ where $\bar z'_i,\; i=1,\ldots,k$, are certain deterministic points (fixed atoms). This result will prove useful later in showing exponential decay of correlation functions for $\psi$-growth processes. \begin{lemm}\label{StabLemma} \textup{(i)} For $A$ compact and convex, there exists a constant $C$ such that, uniformly in $x$ and ${\lambda}$ large enough, for all $\bar{y}' \in{\lambda}^\beta A \times\mathbb{R}_+$ and for all collections $\{ \bar z'_1,\ldots, \bar z'_k \} \subseteq {\lambda}^{\beta} A \times\mathbb{R}_+$ of deterministic points, $k \geq0,$ we have for all $L > 0$ \begin{equation}\label{SRDecay} P \bigl[ R^{\xi^{({\lambda})}} \bigl[\bar{y}';{\mathcal{P}^*}_{{\lambda}\rho}^{({\lambda})} \bigr] > L \bigr] \leq C\exp\biggl(- \frac{L^{\alpha+ d -1}}{C} \biggr), \end{equation} where ${\mathcal{P}^*}_{{\lambda}\rho}^{({\lambda})}:= \mathcal{P}_{{\lambda}\rho}^{({\lambda})} \cup \{ \bar z'_1,\ldots,\bar z'_k\}$, so that, in particular, ${\mathcal{P}^*}_{{\lambda}\rho}^{({\lambda})} = \mathcal{P}_{{\lambda}\rho}^{({\lambda})}$ for $k=0.$ \textup{(ii)} An identical bound holds if instead $A := \mathbb{R}^{d-1}$ and $\mathcal{P}_{{\lambda}\rho}^{({\lambda})}$ is replaced by a homogeneous Poisson point process on $\mathbb{R}^{d-1} \times\mathbb{R}_+$.
\end{lemm} \begin{remark*} In place of (\ref{SRDecay}) we have uniformly in $x$ and ${\lambda}$ large enough, for all $\bar{y}' \in{\lambda }^\beta A \times\mathbb{R}_+$ and for all $L > 0$, the simpler bound \begin{equation} \label{SRDecay1} P \bigl[ R^{\xi^{({\lambda})}} \bigl[\bar{y}';{\mathcal {P}^*}_{{\lambda }\rho}^{({\lambda})} \bigr] > L \bigr] \leq C\exp\biggl(- \frac{L}{C} \biggr). \end{equation} \end{remark*} \begin{pf*}{Proof of Lemma \ref{StabLemma}} We will only prove Lemma \ref{StabLemma}(i) as identical arguments handle Lemma \ref{StabLemma}(ii). Also, since the proof relies on probability bounds for certain regions being devoid of points of the underlying point process ${{\mathcal{P}^*}}_{{\lambda}\rho}^{({\lambda})},$ as easily noted below, we can assume without loss of generality that $k=0$ so that ${\mathcal{P}^*}_{{\lambda}\rho}^{({\lambda})} = \mathcal {P}_{{\lambda }\rho}^{({\lambda})}.$ Moreover, to simplify the argument below, we ignore the boundary effects arising when $\bar{y}'$ is close to $\partial({\lambda}^{\beta}A \times\mathbb {R}_+),$ noting that the absence of points of $\mathcal{P}_{{\lambda}\rho}^{({\lambda})}$ in the vicinity of $\bar {y}'$ can only decrease $ R^{\xi^{({\lambda})}} [\bar{y}';\mathcal{P}_{{\lambda}\rho }^{({\lambda})} ].$ This allows us to avoid obvious but technical separate considerations for $\bar{y}'$ close to $\partial({\lambda}^{\beta}A \times\mathbb {R}_+)$. 
Also, we consider $x$ fixed but arbitrary, keeping in mind that the required uniformity in $x$ follows by the boundedness of $\rho,$ both from above and away from $0.$ Define for fixed $\bar{y}':= (y',h_y')$ and all ${\lambda}\in[0,\infty]$ the \textit{scaled} upward cone \begin{equation}\label{upconeScaled} K^{({\lambda})}[\bar{y}']:= \bigl\{ (v',h_v') \in\mathbb{R}^{d-1} \times\mathbb{R}_+\dvtx h'_v \geq h'_y + \psi^{({\lambda})}(|v' - y'|) \bigr\} \end{equation} and the \textit{scaled} downward cone \begin{equation}\label{downconeScaled} K^{\downarrow}_{({\lambda})}[\bar{y}'] := \bigl\{ (v',h'_v) \in \mathbb{R}^{d-1} \times\mathbb{R}_+ \dvtx h'_v \leq h'_y - \psi^{({\lambda})}(|v'-y'|) \bigr\}. \end{equation} Note that $\bar{u}' \in K^{({\lambda})} [\bar{z}']$ iff $h_u' \geq h_z' + \psi^{({\lambda})}(|u' - z'|)$, which is equivalent to $h_z' \leq h_u' - \psi^{({\lambda})}(|z' - u'|)$, and thus, the \textit{duality} $\bar{u}' \in K^{({\lambda})}[\bar{z}']$ iff $\bar{z}' \in K^{\downarrow}_{({\lambda})}[\bar{u}']$. To proceed, note that the event $\{ R^{\xi^{({\lambda})}} [\bar{y}';\mathcal{P}_{{\lambda}\rho}^{({\lambda})} ] > L \} $ is equivalent to the event \[ E:= \bigl\{ \exists{r > L}\dvtx\ \xi^{({\lambda})} \bigl(\bar{y}',\mathcal{P}_{{\lambda}\rho}^{({\lambda})}\bigr) \neq\xi_{[r]}^{({\lambda})}\bigl( \bar{y}',\mathcal{P}_{{\lambda}\rho}^{({\lambda})} \bigr) \bigr\}, \] and moreover, $E \subset E_1 \cup E_2$, where $E_1$ and $E_2$ are defined below. Roughly speaking, the event $E_1$ ensures that $\bar{y}'$ is extremal with respect to ${\mathcal{P}}_{{\lambda}\rho}^{({\lambda})} \cap C_{d-1}(y',r)$ for some $r > L$ but not necessarily with respect to ${\mathcal{P}}_{{\lambda}\rho}^{({\lambda})}$, whereas $E_2$ is just the opposite.
\begin{description} \item[Event $E_1$\textup{:}] For some $r > L$, there exists a boundary point $\bar{u}' \in\partial( K^{({\lambda})} [\bar{y}']) \cap C_{d-1}(y',r)$ such that $\bar{u}'\notin\bigcup_{\bar{z}' \in[\mathcal{P}_{{\lambda}\rho}^{({\lambda})} \setminus\{ \bar{y}'\} ] \cap C_{d-1}(y',r)} K^{({\lambda})}[\bar{z}']$ but $\bar{u}' \in \bigcup_{\bar{z}' \in\mathcal{P}_{{\lambda}\rho}^{({\lambda})} \cap C_{d-1}(y',r)} K^{({\lambda})}[\bar{z}']$, that is, $\xi_{[r]}^{({\lambda})}( \bar{y}',\mathcal{P}_{{\lambda}\rho}^{({\lambda})} )= 1$, but possibly\break $ \xi^{({\lambda})} (\bar{y}',\mathcal{P}_{{\lambda}\rho}^{({\lambda})})= 0$. \item[Event $E_2$\textup{:}] For some $r > L$, there exists a boundary point $\bar{u}' \in\partial( K^{({\lambda})} [\bar{y}']) \cap C^c_{d-1}(y',r)$ such that $\bar{u}' \notin \bigcup_{\bar{z}' \in\mathcal{P}_{{\lambda}\rho}^{({\lambda})} \setminus\{ \bar{y}'\} } K^{({\lambda})} [\bar{z}'],$ but $K^{({\lambda})}[ \bar{y}'] \cap C_{d-1} (\bar{y}',r) \subset \bigcup_{\bar{z}' \in[\mathcal{P}_{{\lambda}\rho}^{({\lambda})} \setminus\{ \bar{y}'\} ] \cap C_{d-1}(y',r)} K^{({\lambda})}[\bar{z}']$, that is, $\xi^{({\lambda})} (\bar{y}',\mathcal{P}_{{\lambda}\rho}^{({\lambda})}) = 1$ but\break $\xi_{[r]}^{({\lambda})}( \bar{y}',\mathcal{P}_{{\lambda}\rho}^{({\lambda})} )= 0$. \end{description} On event $E_1$, writing $\bar{u}':= (u',h'_u),$ we easily check that \begin{equation}\label{HU} h'_u \geq\psi^{({\lambda})} \biggl(\frac{L}{2} \biggr). \end{equation} Indeed, we have: \begin{itemize} \item either $|u'-y'| \geq r \slash2$ or \item$d(u',\partial B_{d-1}(y',r)) \geq r \slash2$ and, hence, $d(u',z') \geq r \slash2$ for all $\bar{z}' \in\mathcal{P}_{{\lambda}\rho}^{({\lambda})} \cap C^c_{d-1}(y',r)$.
\end{itemize} In both cases, on $E_1$, $(u',h'_u)$ falls into $K^{({\lambda})}[\bar{v}']$ for some $\bar{v}'$ such that $|v'-u'| \geq r \slash2,$ either with $\bar{v}' = \bar{y}'$ or $\bar{v}' \in\mathcal{P}_{{\lambda}\rho}^{({\lambda})} \cap C^c_{d-1}(y',r)$. Consequently, recalling that $r > L$ and using the definition of $K^{({\lambda})}[\cdot]$, we obtain (\ref{HU}) as required. On $E_1$ we have $\bar{u}'\notin\bigcup_{\bar{z}' \in[\mathcal{P}_{{\lambda}\rho}^{({\lambda})} \setminus\{ \bar {y}' \}] \cap C_{d-1}(y',r)} K^{({\lambda})}[\bar{z}']$, implying that the downward cone $ K^{\downarrow}_{({\lambda})}[\bar{u}']$ is devoid of points of $\mathcal{P}_{{\lambda}\rho}^{({\lambda})} \ \cap\ C_{d-1}(y',r).$ By the assumed properties of $\psi$ and $\rho$, the integral of $\rho^{({\lambda})}$ over $K^{\downarrow}_{({\lambda })}[\bar{u}']$ is $\Omega(\operatorname{Vol}(K^{\downarrow}_{({\lambda})}[\bar {u}']))$, which is \begin{eqnarray}\label{volinvert} \Omega\biggl(\int_0^{h'_u} \bigl(\bigl[\psi^{({\lambda })}\bigr]^{-1}(h'_u-h')\bigr)^{d-1}\, dh' \biggr) &=& \Omega\biggl(\int_0^{h'_u} (h_u'-h')^{(d-1)/\alpha} \, dh' \biggr) \nonumber\\[-8pt]\\[-8pt] &=& \Omega\bigl( (h'_u)^{(\alpha+ d - 1)/ \alpha} \bigr),\nonumber \end{eqnarray} with the second equality following by the definition of $[\psi^{({\lambda})}]^{-1}$, and where we use $f({\lambda}) = \Omega(g({\lambda}))$ to signify that $f({\lambda})/g({\lambda})$ is asymptotically bounded away from zero. Clearly, the integral of $\rho^{({\lambda})}$ over $K^{\downarrow}_{({\lambda })}[\bar {u}'] \cap C_{d-1}(y',r)$ for $\bar{u}' \in C_{d-1}(y',r)$ is of the same order. 
Recalling from (\ref{PPequiv}) that the intensity measure of the Poisson process $\mathcal{P}_{{\lambda}\rho}^{({\lambda})}$ has its density given by $\rho^{({\lambda})}$, we thus conclude for fixed $\bar{u}'$ that the probability of the considered event $\Xi[\bar{u}'] := \{ K^{\downarrow}_{({\lambda})}[\bar{u}'] \cap[\mathcal{P}_{{\lambda}\rho}^{({\lambda})} \setminus\{ \bar{y}' \}] \cap C_{d-1}(y',r) = \varnothing\}$ satisfies \begin{equation}\label{AddedEqn} P[\Xi[\bar{u}']] \leq\exp\bigl( -\Omega\bigl( (h'_u)^{(\alpha+ d - 1)/ \alpha}\bigr)\bigr). \end{equation} To proceed, we recall that $r > L$, partition $\mathbb{R}^{d-1} \times\mathbb{R}_+$ into unit volume cubes and let $q_1,q_2,\ldots$ be an enumeration of those cubes having nonempty intersection with $\partial(K^{({\lambda})} [\bar{y}'])$. Let \[ p_i := P \bigl[ \exists{\bar{u}' \in q_i}\dvtx K^{\downarrow}_{({\lambda})}[\bar{u}'] \cap\bigl[\mathcal{P}_{{\lambda}\rho}^{({\lambda})} \setminus\{ \bar{y}' \}\bigr] \cap C_{d-1}(y',L) = \varnothing\bigr] \] for all $i=1,2,\ldots$ and note that, by (\ref{AddedEqn}), we have \[ p_i \leq \exp\bigl(-\Omega\bigl((h'_{q_i})^{(\alpha+ d - 1)/\alpha}\bigr)\bigr), \] where $h'_{q_i}$ is the last coordinate of the center of the cube $q_i.$ We now have \[ P[E_1] \leq\sum_{i=1}^{\infty} p_i \leq C \int_{\psi^{({\lambda})}(L/2)}^{\infty} L^{d-2} \exp\biggl( -\frac{1}{C} (h'_u)^{(\alpha+ d - 1)/\alpha} \biggr)\, dh'_u \] for some $0 < C <\infty$ in view of the discussion above. Here $CL^{d-2}$ bounds the number of cubes in the set $q_1,q_2,\ldots$ of any fixed height $h'_u \geq \psi^{({\lambda})}(L/2)$. Recalling that $\psi^{({\lambda})}(L/2) = (1+o(1))(L/2)^{\alpha}$, it follows (using a different choice of $C$ if necessary) that \[ P[E_1] \leq C \exp\biggl(- \frac{1}{C} L^{\alpha+ d - 1} \biggr).
\] To estimate $P[E_2]$, note that for $\bar{u}' := (u',h'_u) \in \partial(K^{({\lambda})}[\bar{y}'])$ lying in $C_{d-1}^c(y',r)$ we must have \begin{equation}\label{HU2} h'_u \geq\psi^{({\lambda})}(r). \end{equation} Further, since $\bar{u}' \notin\bigcup_{\bar{z}' \in\mathcal{P}_{{\lambda}\rho}^{({\lambda})} \setminus\{ \bar{y}' \} } K^{({\lambda})} [\bar{z}']$, we have $K^{\downarrow}_{({\lambda})}[\bar{u}'] \cap[\mathcal{P}_{{\lambda}\rho}^{({\lambda})} \setminus\{ \bar{y}' \}] = \varnothing.$ Denoting this event $\Xi^*[\bar{u}'] := \{ K^{\downarrow}_{({\lambda})}[\bar{u}'] \cap[\mathcal{P}_{{\lambda}\rho}^{({\lambda})} \setminus\{ \bar{y}' \}] = \varnothing\}$, noting that as in (\ref{AddedEqn}) we have \begin{equation}\label{AddedEqn2} P[\Xi^*[\bar{u}']] \leq\exp\bigl( -\Omega\bigl( (h'_u)^{(\alpha + d - 1)/ \alpha}\bigr)\bigr), \end{equation} recalling that $r>L$ and proceeding in analogy with the case of event $E_1$ above, with (\ref{HU}) and (\ref{AddedEqn}) there replaced by (\ref{HU2}) and (\ref{AddedEqn2}) respectively and with $C^c_{d-1}(y',L)$ partitioned into unit volume cubes, we bound $P[E_2]$ by \[ P[E_2] \leq C \int_{s = L}^{\infty} s^{d-2} \int_{h_y' + \psi^{({\lambda})}(s)}^{\infty} \exp\biggl( -\frac{1}{C} (h'_u)^{(\alpha + d - 1)/ \alpha} \biggr)\,dh_u'\,ds \] for some $0 < C < \infty$. It follows that $P[E_2] \leq C \exp(-{L^{\alpha+ d - 1}/ C} ). $ Since $ P [ R^{\xi^{({\lambda})}} [\bar{y}';\mathcal{P}_{{\lambda}\rho}^{({\lambda})} ] > L ] = P[E] \leq P[E_1] + P[E_2]$, Lemma \ref{StabLemma} follows. \end{pf*} Given $\bar{y}':=(y',h'_y)$, we expect that, for large temporal coordinate $h'_y$, the point $\bar{y}'$ is $\psi$-extremal only with small probability. Also, as previously noted in Section \ref{Terminology}, we expect for small $\alpha$ that $\bar{y}'$ is more likely to be $\psi$-extremal.
The next lemma makes these probabilities more precise and shows that the probability of having $(y',h_y')$ extremal in ${\mathcal{P}^*}_{{\lambda}\rho}^{({\lambda})}:= \mathcal{P}_{{\lambda}\rho}^{({\lambda})} \cup\{ \bar z'_1,\ldots,\bar z'_k\}, k\geq0,$ with respect to $\psi^{({\lambda})}$ decays exponentially with $h_y'$ uniformly in ${\lambda}$ for ${\lambda}$ large enough. \begin{lemm}\label{expbds} There exists a constant $C$ such that, uniformly in ${\lambda}$ large enough, for all $\bar{y}' \in{\lambda}^\beta A \times\mathbb{R}_+$ and $\{ \bar z'_1,\ldots,\bar z'_k \}$, we have \[ P \bigl[ \xi^{({\lambda})}\bigl(\bar{y}',{\mathcal{P}^*}_{{\lambda}\rho}^{({\lambda})}\bigr) = 1 \bigr] \leq C\exp\biggl(-\frac{1}{C}(h'_y)^{(\alpha+ d - 1)/ \alpha} \biggr). \] \end{lemm} \begin{pf} Clearly, since adding extra points to $\mathcal{P}_{{\lambda}\rho}^{({\lambda})}$ decreases the probability of $(y',h_y')$ being extremal, we may without loss of generality choose $k=0$ so that ${\mathcal{P}^*}_{{\lambda}\rho}^{({\lambda})} = \mathcal{P}_{{\lambda}\rho}^{({\lambda})}.$ On the event $E:= \{ \xi^{({\lambda})}(\bar{y}',\mathcal{P}_{{\lambda}\rho}^{({\lambda})}) = 1 \}$ there exists $\bar{u}':= (u',h'_u) \in \partial(K^{({\lambda})}[\bar{y}'])$ such that $\bar{u}' \notin \bigcup_{\bar{z}' \in\mathcal{P}_{{\lambda}\rho}^{({\lambda})} \setminus\{ \bar{y}' \}} K^{({\lambda})}[\bar{z}'],$ which is equivalent to $K^{\downarrow}_{({\lambda})}[\bar{u}'] \cap [\mathcal{P}_{{\lambda}\rho}^{({\lambda})} \setminus\{ \bar{y}' \} ] = \varnothing.$ As in the proof of Lemma \ref{StabLemma}, for fixed $\bar{u}'$, the probability of the last event does not exceed \[ \exp\biggl[-\int_{K^{\downarrow}_{({\lambda})}[\bar{u}']} \rho^{({\lambda})}(v',h'_v)\,dv'\,dh'_v \biggr] \leq C\exp\biggl(-\frac{1}{C}(h'_u)^{(\alpha+ d - 1)/ \alpha} \biggr).
\] Recalling the relation $h'_u = h'_y + \psi^{({\lambda})}(|u'-y'|),$ putting $|u'-y'| = s$, and resorting again to a partition of $\mathbb{R}^{d-1} \times\mathbb {R}_+$ into unit volume cubes and summing up the respective probabilities as in the proof of Lemma \ref{StabLemma}, we obtain the required bound \begin{eqnarray*} P[E] &\leq& C \int_0^{\infty} s^{d-2} \int_{h_y'}^{\infty} \exp\biggl( -\frac{1}{C} (h'_u)^{(\alpha+ d - 1)/ \alpha} \biggr)\,dh_u'\,ds \\ &\leq& C\exp\biggl(-\frac{1}{C}(h'_y)^{(\alpha+ d - 1)/ \alpha} \biggr). \end{eqnarray*} \upqed \end{pf} \subsection{\texorpdfstring{Proof of Theorem \textup{\protect\ref {LLN}}}{Proof of Theorem 1.1}}\label{LLNproof} Recall the definition of ${\mathcal{P}}_{\rho_x^{(\infty)}}$ from (\ref{SCALINGLIMIT}). One benefit of stabilization is that the one point correlation function\break $\mathbb{E} [\xi^{(\infty)} ((\mathbf{0},h'),{\mathcal {P}}_{\rho_x^{(\infty)}} ) ]$ is approximated for large $r$ by the finite range version \[ \mathbb{E} \bigl[\xi^{(\infty)}_{[r]} \bigl((\mathbf {0},h'),{\mathcal{P}}_{\rho_x^{(\infty)}} \bigr) \bigr] \] and, similarly, $\mathbb{E} [\xi^{({\lambda})} ((\mathbf{0},h'), {\mathcal {P}}_{{\lambda}\rho}^{({\lambda})} ) ]$ is approximated by its finite range version $\mathbb{E} [\xi_{[r]}^{({\lambda})} ((\mathbf{0},h'), {\mathcal {P}}_{{\lambda}\rho}^{({\lambda})} ) ]$. Using the large ${\lambda}$ weak convergence of ${\mathcal{P}}_{{\lambda}\rho}^{({\lambda})}$ to ${\mathcal {P}}_{\rho_x^{(\infty)}}$, one may approximate the first mentioned finite range version by the second and thus show that $\mathbb{E} [\xi^{({\lambda})} ((\mathbf{0},h'), {\mathcal{P}}_{{\lambda} \rho}^{({\lambda})} ) ]$ is asymptotically equal to $\mathbb{E} [\xi^{(\infty)} ((\mathbf{0},h'),{\mathcal {P}}_{\rho_x^{(\infty)}} ) ]$. This is spelled out in Lemma \ref{L1conv} below, which captures the essence of stabilization and which lies at the heart of the proof of Theorem \ref{LLN}. 
Note that Lemma~\ref{L1conv}, when combined with Lemma \ref{expbds}, shows \begin{equation}\label{IFF} \mathbb{E} \bigl[ \xi^{(\infty)} ((\mathbf{0},h'),{\mathcal{P}}^* ) \bigr] \leq C\exp\biggl(-\frac{1}{C}(h')^{(\alpha+ d - 1)/ \alpha} \biggr) \end{equation} and, therefore, $I(f) < \infty$ for $f \in\mathcal{C}_b(A_+)$. Recall from (\ref{RESCALING5}) that $\xi^{({\lambda})}$ is the re-scaled version of $\xi$ with dependency on $x$ fixed. \begin{lemm}\label{L1conv} For all $x \in A$ and $h' \in\mathbb{R}_+$, we have \[ \lim_{{\lambda}\to\infty} \mathbb{E} \bigl[\xi^{({\lambda})} \bigl((\mathbf{0},h'), {\mathcal{P}}_{{\lambda}\rho}^{({\lambda})} \bigr) \bigr]= \mathbb{E} \bigl[ \xi^{(\infty)} \bigl((\mathbf{0},h'),{\mathcal{P}}_{\rho_x^{(\infty)}} \bigr) \bigr]. \] \end{lemm} \begin{pf} Fix $x \in A$. Taking into account (\ref{SCALINGH}) and (\ref{SCALINGLIMIT}) and using the results of Section 3.5 in \cite{RES} [see Proposition 3.22 or Proposition 3.19 there combined with Proposition 3.6(ii) ibidem], we observe that as ${\lambda}\to\infty$, ${\mathcal{P}}_{{\lambda}\rho}^{({\lambda})}$ converges weakly to ${\mathcal{P}}_{\rho_x^{(\infty)}}$ as a point process.
Using Theorem 5.5 in \cite{BILL} with $h_{{\lambda}} := \xi _{[r]}^{({\lambda} )}((\mathbf{0},h'),\cdot)$ and $h := \xi_{[r]}^{(\infty)}((\mathbf{0},h'),\cdot)$ there, we easily see that, by Lemmas \ref{StabLemma} and \ref{expbds}, under the law of the limit process ${\mathcal{P}}_{\rho_x^{(\infty)}}$, the discontinuity event $E$ ibidem [an infinitesimal move of the point configuration alters the $\xi$-value for $(\mathbf{0},h)$] is contained up to an event of probability $0$ in the set of point configurations $\mathcal{X}$ such that either the spatial coordinates of two points in $\mathcal{X}$ coincide or such that there are at least two points $\bar{y}',\bar{y}'' \in\mathcal{X}$ such that the boundaries of the upward cones $K^{(\infty)}[\bar{y}']$ and $K^{(\infty)}[\bar{y}'']$ [recall (\ref{upconeScaled})] intersect in a point lying on the boundary of the upward cone $K^{(\infty )}[(\mathbf{0},h')],$ which clearly happens with probability $0$ under the law of ${\mathcal{P}}_{\rho_x^{(\infty)}}.$ Indeed, Lemma \ref{expbds} states that no effects coming from $h\to \infty$ arise (no infinite range dependencies in $h$). A similar statement in space is provided by Lemma \ref{StabLemma}. Combining both these statements allows us to draw conclusions from the weak convergence of point processes as we do in the above argument; see ibidem in \cite{RES}. Thus, Theorem 5.5 in \cite{BILL} yields \begin{equation}\label{limit1} \lim_{{\lambda}\to\infty} \mathbb{E} \bigl[\xi_{[r]}^{({\lambda })} \bigl((\mathbf{0},h'), {\mathcal{P}} _{{\lambda}\rho}^{({\lambda})} \bigr) \bigr] = \mathbb{E} \bigl[ \xi_{[r]}^{(\infty)} \bigl((\mathbf{0},h'), {\mathcal{P}}_{\rho_x^{(\infty)}} \bigr) \bigr]. 
\end{equation} Let $R^{\xi}:= R^{\xi^{({\lambda})}}[(\mathbf{0},h'); {\mathcal {P}}_{{\lambda}\rho}^{({\lambda})} ].$ We have for all $r > 0$ and all ${\lambda}> 0$ \begin{eqnarray*} &&\mathbb{E} \bigl[\xi^{({\lambda})} \bigl((\mathbf{0},h'), {\mathcal {P}}_{{\lambda}\rho}^{({\lambda})} \bigr) \bigr]\\ & &\qquad= \mathbb{E} \bigl[ \xi^{({\lambda})} \bigl((\mathbf {0},h'), {\mathcal{P}}_{{\lambda} \rho}^{({\lambda})} \bigr) \mathbf{1}_{R^{\xi} \leq r } \bigr] + \mathbb{E} \bigl[\xi^{({\lambda})} \bigl((\mathbf{0},h'), {\mathcal {P}}_{{\lambda}\rho}^{({\lambda})} \bigr) \mathbf{1}_{R^{\xi} > r } \bigr] \\ &&\qquad= \mathbb{E} \bigl[ \xi_{[r]}^{({\lambda})} \bigl( (\mathbf{0},h'), {\mathcal{P}}_{{\lambda}\rho}^{({\lambda})} \bigr) {\bf1}_{R^{\xi} \leq r } \bigr] + \mathbb{E} \bigl[\xi^{({\lambda})} \bigl((\mathbf{0},h'), {\mathcal {P}}_{{\lambda}\rho}^{({\lambda})} \bigr) {\bf 1}_{R^{\xi} > r } \bigr]. \end{eqnarray*} By Lemma \ref{StabLemma}(i) [recall the bound (\ref{SRDecay1})], Cauchy--Schwarz, and the boundedness of $\xi^{({\lambda})}_{[r]}$, uniformly in large ${\lambda}$ and all $r > 0$, \[ \mathbb{E} \bigl[\xi_{[r]}^{({\lambda})} \bigl( (\mathbf{0},h'), {\mathcal{P}}_{{\lambda}\rho}^{({\lambda})} \bigr) {\bf1}_{R^{\xi} > r } \bigr] \leq C\exp\biggl(- \frac {r}{C} \biggr) \] for some $C$ not depending on $x.$ Likewise, uniformly in large ${\lambda},$ we have $\mathbb{E} [\xi^{({\lambda})} ( (\mathbf{0},h'), {\mathcal{P}}_{{\lambda} \rho}^{({\lambda})} ) {\bf1}_{R^{\xi} > r } ] \leq C \exp(-r/C )$. It follows that, for large ${\lambda} > 0$ and all $r > 0$, \begin{equation}\label{DOD0} \bigl| \mathbb{E} \bigl[\xi^{({\lambda })} \bigl( (\mathbf{0},h'), {\mathcal{P}} _{{\lambda} \rho}^{({\lambda})} \bigr) \bigr] - \mathbb{E} \bigl[\xi _{[r]}^{({\lambda})} \bigl( (\mathbf{0},h'), {\mathcal{P}}_{{\lambda}\rho}^{({\lambda})} \bigr) \bigr] \bigr| \leq2C \exp\biggl(-\frac{r}{C} \biggr). 
\end{equation} Similarly, Lemma \ref{StabLemma}(ii) gives, for all $r > 0$, \[ \bigl| \mathbb{E} \bigl[\xi^{(\infty)} \bigl( (\mathbf{0},h'), {\mathcal{P}}_{\rho _x^{(\infty) }} \bigr) \bigr] - \mathbb{E} \bigl[\xi_{[r]}^{(\infty)} \bigl( (\mathbf{0},h'), {\mathcal{P}}_{\rho_x^{(\infty)}} \bigr) \bigr] \bigr| \leq2C \exp\biggl(-\frac{r}{C} \biggr). \] Write \begin{eqnarray}\label{tri-ineq} && \bigl| \mathbb{E} \bigl[\xi^{({\lambda})} \bigl((\mathbf{0},h'), {\mathcal{P}}_{{\lambda}\rho}^{({\lambda})} \bigr) \bigr] - \mathbb{E} \bigl[ \xi^{(\infty)} \bigl((\mathbf{0},h'), {\mathcal{P}}_{\rho_x^{(\infty)}} \bigr) \bigr] \bigr|\nonumber \\ &&\qquad\leq\bigl| \mathbb{E} \bigl[\xi^{({\lambda})} \bigl((\mathbf{0},h'), {\mathcal {P}}_{{\lambda}\rho}^{({\lambda})} \bigr) \bigr] - \mathbb{E} \bigl[\xi_{[r]}^{({\lambda})} \bigl((\mathbf{0},h'), {\mathcal{P}} _{{\lambda} \rho}^{({\lambda})} \bigr) \bigr] \bigr|\nonumber \\[-8pt]\\[-8pt] &&\quad\qquad{}+ \bigl| \mathbb{E} \bigl[\xi_{[r]}^{({\lambda})} \bigl((\mathbf {0},h'), {\mathcal{P}}_{{\lambda} \rho}^{({\lambda})} \bigr) \bigr] - \mathbb{E} \bigl[ \xi _{[r]}^{(\infty)} \bigl((\mathbf{0},h'), {\mathcal{P}}_{\rho_x^{(\infty)}} \bigr) \bigr] \bigr|\nonumber \\ &&\quad\qquad{}+ \bigl| \mathbb{E} \bigl[ \xi_{[r]}^{(\infty)} \bigl((\mathbf{0},h'), {\mathcal{P}}_{\rho_x^{(\infty)}} \bigr) \bigr] - \mathbb {E} \bigl[ \xi^{(\infty)} \bigl((\mathbf{0},h'), {\mathcal{P}}_{\rho_x^{(\infty)}} \bigr) \bigr] \bigr|.\nonumber \end{eqnarray} For fixed $r$, the second term on the right-hand side of (\ref{tri-ineq}) goes to zero as ${\lambda}\to\infty$ by (\ref{limit1}). The first and third terms are bounded above by $ 2C \exp(-r/C)$. Letting $r \to\infty$ completes the proof of Lemma \ref{L1conv}. \end{pf} Given Lemmas \ref{expbds} and \ref{L1conv}, we now prove Theorem \ref{LLN} as follows. 
We have \[ \mathbb{E}[ \langle f, \mu_{{\lambda}{\rho}}^{\xi} \rangle] = \int_A \int_0^{\infty} f(x,h_x) \mathbb{E}[ \xi((x,h_x), {\mathcal{P}}_{{\lambda}{\rho}}) ] {\lambda}\rho(x,h_x)\, dh_x \,dx. \] By (\ref{RESCALING5}), we have $\xi((x,h_x), {\mathcal{P}}_{{\lambda }{\rho}}) = \xi^{({\lambda})} ((\mathbf{0},h'_x), {\mathcal{P}}_{{\lambda}{\rho}}^{({\lambda})} )$ and by (\ref {rdef}), we have $\rho(x,h_x) = {\lambda}^{-\gamma\delta} \rho^{({\lambda})}(\mathbf{0},h'_x)$. Thus, putting $h'_x:= {\lambda}^{\gamma}h_x$ and recalling $1 - \gamma(\delta+ 1) = \tau$ [see (\ref{TAUU}) and (\ref{twodef})], we obtain \[ \mathbb{E}[ \langle f, \mu_{{\lambda}{\rho}}^{\xi} \rangle] = \int_A \int_0^{\infty} f(x,h'_x \lambda^{-\gamma}) \mathbb{E}\bigl[ \xi^{({\lambda})} \bigl((\mathbf{0},h'_x), {\mathcal {P}}_{{\lambda}{\rho}}^{({\lambda})} \bigr) \bigr] {\lambda}^{\tau} \rho^{({\lambda})}(\mathbf{0},h'_x) \,dh'_x\, dx \] or, simply, \[ {\lambda}^{-\tau}\mathbb{E}[ \langle f, \mu_{{\lambda}{\rho }}^{\xi} \rangle] = \int_A \int_0^{\infty} f(x,h'_x \lambda^{-\gamma}) \mathbb{E}\bigl[ \xi ^{({\lambda})} \bigl((\mathbf{0},h'_x), {\mathcal{P}}_{{\lambda}{\rho}}^{({\lambda})} \bigr) \bigr] \rho ^{({\lambda })}(\mathbf{0},h'_x) \,dh'_x \,dx. \] We put \[ g_{{\lambda}}(x,h'_x):= \mathbb{E}\bigl[ \xi^{({\lambda})} \bigl ((\mathbf {0},h'_x), {\mathcal{P}}_{{\lambda}{\rho }}^{({\lambda})} \bigr) \bigr] \rho^{({\lambda})}(\mathbf{0},h'_x). \] For all $x \in A$ and $h'_x \in\mathbb{R}_+$, we have by Lemma \ref{L1conv} and (\ref{SCALINGH}) \[ \lim_{{\lambda}\to\infty} g_{{\lambda}}(x,h'_x) = \mathbb{E}\bigl [ \xi ^{(\infty)} \bigl((\mathbf{0},h'_x), {\mathcal{P}}_{ \rho_x^{(\infty) } } \bigr) \bigr] \rho_0(x) {h'}_x^{\delta} \] and moreover, by Lemma \ref{expbds}, for all $(x,h'_x) \in A_+$, $g_{{\lambda}}(x,h'_x)$ is bounded uniformly in ${\lambda}$ by the function $(x,h') \mapsto C' (h')^{\delta} \exp(-h'/C)$, which is integrable on $A_+$.
Consequently, the dominated convergence theorem yields \begin{eqnarray}\label{DomConv} &&\quad\qquad\lim_{{\lambda}\to\infty} {\lambda}^{-\tau} \mathbb {E}[\langle f, \mu^{\xi}_{{\lambda} \rho} \rangle]\nonumber\\[-8pt]\\[-8pt] &&\qquad\qquad= \int_{A} \int_{0}^{\infty} f(x,0) \mathbb{E} \bigl[\xi^{(\infty)} \bigl((\mathbf{0},h'),{\mathcal {P}}_{\rho_x^{(\infty)}} \cup \{ (\mathbf{0},h') \} \bigr) \bigr] \rho_0(x) (h')^{\delta} \,dh' \,dx.\nonumber \end{eqnarray} Using the scaling relations (\ref{RESCALING}), (\ref{RESCALING3}), (\ref{SCALINGLIMIT1}) and (\ref{SCALINGLIMIT}), we see that \begin{eqnarray}\label{SCALINGLIMIT2} &&\xi^{(\infty)} \bigl((\mathbf{0},h'),{\mathcal{P}}_{\rho _x^{(\infty)}} \cup\{ (\mathbf{0} ,h') \} \bigr)\nonumber\\[-8pt]\\[-8pt] &&\qquad\stackrel{\mathcal{D}}{=} \xi^{(\infty)} \bigl((\mathbf{0},[\rho_0(x)]^{\gamma} h'),{\mathcal{P}}_{*} \cup\{ (\mathbf{0},[\rho_0(x)]^{\gamma} h') \} \bigr),\nonumber \end{eqnarray} with $\stackrel{\mathcal{D}}{=}$ standing for equality in law. Theorem \ref{LLN} follows by using (\ref{SCALINGLIMIT2}), changing variables $h'' := [\rho _0(x)]^{\gamma}h'$ in the integral in (\ref{DomConv}) and recalling that $\tau= 1 - \gamma(\delta+ 1).$ \subsection{\texorpdfstring{Proof of Theorem \textup{\protect\ref {VAR}}}{Proof of Theorem 1.2}}\label{VARproof} Fix $x \in A$ and recall from (\ref{RESCALING4}) that ${\mathcal {P}}_{{\lambda}{\rho }}^{({\lambda})}:= {\mathcal{P}}_{{\lambda} {\rho}}^{({\lambda})}[x]$. 
For all ${\lambda} > 0$, $h' \in\mathbb{R}_+$, and $(y',h_y') \in{\lambda}^{\beta}A \times\mathbb{R}_+$, consider the pair correlation function for the re-scaled growth process: \begin{eqnarray}\label{twopt} &&c^{({\lambda})}((\mathbf{0},h'), (y',h_y'))\nonumber\\ &&\qquad:= c_x^{({\lambda})}((\mathbf{0},h'),(y',h_y'))\nonumber\\[-8pt]\\[-8pt] &&\qquad:= \mathbb{E} \bigl[ \xi^{({\lambda})} \bigl((\mathbf{0},h'), {\mathcal{P}}_{{\lambda }{\rho}}^{({\lambda})} \cup (y',h_y') \bigr) \xi^{({\lambda})} \bigl((y',h_y'), {\mathcal {P}}_{{\lambda}{\rho }}^{({\lambda})} \cup(\mathbf{0},h') \bigr) \bigr]\nonumber\\ &&\quad\qquad{}- \mathbb{E}\xi^{({\lambda})} \bigl((\mathbf{0},h'), {\mathcal{P}}_{{\lambda}{\rho}}^{({\lambda})} \bigr) \mathbb {E}\xi^{({\lambda})} \bigl((y',h_y'), {\mathcal{P}}_{{\lambda} {\rho}}^{({\lambda})} \bigr).\nonumber \end{eqnarray} Consider also the pair correlation function for the limit growth process $\xi^{(\infty)}$: \begin{eqnarray*} && c_x^{(\infty)}((\mathbf{0},h'), (y',h_y'))\\ &&\qquad:= \mathbb{E} \bigl[ \xi^{(\infty)} \bigl((\mathbf{0},h'), {\mathcal {P}}_{\rho_x^{(\infty) }} \cup (y',h_y') \bigr) \xi^{(\infty)} \bigl((y',h_y'), {\mathcal{P}}_{\rho_x^{(\infty) }} \cup(\mathbf{0},h') \bigr) \bigr]\\ &&\qquad\quad{}- \mathbb{E} \xi^{(\infty)} \bigl((\mathbf{0},h'), {\mathcal{P}}_{\rho _x^{(\infty) }} \bigr) \mathbb{E} \xi^{(\infty)} \bigl((y',h_y'), {\mathcal{P}}_{\rho_x^{(\infty) }} \bigr). \end{eqnarray*} A second benefit of stabilization, as shown by the next lemma, is that it facilitates convergence of pair correlation functions and thus leads to variance asymptotics. The next lemma is the second-order counterpart to Lemma \ref{L1conv}. \begin{lemm}[(Convergence of two point correlation function)]\label{corlimit} For all $(x,\break h_x):= (x,h) \in A_+$, and $(y',h'_y) \in{\lambda }^{\beta}A \times\mathbb{R}_+$, we have \[ \lim_{{\lambda}\to\infty} c^{({\lambda})}_x((\mathbf{0},h'), (y',h_y')) = c_x^{(\infty)}((\mathbf{0},h'), (y',h_y')).
\] \end{lemm} \begin{pf} In view of Lemma \ref{L1conv}, it will suffice to show \begin{eqnarray}\label{covlimit} &&\quad\qquad\lim_{{\lambda}\to\infty} \mathbb{E} \bigl[ \xi ^{({\lambda })} \bigl((\mathbf{0},h'), {\mathcal{P}} _{{\lambda} {\rho}}^{({\lambda})} \cup(y',h_y') \bigr) \xi^{({\lambda})} \bigl((y',h_y'), {\mathcal{P}}_{{\lambda}{\rho}}^{({\lambda})} \cup(\mathbf {0},h') \bigr) \bigr] \nonumber\\[-8pt]\\[-8pt] &&\qquad\quad\qquad= \mathbb{E} \bigl[ \xi^{(\infty)} \bigl ((\mathbf{0},h'), {\mathcal {P}}_{\rho_x^{(\infty) }} \cup(y',h_y') \bigr) \xi^{(\infty)} \bigl((y',h_y'), {\mathcal{P}}_{\rho_x^{(\infty) }} \cup(\mathbf{0},h') \bigr) \bigr].\nonumber \end{eqnarray} Let $R^{\xi}:= R^{\xi^{({\lambda})}} [(\mathbf{0},h'); {\mathcal {P}}_{{\lambda}{\rho}}^{({\lambda})} \cup (y',h'_y)]$ and let $R_{y'}^{\xi}:= R^{\xi^{({\lambda})}}[(y',h'_y); {\mathcal{P}}_{{\lambda}{\rho}}^{({\lambda})} \cup(\mathbf {0},h')].$ For all $r > 0$, we let $E_r:= \{R_{y'}^{\xi} \leq r, \ R^{\xi} \leq r \}.$ We split the left-hand side of (\ref{covlimit}) as \begin{eqnarray*} &&\mathbb{E} \bigl[ \xi^{({\lambda})}\bigl((\mathbf{0},h'), {\mathcal {P}}_{{\lambda}{\rho}}^{({\lambda})} \cup(y',h_y')\bigr) \xi^{({\lambda})}\bigl((y',h_y'), {\mathcal{P}}_{{\lambda}{\rho }}^{({\lambda})} \cup(\mathbf{0},h')\bigr) \mathbf{1}_{E_r} \bigr ] \\ &&\qquad{} + \mathbb{E} \bigl[ \xi^{({\lambda})}\bigl((\mathbf{0},h'), {\mathcal {P}}_{{\lambda}{\rho}}^{({\lambda})} \cup(y',h_y')\bigr) \xi^{({\lambda})}\bigl((y',h_y'), {\mathcal{P}}_{{\lambda}{\rho }}^{({\lambda})} \cup(\mathbf{0},h')\bigr) \mathbf{1}_{E_r^c} \bigr]. \end{eqnarray*} The second expectation is bounded by $C \exp(-r/C)$ for some $C$ not depending on $x$ by Lemma \ref{StabLemma}(i) and by Cauchy--Schwarz. 
By the definition of the stabilization radius, the first is simply \[ \mathbb{E} \bigl[ \xi_{[r]}^{({\lambda})}\bigl((\mathbf{0},h'), {\mathcal{P}}_{{\lambda}{\rho}}^{({\lambda})} \cup (y',h_y')\bigr) \xi_{[r]}^{({\lambda})}\bigl((y',h_y'), {\mathcal{P}}_{{\lambda }{\rho }}^{({\lambda})} \cup(\mathbf{0} ,h')\bigr)\mathbf{1}_{E_r} \bigr]. \] Again, by Lemma \ref{StabLemma}(i) and by Cauchy--Schwarz, for all $r > 0$, this is within $C \exp(-r/C)$ of \[ \mathbb{E} \bigl[ \xi_{[r]}^{({\lambda})}\bigl((\mathbf{0},h'), {\mathcal{P}}_{{\lambda}{\rho}}^{({\lambda})} \cup (y',h_y')\bigr) \xi_{[r]}^{({\lambda})}\bigl((y',h_y'), {\mathcal{P}}_{{\lambda }{\rho }}^{({\lambda})} \cup(\mathbf{0},h')\bigr) \bigr], \] hence, \begin{eqnarray}\label{DOD1} && \bigl| \mathbb{E} \bigl[ \xi^{({\lambda})}\bigl((\mathbf{0},h'), {\mathcal{P}}_{{\lambda}{\rho}}^{({\lambda})} \cup(y',h_y')\bigr) \xi^{({\lambda})}\bigl((y',h_y'), {\mathcal{P}}_{{\lambda}{\rho }}^{({\lambda})} \cup(\mathbf{0},h')\bigr) \bigr]\nonumber\\ &&\qquad{} - \mathbb{E} \bigl[ \xi_{[r]}^{({\lambda})}\bigl((\mathbf{0},h'), {\mathcal{P}}_{{\lambda}{\rho}}^{({\lambda} )}\cup(y',h_y') \bigr) \xi_{[r]}^{({\lambda})}\bigl((y',h_y'), {\mathcal{P}}_{{\lambda }{\rho }}^{({\lambda})} \cup(\mathbf{0},h')\bigr) \bigr] \bigr|\\ &&\qquad\leq 2 C \exp\biggl(\frac{-r}{C} \biggr)\nonumber \end{eqnarray} uniformly in $x.$ Now, in analogy with (\ref{limit1}), we have \begin{eqnarray}\label{DOD3} &&\qquad\lim_{{\lambda}\to\infty} \mathbb{E} \bigl[ \xi_{[r]}^{({\lambda})}\bigl((\mathbf{0},h'), {\mathcal{P}}_{{\lambda}{\rho}}^{({\lambda})} \cup (y',h_y')\bigr) \xi_{[r]}^{({\lambda})}\bigl((y',h_y'), {\mathcal{P}}_{{\lambda }{\rho }}^{({\lambda})} \cup(\mathbf{0},h')\bigr) \bigr]\nonumber \\[-8pt]\\[-8pt] &&\qquad\qquad= \mathbb{E} \bigl[ \xi_{[r]}^{(\infty)}\bigl((\mathbf{0},h'), {\mathcal {P}}_{\rho_x^{(\infty)}} \cup (y',h_y')\bigr) \xi_{[r]}^{(\infty)}\bigl((y',h_y'), {\mathcal{P}}_{\rho _x^{(\infty)}} \cup(\mathbf{0} ,h')\bigr) 
\bigr].\nonumber \end{eqnarray} By Lemma \ref{StabLemma}(ii), we have for all $r > 0$ \begin{eqnarray}\label{DOD2} && \bigl| \mathbb{E} \bigl[ \xi^{(\infty)}\bigl((\mathbf{0},h'), {\mathcal {P}}_{\rho_x^{(\infty)}} \cup(y',h_y')\bigr) \xi^{(\infty)}\bigl((y',h_y'), {\mathcal{P}}_{\rho_x^{(\infty)}} \cup (\mathbf{0},h')\bigr) \bigr] \nonumber\\ &&\qquad{}- \mathbb{E} \bigl[ \xi_{[r]}^{(\infty)}\bigl((\mathbf{0},h'), {\mathcal {P}}_{\rho_x^{(\infty)}} \cup(y',h_y')\bigr) \xi_{[r]}^{(\infty)}\bigl((y',h_y'), {\mathcal{P}}_{\rho _x^{(\infty)}} \cup(\mathbf{0},h')\bigr) \bigr] \bigr|\\ &&\qquad\leq 2 C \exp\biggl(-\frac{r}{ C} \biggr)\nonumber \end{eqnarray} as in (\ref{DOD1}). Again, note that $C$ does not depend on $x$ since $\rho_0(x)$ is bounded away from zero. Combining (\ref{DOD1}), (\ref{DOD3}) and (\ref{DOD2}) yields \begin{eqnarray*} &&\limsup_{{\lambda}\to\infty} \bigl|\mathbb{E} \bigl[ \xi^{({\lambda})}\bigl((\mathbf{0},h'), {\mathcal{P}}_{{\lambda }{\rho }}^{({\lambda})} \cup(y',h_y')\bigr) \xi^{({\lambda})}\bigl((y',h_y'), {\mathcal{P}}_{{\lambda}{\rho }}^{({\lambda})} \cup(\mathbf{0},h')\bigr) \bigr] \\ &&\qquad{}- \mathbb{E} \bigl[ \xi^{(\infty)}\bigl((\mathbf{0},h'), {\mathcal {P}}_{\rho_x^{(\infty)}} \cup (y',h_y') \bigr) \xi^{(\infty)}\bigl((y',h_y'), {\mathcal{P}}_{\rho_x^{(\infty)}} \cup (\mathbf{0},h')\bigr) \bigr] \bigr| \\ &&\qquad\leq4 C \exp\biggl(-\frac{r}{C} \biggr) \end{eqnarray*} for all $r > 0.$ We conclude the proof of Lemma \ref{corlimit} by letting $r\to\infty.$ \end{pf} Lemma \ref{corlimit} is not enough to establish second-order asymptotics. We will also need that $c_x^{({\lambda})}$ is bounded by an integrable function on $A_+ \times{\lambda}^{\beta}A \times \mathbb{R}_+$, that is, we will need to establish the exponential decay of the correlation function (\ref{twopt}). 
This is done in the following lemma, which combined with Lemma \ref{corlimit}, shows that \begin{equation}\label{JFF} \bigl|c^{(\infty)}_x((\mathbf{0},h'), (y',h_y'))\bigr| \leq C \exp \biggl(- \frac{1 }{C} \max\biggl( \frac{|y'|}{2}, h'_y, h' \biggr) \biggr) \end{equation} and, therefore, $J(f) < \infty$ for all $f \in\mathcal{C}_b(A_+)$. \begin{lemm} \label{corbds} There exists a constant $C$ such that, for all ${\lambda}> 0$, $(x,h_x):= (x,h) \in A_+$, and $(y',h'_y) \in{\lambda}^{\beta}A \times\mathbb{R}_+$, we have \[ \bigl|c^{({\lambda})}_x((\mathbf{0},h'), (y',h_y'))\bigr| \leq C \exp\biggl(- \frac{1}{C} \max\biggl( \frac{|y'|}{2}, h'_y, h' \biggr) \biggr). \] \end{lemm} \begin{pf} Let $r \leq|y'|/2$ and note that, by definition of $\xi _{[r]}^{({\lambda} )}$, we have \begin{eqnarray*} &&\mathbb{E} \bigl[ \xi_{[r]}^{({\lambda})}\bigl((\mathbf{0},h'), {\mathcal{P}}_{{\lambda}{\rho}}^{({\lambda})} \cup (y',h_y')\bigr) \xi_{[r]}^{({\lambda})}\bigl((y',h_y'), {\mathcal{P}}_{{\lambda }{\rho }}^{({\lambda})} \cup(\mathbf{0},h')\bigr) \bigr] \\ &&\qquad= \mathbb{E} \bigl[ \xi_{[r]}^{({\lambda})}\bigl((\mathbf{0},h'), {\mathcal{P}}_{{\lambda}{\rho}}^{({\lambda})} \bigr) \bigr] \mathbb{E} \bigl[ \xi_{[r]}^{({\lambda})}\bigl((y',h_y'), {\mathcal {P}}_{{\lambda}{\rho}}^{({\lambda} )}\bigr) \bigr]. \end{eqnarray*} Recalling (\ref{DOD0}) and (\ref{DOD1}), we see that \[ \bigl|c^{({\lambda})}_x((\mathbf{0},h'), (y',h_y'))\bigr| \leq4C \exp \biggl(-\frac{r}{C} \biggr) \] for all $r \leq|y'|/2.$ In other words, putting $r = |y'|/2$ yields for all $(x,h) \in A_+$ and $(y',h'_y) \in{\lambda}^{\beta}A \times\mathbb{R}_+$ \[ \bigl|c^{({\lambda})}_x((\mathbf{0},h'), (y',h_y'))\bigr| \leq C \exp\biggl(- \frac{ |y'| }{ 2C} \biggr). \] Appealing to Lemma \ref{expbds} shows \[ \bigl|c^{({\lambda})}_x((\mathbf{0},h'), (y',h_y'))\bigr| \leq2 C \exp \biggl(-\frac{1}{C} \max( h'_y, h') \biggr). \] Combining the previous two displays concludes the proof of Lemma \ref{corbds}. 
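Explicitly, the combination rests on the elementary identity $\min(e^{-a}, e^{-b}) = e^{-\max(a,b)}$ for $a,b \geq0$: applying it with $a = |y'|/(2C)$ and $b = \max(h'_y,h')/C$, the last two displays merge into
\[
\bigl|c^{({\lambda})}_x((\mathbf{0},h'), (y',h_y'))\bigr| \leq 2C \exp\biggl(-\frac{1}{C} \max\biggl(\frac{|y'|}{2}, h'_y, h' \biggr) \biggr),
\]
which is the asserted bound after renaming the constant.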
\end{pf} Given Lemmas \ref{corlimit} and \ref{corbds}, we now prove Theorem \ref{VAR} as follows. By the Palm theory for Poisson processes (see, e.g., Theorem 1.6 of \cite{Pe}), we express $\operatorname{Var}[ \langle f, \mu_{{\lambda}\rho}^{\xi } \rangle]$ as \begin{eqnarray}\label{VarEqn} &&{\lambda}\int_{A_+} f^2(\bar{x}) \mathbb{E}[\xi(\bar{x}, {\mathcal {P}}_{{\lambda}{\rho}} )] {\rho}(\bar{x}) \,d\bar{x}\nonumber\\[-8pt]\\[-8pt] &&\qquad{}+ {\lambda}^2 \int_{A_+} \int_{A_+} f(\bar{x})f(\bar{y}) c^{(1)}_x\bigl((\mathbf{0},h_x),(y-x,h_y)\bigr) {\rho}(\bar{x}) {\rho}(\bar {y}) \,d\bar{x} \,d\bar{y},\nonumber \end{eqnarray} where $\bar{x}:= (x,h_x)$ and $\bar{y} := (y,h_y).$ Following the proof of Theorem \ref{LLN} verbatim shows that, after normalization by ${\lambda}^{\tau}$, the first integral converges as ${\lambda}\to\infty$ to \[ \int_{A} \int_0^{\infty} f^2(x,0) \mathbb{E}\bigl[\xi^{(\infty )}((\mathbf{0},h_x'), {\mathcal{P}}_{\rho_x^{(\infty) }} ) \bigr] {\rho}_0(x) (h_x')^{\delta} \,dh_x' \,dx, \] which by the definition of $m^{(\infty)}$ and the scaling relation (\ref{SCALINGLIMIT2}) equals \[ \int_{A} \int_0^{\infty} f^2(x,0) m^{(\infty)}(\mathbf{0},h_x') {\rho}_0^{\tau}(x) (h_x')^{\delta} \,dh_x' \,dx.
\] Making again the usual substitutions $y' = {\lambda}^{\beta}(y - x)$, $ h'_x = {\lambda }^{\gamma} h_x,$ and $h'_y = {\lambda}^{\gamma} h_y$ and recalling ${\rho}(x,h_x)= {\lambda}^{-\gamma\delta} {\rho}^{({\lambda})}(\mathbf{0},h'_x)$, the second integral in (\ref{VarEqn}) becomes \begin{eqnarray*} &&{\lambda}^{2 - 2\gamma- 2\gamma\delta- \beta(d-1)} \int_{A} \int_{{\lambda}^{\beta} A} \int_0^{\infty} \int_0^{\infty} f(x, h'_x{\lambda}^{-\gamma}) f({\lambda}^{-{\beta}} y' + x, h_y' {\lambda}^{-\gamma} )\\ &&\phantom{{\lambda}^{2 - 2\gamma- 2\gamma\delta- \beta(d-1)} \int_{A} \int_{{\lambda}^{\beta} A} \int_0^{\infty} \int_0^{\infty}} {}\times c^{({\lambda})}_x ((\mathbf{0},h'_x), (y',h_y')) \\ &&\phantom{{\lambda}^{2 - 2\gamma- 2\gamma\delta- \beta(d-1)} \int_{A} \int_{{\lambda}^{\beta} A} \int_0^{\infty} \int_0^{\infty}} {}\times {\rho}^{({\lambda})}(\mathbf{0},h'_x) {\rho}^{({\lambda })}(y',h_y') \,dh'_x \,dh_y' \,dy' \,dx. \end{eqnarray*} Recalling from (\ref{twodef}) that $\beta(d - 1) + \gamma(1 + {\delta}) = 1$, we have by definition of $\tau$ [see (\ref{TAUU})] that $2 - 2\gamma- 2\gamma\delta- \beta(d-1) = 1 - \gamma(1+\delta) = \tau$. After normalization by ${\lambda}^{\tau}$, the above integral equals \begin{eqnarray*} &&\int_{A} \int_{{\lambda}^{\beta} A} \int_0^{\infty} \int_0^{\infty} f(x, h'_x{\lambda}^{-\gamma}) f({\lambda}^{-{\beta}} y' + x, h_y' {\lambda}^{-\gamma} )\\ &&\phantom{\int_{A} \int_{{\lambda}^{\beta} A} \int_0^{\infty} \int_0^{\infty}} {}\times g_{{\lambda}} (x,h'_x,y',h_y')\, dh'_x\,dh_y' \,dy' \,dx, \end{eqnarray*} where we put \[ g_{{\lambda}} (x,h'_x,y',h_y'):= c_x^{({\lambda})} ((\mathbf {0},h_x'), (y',h_y')) {\rho}^{({\lambda})}(\mathbf{0},h_x') {\rho}^{({\lambda})}(y',h_y'). \] Clearly, $f(x, h'_x {\lambda}^{-\gamma}) f({\lambda}^{-{\beta}} y' + x, h_y' {\lambda}^{-\gamma} )$ converges to $f^2(x,0)$ as ${\lambda}\to \infty$. 
Lemma \ref{corlimit} implies for all $(x,h'_x,y',h_y') \in A_+ \times{\lambda}^{{\beta}} A \times\mathbb{R}_+$ that the product $g_{{\lambda}} (x,h'_x,y',h_y') (h'_x)^{-\delta} (h_y')^{-\delta}$ converges to \[ c_x^{(\infty)}((\mathbf{0},h'_x),(y',h_y')) {\rho}^2_0(x) \] as ${\lambda}\to\infty$. Since, by Lemma \ref{corbds} and (R2), $g_{{\lambda}} (x,h'_x,y',h_y')$ is dominated in absolute value by the integrable function \[ (x,h_x',y',h_y') \mapsto C' (h'_x)^{\delta} (h'_y)^{\delta} \exp \biggl(-\frac{1}{C} \max\biggl(\frac{|y'| }{2}, h_x',h_y' \biggr) \biggr) \] on $A_+ \times\mathbb{R}^{d-1} \times\mathbb{R}_+$, the dominated convergence theorem combined with relation (\ref{SCALINGLIMIT2}) produces the desired limit (\ref{varlimit}). \subsection{\texorpdfstring{Proof of Theorem \textup{\protect\ref {CLT}}}{Proof of Theorem 1.3}}\label{CLTsubsection} Given Theorems \ref{LLN} and \ref{VAR}, one may prove Theorem \ref{CLT} either by the method of cumulants \cite{BY2} or by the Stein method \cite{PY5}. The first approach shows that the Fourier transform of ${\lambda}^{-\tau/2} \langle f,\bar{\mu}^{\xi}_{{\lambda}\rho}\rangle$, namely, \[ \mathbb{E}\exp[ i {\lambda}^{-\tau/2} \langle f,\bar {\mu}^{\xi }_{{\lambda}\rho}\rangle ], \] converges as ${\lambda}\to\infty$ to the Fourier transform of a normal mean zero random variable with variance $ \sigma_f^2 := I(f^2) + J(f^2)$. Even though we use a formally different version of stabilization, this is accomplished by following \cite{BY2} nearly verbatim. Indeed, recall that Lemma \ref{corbds} shows the exponential decay of the two point correlation function $c^{({\lambda})}_x((\mathbf{0},h'), (y',h_y'))$. In a similar way we may establish the exponential decay of $k$-point correlation functions, and, more generally, that the $k$-point correlation functions cluster exponentially, as shown in Lemma 5.2 of \cite{BY2}.
In this way we show (as in Lemma 5.3 of \cite{BY2}) that, for all $k = 3,4,\ldots$ and $f \in\mathcal {C}_b(A_+)$, \begin{equation}\label{cumlimit} \lim_{{\lambda}\to\infty} {\lambda}^{-\tau k/2} \langle f^{\otimes k}, c_{\lambda}^k \rangle= 0, \end{equation} where $c_{\lambda}^k$ denotes the $k$th cumulant figuring in the logarithm of the Laplace transform (their existence follows from Lemma \ref{expbds}). This consequently shows that ${\lambda}^{-\tau/2} \langle f,\bar{\mu}^{\xi}_{{\lambda}\rho}\rangle$ converges to a mean zero normal random variable with variance $\sigma_f^2$. The convergence of the finite-dimensional distributions follows from the Cram\'er--Wold device and is standard (see, e.g., page 251 of \cite{BY2} or \cite{Pe1}). Alternatively, we may also use the Stein method \cite{Pe1,PY5}. This is a bit simpler and has the advantage of yielding rates of convergence when $\sigma_f^2 > 0$, as would be the case when $\delta= 0$ and $\alpha= 2$ (Lemma 7 of \cite{Re4} combined with Section \ref{ApplSection} below) or when $\alpha= 1$ (Theorem 2.2 of \cite{BY4}). (When $\sigma_f^2 = 0$, then ${\lambda}^{-\tau /2}\langle f, \bar{\mu}_{{\lambda}\rho}^{\xi} \rangle$ converges to a unit point mass.) Our proof is based closely on \cite{PY5}, which uses a formally different version of stabilization. For simplicity, we assume $A = [0,1]^{d-1}$. Recalling that $\bar{x} := (x, h_x)$, we have \[ \langle f, \mu_{{\lambda}\rho}^{\xi} \rangle= \sum_{ \bar{x} \in {\mathcal{P}}_{{\lambda}{\rho}}} \xi( \bar{x}, {\mathcal {P}}_{{\lambda}{\rho}}) f (\bar{x}) = \sum_{ \bar{x} \in{\mathcal{P}}_{{\lambda}{\rho}}} \xi^{({\lambda})} \bigl( (\mathbf{0}, h'_x), {\mathcal{P}}_{{\lambda} {\rho}}^{({\lambda})}[x]\bigr) f ((x,h_x' {\lambda}^{-\gamma}) ).
\] For all $L > 0$, let \begin{eqnarray*} T_{\lambda}&:=& T_{\lambda}(L):= \sum_{ \bar{x} \in{\mathcal {P}}_{{\lambda}{\rho}} \cap ([0,1]^{d-1} \times[0,L {\lambda}^{-\gamma} \log{\lambda}]) } \xi ^{({\lambda})} \bigl( (\mathbf{0}, h'_x), {\mathcal{P}}_{{\lambda}{\rho}}^{({\lambda })}[x]\bigr) f ((x,h_x' {\lambda}^{-\gamma}) ) \\ &\hspace*{4pt}=& \sum_{ \bar{x} \in{\mathcal{P}}_{{\lambda}{\rho }}: \ h_x \leq L {\lambda}^{-\gamma} \log {\lambda}} \xi^{({\lambda})} \bigl( (\mathbf{0}, h'_x), {\mathcal {P}}_{{\lambda}{\rho}}^{({\lambda})}[x]\bigr) f ((x,h_x' {\lambda}^{-\gamma}) ). \end{eqnarray*} By Lemma 3.2, given arbitrarily large ${\kappa}> 0$, if $L$ is large enough, then $\langle f, \mu_{{\lambda}\rho}^{\xi} \rangle$ and $T_{\lambda}$ coincide except on a set with probability $O({\lambda}^{-{\kappa}})$ in ${\lambda}$. Thus, $T_{\lambda}$ has the same asymptotic distribution as $\langle f, \mu_{{\lambda}\rho}^{\xi} \rangle$ and it suffices to find a rate of convergence to the standard normal for $(T_{\lambda}- \mathbb{E}T_{\lambda})/\sqrt {\operatorname{Var}T_{{\lambda}}}$. Subdivide $[0,1]^{d-1}$ into $V({\lambda}):= {\lambda}^{{\beta}(d-1)} ({\rho}_{\lambda})^{-(d-1)}$ sub-cubes $C_i^{{\lambda}}$ of edge length ${\lambda}^{-\beta} {\rho}_{\lambda}$ and of volume ${\lambda }^{-{\beta}(d-1)} ({\rho}_{\lambda})^{d-1}$, where $\rho_{\lambda}:= M \log{\lambda }$ for some large $M$, exactly as in Section 4 of \cite{PY5}. Enumerate ${\mathcal{P}_{{\lambda}\rho} }\cap(C^{{\lambda}}_i \times [0, L {\lambda} ^{-\gamma} \log{\lambda}])$ by $\{ \bar{X}_{i,j} \}_{j=1}^{N_i}$ where $\bar{X}_{i,j}:= (x_{ij}, h_{ij})$. Re-write $T_{\lambda}$ as \[ T_{\lambda}= \sum_{i=1}^{V({\lambda})} \sum_{j = 1}^{N_i} \xi ^{({\lambda})} \bigl( (\mathbf{0}, h'_{ij}), {\mathcal{P}}_{{\lambda}{\rho}}^{({\lambda })}[x_{ij}]\bigr) f ((x_{ij},h'_{ij} {\lambda}^{-\gamma} ) ). \] This is the analog of $T_{\lambda}$ in \cite{PY5}.
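To sketch why the truncation at height $L{\lambda}^{-\gamma}\log{\lambda}$ is harmless (with $C$ the constant of Lemma \ref{expbds}): the points discarded from $T_{\lambda}$ are those $\bar{x} = (x,h_x)$ with $h'_x = {\lambda}^{\gamma} h_x > L\log{\lambda}$ and, by the Campbell formula together with Lemma \ref{expbds} and (R2), their expected total contribution is at most
\[
C'{\lambda}^{\tau} \int_{L\log{\lambda}}^{\infty} (h')^{\delta} \exp\biggl(-\frac{h'}{C}\biggr) \,dh' = O\bigl({\lambda}^{\tau - L/C} (\log{\lambda})^{\delta}\bigr),
\]
which is $O({\lambda}^{-{\kappa}})$ as soon as $L > C({\kappa}+ \tau)$; for $\{0,1\}$-valued scores, Markov's inequality turns this into the stated probability bound.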
For any random variable $X$ and any $p > 0$, let $\Vert X\Vert_p:= (\mathbb{E}[|X|^p ])^{1/p}.$ For all $1 \leq i \leq V({\lambda})$, we have $\sum_{j = 1}^{N_i} \xi^{({\lambda})} ( (\mathbf{0}, h'_{ij}), {\mathcal{P}}_{{\lambda }{\rho}}^{({\lambda})}[x_{ij}]) \leq N_i$, where $N_i$ is Poisson with mean \[ {\lambda}\int_{C^{{\lambda}}_i \times[0,L{\lambda}^{-\gamma} \log {\lambda}]} \rho(u) \,du = O([\log{\lambda}]^{1+\delta} ({\rho}_{\lambda})^{d-1}). \] It follows by the boundedness of $f$ that \[ \Biggl\Vert\sum_{j = 1}^{N_i} \xi^{({\lambda})} \bigl( (\mathbf {0}, h'_{ij}), {\mathcal{P}}_{{\lambda}{\rho}}^{({\lambda})}[x_{ij}]\bigr) f((x_{ij},h'_{ij} {\lambda}^{-\gamma} )) \Biggr\Vert_3 \leq C \Vert f\Vert L^{1+\delta} (\log{\lambda})^{1+\delta} ({\rho }_{\lambda})^{d-1}, \] where $\Vert f\Vert$ denotes the essential supremum of $f$. This is the analog of Lemma 4.3 in \cite{PY5} (putting $q =3$ there) with an extra logarithmic factor. For all $1 \leq i \leq V({\lambda})$ and $j = 1,2,\ldots,$ let $R_{i,j}$ denote the radius of stabilization for $\xi^{({\lambda})}$ at $\bar{X}_{i,j}$ for ${\mathcal{P}}_{{\lambda}{\rho}}^{({\lambda})}$ if $1 \leq j \leq N_i$ and let $R_{i,j} $ be zero otherwise. As in \cite{PY5}, put $E_{i}:= \bigcap_{j=1}^{\infty} \{R_{i,j} \leq\rho_{\lambda}\}$ and let $E_{\lambda}:= \bigcap _{i=1}^{V({\lambda})} E_i$. Then by Lemma~\ref{StabLemma}(i), we have $P[E_{\lambda}^c] \leq {\lambda }^{-\kappa}$ for $\kappa$ arbitrarily large if $M$ is large enough. This is the analog of (4.11) of \cite{PY5}. Next, recalling ${\rho}_{\lambda}= M \log{\lambda}$, we define the analog of $T'_{\lambda}$ in \cite{PY5}: \[ T'_{\lambda}:= \sum_{i=1}^{V({\lambda})} \sum_{j = 1}^{N_i} \xi^{({\lambda})}_{[{\rho}_{{\lambda}}]} \bigl( (\mathbf{0}, h'_{ij}), {\mathcal{P}}_{{\lambda} {\rho}}^{({\lambda})}[x_{ij}]\bigr) f ((x_{ij},h'_{ij} {\lambda }^{-\gamma} ) ).
\] Then we define, for all $1 \leq i \leq V({\lambda})$, \[ S_i:= S_{Q_i}:= (\operatorname{Var}T'_{\lambda})^{-1/2} \sum_{j = 1}^{N_i} \xi^{({\lambda})}_{[{\rho}_{{\lambda}}]} \bigl( (\mathbf{0}, h'_{ij}), {\mathcal{P}}_{{\lambda}{\rho}}^{({\lambda })}[x_{ij}]\bigr) \ f ((x_{ij},h'_{ij} {\lambda}^{-\gamma} ) ). \] We define $S_{{\lambda}}:= \sum_{i=1}^{V({\lambda})} (S_i - \mathbb {E}S_i)$, noting that it is the analog of $S$ in \cite{PY5}. Notice that $T'_{\lambda}$ is a close approximation to $T_{\lambda}$ and that, by definition of $E_i, 1 \leq i \leq V({\lambda})$, it has a high amount of independence between summands. In fact, by the independence property of Poisson point processes, it follows that $S_i$ and $S_k$ are independent whenever $d(C_i^{{\lambda}}, C_k^{{\lambda}}) > 2 {\lambda}^{-\beta} {\rho}_{{\lambda}}.$ Next we define a graph $G_{\lambda}:= (\mathcal{V}_{\lambda}, \mathcal{E}_{\lambda})$ as follows. The set $\mathcal{V}_{\lambda}$ consists of the sub-cubes $C^{\lambda}_1,\ldots,C^{\lambda}_{V({\lambda})}$ and the edges $(C^{\lambda}_i,C^{\lambda}_j)$ belong to $\mathcal{E}_{\lambda}$ if $d(C^{\lambda}_i, C^{\lambda}_j) \leq2 {\lambda}^{-\beta} {\rho}_{\lambda}$. Since $S_i$ and $S_k$ are independent whenever $d(C_i^{{\lambda}}, C_k^{{\lambda}}) > 2 {\lambda}^{-\beta } {\rho}_{{\lambda}}$, it follows that $G_{\lambda}$ is a dependency graph for $\{S_i\}_{i=1}^{V({\lambda})}$. Now proceed exactly as in \cite{PY5}, noting that: \begin{longlist} \item$V({\lambda}) = {\lambda}^{{\beta}(d-1)} ({\rho}_{\lambda })^{-(d-1)}$, \item the maximum degree of $G_{\lambda}$ is bounded by $5^d$, \item for all $1 \leq i \leq V({\lambda})$, we have $\Vert S_i\Vert_3 \leq K (\operatorname{Var}(T'_{\lambda}))^{-1/2} (\log{\lambda})^{1+\delta }\times({\rho}_{\lambda})^{d-1} =: \theta[{\lambda}]$. \end{longlist} These bounds correspond to the analogous bounds (i), (ii) and (iii) on pages 54--55 of \cite{PY5}. 
Moreover, provided $\sigma_f^2 > 0$, then the counterpart of (v) of \cite{PY5} holds, namely, \[ \operatorname{Var}[T'_{\lambda}] = \Theta(\operatorname {Var}[T_{\lambda}]) = \Theta({\lambda}^{\tau}). \] Putting $q = 3$ in (4.1) and (4.18) of \cite{PY5} gives a rate of convergence for both $S_{\lambda}$ and $(T_{\lambda}- \mathbb{E}T_{\lambda})/\sqrt{\operatorname {Var}T_{{\lambda}}}$ to the standard normal. This rate is \[ O( V({\lambda}) \theta[{\lambda}]^3) = O\bigl( {\lambda}^{{\beta}(d-1)} ({\rho}_{\lambda} )^{-(d-1)} ( {\lambda}^{\tau})^{-3/2} (\log{\lambda})^{3(1+\delta)} \rho _{\lambda}^{3(d-1)} \bigr). \] Recalling that $\tau= \beta(d-1)$, we rewrite this as \begin{equation} \label{rates} O\bigl({\lambda}^{-\tau/2} (\log{\lambda})^{3(1+\delta) + 2(d-1)} \bigr). \end{equation} This completes the proof of Theorem \ref{CLT}.\quad\qed \section{Proofs of applications}\label{ApplSection} The purpose of the present section is to derive Theorems 2.1 and 2.2 from our general theorems of Section \ref{GenRes}. \begin{pf*}{Proof of Theorem \ref{convexhullthm}} To derive Theorem \ref{convexhullthm} from our general theory, we translate the convex hull problem into the language of $\psi$-growth processes with overlap. To this end, recall first that for a compact convex body $C \subseteq\mathbb{R}^d$ we define its support function $h_C \dvtx S_{d-1} \to\mathbb{R}$ by \[ h_C(u) := \sup_{\bar{x} \in C} \langle\bar{x}, u \rangle ,\qquad u \in S_{d-1}, \] with now $\langle\cdot, \cdot\rangle$ standing for the usual scalar product in $\mathbb{R}^d$; see Section 1.7 in \cite{SCHN}.
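For later use we also recall from Section 1.7 in \cite{SCHN} that a compact convex body is recovered from its support function as an intersection of half-spaces,
\[
C = \bigcap_{u \in S_{d-1}} \bigl\{ \bar{z} \in\mathbb{R}^d \dvtx \langle\bar{z}, u \rangle\leq h_C(u) \bigr\};
\]
for instance, $h_{B_d} \equiv1$ and the intersection above then returns $B_d$ itself.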
An easily verified and yet crucial feature of the support functional $h_{\cdot}(\cdot)$ is that \begin{equation}\label{SuppFnct} h_{\operatorname{conv}\{ \bar{x}_1,\ldots,\bar{x}_k \}}(u) = \max _{1 \leq i \leq k} h_{\{ \bar{x}_i \}}(u),\qquad u \in S_{d-1}, \end{equation} for each collection $\{ \bar{x}_1,\ldots, \bar{x}_k \}$ of points in $\mathbb{R}^d$; indeed, the supremum of the linear functional $\langle\cdot, u \rangle$ over the convex hull is attained at one of the generating points. Moreover, by definition, it is clear that, for all $u \in S_{d-1}$, we have $h_{\{ \bar{x} \}}(u) = \langle\bar{x}, u \rangle$. This leads to the following way of describing $\mathcal{V}({\mathcal{P}_{{\lambda}\rho} })$ considered in Theorem~\ref {convexhullthm}. For a particular realization $\{ \bar{x}_1,\ldots, \bar{x}_k \}$ of ${\mathcal{P}_{{\lambda}\rho} }$ in $B_d$, we consider the collection $H[\bar{x}_1],\ldots,H[\bar {x}_k]$ of \textit{support epigraphs} given by \begin{equation}\label{KDef} H[\bar{x}] := \bigl\{ (y,h_y) \in S_{d-1} \times\mathbb{R}_+\dvtx h_y \geq 1 - h_{\{ \bar{x} \}}(y) \bigr\}, \end{equation} where $h_y$ stands for the distance between $\bar y$ and the boundary $S_{d-1} = \partial B_d.$ A~compact convex body is uniquely determined by its support functional (cf. Section 1.7 in \cite{SCHN}), and in view of (\ref{SuppFnct}), the set $\operatorname{conv}(\{ \bar {x}_1,\ldots ,\bar{x}_k \})$ is in one-to-one correspondence with the union $\bigcup_{i=1}^k H[\bar{x}_i].$ Further, the number of vertices in the convex hull is easily seen to coincide with the number of those $\bar{x}_i$, $ i = 1,\ldots, k,$ for which $H[\bar {x}_i]$ is not completely contained in the union $\bigcup_{j \neq i} H[\bar{x}_j].$ Next we shall also write $r_y := 1 - h_y$ for the distance between $\bar y$ and the origin of $\mathbb{R}^d$.
Note now that the intensity measure $\rho(\bar{x})\, d\bar{x}$, $\bar{x} \in B_d,$ coincides with $\rho((x,r)) r^{d-1}\, dr\, dx = \rho((x,r)) (1-h)^{(d-1)}\, dh\, dx$, where $ \bar{x}:= (x,r),$ with $r \in[0,1]$ denoting the distance between $\bar{x}$ and the origin of $\mathbb{R}^d$, with $h:=1-r$ and with $x \in S_{d-1}$ being the radial projection of $\bar{x}$ onto $\partial B_d = S_{d-1}.$ Observe also that the support epigraph $ H[\bar{x}]$ as given in (\ref{KDef}) can be represented by \[ H[(x,r)] = \{ (y,h_y) \in S_{d-1} \times\mathbb{R}_+ \dvtx h_y \geq1 - r \cos (\mathrm{dist}_{S_{d-1}}(x,y)) \} \] with $\mathrm{dist}_{S_{d-1}}(x,y) := \cos^{-1} \langle x, y \rangle$ denoting the geodesic distance in $S_{d-1}$ between $x$ and $y.$ Now put \[ \psi(l) := 1 - \cos(l). \] Writing the inequality $h_y \!\geq1 - r \cos(\mathrm{dist}_{S_{d-1}}(x,y))$ as $h_y \geq1 - r +\break r\psi(\mathrm{dist}_{S_{d-1}}(x, y))$, we have \begin{equation}\label{KRepr} H[(x,r)] = \{ (y,h_y) \in S_{d-1} \times\mathbb{R}_+ \dvtx h_y \geq h + r \psi (\mathrm{dist}_{S_{d-1}}(x,y)) \}, \end{equation} in other words, the support epigraphs are remarkably similar to the upward cones (\ref{upcone}) described at the outset. The above observations naturally suggest \textit{identifying the cardinality of the studied set $\mathcal{V}({\mathcal{P}_{{\lambda}\rho} })$ with the number of extreme points in the $r\psi$-growth process with overlap in the sense of Section \textup{\ref{GenRes}} with the underlying point density} $\rho((x,r)) r^{d-1} = \rho((x,r)) (1-h)^{d-1}.$ Likewise, the vertex empirical measure $\mu_{{\lambda}\rho}$ in (\ref {vepm}) corresponds to the empirical measure $\mu^{\xi}_{{\lambda}\rho}$, $\xi:= \xi (\psi;\cdot)$; see (\ref{XiPsi}). 
This identification is valid modulo the following issues though: \begin{longlist} \item[(1)] the ``spatial'' coordinate $x$ of a point $\bar{x}:= (x,r) \in B_d$ falls into $S_{d-1}$ rather than into a subset $A$ of $\mathbb{R}^{d-1},$ as required in Section \ref{GenRes}, \item[(2)] $\psi$ as given above is monotone only in a neighborhood of $0,$ and moreover, we do not have $\lim_{l\to\infty} \psi(l) = \infty,$ which violates $(\Psi1)$, \item[(3)] the support epigraph $H[(x,r)]$ coincides with the $(x,h)$-shifted\break\mbox{$\psi$-epigraph} $K[x,h]$ given by (\ref{upcone}) only when $r = 1$ and, hence, only when $h=0$; in general, for $0 \leq r \leq1$, the set $H[(x,r)]$ is an $(x,h)$-shifted $r \psi$-epigraph. \end{longlist} We claim, however, that the above three restrictions can be neglected in the asymptotic regime ${\lambda}\to\infty$, thus rendering the theory of Section \ref{GenRes} applicable. Indeed, first note that the sphere $S_{d-1}$, unlike the boundary of a general smooth convex body, has a spatially homogeneous structure and so the behavior of $\psi$ is independent of $x$, exactly as in Section \ref{GenRes}. Moreover, the sphere $S_{d-1}$, being a smooth manifold, has a local geometry coinciding with that of $\mathbb{R}^{d-1}$, which takes care of issue (1). Concerning issues (2) and (3), for each $r \in(0,1)$, the convex hull $\operatorname{conv}({\mathcal{P}_{{\lambda}\rho} })$ coincides with $\operatorname{conv}({\mathcal{P}_{{\lambda}\rho} }\cap (B_d \setminus B_d(0,r)))$ with overwhelming probability, that is, the probability of the complement event goes to zero exponentially fast in ${\lambda}$; see the discussion in \cite{KUE} and the references therein. This allows us to focus on the geometry of $\operatorname{conv}({\mathcal{P}_{{\lambda}\rho} })$ in a thin shell $B_d \setminus B_d(0,r)$ within a distance $1-r$ from the boundary $S_{d-1}.$ Consequently, only the behavior of $\psi$ in a neighborhood of $0$ matters. 
Recalling that the standard re-scaling of Section \ref{ScaRe} involves scaling in the spatial directions by ${\lambda}^{{\beta}}$, it follows that for a given $\bar{x}:= (x,r)$ and support epigraph $H[\bar{x}]$, the contribution of points distant from $x$ by more than $O({\lambda}^{-\beta})$ is negligible in view of the argument in Lemma \ref{StabLemma}(i) and no distortions from the local Euclidean geometry have to be taken into account in the limit under this re-scaling. Likewise, we only have to control the geometry of $H[\bar{x}]$, $\bar{x} := (x,r),$ for $r$ arbitrarily close to~$1$. This allows us to rewrite the proofs of Theorems \ref{LLN}--\ref{CLT} for the thus modified $r$-dependent $\psi.$ Indeed, the stabilization Lemma \ref{StabLemma}, as well as Lemma \ref{expbds}, requires no modification in its proof, and neither do Lemmas 3.4 and 3.5. Consequently, the arguments leading to the central limit theorem in Section \ref{CLTsubsection} do not require modification either. In this context we note that the proof of Lemma \ref{StabLemma} would break down if the sphere $S_{d-1}$ were replaced by a nonconvex set allowing for long-range dependencies between extreme points. It only remains to show that the limit arguments in Sections \ref{VARproof} and \ref{LLNproof} remain valid for the modified $\psi.$ To see that this is indeed the case, we note that the arguments rely on two main ingredients: on stabilization, which holds unchanged as stated above, and on the re-scaling relations discussed in Section \ref{ScaRe}. However, it is easily seen that the re-scaling relations and their proofs can be readily rewritten for the modified $\psi$, the only essential modification being to add one extra argument ($h := 1-r$) to the $\psi$-function, which anyway vanishes in the scaling limit of Section \ref{ScaRe} with $h = 1-r$ tending to $0$ as discussed above (whereas the contribution coming from points with $h$ bounded away from $0$ is negligible in view of Lemma \ref{expbds}).
This discussion takes care of issues (2) and (3) above. Thus, we can now conclude that the considered convex hull process falls into the range of applicability of the general theory of Section \ref{GenRes}, with $\alpha= 2$ in $(\Psi2)$ and $\delta$ in (R2) coinciding with that in the statement of Theorem \ref{convexhullthm}. We thus obtain the required Theorem \ref{convexhullthm} as a consequence of the general Theorems \ref{LLN}--\ref{CLT}. The rate of convergence follows from (\ref{rates}) by putting $\delta= 0$ and $\alpha= 2$. \end{pf*} \begin{pf*}{Proofs of Theorems \ref{maximalthm} and \ref{maximalthmBin}} Theorem \ref{maximalthm} follows directly from the general theory in Section \ref{GenRes} (Theorems \ref{LLN}--\ref{CLT} with $\alpha\in (0,1]$). The rate (\ref{maxptrate}) follows from (\ref{rates}) by putting $\delta= 0$ and $\alpha= 1$. We thus focus attention on establishing Theorem \ref{maximalthmBin}. The first lemma yields (\ref{explimmaxBin}). \noqed \end{pf*} \begin{lemm}\label{lem4.1} For all $f \in\mathcal{C}_b(A_+)$, we have \begin{equation} \label{differ} |\mathbb{E}[\langle f, {\nu}_n^{\xi} \rangle] - \mathbb{E}[\langle f, \mu_{n \rho}^{\xi} \rangle] | = O(n^{-{\tau}'}). \end{equation} \end{lemm} \begin{pf} For all $\bar w \in A_+$, let $p(\bar w):= \int_{K^{\downarrow} [\bar w]} {\rho}(u) \,du,$ where $ K^{\downarrow}[\bar{w}]$ is as in (\ref{downarrow}) with $\psi(l) = l^{\alpha}.$ Note that in our current setting for all $w \in A_+$ we have $p(w) \in[0,1]$ since ${\rho}$ is a probability density. Also, note that $\psi^{(n)} \equiv\psi$ and $K^{(n)} \equiv K$ with $K^{(n)} := \{ (y^{(n)},h_y^{(n)})\dvtx(y,h_y) \in K \},$ that is, the self-similarity under the re-scaling is immediate rather than emerging as $n \to\infty.$ For all $s \in[0,1]$ and $f \in\mathcal{C}_b(A_+)$, let $B_f(s):= \int_{p(\bar w) \leq s} f(\bar w) {\rho}(\bar w)\, d \bar w$.
Recalling that for $\alpha\in(0,1]$ the \mbox{$\psi$-extremality} of a point $w$ in a given sample is equivalent to having no other sample points in $K^{\downarrow}[w]$ (see the discussion at the beginning of Section \ref{NMAX}), we have \begin{eqnarray*} \mathbb{E}[\langle f,{\nu}_n^{\xi} \rangle] &=& n \int_{A_+} \bigl (1 - p(w)\bigr)^{n-1} f(w) {\rho}(w)\, dw \\ &=& n \int_0^1 \int_{p(w) = s} (1 - s)^{n-1}f(w)\rho(w) \,dw \,ds = n \int_0^1 (1 - s)^{n - 1}\, dB_f(s) \end{eqnarray*} by Fubini's theorem. Similarly, \begin{equation} \label{tauber} \mathbb{E}[\langle f, \mu_{n \rho}^{\xi} \rangle] = n \int_0^1 e^{-ns} dB_f(s) \sim C_f n^{\tau}, \end{equation} where the asymptotics are given by Theorem \ref{LLN}. Since $B_f$ is monotone nondecreasing, Karamata's Tauberian theorem (e.g., Theorem 2.3 in \cite{Se}) gives $B_f(s) \sim C_f s^{{\tau}'}$ as $s \to0^+$. Notice that \begin{eqnarray*} | \mathbb{E}[\langle f, \mu_{n \rho}^{\xi} \rangle] -\mathbb {E}[\langle f, {\nu}_n^{\xi} \rangle] | &=& n \int_0^1 \bigl(e^{-ns} - (1 - s)^{n -1}\bigr)\, dB_f(s) \\ &\leq& n \int_0^1 \bigl(e^{-ns} - e^{n \ln(1 - s)}\bigr)\, dB_f(s) \\ &\leq& Cn^2 \int_0^1 e^{-ns} s^2 \,dB_f(s) \\ &=& Cn^2 \int_0^{1/n} e^{-ns} s^2\, dB_f(s) + Cn^2 \int_{1/n}^1 e^{-ns} s^2 \,dB_f(s). \end{eqnarray*} The first integral behaves like $Cn^{-{\tau}'}$ since $B_f(s) \sim C_f s^{{\tau}'}$, whereas the second behaves like ${C\over n} \int_1^n u^2 e^{-u} dB_f(u/n) \leq C/n$, since $B_f$ is bounded by $B_f(1).$ This gives (\ref{differ}). \end{pf} We now establish the remainder of Theorem \ref{maximalthmBin}. Recall $\bar{u}' := (u',h'_u)$. For all ${\lambda}> 0$, define \[ A'({\lambda}) := \biggl\{ \bar{y}' \in{\lambda}^\beta A \times \mathbb{R}_+ \dvtx\int_{ K^{\downarrow} [\bar{y}']} \rho^{({\lambda})} (u) \,du \leq C \log {\lambda} \biggr\}. \] Let $A({\lambda}):= \{ \bar{y} \in A_+\dvtx \bar{y}' \in A'({\lambda}) \}$ and put $a_{\lambda}:=\int_{A({\lambda})} \rho(w) \,dw$.
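The Binomial--Poisson expectation comparison of Lemma \ref{lem4.1} can be illustrated numerically. In the sketch below (illustrative only; the choice $B_f(s) = \sqrt{s}$, i.e. ${\tau}' = 1/2$, and the substitution $s = t^2$, which removes the singularity of $dB_f$ at $0$, are our own assumptions), both $n \int_0^1 (1-s)^{n-1}\, dB_f(s)$ and $n \int_0^1 e^{-ns}\, dB_f(s)$ grow like $c\sqrt{n}$, while their difference decays like $n^{-{\tau}'}$, in line with (\ref{differ}).

```python
import math

STEPS = 20_000  # midpoint-rule resolution

def e_binomial(n):
    # n * Int_0^1 (1-s)^(n-1) dB_f(s) with B_f(s) = sqrt(s);
    # substituting s = t^2 turns it into n * Int_0^1 (1 - t^2)^(n-1) dt.
    total = sum((1.0 - ((k + 0.5) / STEPS) ** 2) ** (n - 1) for k in range(STEPS))
    return n * total / STEPS

def e_poisson(n):
    # n * Int_0^1 exp(-n*s) dB_f(s); substituting s = t^2 turns it into
    # n * Int_0^1 exp(-n * t^2) dt.
    total = sum(math.exp(-n * ((k + 0.5) / STEPS) ** 2) for k in range(STEPS))
    return n * total / STEPS

for n in (25, 100, 400):
    eb, ep = e_binomial(n), e_poisson(n)
    print(n, round(eb, 4), round(ep, 4), round(abs(eb - ep), 4))
```

Quadrupling $n$ roughly doubles both expectations while roughly halving their difference, which is the $n^{-1/2} = n^{-{\tau}'}$ decay asserted by the lemma.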
Note that by Lemma \ref{expbds} the probability that a sample point from ${\bar{\mathcal{X}_n} } := \{ X_i \}_{i=1}^n$ in $A_+ \setminus A({\lambda})$ is $\psi$-extremal is at most \begin{equation}\label{P25} C \exp\biggl(- \frac{ n \log{\lambda}}{C {\lambda}} \biggr) \end{equation} and the same holds for $\bar{\mathcal{X}}_n$ replaced by the Poisson sample with intensity $n\rho.$ Indeed, although Lemma~\ref{expbds} was originally established for Poisson samples, it is easily seen that the same proof works also for binomial samples, as it essentially relies on exponentially decaying upper bounds for probabilities of certain sets in $A_+$ being devoid of points of the underlying point process. Thus, the $\psi$-extremal points are predominantly concentrated in $A({\lambda})$, a fact which we will use to show (\ref{varlimmaxBin}). First we find growth bounds for $a_{\lambda}$. \begin{lemm}\label{LMA} We have $a_{\lambda}\leq C ( \log{\lambda})^{ \alpha( 1 + {\delta})/ (\alpha+ d - 1) } {\lambda}^{ -\alpha(1 + {\delta})/(d - 1 + \alpha(1 + \delta)) }$. \end{lemm} \begin{pf} If $M({\lambda}):= \sup\{h_y\dvtx (y,h_y) \in A({\lambda})\}$, then note that $a_{\lambda}$ grows like $\int_0^{M({\lambda})} h_y^{\delta}\, dh_y = C(M({\lambda}))^{1 + {\delta}}$. We now bound $M({\lambda})$. If $\bar{y}':=(y',h_y') \in A'({\lambda})$, then, by (\ref {volinvert}), we have ${h'}_y^{(\alpha+ d - 1)/\alpha} \leq C \log{\lambda}.$ Since $h'_y = {\lambda}^{\gamma} h_y$ and since $\gamma= \beta \alpha$, it follows that \[ h_y^{(\alpha+ d - 1)/\alpha} {\lambda}^{ \beta(\alpha+ d - 1) } \leq C \log{\lambda}. \] Since $\gamma(d-1)/\alpha= \tau$, we have \[ h_y^{(\alpha+ d - 1)/ \alpha} {\lambda}^{ \gamma+ \tau} \leq C \log{\lambda}\quad\mbox{or} \quad h_y \leq( \log{\lambda})^{\alpha/(\alpha+ d - 1) } {\lambda}^{ -\alpha( \gamma+ \tau)/(\alpha+ d - 1) }, \] that is, \[ M({\lambda}) \leq( \log{\lambda})^{\alpha/(\alpha+ d - 1)} {\lambda}^{ {-\alpha( \gamma+ \tau)}/(\alpha+ d - 1) } . 
\] Recall that $\gamma+ \tau= (\alpha+ d - 1)/(d-1 + \alpha(1 + \delta))$ to get the result. \end{pf} The next lemma yields (\ref{varlimmaxBin}). The proof borrows heavily from \cite{BY4} and, for the sake of completeness, we provide the details. \begin{lemm}\label{varlimB} For all $f \in\mathcal{C}_b(A_+)$, we have $ \lim_{n \to\infty} n^{-\tau} \operatorname{Var}[\langle f, {\nu}_n^{\xi} \rangle] = \lim_{n \to\infty} n^{-\tau} \operatorname{Var}[\langle f, \mu_{n \rho}^{\xi} \rangle].$ \end{lemm} \begin{pf} Recall $\bar{\mathcal{X}_n}:= \{X_i\}_{i=1}^n$. Let $N_n:=\mbox{card} \{ {\bar{\mathcal{X}_n} } \cap A(n) \}$ and $N'_n:=\mbox{card} \{ {\mathcal{P}}_{n{\rho}} \cap A(n) \}$. For all $r = 1,2,\ldots,$ denote by $e(r):= e_f(r)$ the expected value of the functional $\langle f\cdot \mathbf{1}(A(n)), {\nu}_n^{\xi} \rangle$ \textit{conditioned on $\{N_n=r\}$}, and by $v(r)$ the variance of this functional conditioned on $\{N_n = r\}$; conditioned on the number of sample points in $A(n)$, the binomial and the Poisson samples have the same law, so that $e(r)$ and $v(r)$ may be used for both $N_n$ and $N'_n$. Let ${\nu}_n^A := {\nu}_n^{\xi,A}$ denote the point measure induced by the $\psi$-extremal points in $\{ {\bar{\mathcal{X}_n} }\cap A(n) \}$. Similarly, let $\mu_{n\rho}^A := \mu_{n \rho}^{\xi,A}$ denote the point measure induced by the $\psi$-extremal points in $ \{ {\mathcal{P}}_{n{\rho}} \cap A(n) \}$. By the bound (\ref{P25}) on the probability of a given point outside $A(n)$ being extremal, ${\nu}_n^A$ coincides with ${\nu}_n$ and $\mu_{n\rho}^{\xi,A}$ coincides with $\mu^{\xi}_{n\rho}$ except on a set with probability at most $n C \exp( - C \log n) = C n^{-C+1}.$ Since $C$ can be chosen arbitrarily large, it suffices to show that \begin{equation} \label{equalvar} \lim_{n \to\infty} n^{-\tau}\operatorname{Var}[\langle f, {\nu}_n^{A} \rangle] = \lim_{n \to\infty} n^{-\tau}\operatorname{Var}[\langle f, \mu_{n \rho}^{A} \rangle]. 
\end{equation} The conditional variance formula implies that \begin{eqnarray*} \operatorname{Var}[\langle f, {\nu}_n^A\rangle] &=&\operatorname{Var}[e(N_n)] + \mathbb{E}[ v(N_n)] \quad \mbox{and} \\ \operatorname{Var}[\langle f,\mu_{n \rho}^{A} \rangle] &= &\operatorname{Var}[e(N'_n)] + \mathbb{E}[v(N'_n)]. \end{eqnarray*} We prove (\ref{equalvar}) by showing that: \begin{longlist} \item[(i)] the terms $\mathbb{E}[ v(N_n)] $ and $\mathbb{E}[v(N'_n)]$ are dominant and that their ratio tends to one as $n \to\infty$, and \item[(ii)] $\operatorname{Var}[e(N_n)]$ and $\operatorname{Var}[e(N'_n)]$ are both $o(n^{\tau})$. \end{longlist} We will first show (ii) as follows. For all $s > 0$, recall that $B_f(s):= \int_{p(\bar w) \leq s} f(\bar w) {\rho}(\bar w)\, d \bar w.$ By Fubini's theorem, for all $r = 1,2,\ldots$ and with $a_n = \int_{A(n)} {\rho}(w) \,dw$, we obtain \[ e(r)= \frac{r}{a_n} \int_{A(n)} \biggl(1- \frac{p(w)}{a_n} \biggr)^{r-1} f(w) {\rho}(w) \, dw = \frac{r}{a_n } \int_{0}^{a_n} \biggl(1- \frac{s}{a_n} \biggr)^{r-1} \,dB_f(s). \] Letting $\Delta_r$ denote the difference $e(r+1)-e(r)$, we obtain \[ \Delta_r = \frac{1}{a_n} \int_0^{a_n} \biggl[ \biggl(1-\frac{s}{ a_n} \biggr)^{r} - \frac{rs}{ a_n} \biggl(1-\frac{s}{ a_n} \biggr)^{r-1} \biggr] \,dB_f(s). \] Setting $u = rs/ a_n$ and applying $B_f(s) \sim C_f s^{\tau'}$, we see that (recall $\tau= 1 - {\tau}'$) \[ |\Delta_r| \leq\frac{C_f }{r} \int_0^{r} \biggl| \biggl(1-\frac{u}{ r} \biggr)^r - u \biggl(1-\frac{u}{r} \biggr)^{r-1} \biggr| \biggl(\frac{ua_n }{r} \biggr)^{ -\tau} \,du. \] Since $ \sup_{r > 0} \int_0^{r} | (1-\frac{u}{r} )^r - u (1-\frac{u}{r} )^{r-1} | u^{-\tau} \,du \leq C$, it follows that $|\Delta_r| \leq \frac{C}{r} (\frac{a_n }{r} )^{ -\tau}$. 
For $r \in I_n:= (na_n - C (\log n) (na_n)^{1/2}, na_n + C (\log n) (na_n)^{1/2})$, by Lemma \ref{LMA} we have, for $n$ large, \begin{eqnarray*} |\Delta_r| &\leq& C (na_n)^{-1} n^{ \tau} = C a_n^{-1} n^{ -{\tau}'} \\ &=& C n^{ \alpha(1 + {\delta})/(d-1 + \alpha(1 + \delta)) } n^{ -{\tau}'} (\log n)^{ -\alpha(1 + {\delta})/(\alpha+ d - 1) }. \end{eqnarray*} Recalling that ${\tau}' = (1 + {\delta})\alpha/(d-1 + \alpha(1 + \delta))$, we see that for $r \in I_n$ we have \[ |\Delta_r| \leq\Delta(n) := C (\log n)^{ -\alpha(1 + \delta) /(\alpha+ d - 1) }. \] Write $e(N_n) = e(1) + \sum_{j = 2}^{N_n}(e(j) - e(j - 1))$ and observe that $e(N_n)$ differs from the constant $e(1) + \sum_{j = 2}^{\mathbb{E}[N_n] }(e(j) - e(j - 1))$ by at most \[ \sum_{j \in J_n} \bigl|e(j) - e(j - 1)\bigr|, \] where $J_n := ( \min( \mathbb{E}[N_n], N_n ), \ \max( \mathbb{E}[N_n], N_n ) )$. Thus, \begin{eqnarray*} \operatorname{Var}[e(N_n)] &\leq&\mathbb{E} \Biggl[ \sum_{j \in J_n} \bigl|e(j) - e(j - 1)\bigr| \Biggr]^2\\ & \leq&\mathbb{E} \Biggl[ \sum_{j \in J_n} \bigl|e(j) - e(j - 1)\bigr| \mathbf{1}_{N_n \in I_n} \Biggr]^2 + o(1), \end{eqnarray*} by Cauchy--Schwarz and since (by increasing $C$ in the definition of $I_n$) standard concentration inequalities (see, e.g., Proposition A.2.3(ii), (iii) and Proposition A.2.5(ii), (iii) in \cite{BHJ}) show that $P[N_n \in I_n^c]$ can be made smaller than any negative power of $n$. For $j \in J_n$ and $N_n \in I_n$, we have $j \in I_n$ and so $|e(j) - e(j - 1)| \leq \Delta(n)$. Since the length of $J_n$ is bounded by $|N_n - \mathbb{E}[N_n]|$, it follows that $\operatorname{Var}[e(N_n)] \leq\operatorname{Var}[N_n] (\Delta(n))^2 + o(1)$. Note that $\operatorname{Var}[N_n] \leq Cn^{\tau}(\log n)^{\alpha(1 + {\delta})/ (\alpha+ d - 1)}$. 
It follows that $\operatorname{Var}[e(N_n)] \leq Cn^{\tau}(\log n)^{-\alpha(1 + {\delta})/ (\alpha+ d - 1)} + o(1)$, that is, $\operatorname{Var}[e(N_n)] = o(n^{\tau}).$ Similarly, $\operatorname{Var}[e(N'_n)] = o(n^{\tau})$ and so condition (ii) holds. We now show condition (i) by showing that the ratio $\mathbb{E}[v(N_n)]/\mathbb{E}[v(N'_n)]$ is asymptotically one as $n\to\infty$. Let $p_{n,r}:=P[N_n=r]$ and $p'_{n,r}:=P[N'_n=r]$. Stirling's formula implies that, for $|r-a_n n|\leq n^\beta$, where $0 < \beta< 1/2$, \begin{equation}\label{second} \lim_{n \to\infty} \frac {p_{n,r}}{p'_{n,r}} = 1 \end{equation} uniformly. Now, for $|r-a_n n|> n^\beta$, where $\beta\in (0,1/2)$ is chosen so that $n^{2 \beta}/na_n$ grows faster than some (small) power of $n$, we have that both $p_{n,r}$ and $p'_{n,r}$ are bounded by $C\exp(-n^{{\delta}}/C)$ for some $C, \ {\delta}> 0$ (see, e.g., Proposition A.2.3(i) and Proposition A.2.5(i) in \cite{BHJ}). Write \[ \mathbb{E}[v(N_n)]=\sum_{|r- a_n n| \leq n^\beta} v(r) p_{n,r}+ \sum_{|r- a_n n|> n^\beta} v(r) p_{n,r}. \] The second sum is negligible since $0 < v(r) < r^2$ and $p_{n,r}$ is exponentially small. Consider the terms in the first sum. By (\ref{second}), we have $p_{n,r} = p'_{n,r}(1 + o(1))$ uniformly for all $|r - a_n n| \leq n^\beta$ and since the terms in the first sum are positive, it follows that \begin{equation}\label{third} \lim_{n \to\infty} \frac{\mathbb{E}[v(N_n)]} {\mathbb{E}[ v(N_n')] } = 1. \end{equation} From before we know that $\operatorname{Var}[ \langle f, \mu_{n \rho}^A \rangle]$ has asymptotic growth $Cn^{\tau}$ for some $C > 0$. It follows that $\mathbb{E}[v(N_n')]$ has the same growth, since $\operatorname{Var}[e(N_n')] = o( n^{\tau} ).$ Thus, by (\ref{third}) and the growth bounds $\operatorname{Var}[ e(N_n)] = o( n^{\tau} )$ and $\operatorname{Var}[e(N_n')] = o(n^{\tau})$, the desired identity (\ref{equalvar}) follows, completing the proof of Lemma \ref{varlimB}. 
\end{pf} We conclude the proof of Theorem \ref{maximalthmBin} by showing that for all $f \in\mathcal{C}_b(A_+)$ \[ \lim_{n \to\infty} d_{\mathrm{TV}}( n^{-\tau/2} \langle f, \bar{{\nu}}_n^{\xi} \rangle,\ n^{-\tau/2}\langle f, \bar{\mu}_{n \rho}^{\xi} \rangle) = 0, \] where the total variation distance between two measures $m_1$ and $m_2$ is\break $d_{\mathrm{TV}}(m_1, m_2):= \sup_B |m_1(B) - m_2(B)|$, the sup running over all Borel subsets of $\mathbb{R}^d$. Since $ n^{-\tau/2} | \mathbb{E}[\langle f, {{\nu}}_n^{\xi}\rangle] - \mathbb{E}[\langle f, {\mu}_{n \rho}^{\xi} \rangle] | \to0$ by (\ref{differ}) and since $ n^{-\tau/2}\langle f, \bar{\mu}_{n \rho}^{\xi} \rangle$ converges in law to an appropriate Gaussian distribution, recalling that $a_n = o(1)$ (see Lemma \ref{LMA}), Theorem \ref{maximalthmBin} follows at once from the following: \begin{lemm} For all $f \in\mathcal{C}_b(A_+)$, we have \begin{equation}\label{TV} d_{\mathrm{TV}}( \langle f,{{\nu}}_n^{\xi} \rangle, \langle f, {\mu}_{n \rho}^{\xi}\rangle) = O(a_n). \end{equation} \end{lemm} \begin{pf} We follow the proof of Lemma 7.1 in \cite{BY4}. Recall that ${\nu}_n^A$ is the measure induced by the maximal points in $\{(X_i,h_i)\}_{i=1}^n \cap A(n)$ and, similarly, let $\mu_{n \rho}^{\xi,A}$ be the measure induced by the maximal points in ${\mathcal{P}}_{n {\rho}} \cap A(n)$. If $C$ is large enough in the definition of $A(n)$, then the probability that points in $A_+ \setminus A(n)$ contribute to ${\nu}_n^{\xi}$ or $\mu_{n \rho}^{\xi}$ is $O(n^{-2})$. It follows that, for all $f \in\mathcal{C}_b(A_+)$ \[ d_{\mathrm{TV}}( \langle f, \mu_{n \rho}^{\xi} \rangle, \langle f, \mu_{n \rho}^{\xi,A} \rangle) = O(n^{-2}) = o(a_n) \] and \[ d_{\mathrm{TV}}( \langle f, {\nu}_n^{\xi} \rangle, \langle f, {\nu}_n^{\xi,A} \rangle) = O(n^{-2}) = o(a_n). \] Thus, we only need to show $d_{\mathrm{TV}}( \langle f, {\nu}_n^A \rangle, \langle f, \mu_{n \rho}^A \rangle) = O(a_n). 
$ Recall that $N_n$ is the number of points from ${\bar{\mathcal{X}_n} }$ belonging to $A(n)$. Conditional on $N_n = r$, $\langle f, {\nu}_n^A \rangle$ is distributed as $\langle f, \tilde{{\nu}}_r^A \rangle$, where $\tilde{{\nu}}_r^A$ is the point measure induced by considering the maximal points among $r$ points placed randomly according to the restriction of $\rho$ to $A(n)$. The same is true for $\langle f, \mu_{n \rho}^{\xi,A} \rangle$ conditional on the cardinality of $\{{\mathcal{P}}_{n \rho} \cap A(n) \}$ taking the value $r$. Hence, with $\mathit{Bi}(n,p)$ standing for a binomial random variable with parameters $n$ and $p$ and $\mathit{Po}(\alpha)$ standing for a Poisson random variable with parameter $\alpha$, we have for all $f \in \mathcal{C}_b(A_+)$ \[ d_{\mathrm{TV}}( \langle f, {\nu}_n^A \rangle, \langle f, \mu_{n \rho}^A \rangle) \leq C d_{\mathrm{TV}} (\mathit{Bi} (n, a_n ), \ \mathit{Po}(n a_n) ) \leq C \frac{1} {n a_n } \sum_{i=1}^n ( a_n)^2 \leq C a_n, \] where the penultimate inequality follows by standard Poisson approximation bounds (see, e.g., (1.23) of Barbour, Holst and Janson \cite{BHJ}). This is the desired estimate (\ref{TV}). \end{pf} \section*{Acknowledgments} The authors gratefully acknowledge helpful discussions with Yuliy Baryshnikov, who, in particular, encouraged developing theory under the general condition (R2) and who also contributed to the proofs of Lemmas \ref{lem4.1} and \ref{varlimB}. The authors also thank an anonymous referee for comments leading to an improved exposition. \printaddresses \end{document}
Biotechnology for Biofuels and Bioproducts
Deep Eutectic Solvents pretreatment of agro-industrial food waste
Alessandra Procentese, Francesca Raganati, Giuseppe Olivieri, Maria Elena Russo, Lars Rehmann & Antonio Marzocchella
Biotechnology for Biofuels volume 11, Article number: 37 (2018)
Waste biomass from agro-food industries is a reliable and readily exploitable resource. From the circular economy point of view, exploiting the residues of these industries directly for fuel/chemical production is a winning strategy, because it reduces the environmental/cost impact and improves the eco-sustainability of production. The present paper reports recent results of deep eutectic solvent (DES) pretreatment of a selected group of the agro-industrial food wastes (AFWs) produced in Europe. In particular, apple residues, potato peels, coffee silverskin, and brewer's spent grains were pretreated with two DESs (choline chloride–glycerol and choline chloride–ethylene glycol) for fermentable sugar production. The pretreated biomass was enzymatically digested with commercial enzymes to produce fermentable sugars. Operating conditions of the DES pretreatment were varied over wide intervals: the solid to solvent ratio ranged between 1:8 and 1:32, and the temperature between 60 and 150 °C. The DES reaction time was set at 3 h. Optimal operating conditions were: 3 h pretreatment with choline chloride–glycerol at 1:16 biomass to solvent ratio and 115 °C. Moreover, a market analysis was carried out to assess the expected European amount of fermentable sugars from the investigated AFWs. The overall sugar production was about 217 kt yr−1, whose main fraction came from the hydrolysis of BSGs pretreated with choline chloride–glycerol DES at the optimal conditions. The reported results encourage deeper investigation of DES pretreatment of lignocellulosic biomass. This new class of solvents is easy to prepare, biodegradable, and cheaper than ionic liquids. 
Moreover, they gave good results in terms of sugar release under mild operating conditions (time, temperature, and pressure).
Background
A number of studies have lately been reported in the literature regarding the conversion of lignocellulosic biomass into biochemicals according to the biorefinery approach [1]. The main steps of the biorefinery approach are: biomass pretreatment [2], hydrolysis [3], fermentation, and bio-product/energy recovery and concentration. Lignocellulosic feedstocks such as dedicated wood cultivations and agricultural residues have several disadvantages: high lignin content, production spread over the territory, and competition for arable land and water sources. The high lignin content requires severe pretreatment conditions (e.g., high temperature and pressure) to effectively remove the lignin. The territorially spread biomass production entails high transportation costs for the biomass supply chain [4]. On the contrary, waste biomass from agro-food industries is a reliable and readily exploitable resource. The residue streams from agro-food industries are rich in carbohydrates. These industries have to supply the food and drink market and manage the residues/wastes of the production process to reduce the environmental impact as well as the production cost. From the circular economy point of view, exploiting the residues of these industries directly for fuel/chemical production is a winning strategy, because it reduces the environmental/cost impact and improves the eco-sustainability of production. Agro-food wastes (AFWs) are characterized by high sugar content and, typically, by low lignin content. AFWs are available almost all year round, and their production is spread across the European countries. These features make AFWs well suited to the operating and logistic requirements not only of the pretreatment step but also of the entire biorefinery process. 
Some of the main AFWs produced in Europe come from the following industrial food processes: potato freezing, "fresh-cut" fruit processing, and coffee and beer production. As regards the potato industry, about 15 Mt yr−1 of potatoes are processed in Europe according to the European potato processors association [5]. The value of the European fresh-cut fruit and vegetable market is about 3.4 billion euros, and fruit accounts for 7% of the market volume in Europe. The total EU fresh-cut fruit and vegetable consumption is about 1.4 Mt yr−1 [6]. Residues from "fresh-cut" industries are a pressing issue, because about 50% of the raw processed vegetables are discarded and their disposal is particularly expensive [7]. Coffee is the second largest traded commodity after petroleum. The European roasted coffee production is about 2 Mt yr−1 [8], and large amounts of by-products are generated in the coffee industry [9]. In particular, coffee silverskin (CS) and spent coffee grounds (SCG) are the main coffee industry residues, produced during bean roasting and during the preparation of "instant coffee", respectively. Beer is the fifth most consumed beverage in the world after tea, carbonates, milk, and coffee. About 30 Mt yr−1 of beer is produced in Europe, and the main European beer producers are in Germany, UK, Poland, Netherlands, and Spain, at a rate of about 9.5, 4, 4 and 3 Mt yr−1, respectively [10]. Several studies have focused on the pretreatment of agricultural residues such as corncob [11], corn stover [12], and rice straw [13] to produce fermentable sugars. To the authors' knowledge, the use of residues of food and drink processing to produce biochemicals has been addressed only to a limited extent [7,8,9,10,11,12,13,14]. In particular, papers regarding sugar recovery from coffee silverskin and apple residues are very few [9,10,11,12,13,14,15]. 
The most widely used lignocellulosic pretreatments are energy-intensive processes, because they require high temperature and pressure to remove the lignin [11]. However, recently proposed processes based on deep eutectic solvents (DESs) require less energy than the established ones. DESs solubilize the lignin and increase the availability of the cellulose for the hydrolysis at low temperature and pressure. DESs are mostly fluids composed of two or three ionic compounds capable of self-association to form a eutectic mixture [16]. DESs exhibit physicochemical properties close to those of ionic liquids, lignin solubility included. However, DESs are much more environmentally friendly and cheaper than ionic liquids [17]. Potential advantages of DES pretreatments with respect to consolidated processes are illustrated hereinafter for corncob exploitation. Zhang et al. [11] reported 54 and 31% cellulose and lignin content after steam explosion (10 bar, 0.3% H2SO4). Procentese et al. [18] reported 52 and 10% cellulose and lignin content after DES pretreatment (1 bar, 150 °C, choline chloride–glycerol). By comparing the energy demand of the two processes, Procentese et al. [19] pointed out that the energy required per unit of biomass by the DES pretreatment was about twice that required by the steam explosion. Moreover, the concentration of inhibitors (such as HMF, acetic acid and furfural) is low, or they are even absent, after DES pretreatment [18]. Procentese et al. [18] pointed out that glycerol–choline chloride was a potential DES to be used for lignocellulosic materials. Typically, lignocellulosic biomass pretreatment investigations have focused on a single biomass (corncob [18], palm wastes [20], lettuce [19]), on the pretreatment temperature [17, 20], and on the processing time [19]. However, investigations have also pointed out that a key parameter for the industrial development of the DES pretreatment is solvent consumption and recovery. 
Indeed, reducing the amount of DES used is an economic prerequisite for the industrial development of the process. Therefore, the biomass to solvent ratio is still an open question for DES pretreatment processes. The present work reports the results of a study focused on four AFWs produced by the European food and drink industries: potato peels, apple residues, coffee silverskin (CS), and brewer's spent grains (BSG). The lignin content of the investigated AFWs ranged between 18% [21, 22] and 33% [23, 24]. The low lignin content of AFWs is typically compatible with the mild biomass pretreatment conditions adopted with DESs. The investigated AFWs were pretreated with two DESs (choline chloride–glycerol and choline chloride–ethylene glycol) to provide the feedstock for the enzymatic hydrolysis to produce fermentable sugars. Operating conditions of the DES pretreatment were varied over wide intervals: the solid to solvent ratio ranged between 1:8 and 1:32, and the temperature between 60 and 150 °C. The DES reaction time was set at 3 h [19]. The DES-pretreated biomass was hydrolysed according to the NREL protocol. The pretreatment was characterized by two groups of indicators: the first group was assessed after the DES pretreatment, and the second group after the enzymatic hydrolysis. The first group of indicators included the inhibitor, lignin, and sugar content of the recovered pretreated biomass. The second group included the sugar yield obtained after enzymatic hydrolysis. Operating conditions were tuned to optimize glucose concentration and yield, water consumption, and pretreatment temperature for each AFW. A market analysis was also carried out to assess the expected European amount of fermentable sugars from the investigated AFWs.
Characterization of raw biomass
Selected biomasses were characterized in terms of glucan, xylan, arabinan, and lignin content according to the NREL protocol. 
The starch content was also assessed for all the AFWs. Results are reported in Table 1. Potato peels were characterized by the highest lignin content (33%), followed by coffee silverskin (30%), brewer's spent grains (22%), and apple residues (19%). As regards the glucan content, the biomasses with the highest values were potato peels and apple residues (31 and 21%, respectively), followed by coffee silverskin and brewer's spent grains (about 17%). The starch content was quite high for potato peels (23%), quite low for coffee silverskin and BSG (7 and 5%, respectively), and negligible in apple residues.

Table 1 Composition of the investigated AFWs

DES pretreatment

Table 1 also reports the characterization of the investigated AFWs after the DES pretreatment. Pretreated samples of potato peels, coffee silverskin, brewer's spent grains, and apple residues were characterized in terms of biomass recovery and composition (glucan, xylan, arabinan, and lignin content). The biomass recovery, the lignin content, and the pentose-polymer (xylan plus arabinan) content of the pretreated samples decreased with the temperature. In particular, the decrease in lignin content ranged between 33% (potato peels) and 62% (apple residues). The increase in glucan content was 94% (CS), 67% (potato peels), 66% (apple residues), and 59% (BSG). Although the decrease in lignin content may be advantageous for the subsequent processes, the loss of biomass and of pentoses does not suggest operating at high temperature. In line with the literature [2], temperatures higher than 150 °C were not tested, to keep the costs of the whole process low. An increase in the solvent mass per unit mass of AFW typically favors lignin dissolution and, as a consequence, the sugar content of the pretreated biomass.
Indeed, doubling the solvent content from 8 to 16 g per gram of biomass provided an increase in lignin dissolution in the solvent and in the glucan content of the pretreated biomass, even though the pentose-polymer content slightly decreased. However, a further increase of the solvent mass per unit mass of AFW (16–32 gDES/graw biomass) provided a very slight (or negligible) increase in glucan content, because almost no further increase in lignin removal or decrease in pentose-polymers was measured. The slight advantage from increasing the solvent mass per unit mass of AFW suggests keeping this ratio as low as possible, provided that the lignin dissolution in the solvent and the glucan content in the pretreated biomass are close to their asymptotic values. Although the aim of the present investigation was to optimize the solvent amount required for lignin removal, processes to recover and reuse the DES are under consideration. As reported in the literature [25], DESs can be easily recovered by distillation, as pointed out in patents [26]. However, further investigation is required to have a clear picture of the overall process, from DES utilization to DES recovery. The effects of the temperature and of the solvent mass per unit mass of AFW did not change with the nature of the DES. The analysis of the results reported in Table 1 points out that the DES made of choline chloride and glycerol provided a slight improvement of the delignification process with respect to choline chloride and ethylene glycol. The results of the DES pretreatment (lignin removal and biomass recovered) may depend on the structure of the investigated biomass. The structural characterization could benefit from physical and chemical analyses such as FTIR, X-ray diffraction, SEM, and TEM. Further investigation should be carried out to point out relationships between biomass structure and pretreatment performance.
The comparison of these results with those reported in the literature is challenging, because only a few studies available in the literature focus on the combination of AFWs and DES pretreatment. The main results regarding AFW pretreatment by means of the "classical" processes (e.g., steam explosion, alkaline, and extrusion pretreatments) are reported hereinafter: the extrusion of potato peels at 150 °C produced a glucan content increase of about 2% and a lignin content decrease of about 14% [27]; CS pretreatment by 0.1 M NaOH produced a glucan content increase of about 22% [28]; BSG pretreatment by steam explosion at 200 °C and 15.55 bar produced a glucan content increase of about 27% and a lignin content decrease of about 28% [29]; apple residues pretreated by steam explosion at 5 bar produced the highest soluble dietary fiber value (29.85%) [30]. Agro-food wastes pretreated by DESs include corncob [18] and lettuce leaves [19]. Both AFWs were pretreated with choline chloride–glycerol at 150 °C for 16 h. The lignin content decrease and the glucan content increase were about 23 and 67% [18] and 40 and 82% [19] for corncob and lettuce, respectively. Altogether, the reported results point out that the sugar recovery and lignin removal produced by DES pretreatment are comparable to or even higher than those produced by the reported pretreatment methods. In addition, the DES pretreatment is characterized by environmentally friendly conditions.

Inhibitor formation

Potential inhibitors of the enzymatic hydrolysis and of the sugar fermentations produced during the DES pretreatment were analyzed. The concentrations of HMF, furfural, gallic acid, ferulic acid, and coumaric acid were measured in the supernatant recovered from the NREL biomass characterization.
The supernatant characterization was carried out for the samples produced after biomass pretreatment with DESs under all the investigated operating conditions. The HMF and furfural concentrations were lower than 1.5 × 10−2 g L−1. The concentrations of gallic, ferulic, and coumaric acid were smaller than the minimum detectable value (1 × 10−1 g L−1). The measured inhibitor concentrations are in agreement with those previously reported for Ch-Cl glycerol pretreatment applied to corncob [17]. The concentrations of the measured inhibitors are lower than the typical thresholds for enzymatic hydrolysis and fermentation [25]. Therefore, no detoxification strategy is required after DES-based biomass pretreatment.

Enzymatic hydrolysis

Table 2 reports the glucose yield referred to the glucan, Ye (gglucose/gglucan), and to the pretreated biomass, Y1 (gglucose/gpretreated biomass), assessed for each pretreated biomass after the enzymatic hydrolysis. The low xylan content measured at the high temperatures of the pretreatment process (115 and 150 °C) did not warrant supplementing the enzymatic cocktail with xylanase. As expected, xylose, mannose, and arabinose were not detected in the hydrolysate and are not reported in Table 2. The analysis of Table 2 points out that both glucose yields increased with the temperature and the solvent to solid ratio set during the DES pretreatment (Table 1). As regards the DES couple, the highest glucose yields were measured when the choline chloride–glycerol DES was used. In particular, the enzymatic glucose yield (gglucose/gglucan) measured for biomass pretreated with choline chloride–glycerol was larger than that measured after choline chloride–ethylene glycol pretreatment. The highest values were obtained for apple residues and brewer's spent grains, which were pretreated at 150 °C with a biomass to solvent ratio of 1:32.
The high Ye measured for apple residues and brewer's spent grains could be due to the low lignin content of these residues with respect to potato peels and coffee silverskin.

Table 2 Glucose yield after enzymatic hydrolysis of DES-pretreated AFWs

The analysis of Tables 1 and 2 suggests that a severe DES pretreatment (high temperature and large solvent to solid ratio) reduced the amount of recovered biomass but increased the enzymatic digestibility of the carbohydrates. The enzyme accessibility to carbohydrate polymers could be increased under harsher operating conditions.

AFW pretreatment optimization

One of the most pressing issues related to DES pretreatment is the amount of water required in the washing step of the pretreated biomass before the enzymatic hydrolysis. Table 3 reports the enzymatic hydrolysis yields Ye assessed in tests carried out to evaluate the effect of the volume of washing water per unit mass of raw biomass (Vw, mLwater/graw biomass) on the performance of the process. Data in the table refer to all investigated AFWs, pretreated with choline chloride–glycerol at temperatures of 115 and 150 °C. The highest Vw reported for each set of operating conditions is the minimum volume of water (i.e., the minimum number of washing steps) needed for the DES/water phase to be completely clear, marking the absence of DES.

Table 3 Effect of the extent of the washing step on enzymatic hydrolysis of DES-pretreated biomass

At 115 °C, as Vw is halved, the enzymatic glucose yield Ye decreases by 15–40% with respect to the optimal value: the minimum reduction was measured for potato peels, the maximum for coffee silverskin. The large Ye decrease measured for coffee silverskin and brewer's spent grains may be due to the low glucan content of the raw biomass. Indeed, extended washing is required to maximize the availability of glucan for the enzymatic hydrolysis.
The same behavior was observed at 150 °C, even though the extent of the reduction of Ye was less pronounced. The comparison of the results reported in the present paper with those reported in the literature is not straightforward because, to the authors' knowledge, no study has been reported regarding the optimization of the water consumption during DES pretreatment. The reported results call for further investigation of this issue.

European fermentable sugars production from AFWs

Table 4 reports the European availability of the investigated biomasses and the expected fermentable sugar production in Europe from these AFWs, assessed by processing the Y2 yields measured in the present work. Data refer to AFWs pretreated with choline chloride–glycerol at 115 °C and using 30–40 mLwater/graw biomass during the washing step (see previous section). A comparison with the literature is challenging due to the lack of data regarding the investigated biomasses and the investigated parameters (the water consumption during the washing step). To the authors' knowledge, this is the first investigation of AFW pretreatment processes carried out paying attention to low energy and water requests.

Table 4 Expected European fermentable sugar production from investigated AFWs

The largest production of fermentable sugars (170 kt yr−1) is expected from BSG after pretreatment with choline chloride–glycerol DES at a 1:16 biomass to solvent ratio, 115 °C, and rough biomass washing.

Conclusions

Deep eutectic solvent pretreatment of four different agro-food wastes (AFWs) was successfully carried out. The investigated AFWs were potato peels, coffee silverskin, brewer's spent grains, and apple residues. Tests were aimed at the selection of optimal operating conditions to maximize sugar production and minimize water consumption.
The optimal operating conditions were: 3 h pretreatment with choline chloride–glycerol at a 1:16 biomass to solvent ratio and 115 °C. An analysis of the European agro-food market was carried out to assess the expected fermentable sugar production from the investigated AFWs based on the sugar yields resulting from the experimental investigation. The overall sugar production was about 217 kt yr−1, whose main fraction came from the hydrolysis of BSGs pretreated with choline chloride–glycerol DES at the optimal conditions.

Methods

Chemicals (choline chloride, glycerol, ethylene glycol) and sterile-filtered water were supplied by Sigma Aldrich®.

Raw material, preparation and characterization

Potato peels and apple residues were kindly supplied by a potato processing company and a Spanish fruit juice company, respectively. Coffee silverskin (CS) was kindly supplied by Illy caffè S.p.A. The brewer's spent grains (BSG) were kindly supplied by an Italian brewery company. The supplied biomass was oven-dried at 40 °C and sieved. Solids collected in the range 1–0.5 mm were stored in sealed plastic bags at room temperature until used. The raw biomass was characterized in terms of glucan, xylan, arabinan, and lignin content according to the standard protocols of the US National Renewable Energy Laboratory (NREL) [31]. Two DESs were investigated: (i) choline chloride–glycerol and (ii) choline chloride–ethylene glycol. The molar ratio of the DES components was 1:2 for both couples. The DES solutions were prepared by continuously stirring the mixture at 500 rpm in an oil bath at 80 °C until homogeneous colorless liquids formed. The raw biomass was mixed with the DES under pre-set operating conditions. The investigated operating conditions were: reaction time, 3 h, according to the results reported by Procentese et al. [18]; temperature, 60, 115, and 150 °C [32]; solid to solvent ratio, 1:8, 1:16, and 1:32 [18].
Pretreated biomass recovery

The pretreated biomass was recovered by centrifugation. The biomass/DES suspension was mixed with sterile-filtered water to wash the biomass: the two-phase suspension (the biomass and the DES/water phase) was centrifuged (3 min at 5000 rpm) to recover the biomass. The washing step was repeated until the DES/water phase was completely clear, marking the absence of DES. The wet slurry was dried at 38 °C until constant weight was reached. The percentage biomass recovery (R) was calculated as the ratio between the dry weight of pretreated biomass (BPT) and the dry weight of raw material (BRAW):

$$ R = B_{\text{PT}} / B_{\text{RAW}} $$

Glucan, xylan, arabinan, and lignin content of biomass samples (raw and pretreated) were determined by quantitative saccharification upon acid hydrolysis and subsequent HPLC and gravimetric analysis, based on standard NREL protocols [31]. The concentrations of glucose and xylose were quantified by high-performance liquid chromatography (Agilent 1260 Infinity HPLC) using an 8 µm Hi-Plex H, 30 cm × 7.7 mm column at room temperature and a refractive index detector. Deionized water was used as the mobile phase at a flow rate of 0.6 mL min−1. The analysis of each biomass sample was carried out in triplicate. The concentrations of potential enzymatic and fermentation inhibitors were also measured: hydroxymethyl-furfural (HMF), furfural, gallic acid, ferulic acid, and coumaric acid were quantified by high-performance liquid chromatography (Agilent 1100 system, Palo Alto, CA). Inhibitors were separated by means of a Luna C18 column (5 µm, 250 × 4.6 mm) at room temperature and optically detected at 276 nm.
Mixtures of 0.1% vol formic acid and pure methanol were used as the mobile phase with the following solvent profile: a 20 min ramp from 0 to 1.2 mL min−1 flow rate and from 5 to 30% methanol, then a 40 min ramp from 1.2 to 1.5 mL min−1 flow rate at a constant 30% methanol. The enzymatic hydrolysis was carried out according to the procedure proposed by Procentese et al. [19]. The commercial enzyme cocktail Cellic CTec2 (kindly supplied by Novozymes) and amylases from Megazyme were used. The cellulase activity was adjusted to 142 FPU mL−1. The hydrolysis was carried out in 0.1 M sodium citrate buffer (pH 4.8) supplemented with 80 µL tetracycline and 60 µL cycloheximide to prevent microbial contamination. 100 mL glass bottles were incubated at 50 °C and kept under agitation on a rotary shaker (Minitron Incubator Shaker, Infors HT) at 180 rpm for 60 h. The CTec2 loading was set at 15 mgenzyme/gglucan according to the glucan content assessed on the raw biomass; the amylase loading was fixed at 10 U/gpretreated biomass. The solid loading was set at 10% (w/v). The enzymatic solution was sampled, centrifuged, filtered, and analyzed to assess the sugar concentration at fixed time intervals. Hydrolysis tests were carried out in duplicate. As reported, the enzymatic hydrolysis requires the washing of the DES-pretreated biomass, and the water consumption is a pressing issue of the process. A campaign of enzymatic hydrolysis tests was carried out by tuning the amount of water used to wash the DES-pretreated biomass. In particular, the volume of washing water per unit mass of raw biomass (Vw, mLwater/graw biomass) assessed for the optimal DES pretreatment conditions was cut by half. The results of the enzymatic digestibility of the pretreated biomass were compared with those assessed for the tests carried out with extended biomass washing.
The enzymatic glucose yield, Ye (gglucose/gglucan), was calculated as the ratio between the glucose produced during the enzymatic hydrolysis and the glucan content in the raw biomass. The glucose yield referred to the pretreated biomass, Y1 (gglucose/gpretreated biomass), was calculated as the ratio between the glucose produced by the enzymatic hydrolysis and the pretreated biomass processed during the enzymatic hydrolysis. The glucose yield referred to the raw biomass, Y2 (gglucose/graw biomass), was calculated as:

$$ Y_{2} = Y_{1} \cdot R $$

Assessment of the European fermentable sugars production

Potato peels. The main by-products of potato processing are potato peel (around 3%), fresh rejected potato (3–4%), starch (3%), and fried rejected potato (2–3%). These residues are characterized by a high carbohydrate content: a source of fermentable sugars. The European potato peel availability was calculated as the product of the potato mass processed every year and the potato peel fraction (3%).

Apple residues. According to the reported data, the European apple residue availability was calculated as the product between the total EU fresh-cut fruit and vegetables consumed every year (taken as equal to the residue production rate) and the fruit contribution (approximated as 7% of the mass market volume).

Coffee silverskin. Coffee silverskin is about 4.2% of coffee beans, and the valorisation of this waste according to the biorefinery concept could contribute to the development of a circular economy in many industries. The carbohydrate content of CS ranges between 34.6 and 80.5% [21]. Therefore, CS could be used for fermentable sugar production. The yearly European CS availability was calculated as the product between the total EU coffee beans processed every year and the CS fraction (4.2%).

Brewer's spent grains. Brewers' spent grains (BSGs) are the residues of beer production, amounting to about 20% of the beer produced.
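As a minimal numerical sketch of the yield definitions above, the chain from biomass recovery R to the raw-biomass yield Y2 can be written directly; all numbers below are invented for illustration, not measured values from this study.

```python
# Hypothetical worked example of the yield definitions in the text.
# Inputs are placeholders, not data from the paper.

def biomass_recovery(b_pretreated: float, b_raw: float) -> float:
    """R = B_PT / B_RAW, both as dry weights in the same units."""
    return b_pretreated / b_raw

def yield_raw_basis(y1: float, r: float) -> float:
    """Y2 = Y1 * R, grams of glucose per gram of raw biomass."""
    return y1 * r

# Assumed example: 10 g of raw biomass leaves 6 g of pretreated solids,
# and hydrolysis releases 0.30 g glucose per gram of pretreated biomass.
R = biomass_recovery(6.0, 10.0)   # 0.6
Y2 = yield_raw_basis(0.30, R)     # 0.18 g glucose / g raw biomass
print(R, Y2)
```

Scaling Y2 by an availability figure (in Mt yr−1) then gives a market estimate of the kind reported in Table 4.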
BSGs are characterized by a high sugar concentration and may be used as feedstock to produce fermentable sugars. The European BSG availability was calculated as the product between the total EU beer production per year and the BSG fraction (20%). The fermentable sugar production for each waste (Si) was calculated as follows:

$$ S_{i} = \omega_{i} \times Y_{2} $$

where ωi is the European availability (Mt yr−1) of each waste i, and Y2 (gglucose/graw biomass) is the yield of glucose per gram of raw biomass.

References

Parajuli R, Dalgaard T, Jørgensen U, Adamsen AP, Trydeman KM, Birkved M, Gylling M, Schjørring JK. Biorefining in the prevailing energy and materials crisis: a review of sustainable pathways for biorefinery value chains and sustainability assessment methodologies. Renew Sustain Energ Rev. 2015;43:244–63.
Kumar P, Barrett DM, Delwiche MJ, Stroeve P. Methods for pretreatment of lignocellulosic biomass for efficient hydrolysis and biofuel production. Ind Eng Chem Res. 2009;48–8:3713–29.
Van Dyk JS, Pletschke BI. A review of lignocellulose bioconversion using enzymatic hydrolysis and synergistic cooperation between enzymes—factors affecting enzymes, conversion and synergy. Biotechnol Adv. 2012;30:1458–80.
Procentese A, Raganati F, Olivieri G, Russo ME, de La Feld M, Marzocchella A. Renewable feedstocks for biobutanol production by fermentation. New Biotechnol. 2017;39:135–40.
http://www.euppa.eu. Accessed 8 Oct 2016.
Florkowoski J, Shewefelt R, Brueckner B, Prussia S. Postharvest handling: a systems approach. 3rd ed. Cambridge: Academic Press; 2009. ISBN 978-0-12-408137-6.
Procentese A, Raganati F, Olivieri G, Russo ME, Marzocchella A. Pre-treatment and enzymatic hydrolysis of lettuce residues as feedstock for bio-butanol production. Biomass Bioener. 2017;96:172–9.
http://www.eurostat.eu. Accessed 8 Sept 2016.
Mussatto SI, Carneiro LM, Silva JPA, Roberto IC, Teixeira JA. A study on chemical constituents and sugars extraction from spent coffee grounds.
Carbohyd Polym. 2011;83:368–74.
http://www.brewersofeurope.org. Accessed 7 Sept 2016.
Zhang X, Yuan Q, Cheng G. Deconstruction of corncob by steam explosion pretreatment: correlations between sugar conversion and recalcitrant structures. Carbohydr Polym. 2017;156:351–6.
Lynam JG, Chow GI, Hyland PL, Coronella CJ. Corn stover pretreatment by ionic liquid and glycerol mixtures with their density, viscosity, and thermogravimetric properties. ACS Sustain Chem Eng. 2016;4:3786–93.
Sen B, Chou YP, Wu SY, Liu CM. Pretreatment conditions of rice straw for simultaneous hydrogen and ethanol fermentation by mixed culture. Int J Hydrog Energy. 2016;41:4421–8.
Raganati F, Procentese A, Montagnaro F, Olivieri G, Marzocchella A. Butanol production from leftover beverages and sport drinks. Bioener Res. 2015;8:369–79.
Magyar M, da Costa Sousa L, Jin M, Sarks C, Balan V. Conversion of apple pomace waste to ethanol at industrial relevant conditions. Appl Microbiol Biotechnol. 2016;100:7349–58.
Dai Y, van Spronsen J, Witkamp GJ, Verpoorte R, Choi YH. Ionic liquids and deep eutectic solvents in natural products research: mixtures of solids as extraction solvents. J Nat Prod. 2013;76:2162–73.
Gorke JT, Srienc F, Kazlauskas RJ. Hydrolase-catalyzed biotransformations in deep eutectic solvents. Chem Commun. 2008;10:1235–7.
Procentese A, Johnson E, Orr V, Garruto A, Wood J, Marzocchella A, Rehmann L. Deep eutectic solvent pretreatment and saccharification of corncob. Bioresour Technol. 2015;192:31–6.
Procentese A, Raganati F, Olivieri G, Russo ME, Rehmann L, Marzocchella A. Low-energy biomass pretreatment with deep eutectic solvents for bio-butanol production. Bioresour Technol. 2017;243:464–73.
Fang C, Thomsen MH, Frankær CC, Brudecki GP, Schmidt JE, AlNashef IM. Reviving pretreatment effectiveness of deep eutectic solvents on lignocellulosic date palm residues by prior recalcitrance reduction. Ind Eng Chem Res. 2017;56:3167–74.
Muthusamy N. Chemical composition of brewers spent grain: a review.
Int J Sci Environ Technol. 2014;3–6:2109–12.
De Sancho SO, da Silva ARA, de Dantas ANS, Magalhães TA, Lopes GS, Rodrigues S, da Costa JMC, Fernandes FN, de Silva MGV. Characterization of the industrial residues of seven fruits and prospection of their potential application as food supplements. J Chem. 2015;264284:8.
Narita Y, Inouye K. Review on utilization and composition of coffee silverskin. Food Res Int. 2014;61:16–22.
Sepelev I, Galoburda R. Industrial potato peel waste application in food production: a review. Res Rural Dev. 2015;1:130–6.
Kumar A, Parikh B, Pravakar M. Natural deep eutectic solvent mediated pretreatment of rice straw: bioanalytical characterization of lignin extract and enzymatic hydrolysis of pretreated biomass residue. Environ Sci Pollut Res. 2016;23:9265–75.
Patent number WO2013153203 A1.
Camire ME, Violette D, Dougherty MP, McLaughlin MA. Potato peel dietary fiber composition: effects of peeling and extrusion cooking processes. J Agric Food Chem. 1997;45:1404–8.
Alghooneh A, Amini AM, Behrouzian F, Razavi SMA. Characterisation of cellulose from coffee silverskin. Int J Food Prop. 2017;20(11):2830–43.
Kemppainen K, Rommi K, Holopainen U, Kruus K. Steam explosion of brewer's spent grain improves enzymatic digestibility of carbohydrates and affects solubility and stability of proteins. Appl Biochem Biotechnol. 2016;180:94–108.
Liang X, Ran J, Sun J, Wang T, Jiao Z, He H, Zhu M. Steam-explosion-modified optimization of soluble dietary fiber extraction from apple pomace using response surface methodology. J Food. 2018;16(1):20–6.
Sluiter A, Hames B, Ruiz R, Scarlata C, Sluiter J, Templeton D, et al. Determination of structural carbohydrates and lignin in biomass. Golden: National Renewable Energy Laboratory (NREL); 2011. Report No.: NREL/TP-510-42618. Contract No.: DE-AC36-08-GO28308.
Nor NAM, Mustapha WAW, Hassan O. Deep eutectic solvent (DES) as a pretreatment for oil palm empty fruit bunch (OPEFB) in production of sugar. Procedia Chem.
2016;18:147–54.

Authors' contributions

The experiments were designed by AP and LR. The experiments were conducted by AP, FR, and MER. The manuscript was written by AP, GO, and AM. The study was directed by AP and AM. All authors read and approved the final manuscript.

Funding

This project has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No 654623. The results reflect only the authors' view, and the European Agency is not responsible for any use that may be made of the information they contain.

Author information

Istituto di Ricerche sulla Combustione, Consiglio Nazionale delle Ricerche, P.le V. Tecchio 80, 80125, Naples, Italy: Alessandra Procentese & Maria Elena Russo. Dipartimento di Ingegneria Chimica, dei Materiali e della Produzione Industriale, Università degli Studi di Napoli Federico II, P.le V. Tecchio 80, 80125, Naples, Italy: Francesca Raganati, Giuseppe Olivieri & Antonio Marzocchella. Department of Chemical and Biochemical Engineering, University of Western Ontario, 1151 Richmond Street, London, ON, N6A 3K7, Canada: Lars Rehmann. Correspondence to Alessandra Procentese.

Citation: Procentese A, Raganati F, Olivieri G, et al. Deep eutectic solvents pretreatment of agro-industrial food waste. Biotechnol Biofuels. 2018;11:37. https://doi.org/10.1186/s13068-018-1034-y

Keywords: Agro-industrial waste; Fermentable sugars; Deep eutectic solvents; Biomass pretreatment
Simplify: $(\sqrt{5})^4$. We have \[(\sqrt{5})^4 = (5^{\frac12})^4 = 5 ^{\frac12\cdot 4} = 5^2 = \boxed{25}.\]
Colour of Transition Metal Complexes (posted 26.06.2020)

Magnetism in transition metal complexes: all substances display some form of magnetism when placed in a magnetic field. Students will learn that transition metals tend to change color when a ligand donates a lone pair to the metal ion, forming a metal–ligand bond called a coordinate covalent bond. The ligand, in this case, is the ammonia found in Windex. The color change is most often seen with transition metals because the complex ion absorbs light at an energy that corresponds to that of a visible color.

Colors of coordination complexes: crystal field splitting. When ligands attach to a transition metal to form a coordination complex, the electrons in the d orbitals split into high-energy and low-energy orbitals.

Spin crossover (SCO) complexes are essentially d4–d7 first-row transition-metal octahedral complexes. The occurrence of spin transition (ST) in coordination compounds of transition-metal ions is governed by the relationship between the strength of the ligand field (the electrostatic field acting at the central metal ion) and the mean spin-pairing energy.

Aluminium is not a transition metal, but it is included because its hydroxide is amphoteric like chromium hydroxide, and its chemistry is similar in this respect; its hexaaqua ion is [Al(H2O)6]. As for the colors of transition metal complexes and compounds, silver and zinc ions both have d10 configurations: all of the d orbitals are filled, so no d–d transitions are possible and their compounds are typically colorless.
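The d10 argument above can be captured in a toy rule: a d–d transition, and hence this kind of color, requires a partly filled d subshell. A minimal sketch (the electron counts are standard values for these ions):

```python
# Why d0 and d10 ions (e.g. Sc3+, Zn2+, Ag+) are colourless: with an empty
# or completely filled d subshell there is no possible d–d transition.
D_ELECTRONS = {"Sc3+": 0, "Ti3+": 1, "Fe2+": 6, "Cu2+": 9, "Zn2+": 10, "Ag+": 10}

def has_dd_transition(ion: str) -> bool:
    """A d–d transition needs a partly filled d subshell (1–9 electrons)."""
    n = D_ELECTRONS[ion]
    return 0 < n < 10

print([ion for ion in D_ELECTRONS if not has_dd_transition(ion)])
# → ['Sc3+', 'Zn2+', 'Ag+']
```

Note that charge-transfer bands can still colour d0/d10 compounds (e.g. permanganate); the rule above only covers d–d transitions.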
Various transition and inner-transition metal complexes with bi-, tri- and tetradentate Schiff bases containing nitrogen and oxygen or sulfur donor atoms play an important role in biological systems (Malik et al., 2011; Raman et al., 2007). Metal complexes have been tested in vitro against a number of microorganisms, and the tested compounds exhibited significant activity.

Investigating the formulae of complex ions. Background: complex ions consist of a central metal ion surrounded by a specific number of molecules and ions.

Reactivity of transition metal complexes (H&S 3rd Ed., Chpt. 26). Four main types of reactivity:
• rates depend on the starting complex and the incoming ligand concentration
• sensitive to the nature of L' (but solvent effects can sometimes mask this)
• more likely for low coordination number complexes
Dissociative mechanism:
• equivalent to an SN1 reaction in organic chemistry

Transition metal complexes have a very wide range of colors within the visible spectrum; thus they are commonly found as ingredients in colored paint. Transition metals form coloured compounds and complexes. These colours can vary depending on the charge on the metal ion and the number and type of groups of atoms (called ligands) attached to the metal ion. In aqueous solutions, the ions form coloured complexes.
Yakubreddy Naini et al., Synthesis and Characterization of Transition Metal Complexes of Chlorpromazine: distilled water was used in all preparations. Metal-complex dyes are synthesised through the coordination of bi- or polyvalent transition metal ions with selective acid dyes. They are mainly applied to wool, silk, and nylon to achieve better wash fastness of the dyed fabrics compared with that obtained with the parent acid dye.

Color of transition metal complexes: synthesis of copper(II)-tetraphenylporphyrin, a metal complex, from tetraphenylporphyrin and copper(II) acetate monohydrate. Transition metal complexes often have spectacular colors caused by electronic transitions upon the absorption of light. A transition metal complex is a species consisting of a transition metal coordinated to (bonded to) one or more ligands (neutral or anionic).

Absorption and luminescence spectroscopy of transition metal compounds: from coordination geometries to excited-state properties (Christian Reber). Abstract: absorption spectra of transition metal compounds provide important experimental information for the understanding of their chemistry. The combination with luminescence spectra yields quantitative insight into excited-state properties.

Non-transition metals don't have partly filled d orbitals. Visible light is only absorbed if some energy from the light is used to promote an electron over exactly the right energy gap.
Absorption and luminescence spectroscopy of transition metal compounds: from coordination geometries to excited-state properties [1] Christian Reber* Abstract Absorption spectra of transition metal compounds provide important experimental information for the understanding of their chemistry. The combination with luminescence spectra yields quantitative insight on excited-state properties, in THE COLOUR OF TRANSITION METAL COMPLEXES The colour of iron complexes depends on the oxidation state of the iron and the ligands in the complex. … For example, if the electrons in an octahedral metal complex can absorb green light and get promoted from the d yz orbital to the d z 2 orbital, the compound will reflect all the colors except green. Hence, the complementary color of green will be observed as the color of the compound. reactions became popular by using the late transition metals nickel and palladium. More recently, the increasing number of reactions using catalytic amounts of iron complexes indicates a renaissance of this metal in catalysis. 182 Yakubreddy Naini et al.: Synthesis and Characterization of Transition Metal Complexes of Chlorpromazine distilled water was used in all preparations. The Catalytic Activity of Transition Metal Complexes in Oxidation Reactions Utilizing Hydroperoxides J. J. Weers and C. E. Thomasson Petrolite Corporation Research & Development 369 Marshall Avenue St. Louis, MO. 63119 INTRODUCTION Hydroperoxides and oxy-based radicals have often been identified as residue and color formation precursors in hydrocarbon fuels during storage1. … Reactivity of Transition Metal Complexes (H&S 3rd Ed., Chpt. 
26) Four main types of reactivity: • rates depends on starting complex and incoming ligand concentration • sensitive to nature of L' (but solvent effects can sometimes mask this) • more likely for low coordination number complexes Dissociative mechanism: • equivalent to a SN1 reaction in organic chemistry • rates The Transition Metals block elements. We now turn to the more complex chemistry of the d‐block transition metals and the f‐ block lanthanides and actinides. Although these elements are less pervasive in our lives and less famous, 15/02/2018В В· Shapes of complex ions transition metal ions commonly form octahedral complexes with small ligands (e.g. H2O and NH3). transition metal ions commonly form transition metal complexes. The success of this approach comes from counteracting The success of this approach comes from counteracting the tendency of 3d electrons to delocalize, driven by the imperfect cancellation of reactions became popular by using the late transition metals nickel and palladium. More recently, the increasing number of reactions using catalytic amounts of iron complexes indicates a renaissance of this metal in catalysis. Properties of Transition Metal Complexes 1. Highly colored (absorb light in visible, transmit light which eye detects) 2. May exhibit multiple oxidation states of Metal Acetylacetonate Complexes Carbon Carbon 2 Contents Objectives 3 Introduction 3 Experiment 6 Safety 6 AI3+ Complex (t-butanol) in the presence and absence of paramagnetic transition metal complexes with Equation 1. Equation 1: X g = 3Δf 2πfm + X 0 + X 0 (d 0-d s) m Where X g is the mass susceptibility of the solute, Δf is the observed shift in the frequency of the … Transition metals form coloured compounds and complexes. 
These colours can vary depending on the charge on the metal ion, and the number and type of groups of atoms THE COLOUR OF TRANSITION METAL COMPLEXES The colour of iron complexes depends on the oxidation state of the iron and the ligands in the complex. … The Synthesis and Color of Cr3+ Complexes To learn about Coordination Compounds and Complex Ions. To learn about the Color of Transition Metal Complexes. Colours of Transition Metal Ions in Aqueous Solution Transition Metals a2-level-level-revision chemistry. transition metal complexes are given in table-4. The electronic spectrum of Schiff base ligand exhibit strong The electronic spectrum of Schiff base ligand exhibit strong absorption bands at 282nm and 294nm, which were attributed to ПЂ ПЂ* and n ПЂ* transitions respectively [14]., Absorption and luminescence spectroscopy of transition metal compounds: from coordination geometries to excited-state properties [1] Christian Reber* Abstract Absorption spectra of transition metal compounds provide important experimental information for the understanding of their chemistry. The combination with luminescence spectra yields quantitative insight on excited-state properties, in. Transition Metals Color the World (23 Favorites) AACT. transition metal complexes. The success of this approach comes from counteracting The success of this approach comes from counteracting the tendency of 3d electrons to delocalize, driven by the imperfect cancellation of, The Catalytic Activity of Transition Metal Complexes in Oxidation Reactions Utilizing Hydroperoxides J. J. Weers and C. E. Thomasson Petrolite Corporation Research & Development 369 Marshall Avenue St. Louis, MO. 63119 INTRODUCTION Hydroperoxides and oxy-based radicals have often been identified as residue and color formation precursors in hydrocarbon fuels during storage1. …. The modular synthesis of rare earth-transition metal Coordination Complex Resource Learn About Share and. 
Pt complexes have had the most effective medicinal properties against certain types of cancers, but in 1995 the first non platinum transition metal anitcancer … This graphic looks at the colours of transition metal ions when they are in aqueous solution (in water), and also looks at the reason why we see coloured compounds and complexes for transition metals. This helps explain, for example, why rust (iron oxide) is an orange colour, and why the Statue of. Electronic Structure and Reactivity of Transition Metal Concepts in Transition Metal Chemistry –Answers Chapter 1 1To obtain the electronic configuration of transition metal ions, remove first the 4s electrons and then the appropriate number of 3d electrons from the configuration of the free atom. Thus Mn4+ has configuration [Ar]3d3 and Cu3+ has the configuration [Ar]3d8. 2Element A is zirconium and B is tin. The elements with four valence In aqueous solution, transition metal ions exist as hydrated complexes with water molecules. (When an ionic crystal (When an ionic crystal dissolves in water, lattice energy has to be overcome and this is compensated by the hydration of ions, ie. complexing with Various transition and inner-transition metal complexes with bi, tri and tetradentate Schiff bases containing nitrogen and oxygen or sulfur donor atoms play an important role in biological systems (Malik et al., 2011, Raman et al., 2007). Transition Metal Complexes - Download as Word Doc (.doc), PDF File (.pdf), Text File (.txt) or read online. Scribd is the world's largest social reading and publishing site. Search Search transition metal complexes. The success of this approach comes from counteracting The success of this approach comes from counteracting the tendency of 3d electrons to delocalize, driven by the imperfect cancellation of The Transition Metals block elements. We now turn to the more complex chemistry of the d‐block transition metals and the f‐ block lanthanides and actinides. 
Although these elements are less pervasive in our lives and less famous, metal complexes have been tested in vitro against a number of microorganisms. The tested compounds exhibited The tested compounds exhibited significant activity. complexes; (iv) use of metal carbonyls as a source of zerovalent metals in the preparations of binary complexes of 9,10-phenanthrenequinone and (v) syn- thesis of cyclopentadienyl-alkyl and cyclopentadienyl-aryl derivatives of uranium(rv). INTRODUCTION In the past decades chemists have lived in the belief that alkyl and aryl derivatives of transition metals had to be unstable. This belief … Spin crossover (SCO) complexes are essentially $\mathrm{d}^4$в€'$\mathrm{d}^7$ first-row transition-metal octahedral complexes. The occurrence of ST in coordination compounds of transition-metal ions is governed by the relationship between the strength of the ligand field (the electrostatic field acting at the central metal ion) and the mean spin-pairing energy. Transition metals form coloured compounds and complexes. These colours can vary depending on the charge on the metal ion, and the number and type of groups of atoms Spin crossover (SCO) complexes are essentially $\mathrm{d}^4$в€'$\mathrm{d}^7$ first-row transition-metal octahedral complexes. The occurrence of ST in coordination compounds of transition-metal ions is governed by the relationship between the strength of the ligand field (the electrostatic field acting at the central metal ion) and the mean spin-pairing energy. 1.1.1 Why different colors are seen instead of only one color? If the color of the aqueous solution is due to the transition of electrons from one d level to another d level, there should be only one color. Transition metals form coloured compounds and complexes. 
These colours can vary depending on the charge on the metal ion, and the number and type of groups of atoms Chapter 9: Transition Metals 113 DEMONSTRATION 9.5 THE COLOUR OF TRANSITION METAL COMPLEXES The colour of iron complexes depends on the oxidation state of the iron and This is not a transition metal, but is included because its hydroxide is amphoteric like chromium hydroxide, and its chemistry is similar in this respect. [Al(H 2 O) 6 ] 5/04/2018В В· Non-transition metals don't have partly filled d orbitals. Visible light is only absorbed if some energy Visible light is only absorbed if some energy from the light is used to promote an electron over exactly the right energy gap. Chapter 9: Transition Metals 113 DEMONSTRATION 9.5 THE COLOUR OF TRANSITION METAL COMPLEXES The colour of iron complexes depends on the oxidation state of the iron and Colors of Coordination Complexes: Crystal Field Splitting. When ligands attach to a transition metal to form a coordination complex, electrons in the d orbital split into high energy and low energy orbitals. Transition Metal Complexes A ligand is a molecule or ion that bonds to a metal ion by donating one or more pairs of electrons. The nucleophiles from organic chemistry and Lewis bases from more general inorganic chemistry fulfil the same role. 1 Transition metal complexes uA transition metal complex is species consisting of a transition metal coordinated (bonded to) one or more ligands (neutral or 182 Yakubreddy Naini et al.: Synthesis and Characterization of Transition Metal Complexes of Chlorpromazine distilled water was used in all preparations. Pt complexes have had the most effective medicinal properties against certain types of cancers, but in 1995 the first non platinum transition metal anitcancer … Transition Metals form coloured compounds and complexes. These colours can be vary depending on the charge on metal ion, and the number and type of groups of atoms (called ligands) attached to the metal ion. 
In aqueous solutions, the ions form colours with complexes. 1 Transition metal complexes uA transition metal complex is species consisting of a transition metal coordinated (bonded to) one or more ligands (neutral or trans-Dichlorobis(ethylenediamine)cobalt(III) Chloride To learn about Coordination Compounds and Complex Ions. To learn about the Color of Transition Metal Complexes. The Color of Transition Metal Complexes!Color results when a complex absorbs frequencies in the visible region of the spectrum, causing transitions from the ground electronic state to This graphic looks at the colours of transition metal ions when they are in aqueous solution (in water), and also looks at the reason why we see coloured compounds and complexes for transition metals. This helps explain, for example, why rust (iron oxide) is an orange colour, and why the Statue of There is transition metal literature precedent for using pd as a bridging ligand to synthesise multi-metallic complexes. The first transition metal complexes of pd were synthesised by Pt complexes have had the most effective medicinal properties against certain types of cancers, but in 1995 the first non platinum transition metal anitcancer … Spin crossover (SCO) complexes are essentially $\mathrm{d}^4$−$\mathrm{d}^7$ first-row transition-metal octahedral complexes. The occurrence of ST in coordination compounds of transition-metal ions is governed by the relationship between the strength of the ligand field (the electrostatic field acting at the central metal ion) and the mean spin-pairing energy. Reactivity of Transition Metal Complexes (H&S 3rd Ed., Chpt. 
26) Four main types of reactivity: • rates depends on starting complex and incoming ligand concentration • sensitive to nature of L' (but solvent effects can sometimes mask this) • more likely for low coordination number complexes Dissociative mechanism: • equivalent to a SN1 reaction in organic chemistry • rates Teacher Note: Some of the transition metal complexes formed here are transition metal ions, while others are transition metal compounds. This idea is eliminated from the background information. This idea is eliminated from the background information. of Metal Acetylacetonate Complexes Carbon Carbon 2 Contents Objectives 3 Introduction 3 Experiment 6 Safety 6 AI3+ Complex (t-butanol) in the presence and absence of paramagnetic transition metal complexes with Equation 1. Equation 1: X g = 3Δf 2πfm + X 0 + X 0 (d 0-d s) m Where X g is the mass susceptibility of the solute, Δf is the observed shift in the frequency of the … complexes exhibit color. The energy difference is referred to as the Crystal Field Splitting The energy difference is referred to as the Crystal Field Splitting Energy Difference, and is given the symbol ∆. Transition metal ions: the chemistry of colour •Colour of complex corresponds to wavelengths of light not absorbed. Observed colour is usually trans-Dichlorobis(ethylenediamine)cobalt(III) Chloride To learn about Coordination Compounds and Complex Ions. To learn about the Color of Transition Metal Complexes. Thomas Krause, author of The Behavior-Based Safety Process and Co-founder/CEO of Behavioral Science Technology, Inc. 
states that "Industrial safety...is a serious subject both … Behaviour based safety training pdf Bindi Bindi Ralph Servati contact by phone: 843.815.2780 contact by e-mail: [email protected] Behavior Based Safety Training PBBS: A web based training program Next - Scl 90 R Manual Pdf Previous - Crystal Grids Hibiscus Moon Pdf Introduction To Container Homes And Buildings Pdf Community Development Project Proposal Pdf Theories Of Business Forecasting Pdf New Yankee Workshop Router Table Plans Pdf Pattern Printing Programs In C Pdf How To Shrink Pdf File Size Acrobat Grey Knights 7th Edition Codex Pdf Also Known As Robin Benway Pdf 1001 Movies To See Before You Die Pdf Comptia A+ Certification All-in-one For Dummies 3rd Edition Pdf Black Ice By Anne Stuart Novel Pdf Free Download Adobe Premiere Pro For Dummies Pdf Download Feminism In International Relations Pdf Blood Guts And Glory Pdf How To Take A Screenshot Of A Pdf On Mac The Radiant City Le Corbusier Book Pdf samantha young down london road pdf download can you create a pdf file on an ipad police and crime prevention pdf using questions in teaching pdf Gungahlin Combaning Rum Jungle Glen Niven Krondorf Petcheys Bay Thorpdale Tjuntjunjtarra Community O'Connor Jellat Jellat Tiwi Carrington St Clair Pioneer Stradbroke Kalamunda Florey Gunnedah Gunn Thane Belair Tewkesbury Southbank Bramley Bruce Bridgman Araluen Somerset Dam Kilburn Eddystone Wickliffe Coondle Weetangera Alstonville Malak Mount Chalmers Urrbrae Blackstone Heights Wunghnu Mandurah Stirling Barrengarry Lambells Lagoon Petrie Willson River Sisters Creek Arawata Pinjarra Kambah Broughton Vale Katherine South Menzies Allendale East Kingston Beach Glenlee Menora Griffith Oaklands Dundee Ebenezer Munno Para Downs Ben Lomond Kalkee Kings Park Hume Newcastle West Darwin River Sippy Downs Blyth Douglas River Lysterfield South Dinninup
Amandine Molliex, Jamshid Temirov, Jihun Lee, Maura Coughlin, Anderson P Kanagaraj, Hong Joo Kim, Tanja Mittag and Paul J Taylor. Phase separation by low complexity domains promotes stress granule assembly and drives pathological fibrillization.. Cell 163(1):123–33, 2015. Abstract Stress granules are membrane-less organelles composed of RNA-binding proteins (RBPs) and RNA. Functional impairment of stress granules has been implicated in amyotrophic lateral sclerosis, frontotemporal dementia, and multisystem proteinopathy-diseases that are characterized by fibrillar inclusions of RBPs. Genetic evidence suggests a link between persistent stress granules and the accumulation of pathological inclusions. Here, we demonstrate that the disease-related RBP hnRNPA1 undergoes liquid-liquid phase separation (LLPS) into protein-rich droplets mediated by a low complexity sequence domain (LCD). While the LCD of hnRNPA1 is sufficient to mediate LLPS, the RNA recognition motifs contribute to LLPS in the presence of RNA, giving rise to several mechanisms for regulating assembly. Importantly, while not required for LLPS, fibrillization is enhanced in protein-rich droplets. We suggest that LCD-mediated LLPS contributes to the assembly of stress granules and their liquid properties and provides a mechanistic link between persistent stress granules and fibrillar protein pathology in disease. Yang Li, Mahlon Collins, Rachel Geiser, Nadine Bakkar, David Riascos and Robert Bowser. RBM45 homo-oligomerization mediates association with ALS-linked proteins and stress granules.. Scientific reports 5:14262, January 2015. Abstract The aggregation of RNA-binding proteins is a pathological hallmark of amyotrophic lateral sclerosis (ALS) and frontotemporal lobar degeneration (FTLD). RBM45 is an RNA-binding protein that forms cytoplasmic inclusions in neurons and glia in ALS and FTLD. 
To explore the role of RBM45 in ALS and FTLD, we examined the contribution of the protein's domains to its function, subcellular localization, and interaction with itself and ALS-linked proteins. We find that RBM45 forms homo-oligomers and physically associates with the ALS-linked proteins TDP-43 and FUS in the nucleus. Nuclear localization of RBM45 is mediated by a bipartite nuclear-localization sequence (NLS) located at the C-terminus. RBM45 mutants that lack a functional NLS accumulate in the cytoplasm and form TDP-43 positive stress granules. Moreover, we identify a novel structural element, termed the homo-oligomer assembly (HOA) domain, that is highly conserved across species and promote homo-oligomerization of RBM45. RBM45 mutants that fail to form homo-oligomers exhibit significantly reduced association with ALS-linked proteins and inclusion into stress granules. These results show that RMB45 may function as a homo-oligomer and that its oligomerization contributes to ALS/FTLD RNA-binding protein aggregation. Ana\"ıs Aulas and Christine Vande Velde. Alterations in stress granule dynamics driven by TDP-43 and FUS: a link to pathological inclusions in ALS?. Frontiers in cellular neuroscience 9:423, January 2015. Abstract Stress granules (SGs) are RNA-containing cytoplasmic foci formed in response to stress exposure. Since their discovery in 1999, over 120 proteins have been described to be localized to these structures (in 154 publications). Most of these components are RNA binding proteins (RBPs) or are involved in RNA metabolism and translation. SGs have been linked to several pathologies including inflammatory diseases, cancer, viral infection, and neurodegenerative diseases such as amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD). In ALS and FTD, the majority of cases have no known etiology and exposure to external stress is frequently proposed as a contributor to either disease initiation or the rate of disease progression. 
Of note, both ALS and FTD are characterized by pathological inclusions, where some well-known SG markers localize with the ALS related proteins TDP-43 and FUS. We propose that TDP-43 and FUS serve as an interface between genetic susceptibility and environmental stress exposure in disease pathogenesis. Here, we will discuss the role of TDP-43 and FUS in SG dynamics and how disease-linked mutations affect this process. Hyun-Hee Ryu, Mi-Hee Jun, Kyung-Jin Min, Deok-Jin Jang, Yong-Seok Lee, Hyong Kyu Kim and Jin-A Lee. Autophagy regulates amyotrophic lateral sclerosis-linked fused in sarcoma-positive stress granules in neurons.. Neurobiology of aging 35(12):2822–31, December 2014. Abstract Mutations in fused in sarcoma (FUS), a DNA/RNA binding protein, have been associated with familial amyotrophic lateral sclerosis (fALS), which is a fatal neurodegenerative disease that causes progressive muscular weakness and has overlapping clinical and pathologic characteristics with frontotemporal lobar degeneration. However, the role of autophagy in regulation of FUS-positive stress granules (SGs) and aggregates remains unclear. We found that the ALS-linked FUS(R521C) mutation causes accumulation of FUS-positive SGs under oxidative stress, leading to a disruption in the release of FUS from SGs in cultured neurons. Autophagy controls the quality of proteins or organelles; therefore, we checked whether autophagy regulates FUS(R521C)-positive SGs. Interestingly, FUS(R521C)-positive SGs were colocalized to RFP-LC3-positive autophagosomes. Furthermore, FUS-positive SGs accumulated in atg5(-/-) mouse embryonic fibroblasts (MEFs) and in autophagy-deficient neurons. However, FUS(R521C) expression did not significantly impair autophagic degradation. Moreover, autophagy activation with rapamycin reduced the accumulation of FUS-positive SGs in an autophagy-dependent manner. 
Rapamycin further reduced neurite fragmentation and cell death in neurons expressing mutant FUS under oxidative stress. Overall, we provide a novel pathogenic mechanism of ALS associated with a FUS mutation under oxidative stress, as well as therapeutic insight regarding FUS pathology associated with excessive SGs. Physiological protein aggregation run amuck: stress granules and the genesis of neurodegenerative disease.. Discovery medicine 17(91):47–52, 2014. Abstract Recent advances in neurodegenerative diseases point to novel mechanisms of protein aggregation. RNA binding proteins are abundant in the nucleus, where they carry out processes such as RNA splicing. Neurons also express RNA binding proteins in the cytoplasm and processes to enable functions such as mRNA transport and local protein synthesis. The biology of RNA binding proteins turns out to have important features that appear to promote the pathophysiology of amyotrophic lateral sclerosis and might contribute to other neurodegenerative disease. RNA binding proteins consolidate transcripts to form complexes, termed RNA granules, through a process of physiological aggregation mediated by glycine rich domains that exhibit low protein complexity and in some cases share homology to similar domains in known prion proteins. Under conditions of cell stress these RNA granules expand, leading to form stress granules, which function in part to sequester specialized transcript and promote translation of protective proteins. Studies in humans show that pathological aggregates occurring in ALS, Alzheimer's disease, and other dementias co-localize with stress granules. One increasingly appealing hypothesis is that mutations in RNA binding proteins or prolonged periods of stress cause formation of very stable, pathological stress granules. 
The consolidation of RNA binding proteins away from the nucleus and neuronal arbors into pathological stress granules might impair the normal physiological activities of these RNA binding proteins causing the neurodegeneration associated with these diseases. Conversely, therapeutic strategies focusing on reducing formation of pathological stress granules might be neuroprotective. Yun R Li, Oliver D King, James Shorter and Aaron D Gitler. Stress granules as crucibles of ALS pathogenesis.. The Journal of cell biology 201(3):361–72, 2013. Abstract Amyotrophic lateral sclerosis (ALS) is a fatal human neurodegenerative disease affecting primarily motor neurons. Two RNA-binding proteins, TDP-43 and FUS, aggregate in the degenerating motor neurons of ALS patients, and mutations in the genes encoding these proteins cause some forms of ALS. TDP-43 and FUS and several related RNA-binding proteins harbor aggregation-promoting prion-like domains that allow them to rapidly self-associate. This property is critical for the formation and dynamics of cellular ribonucleoprotein granules, the crucibles of RNA metabolism and homeostasis. Recent work connecting TDP-43 and FUS to stress granules has suggested how this cellular pathway, which involves protein aggregation as part of its normal function, might be coopted during disease pathogenesis. Regulated protein aggregation: stress granules and neurodegeneration.. Molecular neurodegeneration 7:56, January 2012. Abstract The protein aggregation that occurs in neurodegenerative diseases is classically thought to occur as an undesirable, nonfunctional byproduct of protein misfolding. This model contrasts with the biology of RNA binding proteins, many of which are linked to neurodegenerative diseases. RNA binding proteins use protein aggregation as part of a normal regulated, physiological mechanism controlling protein synthesis. The process of regulated protein aggregation is most evident in formation of stress granules. 
Stress granules assemble when RNA binding proteins aggregate through their glycine rich domains. Stress granules function to sequester, silence and/or degrade RNA transcripts as part of a mechanism that adapts patterns of local RNA translation to facilitate the stress response. Aggregation of RNA binding proteins is reversible and is tightly regulated through pathways, such as phosphorylation of elongation initiation factor 2$\alpha$. Microtubule associated protein tau also appears to regulate stress granule formation. Conversely, stress granule formation stimulates pathological changes associated with tau. In this review, I propose that the aggregation of many pathological, intracellular proteins, including TDP-43, FUS or tau, proceeds through the stress granule pathway. Mutations in genes coding for stress granule associated proteins or prolonged physiological stress, lead to enhanced stress granule formation, which accelerates the pathophysiology of protein aggregation in neurodegenerative diseases. Over-active stress granule formation could act to sequester functional RNA binding proteins and/or interfere with mRNA transport and translation, each of which might potentiate neurodegeneration. The reversibility of the stress granule pathway also offers novel opportunities to stimulate endogenous biochemical pathways to disaggregate these pathological stress granules, and perhaps delay the progression of disease. Natalie Gilks, Nancy Kedersha, Maranatha Ayodele, Lily Shen, Georg Stoecklin, Laura M Dember and Paul Anderson. Stress granule assembly is mediated by prion-like aggregation of TIA-1.. Molecular biology of the cell 15(12):5383–98, 2004. Abstract TIA-1 is an RNA binding protein that promotes the assembly of stress granules (SGs), discrete cytoplasmic inclusions into which stalled translation initiation complexes are dynamically recruited in cells subjected to environmental stress. 
The RNA recognition motifs of TIA-1 are linked to a glutamine-rich prion-related domain (PRD). Truncation mutants lacking the PRD domain do not induce spontaneous SGs and are not recruited to arsenite-induced SGs, whereas the PRD forms aggregates that are recruited to SGs in low-level-expressing cells but prevent SG assembly in high-level-expressing cells. The PRD of TIA-1 exhibits many characteristics of prions: concentration-dependent aggregation that is inhibited by the molecular chaperone heat shock protein (HSP)70; resistance to protease digestion; sequestration of HSP27, HSP40, and HSP70; and induction of HSP70, a feedback regulator of PRD disaggregation. Substitution of the PRD with the aggregation domain of a yeast prion, SUP35-NM, reconstitutes SG assembly, confirming that a prion domain can mediate the assembly of SGs. Mouse embryomic fibroblasts (MEFs) lacking TIA-1 exhibit impaired ability to form SGs, although they exhibit normal phosphorylation of eukaryotic initiation factor (eIF)2alpha in response to arsenite. Our results reveal that prion-like aggregation of TIA-1 regulates SG formation downstream of eIF2alpha phosphorylation in response to stress.
\begin{document}
\title{A new proof of the Hansen-Mullen irreducibility conjecture}
\author{Aleksandr Tuxanidy and Qiang Wang}
\address{School of Mathematics and Statistics, Carleton University, 1125 Colonel By Drive, Ottawa, Ontario, K1S 5B6, Canada.}
\email{[email protected], [email protected]}
\keywords{irreducible polynomials, primitive polynomials, Hansen-Mullen conjecture, symmetric functions, $q$-symmetric, discrete Fourier transform, finite fields.\\}
\thanks{The research of Qiang Wang is partially supported by NSERC of Canada.}
\date{\today}
\font\Bbb msbm10 at 12pt
\begin{abstract}
We give a new proof of the Hansen-Mullen irreducibility conjecture. The proof relies on an application of a (seemingly new) sufficient condition for the existence of elements of degree $n$ in the support of functions on finite fields. This connection to irreducible polynomials is made via the least period of the discrete Fourier transform (DFT) of functions with values in finite fields. We exploit this relation and prove, in an elementary fashion, that a relevant function related to the DFT of characteristic elementary symmetric functions (which produce the coefficients of characteristic polynomials) has a sufficiently large least period (except for some genuine exceptions). This bears a sharp contrast to previous techniques in the literature employed to tackle the existence of irreducible polynomials with prescribed coefficients.
\end{abstract}
\maketitle
\section{Introduction}
Let $q$ be a power of a prime $p$, let $\mathbb{F}_{q}$ be the finite field with $q$ elements, and let $n \geq 2$. In 1992, Hansen-Mullen \cite{hansen-mullen} conjectured (in Conjecture B there; see Theorem \ref{thm: hansen-mullen} below) that, except for a few genuine exceptions, there exist irreducible (and more strongly primitive; see Conjecture A) polynomials of degree $n$ over $\mathbb{F}_{q}$ with any {\em one} of their coefficients prescribed to any value.
Conjecture B (appearing as Theorem \ref{thm: hansen-mullen} below) was proven by Wan \cite{wan} in 1997 for $q> 19$ or $n \geq 36$, with the remaining cases being computationally verified soon after in \cite{ham-mullen}. In 2006, Cohen \cite{cohen 2006}, particularly building on some of the work of Fan-Han \cite{fan-han} on $p$-adic series, proved there exists a primitive polynomial of degree $n \geq 9$ over $\mathbb{F}_{q}$ with any one of its coefficients prescribed. The remaining cases of Conjecture A were settled by Cohen-Pre\v{s}ern in \cite{cohen-presern 2006, cohen-presern 2008}. Cohen \cite{cohen 2006} and Cohen-Pre\v{s}ern \cite{cohen-presern 2006, cohen-presern 2008} also gave theoretical explanations for the small cases of $q,n$ missed out in Wan's original proof \cite{wan}. First, for a polynomial $h(x) \in \mathbb{F}_{q}[x]$ and an integer $w$, we denote by $[x^w]h(x)$ the coefficient of $x^w$ in $h(x)$. \begin{thm}\label{thm: hansen-mullen} Let $q$ be a power of a prime, let $c \in \mathbb{F}_{q}$, and let $n \geq 2$ and $w$ be integers with $1 \leq w \leq n$. If $w = n$, assume that $c \neq 0$. If $(n,w,c) = (2,1,0)$, further assume $q$ is odd. Then there exists a monic irreducible polynomial $P(x)$ of degree $n$ over $\mathbb{F}_{q}$ with $[x^{n-w}] P(x) = c$. \end{thm} The Hansen-Mullen conjectures have since been generalized to encompass results on the existence of irreducible and particularly primitive polynomials with {\em several} prescribed coefficients (see for instance \cite{garefalakis, George paper, George thesis, pollack, Ha} for general irreducibles and \cite{Cohen 2004, han, ren, Shparlinski} for primitives). In particular Ha \cite{Ha}, building on some of the work of Pollack \cite{pollack} and Bourgain \cite{Bourgain}, has recently proved that, for large enough $q,n$, there exist irreducibles of degree $n$ over $\mathbb{F}_{q}$ with roughly any $n/4$ coefficients prescribed to any value.
This seems to be the current record on the number of {\em arbitrary} coefficients one may prescribe to any values in an irreducible polynomial of degree $n$. The above are existential results obtained through asymptotic estimates. However there is also intensive research on the {\em exact} number of irreducible polynomials with some prescribed coefficients. See for instance \cite{Carlitz, Fitzgerald-Yucas, KMRV, KPW} and references therein for some work in this area. See also \cite{TW} for primitives and $N$-free elements in special cases. There are some differences of approach in tackling existence questions of either general irreducible or primitive polynomials with prescribed coefficients. For instance, when working on irreducibles, and following in the footsteps of Wan \cite{wan}, it has been common practice to exploit the $\mathbb{F}_{q}[x]$-analogue of Dirichlet's theorem for primes in arithmetic progressions; all this is done via Dirichlet characters on $\mathbb{F}_{q}[x]$, $L$-series, zeta functions, etc. See for example \cite{George thesis}. Recently Pollack \cite{pollack} and Ha \cite{Ha}, building on some ideas of Bourgain \cite{Bourgain}, applied the circle method to prove the existence of irreducible polynomials with several prescribed coefficients. On the other hand, in the case of primitives, the problem is usually approached via $p$-adic rings or fields (to account for the inconvenience that Newton's identities ``break down'', in some sense, in fields of positive characteristic) together with Cohen's sieving lemma, Vinogradov's characteristic function, etc. (see for example \cite{fan-han, cohen 2006}). However there is one common feature these methods share, namely, when bounding the ``error'' terms comprised of character sums, the function field analogue of Riemann's hypothesis (Weil's bound) is used (perhaps without exception here). 
Nevertheless, as a consequence of its $O(q^{n/2})$ nature, a difficulty transpires in extending the $n/2$ threshold for the number of coefficients one can prescribe in irreducible or particularly primitive polynomials of degree $n$. As the reader can gather from all this, there seems to be a preponderance of the analytic method to tackle the existence problem of irreducibles and primitives with several prescribed coefficients. One then naturally wonders whether other viewpoints may be useful for tackling such problems. As Panario points out in \cite{Koc}, Chapter 6, Section 2.3, p.115, \say{{\em The long-term goal here is to provide existence and counting results for irreducibles with any number of prescribed coefficients to any given values. This goal is completely out of reach at this time. Incremental steps seem doable, but it would be most interesting if new techniques were introduced to attack these problems.}} In this work we take a different approach and give a new proof of the Hansen-Mullen irreducibility conjecture (or theorem), stated in Theorem \ref{thm: hansen-mullen}. We attack the problem by studying the least period of certain functions related to the discrete Fourier transform (DFT) of characteristic elementary symmetric functions (which produce the coefficients of characteristic polynomials). This bears a sharp contrast to previous techniques in the literature. The proof theoretically explains, in a unified way, every case of the Hansen-Mullen conjecture. These include the small cases missed out in Wan's original proof \cite{wan}, computationally verified in \cite{ham-mullen}. However we should point out that, in contrast, our proof has the disadvantage of not yielding estimates for the number of irreducibles with a prescribed coefficient. It merely asserts their existence.
We wonder whether some of the techniques introduced here can be extended to tackle the existence question for several prescribed coefficients, but for now we leave this to the consideration of the interested reader. The proof relies on an application of the sufficient condition in Lemma \ref{lem: factor of deg n}, which follows from that in (i) of the following lemma. First, for a primitive element $\zeta$ of $\mathbb{F}_{q}$ and a function $f : \mathbb{Z}_{q-1} \to \mathbb{F}_{q}$, the DFT of $f$ based on $\zeta$ is the function $\mathcal{F}_{\zeta}[f] : \mathbb{Z}_{q-1} \to \mathbb{F}_{q}$ given by $$ \mathcal{F}_{\zeta}[f](m) = \sum_{j \in \mathbb{Z}_{q-1}} f(j) \zeta^{mj}, \hspace{1em} m \in \mathbb{Z}_{q-1}. $$ Here $\mathbb{Z}_{q-1} := \mathbb{Z}/(q-1) \mathbb{Z}$. The inverse DFT is given by $\mathcal{F}_\zeta^{-1}[f] = -\mathcal{F}_{\zeta^{-1}}[f]$. For a function $g : \mathbb{Z}_{q-1} \to \mathbb{F}_{q}$, we say that $g$ has least period $r$ if $r$ is the smallest positive integer such that $g(m + \bar{r}) = g(m)$ for all $m \in \mathbb{Z}_{q-1}$. Let $\Phi_n(x) \in \mathbb{Z}[x]$ be the $n$-th cyclotomic polynomial. For a function $F$ on a set $A$, let $\operatorname{supp}(F) := \{a \in A \ : \ F(a) \neq 0\}$ be the support of $F$. \iffalse In this section we first characterize in Lemma \ref{lem: period of DFT} the least period of the DFT of a function $f : \mathbb{Z}_N \to \mathbb{F}_{q}$ in terms of specific roots of $f$, i.e., certain values $k \in \mathbb{Z}_N$ such that $f(k) = 0$. In particular we describe in Lemma \ref{lem: period criteria} precisely when the DFT of $f$ attains maximum least period $N$. As a consequence of this and of Proposition \ref{prop: deg n}, we give a sufficient condition, in Theorem \ref{thm: connection}, for a function $F : \mathbb{F}_{q^n} \to \mathbb{F}_{q^n}$ to have an element of degree $n$ over $\mathbb{F}_{q}$ in its support $\operatorname{supp}(F) := \{y \in \mathbb{F}_{q^n} \ : \ F(y) \neq 0\}$.
This condition is the following: The function $f : \mathbb{Z}_{q^n-1} \to \mathbb{F}_{q^n}$ given by $f(k) = F(\zeta^k)$, where $\zeta \in \mathbb{F}_{q^n}$ is primitive, is such that $\mathcal{F}_\zeta[f]$ has least period $r$ satisfying $r > (q^n-1)/\Phi_n(q)$. Additionally, we give in Theorem \ref{thm: connection} a necessary condition for a primitive element of $\mathbb{F}_{q^n}$ to be contained in the support of $F$. \begin{thm}\label{thm: connection} Let $q$ be a power of a prime, $n \geq 2$, and $\zeta$ be a primitive element of $\mathbb{F}_{q^n}$. Let $F : \mathbb{F}_{q^n} \to \mathbb{F}_{q^n}$ and $f : \mathbb{Z}_{q^n-1} \to \mathbb{F}_{q^n}$ be defined by $f(k) = F(\zeta^k)$. If $\mathcal{F}_\zeta[f]$ (or $\mathcal{F}^{-1}_\zeta[f]$) has least period $r$ satisfying $r > (q^n-1)/\Phi_n(q)$, then $\operatorname{supp}(F)$ contains an element of degree $n$ over $\mathbb{F}_{q}$. On the other hand, if $\operatorname{supp}(F)$ contains a primitive element of $\mathbb{F}_{q^n}$, then both $\mathcal{F}_\zeta[f]$ and $\mathcal{F}^{-1}_\zeta[f]$ have maximum least period $q^n-1$. \end{thm} \fi \begin{lem}\label{lem: revision} Let $q$ be a power of a prime, let $n \geq 2$, let $\zeta$ be a primitive element of $\mathbb{F}_{q^n}$, let $F : \mathbb{F}_{q^n} \to \mathbb{F}_{q^n}$, let $f : \mathbb{Z}_{q^n-1} \to \mathbb{F}_{q^n}$ be defined by $f(k) = F(\zeta^k)$, and let $r$ be the least period of $\mathcal{F}_\zeta[f]$ (which is the same as the least period of $\mathcal{F}_{\zeta}^{-1}[f]$). Then we have the following results. \\ (i) If $r \nmid (q^n-1)/\Phi_n(q)$, then $\operatorname{supp}(F)$ contains an element of degree $n$ over $\mathbb{F}_{q}$;\\ (ii) If $\operatorname{supp}(F)$ contains an element of degree $n$ over $\mathbb{F}_{q}$, then $r \nmid (q^d-1)$ for every positive divisor $d$ of $n$ with $d < n$;\\ (iii) If $\operatorname{supp}(F)$ contains a primitive element of $\mathbb{F}_{q^n}$, then $r = q^n-1$. 
\end{lem} In particular (i) implies the existence of an irreducible factor of degree $n$ for any polynomial $h(x) \in \mathbb{F}_{q}[x]$ satisfying a constraint on the least period as follows. Here $\mathbb{F}_{q^n}^\times$ and $L^\times$ denote the set of all invertible elements in $\mathbb{F}_{q^n}$ and $L$ respectively. \begin{lem}\label{lem: factor of deg n} Let $q$ be a power of a prime, let $n \geq 2$, let $h(x) \in \mathbb{F}_{q}[x]$, and let $L$ be any subfield of $\mathbb{F}_{q^n}$ containing the image $h(\mathbb{F}_{q^n}^\times)$. Define the polynomial $$ S(x) = \left(1 - h(x)^{\#L^\times} \right) \bmod\left( x^{q^n-1} - 1\right) \in \mathbb{F}_{q}[x]. $$ Write $S(x) = \sum_{i=0}^{q^n-2} s_i x^i$ for some coefficients $s_i \in \mathbb{F}_{q}$. If the cyclic sequence $(s_i)_{i=0}^{q^n-2}$ has least period $r$ satisfying $r \nmid (q^n-1)/\Phi_n(q)$, then $h(x)$ has an irreducible factor of degree $n$ over $\mathbb{F}_{q}$. \end{lem} Note Lemma \ref{lem: factor of deg n} immediately yields the following sufficient condition for a polynomial to be irreducible. \begin{prop}\label{prop: irreducible polynomial condition} With the notations of Lemma \ref{lem: factor of deg n}, if $h(x) \in \mathbb{F}_{q}[x]$ is of degree $n \geq 2$, and the cyclic sequence $(s_i)_{i=0}^{q^n-2}$ of the coefficients of $S(x)$ has least period $r$ satisfying $r \nmid (q^n-1)/\Phi_n(q)$, then $h(x)$ is irreducible. \end{prop} To give the reader a flavor for the essence of our proof as an application of Lemma \ref{lem: factor of deg n}, we give the following small example. \begin{eg} Let $q = 2$, let $n = 4$, and let \begin{align*} h(x) &= \sum_{0 \leq i_1 < i_2 \leq 3} x^{2^{i_1} + 2^{i_2} }\\ &= x^{12} + x^{10} + x^9 + x^6 + x^5 + x^3 \in \mathbb{F}_2[x]. \end{align*} Note that $h(\mathbb{F}_{2^4}) \subseteq \mathbb{F}_2$. 
In fact, for any $\xi \in \mathbb{F}_{2^4}$, $h(\xi)$ is the coefficient of $x^2$ in the characteristic polynomial of degree $4$ over $\mathbb{F}_2$ with root $\xi$. We may take $L = \mathbb{F}_2$ in Lemma \ref{lem: factor of deg n}; hence $\#L^\times = 1$. Thus \begin{align*} S(x) &:= \left(1 + h(x)^{\#L^\times}\right) \bmod\left(x^{2^4-1} + 1 \right) = h(x) + 1\\ &= x^{12} + x^{10} + x^9 + x^6 + x^5 + x^3 + 1 \in \mathbb{F}_2[x]. \end{align*} The cyclic sequence of coefficients $\mathbf{s} = s_0, s_1, \ldots, s_{2^4-2}$ of $S(x) = \sum_{i=0}^{2^4-2}s_i x^i$ is given by $$ \mathbf{s} = 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0. $$ One can easily check that the least period $r$ of $\mathbf{s}$ is $r = 2^4-1$, the maximum possible. Because $r = 2^4-1$ does not divide $(2^4-1)/\Phi_4(2) = 3$, Lemma \ref{lem: factor of deg n} implies that $h(x)$ has an irreducible factor $P(x)$ of degree $4$ over $\mathbb{F}_2$. Any root $\xi$ of $P(x)$ must satisfy $h(\xi) = 0$. This is the coefficient of $x^2$ in $P(x)$. Hence there exists an irreducible polynomial of degree $4$ over $\mathbb{F}_2$ with its coefficient of $x^2$ being zero. Indeed, $x^4 + x + 1$ is one such irreducible polynomial. \end{eg} The rest of this work goes as follows. In Section~\ref{section: dft} we review some preliminary concepts regarding the DFT on finite fields, convolution, least period of functions on cyclic groups, and cyclotomic polynomials. In Section~\ref{section: connection between DFT and irreducibles} we study the connection between the least period of the DFT of functions and irreducible polynomials. In particular we explicitly describe in Proposition \ref{prop: period of DFT} the least period of the DFT of functions, as well as prove Lemmas \ref{lem: revision} and \ref{lem: factor of deg n}. In Section~\ref{section: delta functions} we introduce the characteristic delta functions as the DFTs of characteristic elementary symmetric functions.
We then apply Lemma \ref{lem: factor of deg n} to give a sufficient condition, in Lemma \ref{lem: delta function}, for the existence of an irreducible polynomial with any one of its coefficients prescribed. This is given in terms of the least period of a certain function $\Delta_{w,c}$, closely related to the delta functions. We also review some basic results on $q$-symmetric functions and their convolutions; this will be needed in Section~\ref{section: proof of Hansen-Mullen}. Finally in Section~\ref{section: proof of Hansen-Mullen} we prove that the $\Delta_{w,c}$ functions have sufficiently large period. The proof of Theorem \ref{thm: hansen-mullen} then immediately follows from this. \section{Preliminaries}\label{section: dft} We recall some preliminary concepts regarding the DFT for finite fields, convolution, least period of functions on cyclic groups, and cyclotomic polynomials. Let $q$ be a power of a prime $p$, let $N \in \mathbb{N}$ such that $N \mid q-1$, and let $\zeta_N$ be a primitive $N$-th root of unity in $\mathbb{F}_{q}^*$ (the condition on $N$ guarantees the existence of $\zeta_N$). We shall use the common notation $\mathbb{Z}_N := \mathbb{Z}/ N \mathbb{Z}$. Now the DFT based on $\zeta_N$, on the $\mathbb{F}_{q}$-vector space of functions $f : \mathbb{Z}_N \to \mathbb{F}_{q}$, is defined by $$ \mathcal{F}_{\zeta_N}[f](i) = \sum_{j \in \mathbb{Z}_N} f(j) \zeta_N^{ij}, \hspace{1em} i \in \mathbb{Z}_N. $$ Note $\mathcal{F}_{\zeta_N}$ is a bijective linear operator with inverse given by $\mathcal{F}^{-1}_{\zeta_N} = N^{-1} \mathcal{F}_{\zeta_N^{-1}}$. For $f,g : \mathbb{Z}_N \to \mathbb{F}_{q}$, the convolution of $f,g$ is the function $f \otimes g : \mathbb{Z}_N \to \mathbb{F}_{q}$ given by $$ (f\otimes g)(i) = \sum_{\substack{j + k = i \\ j,k \in \mathbb{Z}_N} } f(j)g(k). 
$$ Inductively, $f_1 \otimes f_2 \otimes \cdots \otimes f_k = f_1 \otimes (f_2 \otimes \cdots \otimes f_k)$ and so $$ (f_1 \otimes \cdots \otimes f_k)(i) = \sum_{\substack{j_1 + \cdots + j_k = i \\ j_1, \ldots, j_k \in \mathbb{Z}_N}} f_1(j_1) \cdots f_k(j_k). $$ For $m \in \mathbb{N}$, we let $f^{\otimes m}$ denote the $m$-th convolution power of $f$, that is, the convolution of $f$ with itself $m$ times. The DFT and convolution are related by the fact that $$ \prod_{i=1}^k\mathcal{F}_{\zeta_N}[f_i] = \mathcal{F}_{\zeta_N}\left[ \bigotimes_{i=1}^k f_i\right]. $$ Since $f$ and $\mathcal{F}_{\zeta_N}[f]$ have values in $\mathbb{F}_{q}$ by definition, it follows from the relation above that $f^{\otimes q} = f$. Convolution is associative, commutative and distributive, with identity $\delta_0 : \mathbb{Z}_N \to \{0, 1\} \subseteq \mathbb{F}_p$, the Kronecker delta function defined by $\delta_0(i) = 1$ if $i = 0$ and $\delta_0(i) = 0$ otherwise. We set $f^{\otimes 0} = \delta_0$. Next we recall the concepts of a period and the least period of a function $f:\mathbb{Z}_N \to \mathbb{F}_{q}$. For $r \in \mathbb{N}$, we say that $f$ is {\em $r$-periodic} if $f(i) = f(i + \overline{r})$ for all $i \in \mathbb{Z}_N$. Clearly $f$ is $r$-periodic if and only if it is $\gcd(r, N)$-periodic. The smallest positive integer $r$ such that $f$ is $r$-periodic is called the {\em least period} of $f$. Note that the least period $r$ satisfies $r \mid R$ whenever $f$ is $R$-periodic. If the least period of $f$ is $N$, we say that $f$ has {\em maximum least period}. There are various operations on cyclic functions which preserve the least period. For instance, the {\em $k$-shift} function $f_k(i) := f(i + k)$ of $f$ has the same least period as $f$. The {\em reversal} function $f^*(i) := f(-(1 + i))$ of $f$ also has the same least period. Let $\sigma$ be a permutation of $\mathbb{F}_{q}$. The function $f^{\sigma}(i) := \sigma(f(i))$ keeps the least period of $f$ as well. 
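The identities above are easy to experiment with numerically in a prime field. The following is a minimal sketch, not part of the paper's formal development: the function values and the choice $q = 5$, $N = 4$, $\zeta_4 = 2$ are ours for illustration (note $2^2 = 4$ and $2^4 = 1$ in $\mathbb{F}_5$, so $2$ is a primitive $4$th root of unity).

```python
# Illustrative check of the DFT/convolution identities over the prime
# field F_5; the parameters q = 5, N = 4, zeta = 2 are chosen by us.
q, N, zeta = 5, 4, 2

def dft(f, root):
    """F_root[f](i) = sum_j f(j) * root^(i*j), computed in F_q."""
    return [sum(f[j] * pow(root, i * j, q) for j in range(N)) % q
            for i in range(N)]

def conv(f, g):
    """Cyclic convolution (f (*) g)(i) = sum_{j+k=i} f(j) g(k) in F_q."""
    return [sum(f[j] * g[(i - j) % N] for j in range(N)) % q
            for i in range(N)]

f = [1, 2, 0, 3]
g = [0, 1, 4, 2]

# The DFT turns convolution into a pointwise product ...
assert [a * b % q for a, b in zip(dft(f, zeta), dft(g, zeta))] \
    == dft(conv(f, g), zeta)

# ... and the inverse is F^{-1} = N^{-1} F_{zeta^{-1}}; here N^{-1} = 4
# and zeta^{-1} = 3 in F_5.
Ninv, zinv = pow(N, -1, q), pow(zeta, -1, q)
assert [Ninv * v % q for v in dft(dft(f, zeta), zinv)] == f
```

The same sketch also confirms that $\delta_0$ (the list `[1, 0, 0, 0]`) acts as the identity for `conv`.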
Next we recall a few elementary facts about cyclotomic polynomials. For $n \in \mathbb{N}$, the $n$-th cyclotomic polynomial $\Phi_n(x) \in \mathbb{Z}[x]$ is defined by $$ \Phi_n(x) = \prod_{k \in \left(\mathbb{Z}/n \mathbb{Z} \right)^\times}\left( x - \zeta_n^k\right), $$ where $\zeta_n \in \mathbb{C}$ is a primitive $n$-th root of unity and $\left(\mathbb{Z}/n \mathbb{Z} \right)^\times$ denotes the unit group modulo $n$. Since $x^n-1 = \prod_{d \mid n}\Phi_d(x)$, the M\"{o}bius inversion formula gives $\Phi_n(x) = \prod_{d \mid n}(x^{n/d} - 1)^{\mu(d)}$, where $\mu$ is the M\"{o}bius function. For any divisor $m$ of $n$, with $0 < m < n$, we note that $$ \dfrac{x^n-1}{x^m - 1} = \dfrac{\prod_{d \mid n} \Phi_d(x)}{\prod_{d \mid m} \Phi_d(x)} = \prod_{\substack{d \mid n\\ d \nmid m}}\Phi_d(x). $$ Hence \begin{equation}\label{eqn: cyclotomic is common divisor} \Phi_n(x) \mid \dfrac{x^n-1}{x^m - 1} \in \mathbb{Z}[x]. \end{equation} In fact, one can show that for $n \geq 2$, $$ \Phi_n(q) = \gcd \left\{ \dfrac{q^n-1}{q^d-1} \ : \ 1\leq d\mid n, \ d < n\right\} $$ and so $$ \dfrac{q^n-1}{\Phi_n(q)} = \operatorname{lcm}\left\{q^d - 1 \ : \ 1\leq d\mid n, \ d < n \right\}. $$ \iffalse In particular we have the following sufficient condition for an element to be of degree $n$ over $\mathbb{F}_{q}$. \begin{prop}\label{prop: deg n} Let $q$ be a power of a prime, let $n \geq 2$, let $\zeta$ be a primitive element of $\mathbb{F}_{q^n}$, and let $i$ be an integer. If $\Phi_n(q) \nmid i$, then $\deg_{\mathbb{F}_{q}}(\zeta^i) = n$. \end{prop} \begin{proof} On the contrary, suppose $\zeta^i \in \mathbb{F}_{q^d}$, and hence $\zeta^i \in \mathbb{F}_{q^d}^*$, for some proper divisor $d$ of $n$. Since $\mathbb{F}_{q^d}^*$ is a cyclic group with generator $\zeta^{(q^n-1)/(q^d-1) }$, it follows that $i$ is divisible by $(q^n-1)/(q^d-1)$. Because $\Phi_n(q) \mid (q^n-1)/(q^d-1)$, we get $\Phi_n(q) \mid i$, a contradiction. 
\end{proof} \fi Note also that \begin{align} \label{eqn: cyclotomic inequality} \Phi_n(q) &= \left| \Phi_n(q)\right| = \prod_{k \in (\mathbb{Z}/n\mathbb{Z})^{\times}}\left|q - \zeta_n^k\right| \nonumber \\ &> q-1 \end{align} for $n \geq 2$, since $|q-\zeta_n^k | > q-1$ for any primitive $n$-th root $\zeta_n^k \in \mathbb{C}$, whenever $n \geq 2$ (as can be seen geometrically by looking at the complex plane) \footnote{The elementary facts in (\ref{eqn: cyclotomic is common divisor}) and (\ref{eqn: cyclotomic inequality}) have some historical significance. For instance, these make an appearance in Witt's classical proof of Wedderburn's theorem that every finite division ring is a field (see Chapter 5 in \cite{Aigner} for example).}. \section{Least period of the DFT and connection to irreducible polynomials}\label{section: connection between DFT and irreducibles} In this section we study a connection between the least period of the DFT of a function and irreducible polynomials. We start off by giving an explicit formula in Proposition \ref{prop: period of DFT} for the least period of the DFT of a function $f : \mathbb{Z}_N \to \mathbb{F}_{q}$ in terms of the values in its support. Then we prove Lemmas \ref{lem: revision} and \ref{lem: factor of deg n}. \iffalse As a consequence of this and of Proposition \ref{prop: deg n}, we give a sufficient condition, in Theorem \ref{lem: revision}, for a function $F : \mathbb{F}_{q^n} \to \mathbb{F}_{q^n}$ to have an element of degree $n$ over $\mathbb{F}_{q}$ in its support $\operatorname{supp}(F) := \{y \in \mathbb{F}_{q^n} \ : \ F(y) \neq 0\}$. This condition is the following: The function $f : \mathbb{Z}_{q^n-1} \to \mathbb{F}_{q^n}$ given by $f(k) = F(\zeta^k)$, where $\zeta \in \mathbb{F}_{q^n}$ is primitive, is such that $\mathcal{F}_\zeta[f]$ has least period $r$ satisfying $r > (q^n-1)/\Phi_n(q)$. 
Additionally, we give in Theorem \ref{lem: revision} a necessary condition for a primitive element of $\mathbb{F}_{q^n}$ to be contained in the support of $F$. \fi First we may identify, in the usual way, elements of $\mathbb{Z}_{N} = \mathbb{Z} / N \mathbb{Z}$ with their canonical representatives in $\mathbb{Z}$ and vice versa. In particular this endows $\mathbb{Z}_{N}$ with the natural ordering in $\mathbb{Z}$. We may also sometimes abuse notation and write $a \mid \bar{b}$ for $a \in \mathbb{Z}$ and $\bar{b} \in \mathbb{Z}_{N} $ to state that $a$ divides the canonical representative of $\bar{b}$, and write $a \nmid \bar{b}$ to state the opposite. For an integer $k$ and a non-empty set $A = \{a_1, \ldots, a_s\}$, we write $\gcd(k, A) := \gcd(k, a_1, \ldots, a_s)$. \begin{prop}\label{prop: period of DFT} Let $q$ be a power of a prime, let $N \mid q-1$, let $f : \mathbb{Z}_N \to \mathbb{F}_{q}$ and let $\zeta_N$ be a primitive $N$-th root of unity in $\mathbb{F}_{q}^*$. The least period of $\mathcal{F}_{\zeta_N}[f]$ and $\mathcal{F}^{-1}_{\zeta_N}[f]$ is given by $N/\gcd(N, \operatorname{supp}(f))$. \end{prop} \begin{proof} Note that $d := N/\gcd(N, \operatorname{supp}(f))$ is the smallest positive divisor of $N$ with the property that $N/d$ divides every element in $\operatorname{supp}(f)$. For the sake of brevity write $\widehat{f} = \mathcal{F}_{\zeta_N}[f]$. Now for $i \in \mathbb{Z}_N$ note that \begin{align*} \widehat{f}(i + d) &= \sum_{j \in \mathbb{Z}_N} f(j) \zeta_N^{(i+d)j} = \sum_{k=0}^{d-1} f\left(\frac{N}{d}k\right) \zeta_N^{(i+d) \frac{N}{d}k} = \sum_{k=0}^{d-1} f\left(\frac{N}{d}k\right) \zeta_N^{i\frac{N}{d}k}\\ &= \widehat{f}(i). \end{align*} Thus if $r$ is the least period of $\widehat{f}$, necessarily $r \leq d$. Since $f = \mathcal{F}_{\zeta_N}^{-1}[ \ \widehat{f} \ ]$, then $$ f(i) = N^{-1} \sum_{j \in \mathbb{Z}_N} \widehat{f}(j) \zeta_N^{-ij}, \hspace{2em} i \in \mathbb{Z}_N. 
$$ Hence for $i \in \mathbb{Z}_N$ we have \begin{align*} Nf(i) &= \sum_{j \in \mathbb{Z}_{N}} \widehat{f}(j) \zeta_N^{-ij} = \sum_{j=0}^{r-1}\widehat{f}(j) \zeta_N^{-ij} + \sum_{j=r}^{2r-1} \widehat{f}(j) \zeta_N^{-ij} + \cdots + \sum_{j = \left(\frac{N}{r}-1\right)r}^{N-1} \widehat{f}(j)\zeta_N^{-ij} \\ &= \sum_{j=0}^{r-1} \widehat{f}(j) \zeta_N^{-ij} + \sum_{j=0}^{r-1} \widehat{f}(j+r)\zeta_N^{-i(j + r)} + \cdots + \sum_{j=0}^{r-1} \widehat{f}\left(j + \left(\frac{N}{r}-1\right)r \right) \zeta_N^{-i \left(j + \left(\frac{N}{r}-1\right)r \right)}\\ &= \sum_{j=0}^{r-1} \widehat{f}(j)\zeta_N^{-ij} + \zeta_N^{-ir}\sum_{j=0}^{r-1} \widehat{f}(j)\zeta_N^{-ij} + \cdots + \zeta_N^{-i\left(\frac{N}{r}-1\right)r}\sum_{j=0}^{r-1} \widehat{f}(j)\zeta_N^{-ij}\\ &= \sum_{k=0}^{\frac{N}{r} - 1} \zeta_N^{-irk } \sum_{j=0}^{r-1}\widehat{f}(j) \zeta_N^{-ij}. \end{align*} If $\frac{N}{r} \nmid i$, then $\zeta_N^{-ir} \neq 1$ and $\sum_{k=0}^{\frac{N}{r} - 1} \zeta_N^{-irk } = \frac{\zeta_N^{-iN} - 1}{ \zeta_N^{-ir} - 1} = 0$. It follows that $f(i) = 0$ whenever $\frac{N}{r} \nmid i$. Equivalently, if $f(i) \neq 0$, then $\frac{N}{r} \mid i$. Now the minimality of $d$ implies that $d \leq r$. But $r \leq d$ (see above) now yields $r = d$. With regards to the least period of $\mathcal{F}_{\zeta_N}^{-1}[f]$, we know that $\mathcal{F}_{\zeta_N}^{-1}[f] = N^{-1} \mathcal{F}_{\zeta_N^{-1}}[f]$. Since $\zeta_N^{-1}$ is a primitive $N$-th root of unity in $\mathbb{F}_{q}^*$ as well, the previous arguments similarly imply that $\mathcal{F}_{\zeta_N^{-1}}[f]$ has the least period $d$. Then so does the function $N^{-1} \mathcal{F}_{\zeta_N^{-1}}[f]$, a non-zero scalar multiple of $\mathcal{F}_{\zeta_N^{-1}}[f]$. \end{proof} \iffalse We obtain the following immediate consequence. Similar to $supp(F)$, we define $supp(f) = \{ y \in \mathbb{Z}_N : f(y) \neq 0 \}$. 
\begin{prop}\label{lem: period criteria} With the notations of Lemma \ref{lem: period of DFT}, let $r$ be the least period of $\mathcal{F}_\zeta[f]$ (and hence of $\mathcal{F}_\zeta^{-1}[f]$). Then for all positive divisors $d$ of $N$ with $d < r$, there exists $i \in \operatorname{supp}(f)$ with $N/d \nmid i$. In particular $\mathcal{F}_{\zeta_N}[f]$ (and $\mathcal{F}_{\zeta_N}^{-1}[f]$) has maximum least period $N$ if and only if for every divisor $d > 1$ of $N$ there exists $i \in \operatorname{supp}(f)$ such that $d \nmid i$. Finally, if $(\mathbb{Z}/N\mathbb{Z})^\times \cap \operatorname{supp}(f) \neq \emptyset$, then both $\mathcal{F}_{\zeta_N}[f]$ and $\mathcal{F}_{\zeta_N}^{-1}[f]$ have maximum least period $N$. \end{prop} \fi \iffalse Before we prove Theorem \ref{thm: connection}, we stop for a moment to notice that a function has maximum least period if and only if so does any proper self-convolution of it. \begin{prop} Let $N \mid q-1$, let $f : \mathbb{Z}_N \to \mathbb{F}_{q}$ and let $m \in \mathbb{N}$. Then $f$ has maximum least period if and only if so does $f^{\otimes m}$. \end{prop} \begin{proof} We may write $f = \mathcal{F}_{\zeta_N}[g]$ for some primitive $N$-th root of unity and some function $g : \mathbb{Z}_N \to \mathbb{F}_{q}$. By Lemma \ref{lem: period criteria}, $f$ has maximum least period if and only if for all divisors $d > 1$ of $N$ there exists $k \in \operatorname{supp}(g)$ such that $d \nmid k$. Since $g(k) \neq 0$ if and only if $g(k)^m \neq 0$ in $\mathbb{F}_{q}$, then $f$ has maximum least period if and only if for all divisors $d > 1$ of $N$ there exists $k \in \operatorname{supp}(g^m)$ such that $d \nmid k$. Because $f^{\otimes m} = \mathcal{F}_{\zeta_N}[g^m]$, this occurs if and only if $f^{\otimes m}$ has maximum least period, by Lemma \ref{lem: period criteria}. 
\end{proof} \fi \iffalse As an application of the previous lemmas we obtain a proof of the result in Theorem \ref{thm: connection} relating the existence, of elements of degree $n$ over $\mathbb{F}_{q}$ lying in the support of functions on $\mathbb{F}_{q^n}$, with the least period of the DFT of the functions' ``associate''. It also gives a necessary condition for a primitive element of $\mathbb{F}_{q^n}$ to be contained in the support of such functions. \begin{proof}[{\bf Proof of Theorem \ref{thm: connection}}] Assume $\mathcal{F}_\zeta[f]$ has least period $r$ satisfying $r > (q^n-1)/\Phi_n(q)$. Since $r$ is the least period, Lemma \ref{lem: period criteria} implies that for all divisors $d$ of $q^n-1$, with $d < r$, there exists $i \in \operatorname{supp}(f)$, and hence $\zeta^i \in \operatorname{supp}(F)$, such that $(q^n-1)/d \nmid i$. In particular, since $(q^n-1)/\Phi_n(q) < r$ by assumption, there exists $\zeta^i \in \operatorname{supp}(F)$ such that $(q^n-1)/[(q^n-1)/\Phi_n(q)] = \Phi_n(q) \nmid i$. By Proposition \ref{prop: deg n}, $\deg_{\mathbb{F}_{q}}(\zeta^i) = n$. Thus $\operatorname{supp}(F)$ contains an element of degree $n$ over $\mathbb{F}_{q}$. Assume $\operatorname{supp}(F)$ contains a primitive element of $\mathbb{F}_{q^n}$. Then there exists $k$ relatively prime to $q^n-1$ such that $\bar{k} \in \operatorname{supp}(f)$. Thus $(\mathbb{Z} / (q^n-1) \mathbb{Z})^\times \cap \operatorname{supp}(f) \neq \emptyset$. By Lemma \ref{lem: period criteria}, both $\mathcal{F}_{\zeta}[f]$ and $\mathcal{F}_{\zeta}^{-1}[f]$ have maximum least period $q^n-1$. \end{proof} \fi In particular, if $\zeta$ is primitive in $\mathbb{F}_{q}$ and $F(x) = \sum_{i \in I} a_i x^i \in \mathbb{F}_{q}[x]$ for some subset $I \subseteq [0, q-2]$ of integers with each $a_i \neq 0$, $i \in I$, then the least period of the $(q-1)$-periodic sequence $(F(\zeta^i))_{i \geq 0}$ is given by $(q-1)/\gcd(q-1, I)$. We now prove Lemma \ref{lem: revision}. 
\begin{proof}[{\bf Proof of Lemma \ref{lem: revision}}] (i) On the contrary, suppose that $\operatorname{supp}(F)$ contains no element of degree $n$ over $\mathbb{F}_{q}$. Then for each $m \in \operatorname{supp}(f)$ there exists a proper divisor $d$ of $n$ with $(q^n - 1)/(q^d-1) \mid m$. Since $\Phi_n(q) \mid (q^n-1)/(q^d-1)$ for all proper divisors $d$ of $n$, then $\Phi_n(q) \mid m$ for all $m \in \operatorname{supp}(f)$. Thus for all $k \in \mathbb{Z}_{q^n-1}$, $$ \hat{f}(k) = \sum_{j \in \mathbb{Z}_{q^n-1}} f(j) \zeta^{kj} = \sum_{a = 1}^{(q^n-1)/\Phi_n(q)} f\left(a \Phi_n(q) \right) \zeta^{k a \Phi_n(q) }, $$ where $\hat{f} = \mathcal{F}_\zeta[f]$. Note that $\hat{f}(k + (q^n-1)/\Phi_n(q)) = \hat{f}(k)$ for all $k \in \mathbb{Z}_{q^n-1}$. Thus $\hat{f}$ is $\frac{q^n-1}{\Phi_n(q)}$-periodic. Necessarily the least period of $\hat{f}$ divides $\frac{q^n-1}{\Phi_n(q)}$, a contradiction. \\ \\ (ii) Assume $\operatorname{supp}(F)$ contains an element of degree $n$ over $\mathbb{F}_{q}$. Then there exists $m \in \operatorname{supp}(f)$ with $(q^n-1)/(q^d-1) \nmid m$ for all proper divisors $d$ of $n$. Let $r$ be the least period of $\hat{f}$. By Proposition \ref{prop: period of DFT}, $(q^n-1)/r \mid m$. Since $(q^n-1)/(q^d-1) \nmid m$, then $r \nmid q^d-1$ for all proper divisors $d$ of $n$. \\ \\ (iii) Assume $\operatorname{supp}(F)$ contains a primitive element of $\mathbb{F}_{q^n}$. Then there exists $k$ relatively prime to $q^n-1$ such that $\bar{k} \in \operatorname{supp}(f)$. Thus $(\mathbb{Z} / (q^n-1) \mathbb{Z})^\times \cap \operatorname{supp}(f) \neq \emptyset$. It follows from Proposition \ref{prop: period of DFT} that both $\mathcal{F}_{\zeta}[f]$ and $\mathcal{F}_{\zeta}^{-1}[f]$ have maximum least period $q^n-1$. \end{proof} Note that, as the following three examples show, the sufficient (respectively necessary) conditions in Lemma \ref{lem: revision} are not necessary (respectively sufficient). 
These may possibly be improved in accordance with the needs of whoever wishes to apply these tools. Let us start off by showing that the sufficient condition in (i) is not necessary. \begin{eg} Recall that $(q^n-1)/\Phi_n(q) = \operatorname{lcm}\{q^d-1 \ : \ d \mid n, \ d < n\}$. Pick any $n$ with at least two prime factors. Then $(q^n-1)/\Phi_n(q) \nmid q^d-1$ for all $d \mid n$, $d < n$. Thus $\zeta^{\Phi_n(q)}$ is of degree $n$ over $\mathbb{F}_{q}$. Define the function $F : \mathbb{F}_{q^n} \to \mathbb{F}_{q^n}$ by $F(\zeta^{\Phi_n(q)}) = 1$ and $F(\xi) = 0$ for all other elements $\xi \in \mathbb{F}_{q^n}$. Thus $\operatorname{supp}(F)$ contains an element of degree $n$ over $\mathbb{F}_{q}$. The associate function $f : \mathbb{Z}_{q^n-1} \to \mathbb{F}_{q^n}$ is defined by $f(k) = 1$ if $k = \Phi_n(q)$ and $f(k) = 0$ otherwise. By Proposition \ref{prop: period of DFT}, the least period $r$ of $\mathcal{F}_{\zeta}[f]$ is the smallest positive divisor of $q^n-1$ such that $(q^n-1)/r \mid \Phi_n(q)$, since $\operatorname{supp}(f) = \{\Phi_n(q)\}$. This is $r = (q^n-1)/\Phi_n(q)$. Thus we obtain an example of a function which contains an element of degree $n$ over $\mathbb{F}_{q}$ in its support but for which the corresponding least period is a divisor of $(q^n-1)/\Phi_n(q)$. \end{eg} The following example shows that the necessary condition in (ii) is not sufficient. \begin{eg} Similarly as before, pick any $n$ with at least two prime factors. Then $(q^n-1)/\Phi_n(q) \nmid q^d-1$ for all $d \mid n$, $d < n$. Define $F : \mathbb{F}_{q^n} \to \mathbb{F}_{q^n}$ by $F(\zeta^k) = 1$ if $k = (q^n-1)/(q^d-1)$ for some $d \mid n$, $d < n$, and $F(\xi) = 0$ for all other elements $\xi \in \mathbb{F}_{q^n}$. Thus $\operatorname{supp}(F)$ has no element of degree $n$ over $\mathbb{F}_{q}$. This defines the associate function $f : \mathbb{Z}_{q^n-1} \to \mathbb{F}_{q^n}$ of $F$ with $\operatorname{supp}(f) = \{(q^n-1)/(q^d-1) \ : \ d \mid n, \ d < n\}$. 
Consider the smallest positive divisor $r$ of $q^n-1$, with $(q^n-1)/r \mid (q^n-1)/(q^d-1)$ for all proper divisors $d$ of $n$. Note that $r$ is divisible by each $q^d-1$, for $d \mid n$, $d < n$; it follows $r = \operatorname{lcm}\{q^d-1 \ : \ d \mid n, \ d < n\} = (q^n-1)/\Phi_n(q)$ with $r \nmid q^d-1$ for all $d \mid n$, $d < n$. By Proposition \ref{prop: period of DFT}, $r = (q^n-1)/\Phi_n(q)$ is the least period of $\mathcal{F}_\zeta[f]$. Thus we have constructed a function $F$ with $\operatorname{supp}(F)$ having no element of degree $ n$ over $\mathbb{F}_{q}$ but for which the corresponding least period $r$ satisfies $r \nmid (q^d-1)$ for all $d \mid n$, $d < n$. \end{eg} This last example shows that the necessary condition in (iii) is not sufficient. \begin{eg}\label{eg: prim} Pick $q,n$ such that $q^n-1$ has at least two non-trivial relatively prime divisors, say $a,b > 1$ with $a,b \mid (q^n-1)$ and $\gcd(a,b) = 1$. The smallest positive divisor $r$ of $q^n-1$ with $(q^n-1)/r \mid a,b$ is $r = q^n-1$. Now we note that the function $F : \mathbb{F}_{q^n} \to \mathbb{F}_{q^n}$ defined by $F(\zeta^a) = F(\zeta^b) = 1$ and $F(\xi) = 0$ for all other elements $\xi$ of $\mathbb{F}_{q^n}$, contains no primitive element in its support, but the corresponding least period of $\mathcal{F}_\zeta[f]$ is $q^n-1$, by Proposition \ref{prop: period of DFT}. \end{eg} \begin{rmk} We remark that Example \ref{eg: prim} together with Lemma \ref{lem: revision} (i) imply that for any such $a,b$, there exists $k \in \{a,b\}$ such that $(q^n-1)/(q^d-1) \nmid k $ for all proper divisors $d$ of $n$; that is, either $\zeta^a$ or $\zeta^b$ (or both) is an element of degree $n$ over $\mathbb{F}_{q}$. This may also have applications in determining whether a polynomial $h(x) \in \mathbb{F}_{q}[x]$ has an irreducible factor of degree $n$. 
Specifically, if there exist divisors $a,b \geq 1$ of $q^n-1$ with $\gcd(a,b) = 1$ and $h(\zeta^a) = h(\zeta^b) = 0$, then $h(x)$ has an irreducible factor of degree $n$. \end{rmk} Finally we prove Lemma~\ref{lem: factor of deg n}. \begin{proof}[{\bf Proof of Lemma \ref{lem: factor of deg n}}] As a function on $\mathbb{F}_{q^n}^\times$, note that $$ S(\xi) = \begin{cases} 1 & \mbox{ if } h(\xi) = 0\\ 0 & \mbox{ otherwise.} \end{cases} $$ Let $\zeta$ be a primitive element of $\mathbb{F}_{q^n}$ and define the function $f : \mathbb{Z}_{q^n-1} \to \mathbb{F}_{q}$ by $f(m) = s_m$. Thus $f$ has least period $r$ satisfying $r \nmid (q^n-1)/\Phi_n(q)$. Note that $S(\zeta^i) = \sum_{j} s_j \zeta^{ij} = \sum_j f(j) \zeta^{ij} = \mathcal{F}_\zeta[f](i)$ for each $i \in \mathbb{Z}_{q^n-1}$. Then by criterion (i) of Lemma \ref{lem: revision}, there exists an element of degree $n$ over $\mathbb{F}_{q}$ in the support of $S$. It follows that $h(x)$ has a root of degree $n$ over $\mathbb{F}_{q}$ and hence an irreducible factor of degree $n$ over $\mathbb{F}_{q}$. \end{proof} \section{Characteristic elementary symmetric and delta functions}\label{section: delta functions} In this section we apply Lemma \ref{lem: factor of deg n} to study the coefficients of irreducible polynomials. We first place the characteristic elementary symmetric functions in the context of their DFT, which we shall refer to here simply as delta functions. These delta functions are indicators, with values in a finite field, of the sets of elements of $\mathbb{Z}_{q^n-1}$ whose canonical integer representatives have a prescribed Hamming weight in their $q$-adic representation, with all $q$-digits belonging to the set $\{0,1\}$. Essentially, the characteristic elementary symmetric functions are characteristic generating functions of the sets that the delta functions indicate. Then we give in Lemma \ref{lem: delta function} a sufficient condition for the existence of an irreducible polynomial of degree $n$ with a prescribed coefficient. 
Because the delta functions are $q$-symmetric (see Definition \ref{def: q-sym}), we also review some useful facts needed in Section~\ref{section: proof of Hansen-Mullen}. For $\xi \in \mathbb{F}_{q^n}$, the characteristic polynomial $h_\xi(x) \in \mathbb{F}_{q}[x]$ of degree $n$ over $\mathbb{F}_{q}$ with root $\xi$ is given by $$ h_\xi(x) = \prod_{k=0}^{n-1}\left( x - \xi^{q^k} \right) = \sum_{w=0}^{n}(-1)^w \sigma_w(\xi) x^{n-w}, $$ where for $0 \leq w \leq n$, $\sigma_w(x) \in \mathbb{F}_{q}[x]$ is the {\em characteristic elementary symmetric} polynomial given by $\sigma_0(x) = 1$ and $$ \sigma_w(x) = \sum_{0 \leq i_1 < \cdots < i_w \leq n-1} x^{q^{i_1} + \cdots + q^{i_w}}, $$ for $1 \leq w \leq n$. In particular $\sigma_1 = \operatorname{Tr}_{\mathbb{F}_{q^n}/\mathbb{F}_{q}}$ is the (linear) trace function and $\sigma_n = N_{\mathbb{F}_{q^n}/\mathbb{F}_{q}}$ is the (multiplicative) norm function. Whenever $q = 2$ and $\xi \neq 0$, we have $\sigma_0(\xi) = \sigma_n(\xi) = 1$. If $\xi \neq 0$, then (in general) $h_{\xi^{-1}}(x) = (-1)^n \sigma_n(\xi^{-1}) x^n h_\xi(1/x) = h_\xi^*(x)$, where $h_\xi^*(x)$ is the (monic) {\em reciprocal} of $h_\xi(x)$. Thus $\sigma_w(\xi) = \sigma_n(\xi) \sigma_{n-w}(\xi^{-1})$. Clearly $h_\xi(x)$ is irreducible if and only if so is $h_{\xi}^*(x)$, and this occurs if and only if $\deg_{\mathbb{F}_{q}}(\xi) = n$. Next we introduce the characteristic delta functions and the sets they indicate. But first let us clarify some ambiguity in our notation: for $a,b \in \mathbb{Z}$, we denote by $a \bmod b$ the remainder of division of $a$ by $b$; that is, $a \bmod b$ is the unique integer $c$ in $\{0, 1, \ldots, b-1\}$ that is congruent to $a$ modulo $b$, and we write $c = a\bmod b$. Similarly, if $\bar{a} = a + b\mathbb{Z}$ is an element of $\mathbb{Z}_b$, we use the notation $\bar{a} \bmod b := a \bmod b$ to express the canonical representative of $\bar{a}$ in $\mathbb{Z}$. 
But we keep the usual notation $k \equiv a \pmod{b}$ to state that $b \mid (k-a)$. We can represent $a \in \mathbb{Z}_{q^n-1}$ uniquely by the $q$-adic representation $(a_0, \ldots, a_{n-1})_q = \sum_{i=0}^{n-1}a_i q^i$, with each $0\leq a_i \leq q-1$, of the canonical representative of $a$ in $\{0, 1, \ldots, q^n-2\} \subset \mathbb{Z}$. For the sake of convenience we write $a = (a_0, \ldots, a_{n-1})_q$. For $w \in [0, n] := \{0, 1, \ldots, n\}$, define the sets $\Omega(w) \subseteq \mathbb{Z}_{q^n-1}$ by $\Omega(0) = \{0\}$ and $$ \Omega(w) = \left\{ k \in \mathbb{Z}_{q^n-1} \ : \ k \bmod (q^n-1) = q^{i_1} + \cdots + q^{i_w}, \ 0 \leq i_1 < \cdots < i_w \leq n-1 \right\} $$ for $1 \leq w \leq n$. That is, $\Omega(w)$ consists of all the elements $k \in \mathbb{Z}_{q^n-1}$ whose canonical representatives in $\{0, 1, \ldots, q^n-2\} \subset \mathbb{Z}$ have Hamming weight $w$ in their $q$-adic representation $(a_0, \ldots, a_{n-1})_q = \sum_{i=0}^{n-1} a_i q^i$, with each $a_i \in \{0,1\}$. Note that this last condition, that each $a_i \in \{0,1\}$, is redundant when $q = 2$, since in general each $a_i \in [0, q-1]$ in the $q$-adic representation $t = (a_{0}, \ldots, a_m)_q$ of a non-negative integer $t = \sum_{i=0}^{m}a_i q^i$. When $q = 2$, note that $\Omega(n) = \emptyset$, since there is no integer in $\{0, 1, \ldots, 2^n-2\}$ with Hamming weight $n$ in its binary representation. Observe also that $|\Omega(w)| = {n \choose w}$ for each $0 \leq w \leq n$, unless $(q,w) = (2,n)$. Moreover $\Omega(v) \cap \Omega(w) = \emptyset$ whenever $v \neq w$, by the uniqueness of the base representation of integers. 
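These counting and disjointness properties of the sets $\Omega(w)$ can be checked directly for small parameters. The following sketch is ours (the helper `omega` and the choices $q = 3$, $n = 3$ are purely illustrative); it enumerates the canonical representatives that are sums of $w$ distinct powers of $q$:

```python
# Illustrative enumeration of the sets Omega(w) in Z_{q^n - 1}: classes
# whose canonical representative is a sum of w distinct powers of q.
from itertools import combinations
from math import comb

def omega(q, n, w):
    if w == 0:
        return {0}
    sums = (sum(q**i for i in idx) for idx in combinations(range(n), w))
    # Only sums that are canonical representatives (< q^n - 1) qualify.
    return {s for s in sums if s < q**n - 1}

q, n = 3, 3
sets = [omega(q, n, w) for w in range(n + 1)]
# |Omega(w)| = C(n, w) for each w (the exceptional case (q, w) = (2, n)
# does not occur here) ...
assert [len(s) for s in sets] == [comb(n, w) for w in range(n + 1)]
# ... and the Omega(w) are pairwise disjoint.
assert all(sets[v].isdisjoint(sets[w])
           for v in range(n + 1) for w in range(v + 1, n + 1))

# For q = 2 the weight-n sum 2^0 + ... + 2^{n-1} = 2^n - 1 is not a
# canonical representative, so Omega(n) is empty.
assert omega(2, 3, 3) == set()
```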
For $w \in [0, n]$, define the characteristic (finite field valued) function $\delta_w : \mathbb{Z}_{q^n-1} \to \mathbb{F}_p$ of the set $\Omega(w)$ by $$ \delta_w(k) = \begin{cases} 1 & \mbox{ if } k \in \Omega(w);\\ 0 & \mbox{ otherwise.} \end{cases} $$ Observe that our $\delta_0$ is the Kronecker delta function on $\mathbb{Z}_{q^n-1}$ with values in $\{0,1\} \subseteq \mathbb{F}_p$. \begin{lem}\label{lem: sigma delta} Let $\zeta$ be a primitive element of $\mathbb{F}_{q^n}$ and let $w \in [0, n]$. If $q = 2$, further assume that $w \neq n$. Then $$ \sigma_w(\zeta^k) = \mathcal{F}_\zeta[\delta_w](k), \hspace{2em} k \in \mathbb{Z}_{q^n-1}. $$ \end{lem} \begin{proof} Note $\sigma_0(\zeta^k) = 1$ for each $k$ and so $\sigma_0(\zeta^k) = \mathcal{F}_\zeta[\delta_0](k)$. Now let $ 1 \leq w \leq n$. By definition and the assumption that $(q,w) \neq (2,n)$, we have \begin{align*} \sigma_w(\zeta^k) &= \sum_{0 \leq i_1 < \cdots < i_w \leq n-1} \zeta^{k\left(q^{i_1} + \cdots + q^{i_w}\right)} = \sum_{j \in \mathbb{Z}_{q^n-1}} \delta_w(j) \zeta^{kj} \\ &= \mathcal{F}_\zeta[\delta_w](k) . \end{align*} \end{proof} These functions are related to various mathematical objects in literature: Let $m < q$, let $r_1, \ldots, r_m \in [1, n-1]$, and let $c_0, \ldots, c_{n-1} \in [0, m-1]$ such that $\sum_{i=1}^m r_i = \sum_{j=0}^{n-1} c_j$. View each $\delta_{r_1}, \ldots, \delta_{r_m}$ as having values in $\mathbb{Z}$. Then one can show that $$\delta_{r_1} \otimes \cdots \otimes \delta_{r_m}((c_0, \ldots, c_{n-1})_q)$$ is the number of $m \times n$ matrices, with entries in $\{0, 1\} \subset \mathbb{Z}$, such that the sum of the entries in row $i$, $1 \leq i \leq m$, is $r_i$, and the sum of the entries in column $j$, $0 \leq j \leq n-1$, is $c_j$. Matrices with 0--1 entries and prescribed row and column sums are classical objects appearing in numerous branches of pure and applied mathematics, such as combinatorics, algebra and statistics. 
See for instance the survey in \cite{Barvinok} and Chapter 16 in \cite{Lint}. An application of Lemma \ref{lem: factor of deg n} yields the following sufficient condition for the existence of irreducible polynomials with a prescribed coefficient. \begin{lem}\label{lem: delta function} Fix a prime power $q$ and integers $n \geq 2$ and $1 \leq w \leq n$. Fix $c \in \mathbb{F}_{q}$. If $q = 2$, further assume that $w \neq n$. If the function $\Delta_{w,c} : \mathbb{Z}_{q^n-1} \to \mathbb{F}_{q}$ given by $$ \Delta_{w,c} = \delta_0 - ((-1)^w \delta_w - c \delta_0)^{\otimes(q-1)} $$ has least period $r$ satisfying $r \nmid (q^n-1)/\Phi_n(q)$, then there exists an irreducible polynomial $P(x)$ of degree $n$ over $\mathbb{F}_{q}$ with $[x^{n-w}]P(x) = c$. \end{lem} \begin{proof} Take $h(x) = (-1)^w\sigma_w(x) - c \in \mathbb{F}_{q}[x]$ in Lemma \ref{lem: factor of deg n}. Since $\sigma_w(\mathbb{F}_{q^n}) \subseteq \mathbb{F}_{q}$, we can pick $L = \mathbb{F}_{q}$. Thus $S(x) \in \mathbb{F}_{q}[x] $ is given by $$ S(x) = \left[1 - ((-1)^w \sigma_w(x) - c)^{q-1}\right] \bmod\left(x^{q^n-1} - 1\right). $$ Let $\zeta$ be a primitive element of $\mathbb{F}_{q^n}$. By Lemma \ref{lem: sigma delta}, the linearity of the DFT, and the fact that $c = \mathcal{F}_\zeta[c \delta_0]$, we have $$ S(\zeta^i) = 1 - \left((-1)^w \sigma_w(\zeta^i) - c\right)^{q-1} = \mathcal{F}_\zeta[\delta_0](i) - \left(\mathcal{F}_\zeta\left[(-1)^w\delta_w - c \delta_0\right](i) \right)^{q-1}. $$ Since the product of DFTs is the DFT of the convolution, then, as a function on $\mathbb{F}_{q^n}$, \begin{align*} S &= \mathcal{F}_\zeta[\delta_0] - \mathcal{F}_\zeta\left[\left((-1)^w\delta_w - c \delta_0\right)^{\otimes(q-1)}\right] = \mathcal{F}_\zeta\left[ \delta_0 - \left((-1)^w\delta_w - c \delta_0\right)^{\otimes(q-1)} \right]\\ &= \mathcal{F}_\zeta[\Delta_{w,c}]. \end{align*} Thus $$ S(\zeta^m) = \sum_{i=0}^{q^n-2}\Delta_{w,c}(i) \zeta^{mi} $$ for each $m \in \mathbb{Z}_{q^n-1}$. 
As $S(x)$ is already reduced modulo $x^{q^n-1} - 1$, it follows (from the uniqueness of the DFT of a function) that $S(x) = \sum_{i=0}^{q^n-2}\Delta_{w,c}(i) x^i$. Since the least period of $\Delta_{w,c}$ is not a divisor of $(q^n-1)/\Phi_n(q)$ by assumption, Lemma \ref{lem: factor of deg n} implies $h(x)$ has an irreducible factor $P(x)$ of degree $n$ over $\mathbb{F}_{q}$. Any of the roots $\xi$ of $P(x)$ must satisfy $h(\xi) = 0$, that is, $(-1)^w \sigma_w(\xi) = c$. This is the coefficient of $x^{n-w}$ in $P(x)$. Hence $[x^{n-w}]P(x) = c$ with $P(x)$ irreducible of degree $n$ over $\mathbb{F}_{q}$. \end{proof} Note the delta functions also satisfy the property that \begin{equation}\label{eqn: sym property} \delta_{w}((a_0, \ldots, a_{n-1})_q) = \delta_{w}((a_{\rho(0)}, \ldots, a_{\rho(n-1)})_q) \end{equation} for every permutation $\rho$ of the indices in $[0, n-1]$. In particular such functions have a natural well-studied dyadic analogue in the {\em symmetric boolean} functions. These are boolean functions $f : \mathbb{F}_{2}^n \to \mathbb{F}_2$ with the property that $f(x_0, \ldots, x_{n-1}) = f(x_{\rho(0)}, \ldots, x_{\rho(n-1)})$ for every permutation $\rho \in \mathcal{S}_{[0, n-1]}$; hence the value of $f(x_0, \ldots, x_{n-1})$ depends only on the Hamming weight of $(x_0, \ldots, x_{n-1})$. See for example \cite{Canteaut, Castro} for some works on symmetric boolean functions. Nevertheless in our case the domain of these $\delta_w$ functions is $\mathbb{Z}_{q^n-1}$ rather than $\mathbb{F}_{2}^n$. Although one may still represent the elements of $\mathbb{Z}_{q^n-1}$ as $n$-tuples, say by using the natural $q$-adic representation, the arithmetic here is not as nice as in $\mathbb{F}_{2}^n$. One has to consider the possibility that a ``carry'' may occur when adding or subtracting (this can make things quite chaotic) and also worry about reduction modulo $q^n-1$ (although this is much easier to deal with). 
These issues will come up again in the following section. The symmetry property in (\ref{eqn: sym property}) of $\delta_w$ and of its convolutions will be exploited in the proof of Lemma \ref{lem: period of delta and hansen-mullen} for the case when $(w,c) = (n/2, 0)$. Before we move on to the following section, we need the fact in Lemma \ref{lem: convolution of q-sym is q-sym}. First, for a permutation $\rho \in \mathcal{S}_{[0, n-1]}$ of the indices in the set $[0, n-1]$, define the map $\varphi_\rho : \mathbb{Z}_{q^n-1} \to \mathbb{Z}_{q^n-1}$ by \begin{equation}\label{eqn: base q bijection} \varphi_\rho((a_0, \ldots, a_{n-1})_q) = (a_{\rho(0)}, \ldots, a_{\rho(n-1)})_q. \end{equation} Note that $\varphi_{\rho}$ is a permutation of $\mathbb{Z}_{q^n-1}$ with inverse $\varphi_\rho^{-1} = \varphi_{\rho^{-1}}$, for each $\rho \in \mathcal{S}_{[0, n-1]}$. For $k \in \mathbb{Z}_{q^n-1}$, let $\epsilon_i(k)$, $0 \leq i \leq n-1$, denote the digit of $q^i$ in the $q$-adic form of the canonical representative of $k$. Thus $0 \leq \epsilon_i(k) \leq q-1$. For $a,b \in \mathbb{Z}_{q^n-1}$ with $a + b \neq 0$, it is clear that if $\epsilon_i(a) + \epsilon_i(b) \leq q-1$ for every $0 \leq i \leq n-1$ (so that no carries occur when the canonical representatives are added), then $\epsilon_i(a + b) = \epsilon_i(a) + \epsilon_i(b)$ for every such $i$. One can also check, for any $a,b \in \mathbb{Z}_{q^n-1}$ such that $\epsilon_i(a) + \epsilon_i(b) \leq q-1$ holds for every $0 \leq i \leq n-1$, that $\varphi_\rho(a + b) = \varphi_\rho(a) + \varphi_\rho(b)$ for every $\rho \in \mathcal{S}_{[0, n-1]}$, regardless of whether $a + b = 0$ or not. By induction, $\varphi_\rho(a_1 + \cdots + a_s) = \varphi_\rho(a_1) + \cdots + \varphi_\rho(a_s)$ whenever $a_1, \ldots, a_s \in \mathbb{Z}_{q^n-1}$ satisfy $\epsilon_i(a_1) + \cdots + \epsilon_i(a_s) \leq q-1$ for every $0\leq i \leq n-1$. 
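These digit manipulations can be made concrete with a small computation. The sketch below is ours (the helpers `digits` and `phi` and the parameters $q = 3$, $n = 3$ are purely illustrative): permuting base-$q$ digits commutes with addition in $\mathbb{Z}_{q^n-1}$ precisely when no digit sum exceeds $q-1$, that is, when no carries occur.

```python
# Illustrative check that the digit permutation phi_rho is additive in the
# absence of carries, and fails to be additive when a carry occurs.
q, n = 3, 3
N = q**n - 1  # work with canonical representatives in {0, ..., N - 1}

def digits(a):
    """epsilon_i(a), 0 <= i <= n-1: base-q digits of a's representative."""
    return [(a // q**i) % q for i in range(n)]

def phi(rho, a):
    """phi_rho((a_0, ..., a_{n-1})_q) = (a_{rho(0)}, ..., a_{rho(n-1)})_q."""
    d = digits(a % N)
    return sum(d[rho[i]] * q**i for i in range(n))

rho = [1, 0, 2]  # transpose the digits of q^0 and q^1

# 10 = (1,0,1)_3 and 4 = (1,1,0)_3 have digit sums (2,1,1), all <= q - 1,
# so phi_rho is additive on this pair:
a, b = 10, 4
assert phi(rho, (a + b) % N) == (phi(rho, a) + phi(rho, b)) % N

# With a carry the identity fails: 2 = (2,0,0)_3, and 2 + 2 carries
# out of digit 0.
c = 2
assert phi(rho, (c + c) % N) != (phi(rho, c) + phi(rho, c)) % N
```

Note that a cyclic shift of the digits is just multiplication by $q$ modulo $q^n-1$ and hence additive even with carries, which is why the sketch uses a transposition rather than a cyclic permutation.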
\begin{defi}[{\bf q-symmetric}]\label{def: q-sym} For a function $f$ on $\mathbb{Z}_{q^n-1}$, we say that $f$ is {\em $q$-symmetric} if for all $a = (a_0, \ldots, a_{n-1})_q \in \mathbb{Z}_{q^n-1}$ and all permutations $\rho \in \mathcal{S}_{[0, n-1]}$, we have $f(\varphi_\rho(a)) = f(a)$; that is, $$f((a_{\rho(0)}, \ldots, a_{\rho(n-1)})_q) = f((a_0, \ldots, a_{n-1})_q). $$ \end{defi} Note that the $\delta_w$ functions are $q$-symmetric. Because $\epsilon_i(m) \leq 1$ for each $m \in \operatorname{supp}(\delta_w) = \Omega(w)$ and each $0 \leq i \leq n-1$, it follows from the lemma below that the convolution of at most $q-1$ delta functions is also $q$-symmetric. \begin{lem}\label{lem: convolution of q-sym is q-sym} Let $R$ be a ring and let $f_1, \ldots, f_s : \mathbb{Z}_{q^n-1} \to R$ be $q$-symmetric functions such that for each $a_k \in \operatorname{supp}(f_k)$, $1 \leq k \leq s$, we have $\epsilon_i(a_1) + \cdots + \epsilon_i(a_s) \leq q-1$ for every $0 \leq i \leq n-1$. Then $f_1 \otimes \cdots \otimes f_s$ is $q$-symmetric. \end{lem} \begin{proof} Recall that the assumption on the supports implies that $\varphi_\tau(a_1 + \cdots + a_s) = \varphi_\tau(a_1) + \cdots + \varphi_\tau(a_s)$ for any $a_k \in \operatorname{supp}(f_k)$, $1 \leq k \leq s$, and any $\tau \in \mathcal{S}_{[0,n-1]}$. Since each $f_k$, $1 \leq k \leq s$, is $q$-symmetric, we have $f_k(a) = f_k(\varphi_\tau(a))$ for every $a \in \mathbb{Z}_{q^n-1}$. In particular $a \in \operatorname{supp}(f_k)$ if and only if $\varphi_\tau(a) \in \operatorname{supp}(f_k)$; hence $\varphi_\tau(\operatorname{supp}(f_k)) = \operatorname{supp}(f_k)$. Now let $m \in \mathbb{Z}_{q^n-1}$ and let $\rho \in \mathcal{S}_{[0, n-1]}$.
Then it follows from the aforementioned observations that \begin{align*} (f_1 \otimes \cdots \otimes f_s)(\varphi_\rho(m)) &= \sum_{\substack{j_1 + \cdots + j_s = \varphi_\rho(m) \\ j_1, \ldots, j_s \in \mathbb{Z}_{q^n-1}}} f_1(j_1) \cdots f_s(j_s)\\ &= \sum_{\substack{j_1 + \cdots + j_s = \varphi_\rho(m) \\ j_1 \in \operatorname{supp}(f_1), \ldots, j_s \in \operatorname{supp}(f_s)}} f_1(j_1) \cdots f_s(j_s)\\ &= \sum_{\substack{\varphi_{\rho^{-1}}(j_1 + \cdots + j_s) = m \\ j_1 \in \operatorname{supp}(f_1), \ldots, j_s \in \operatorname{supp}(f_s)}} f_1(j_1) \cdots f_s(j_s)\\ &= \sum_{\substack{\varphi_{\rho^{-1}}(j_1) + \cdots + \varphi_{\rho^{-1}}(j_s) = m \\ j_1 \in \operatorname{supp}(f_1), \ldots, j_s \in \operatorname{supp}(f_s)}} f_1(j_1) \cdots f_s(j_s)\\ &= \sum_{\substack{j_1 + \cdots + j_s = m \\ \varphi_\rho(j_1) \in \operatorname{supp}(f_1), \ldots, \varphi_\rho(j_s) \in \operatorname{supp}(f_s)}} f_1(\varphi_\rho(j_1)) \cdots f_s(\varphi_\rho(j_s))\\ &= \sum_{\substack{j_1 + \cdots + j_s = m \\ \varphi_\rho(j_1) \in \operatorname{supp}(f_1), \ldots, \varphi_\rho(j_s) \in \operatorname{supp}(f_s)}} f_1(j_1) \cdots f_s(j_s)\\ &= \sum_{\substack{j_1 + \cdots + j_s = m \\ j_1 \in \varphi_{\rho^{-1}}(\operatorname{supp}(f_1)), \ldots, j_s \in \varphi_{\rho^{-1}}(\operatorname{supp}(f_s))}} f_1(j_1) \cdots f_s(j_s)\\ &= \sum_{\substack{j_1 + \cdots + j_s = m \\ j_1 \in \operatorname{supp}(f_1), \ldots, j_s \in \operatorname{supp}(f_s)}} f_1(j_1) \cdots f_s(j_s)\\ &= \sum_{\substack{j_1 + \cdots + j_s = m \\ j_1, \ldots, j_s \in \mathbb{Z}_{q^n-1}}} f_1(j_1) \cdots f_s(j_s)\\ &= (f_1 \otimes \cdots \otimes f_s)(m), \end{align*} as required. \end{proof} \iffalse \begin{lem} Let $\Lambda$ be the set of $n$-tuples with entries in $[0, q-1] \subset \mathbb{Z}$ excluding $(q-1, \ldots, q-1)$; that is, $$ \Lambda = \left\{ (\lambda_0, \ldots, \lambda_{n-1}) \in [0, q-1]^n \ : \ \lambda \neq (q-1, \ldots, q-1) \right\}. 
$$ Let $\mathcal{R}$ be a ring and let $S(x_0, \ldots, x_{n-1}) \in \mathcal{R}[x_0, \ldots, x_{n-1}]$ be a symmetric polynomial with form $$ S(x_0, \ldots, x_{n-1}) = \sum_{(\lambda_0, \ldots, \lambda_{n-1}) \in \Lambda} c_{(\lambda_0, \ldots, \lambda_{n-1})} x_0^{\lambda_0} \cdots x_{n-1}^{\lambda_{n-1}}, $$ for some coefficients $c_{(\lambda_0, \ldots, \lambda_{n-1})} \in \mathcal{R}$. Define the function $f : \mathbb{Z}_{q^n-1} \to \mathcal{R}$ by $f((\lambda_0, \ldots, \lambda_{n-1})_q) = c_{(\lambda_0, \ldots, \lambda_{n-1})}$. Then $f$ is $q$-symmetric. \end{lem} \begin{proof} Follows immediately from the fact that $S(x_0, \ldots, x_{n-1})$ is symmetric and so $c_{(\lambda_0, \ldots, \lambda_{n-1})} = c_{(\lambda_{\sigma(0)}, \ldots, \lambda_{\sigma(n-1)})}$ for all permutations $\sigma$ of $[0, n-1]$. \end{proof} \begin{lem}\label{lem: q-symmetric} Let $n \geq 2$, let $1 \leq w < n$ and let $1 \leq s \leq q-1$. Then for all $n$-tuples $(\lambda_0, \ldots, \lambda_{n-1}) \in [0, q-1]^n$, we have $$ \delta_w^{\otimes s}\left(\sum_{i=0}^{n-1} \lambda_i q^i \right) = \delta_w^{\otimes s}\left(\sum_{i=0}^{n-1} \lambda_{\sigma(i)} q^i \right) $$ for every permutation $\sigma$ of $[0, n-1]$. \end{lem} \begin{proof} Write $\lambda = (\lambda_0, \ldots, \lambda_{n-1})_q$ for the $q$-adic representation of the canonical representative of $\lambda \in \mathbb{Z}_{q^n-1}$. Note that $\delta_{w}^{\otimes s}(\lambda)$ is the number of ways, modulo $p$, to write $\lambda$ as a sum of $s$ ordered values in $\Omega(w) \subseteq \mathbb{Z}_{q^n-1}$. Since $\sigma_{w}(x)$ is the characteristic generating function of the set $\Omega(w)$ with every monomial of $\sigma_w^{s}(x)$ of degree $< q^n-1$ (because $1 \leq s \leq q-1$ and $w < n$), it follows that $\delta_w^{\otimes s}(\lambda) = [x^\lambda] \sigma_w^{s}(x)$. 
Recall that $\sigma_w(x) = e_w(x, x^q, \ldots, x^{q^{n-1}})$, where $e_w(x_0, \ldots, x_{n-1}) \in \mathbb{F}_{q}[x_0, \ldots, x_{n-1}]$ is the $w$-th elementary symmetric polynomial in $n$ indeterminates. It is not hard to see that every monomial $x_0^{m_0} \cdots x_{n-1}^{m_{n-1}}$ of $e_w^{s}(x_0, \ldots, x_{n-1})$ satisfies $0 \leq m_i \leq s \leq q-1$ for each $0 \leq i \leq n-1$, and that $(m_0, \ldots, m_{n-1}) \neq (q-1, \ldots, q-1)$ (since $w < n$). It then follows, by the bijection sending $(m_0, \ldots, m_{n-1}) \mapsto (m_0, \ldots, m_{n-1})_q$ (and the fact $\sigma_w(x) = e_w(x, x^q, \ldots, x^{q^n-1})$) that \begin{equation}\label{eqn: q-symmetric} \delta_w^{\otimes s}(\lambda) = \left[x^\lambda \right] \sigma_w^s(x) = \left[ x_0^{\lambda_0} \cdots x_{n-1}^{\lambda_{n-1}}\right] e_w^s(x_0, \ldots, x_{n-1}). \end{equation} Since $e_w^s(x_0, \ldots, x_{n-1})$ is symmetric, then $$ \left[ x_0^{\lambda_0} \cdots x_{n-1}^{\lambda_{n-1}}\right] e_w^s(x_0, \ldots, x_{n-1}) = \left[ x_0^{\lambda_{\sigma(0)}} \cdots x_{n-1}^{\lambda_{\sigma(n-1)}}\right] e_w^s(x_0, \ldots, x_{n-1}) $$ for every permutation $\sigma$ of $[0, n-1]$. Now the result follows from (\ref{eqn: q-symmetric}). \end{proof} \fi \section{Least period of $\Delta_{w,c}$ and proof of Theorem \ref{thm: hansen-mullen}}\label{section: proof of Hansen-Mullen} In this section we prove in Lemma \ref{lem: period of delta and hansen-mullen} that the $\Delta_{w,c}$ function of Lemma \ref{lem: delta function} has least period larger than $(q^n-1)/\Phi_n(q)$, at least in the cases that suffice for a proof of Theorem \ref{thm: hansen-mullen}. Note the proof of Lemma \ref{lem: period of delta and hansen-mullen} is of a rather elementary and constructive type nature. We then conclude the work with an immediate proof of Theorem~\ref{thm: hansen-mullen}. First for an integer $k = \sum_{i=0}^\infty \epsilon_i(k) q^i$, we let $s_q(k) = \sum_{i=0}^\infty \epsilon_i(k)$ denote the sum of the $q$-digits of $k$. 
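Before turning to the lemma, the claimed least periods can be sanity-checked computationally in tiny cases. The sketch below (our own illustration; we take $q$ prime so that arithmetic in $\mathbb{F}_q$ is just arithmetic modulo $q$) builds $\Delta_{w,c} = \delta_0 - \left((-1)^w\delta_w - c\,\delta_0\right)^{\otimes(q-1)}$ directly from the definitions and computes its least period on $\mathbb{Z}_{q^n-1}$.

```python
from itertools import combinations

def Delta(q, n, w, c):
    """Delta_{w,c} = delta_0 - ((-1)^w delta_w - c delta_0)^{(q-1)} as a list
    indexed by Z_{q^n - 1}; q is assumed prime so F_q-arithmetic is mod q."""
    N = q**n - 1
    f = [0] * N                       # f = (-1)^w delta_w - c delta_0
    for S in combinations(range(n), w):
        f[sum(q**i for i in S) % N] += (-1)**w
    f[0] -= c
    g = [0] * N
    g[0] = 1                          # delta_0, the convolution identity
    for _ in range(q - 1):            # g = f^{(q-1)} under convolution
        g = [sum(f[j] * g[(m - j) % N] for j in range(N)) % q
             for m in range(N)]
    D = [(-x) % q for x in g]
    D[0] = (D[0] + 1) % q
    return D

def least_period(D):
    """Smallest r > 0 with D(m + r) = D(m) on Z_N; it must divide N."""
    N = len(D)
    return next(r for r in range(1, N + 1)
                if N % r == 0 and all(D[m] == D[(m + r) % N] for m in range(N)))

# Case (i) of the lemma: c != 0 forces the maximum least period q^n - 1.
assert least_period(Delta(3, 2, 1, 1)) == 3**2 - 1
# Case (iii): q = 3 odd, n = 2, w = 1, c = 0 gives least period > q - 1.
assert least_period(Delta(3, 2, 1, 0)) > 3 - 1
```

For $q = 3$, $n = 2$, $w = 1$, $c = 0$ the computed least period is $4 = (q^2-1)/2$, consistent with the bound in case (iii) below.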
\begin{lem}\label{lem: period of delta and hansen-mullen} Let $q$ be a power of a prime, let $n \geq 2$, let $w$ be an integer with $1 \leq w \leq n/2$, and let $c \in \mathbb{F}_{q}$. If $c = 0$ and $n = 2$, further assume that $q$ is odd. Then the least period $r$ of $\Delta_{w,c}$ satisfies $r > (q^n-1)/\Phi_n(q)$. More precisely, we have the following three results: \\ (i) If $c \neq 0$, or $c = 0$ and $w \neq n/2$, or $n > 2$ and $q$ is even with $(w,c) = (n/2, 0)$, then $r = q^n-1$; \\ (ii) If $c = 0$, $q$ is odd, $n > 2$ and $w = n/2$, then $r \geq (q^n- 1)/2$; \\ (iii) If $c = 0$, $w = 1$, $q$ is odd and $n = 2$, then $r > q-1$. \end{lem} \begin{proof} We shall suppose that $0 < r < q^n-1$ is a period of $\Delta_{w,c}$ and either aim to obtain a contradiction or show that $r \geq (q^n-1)/2$ or $r > q-1$ as required, in accordance with the cases in (i), (ii), (iii). Now since $\Delta_{w,c}$ is $r$-periodic, $\Delta_{w,c}(m) = \Delta_{w,c}(m \pm r)$ for all $m \in \mathbb{Z}_{q^n-1}$. Because $q^n-1 = \sum_{i=0}^{n-1}(q-1)q^i$, we may write $r = \sum_{i=0}^{n-1} r_i q^i$ for some $q$-digits $r_i$ with each $0\leq r_i \leq q-1$, not all $r_i = q-1$, for $0 \leq i \leq n-1$. \\ \\ {\bf Case 1 ($c \neq 0$):} Assume $c \neq 0$. We shall prove that $\Delta_{w,c}$ has maximum least period in this case. On the contrary, suppose that $r$ is a period of $\Delta_{w,c}$ with $0 < r < q^n-1$. By the binomial theorem for convolution, \begin{align*} \left((-1)^w\delta_w -c\delta_0\right)^{\otimes(q-1)} &= \sum_{s=0}^{q-1} {q-1 \choose s} (-c)^{q-1-s} (-1)^{ws} \delta_w^{\otimes s}\\ &= \sum_{s=0}^{q-1} {q-1 \choose s} ((-1)^{w+1} c)^{-s} \delta_w^{\otimes s}. \end{align*} Hence \begin{align*} \Delta_{w,c} &:= \delta_0 - \left((-1)^w\delta_w -c\delta_0\right)^{\otimes(q-1)}\\ &= -\sum_{s=1}^{q-1} {q-1 \choose s} ((-1)^{w+1} c)^{-s} \delta_w^{\otimes s}. 
\end{align*} By Lucas' theorem, none of the binomial coefficients above are $0$ modulo $p$, where $p$ is the characteristic of $\mathbb{F}_{q}$. Now note for any $m \in \mathbb{Z}_{q^n-1}$ that $\delta_w^{\otimes s}(m)$ is the number, modulo $p$, of ways to write $m$ as a sum of $s$ ordered values in $\Omega(w)$. We avoid dealing with complicated expressions for this number; instead let us note a few simpler facts: (a) For $1 \leq s \leq q-1$, there occurs no carry in the $q$-adic addition of any $s$ non-negative integers with $q$-digits at most $1$. In particular, viewing $\Omega(w)$ as lying in $\{0, 1, \ldots, q^n-2\} \subset \mathbb{Z}$ in the natural way, we conclude there occurs no carry in the addition of any $s$ values in $\Omega(w)$, when $1 \leq s \leq q-1$. Since $w < n$ as well, any such addition of $1 \leq s \leq q-1$ elements in $\Omega(w)$ is strictly smaller than $q^n-1$. (b) $m \in \operatorname{supp}(\Delta_{w,c})$ if and only if there exists a unique $1 \leq s \leq q-1$ such that $m \in \operatorname{supp}(\delta_w^{\otimes s})$. In particular if $m \in \operatorname{supp}(\Delta_{w,c})$, then $s_q(m \bmod(q^n-1)) \leq (q-1)w $. Indeed, if $m \in \operatorname{supp}(\delta_w^{\otimes s})$ for $s$ with $1 \leq s \leq q-1$, then $s_q(m \bmod (q^n-1)) = sw$. Hence for every $k \neq s$ with $1 \leq k \leq q-1$, we get $\delta_w^{\otimes k}(m) = 0$ since $kw \neq sw$. (c) For any $1 \leq s \leq q-1$ and $t \in \Omega(w)$, we have $\delta_w^{\otimes s}(st) = 1$. Indeed, it is not hard to see there is exactly one way to write $st$ as a sum of $s$ values in $\Omega(w)$, namely as $st = t + \cdots + t$, $s$ times. It follows for every $s$ with $1 \leq s \leq q-1$ and every $t \in \Omega(w)$, that $st \in \operatorname{supp}(\Delta_{w,c})$. \iffalse First observe for $1 \leq s \leq q-1$, there occurs no carry in the $q$-ary addition of any $s$ non-negative integers with $q$-digits at most $1$. 
In particular, viewing $\Omega(w)$ as lying in $\{0, 1, \ldots, q^n-2\} \subset \mathbb{Z}$ in the natural way, we conclude there occurs no carry in the addition of any $s$ values in $\Omega(w)$, when $1 \leq s \leq q-1$. Since $w < n$ as well, any such addition of $1 \leq s \leq q-1$ elements in $\Omega(w)$ is strictly smaller than $q^n-1$. Consequently if $m \in \operatorname{supp}(\delta_w^{\otimes s})$ for $s$ with $1 \leq s \leq q-1$, then $s_q(m \bmod (q^n-1)) = sw$. In this case, for every $k \neq s$ with $1 \leq k \leq q-1$, we get $\delta_w^{\otimes k}(m) = 0$ since $kw \neq sw$. It follows that if $m \in \operatorname{supp}(\Delta_{w,c})$, then there exists a unique $s$, $1 \leq s \leq q-1$, for which $m \in \operatorname{supp}(\delta_w^{\otimes s})$; in this case $s_q(m \bmod(q^n-1)) = sw$. On the other hand, if $m \in \operatorname{supp}(\delta_w^{\otimes s})$ for some $1 \leq s \leq q-1$, then $m \in \operatorname{supp}(\Delta_{w,c})$. In particular if $m \in \operatorname{supp}(\Delta_{w,c})$, then $s_q(m \bmod(q^n-1)) \leq (q-1)w $. Next note that for any $1 \leq s \leq q-1$ and $t \in \Omega(w)$, that $\delta_w^{\otimes s}(st) = 1$. Indeed, it is not hard to see there is exactly one way to write $st$ as a sum of $s$ values in $\Omega(w)$, namely as $st = t + \cdots + t$, $s$ times. It follows for every $s$ with $1 \leq s \leq q-1$ and every $t \in \Omega(w)$, that $st \in \operatorname{supp}(\Delta_{w,c})$. \fi Having gathered a few facts about $\Delta_{w,c}$, we proceed with the proof: Clearly either $s_q(r) \leq (q-1)n/2$ or $s_q(r) > (q-1)n/2$. Suppose $s_q(r) \leq (q-1)n/2$. Let $M := \{i \in [0, n-1] \ : \ r_i = q-1\}$ and let $\eta := \#M$. Clearly $s_q(r) \geq (q-1)\eta$. It is impossible that $\eta > n-w$. Indeed, since $w \leq n/2$, we would have $\eta > n/2$ and $s_q(r) > (q-1)n/2$, a contradiction. Hence $\eta \leq n-w$ and so $w \leq n - \eta$. Then there exists a subset $W \subseteq [0, n-1] \setminus M$ with $\#W = w$. 
Let $\mathcal{C}$ be the collection of all such subsets $W$. Thus $\mathcal{C} \neq \emptyset$ and $\max_{i \in W} \{r_i\} \leq q-2$ for all $W \in \mathcal{C}$. We claim there exists $W \in \mathcal{C}$ such that \begin{equation}\label{eqn: claim 0} s_q(r) > w \max_{i \in W}\{r_i\}. \end{equation} Indeed, suppose on the contrary that \begin{equation}\label{eqn: claim 1} s_q(r) \leq w \max_{i \in W}\{r_i\} \hspace{1em} \text{ for all } W \in \mathcal{C}. \end{equation} Thus for every $W \subseteq [0, n-1] \setminus M$, with $\#W = w$, there exists $i \in W$ such that $r_i \geq s_q(r)/w$. In particular $s_q(r) \leq (q-2)w$. As $\#([0, n-1]\setminus M) = n - \eta$ and each $\#W = w$, it follows from (\ref{eqn: claim 1}) that $r$ has at least $n - \eta - w + 1$ $q$-digits $r_i$, $i \in [0, n-1]\setminus M$, each satisfying $r_i \geq s_q(r)/w$ (otherwise the number of indices $i \in [0, n-1]\setminus M$ satisfying $r_i < s_q(r)/w$ is at least $w$. This gives a subset $W \subseteq [0, n-1]\setminus M$ of size $w$ for which there exists no $i \in W$ with $r_i \geq s_q(r)/w$, a contradiction). Since $s_q(r) = \sum_{i \in [0, n-1] \setminus M} r_i + (q-1)\eta$, we obtain \begin{equation}\label{eqn: claim 2} s_q(r) \geq \dfrac{n - \eta - w + 1}{w}s_q(r) + (q-1)\eta. \end{equation} Necessarily $(n - \eta - w + 1)/w \leq 1$. Rearranging terms in (\ref{eqn: claim 2}) we get \begin{equation}\label{eqn: claim 3} (q-1) \eta \leq s_q(r) \left( 1 - \dfrac{n - \eta - w + 1}{w}\right). \end{equation} Now the fact that $s_q(r) \leq (q-2) w$ yields \begin{equation}\label{eqn: claim 4} (q-1) \eta \leq (q-2) w \left( 1 - \dfrac{n - \eta - w + 1}{w}\right), \end{equation} which is equivalent to \begin{equation}\label{eqn: claim 5} \eta \leq (q-2)(2w - n - 1). \end{equation} Since $w \leq n/2$, this means that $\eta \leq -(q-2)$. Because $\eta \geq 0$, this however implies that $q = 2$ and $\eta = 0$. 
But then all digits of $r$ in its binary representation are zero and hence $r = 0$, a contradiction. The claim follows. Let $W \in \mathcal{C}$ such that \begin{equation}\label{eqn: claim 6} s_q(r) > w \max_{i \in W} \{r_i\}. \end{equation} Thus \begin{equation}\label{eqn: claim 7} s_q(r) + (q-1 - \max_{i \in W}\{r_i\})w > (q-1)w. \end{equation} Let $s = q-1 - \max_{i \in W}\{r_i\}$ and $t = \sum_{i \in W} q^i \in \Omega(w)$. Clearly $1 \leq s \leq q-1$ (since $r_i \leq q-2$ for all $i \in W \in \mathcal{C}$). Then $st \in \operatorname{supp}(\Delta_{w,c})$ and so $st + r \in \operatorname{supp}(\Delta_{w,c})$. Note also that $1 \leq s + r_i \leq q-1$ for each $i \in W$. Then there occurs no carry in the $q$-adic addition of $st$ and $ r$. Hence \begin{align} s_q(st + r) &= s_q(r) + s_q(st) = s_q(r) + (q-1 - \max_{i \in W}\{r_i\})w \nonumber\\ &> (q-1)w.\label{eqn: claim 8} \end{align} Since $st, r < q^n-1$ as well, it follows (from the absence of carry) that $st + r \leq q^n-1$. If $st + r = q^n-1 \equiv 0 \pmod{q^n-1}$, then $\Delta_{w,c}(0) = \Delta_{w,c}(st + r)$ and so $0 \in \operatorname{supp}(\Delta_{w,c})$. This contradicts the fact that $\delta_{w}^{\otimes k}(0) = 0$ for all $1 \leq k \leq q-1$ (since $1 \leq w < n$). Then $0 < st + r < q^n-1$ and $st + r = (st + r) \bmod(q^n-1)$. This in conjunction with (\ref{eqn: claim 8}) yields $s_q((st + r) \bmod(q^n-1) ) > (q-1)w$. But then $st + r \not\in \operatorname{supp}(\Delta_{w,c})$, a contradiction. Thus no integer $r$ with $1 \leq r < q^n-1$ and $s_q(r) \leq (q-1)n/2$ can be a period of $\Delta_{w,c}$. Necessarily $s_q(r) > (q-1)n/2$. Let $r' = q^n-1 - r$. Clearly $\Delta_{w,c}$ is $r'$-periodic. However note that $1 \leq r' < q^n-1$ and $s_q(r') < (q-1)n/2$, a contradiction. Necessarily $\Delta_{w,c}$ has maximum least period $q^n-1$. This concludes the proof for the case when $c \neq 0$. \\ \\ {\bf Case 2 ($c = 0$):} Assume $c = 0$. 
Note $\Delta_{w,0} = \delta_0 - \delta_w^{\otimes(q-1)}$ and $\Delta_{w,0}(0) = 1$. Thus $\Delta_{w,0}(r) = \Delta_{w,0}(0 + r) = 1$. Since $0 < r < q^n-1$, necessarily $\delta_w^{\otimes(q-1)}(r) = -1$. In particular $r \in \operatorname{supp}(\delta_w^{\otimes(q-1)})$ and $s_q(r) = (q-1)w$. Because $1 \leq r < q^n-1$ is a period of $\Delta_{w,0}$, so is $r' = q^n-1 - r$ with $1 \leq r' < q^n-1$. Then the previous arguments similarly imply that $s_q(r') = (q-1)w$. Given that $s_q(r') = (q-1)n - s_q(r)$, it follows $w = n/2$ and $s_q(r) = (q-1)n/2$. In particular $n$ is even and $\Delta_{w,0} = \Delta_{n/2, 0} = \delta_0 - \delta_{n/2}^{\otimes(q-1)}$. Consider the case when $n > 2$: Suppose not all digits of $r$ are the same (since $s_q(r) = (q-1)n/2$, the last is equivalent to supposing that $r \neq (q^n-1)/2$; this is the case in particular when $q$ is even). Clearly either there exists $k \in [0, n-2]$ such that $r_k > r_{k+1}$ or the sequence $r_0, \ldots, r_{n-1}$ is non-decreasing. Suppose the former holds. Fix any such $k$ and let $\sigma$ be the permutation of $[0, n-1]$ which fixes each index in $[0, n-1]\setminus \{k, k+1\}$ and maps $k \mapsto k+1$ and $k+1 \mapsto k$. Thus \begin{equation}\label{eqn: sigma} \varphi_\sigma(r) = r_k q^{k+1} + r_{k+1} q^k + \sum_{i \in [0, n-1]\setminus \{k, k+1\}} r_i q^i > r_{k+1}q^{k+1} + r_kq^k + \sum_{i \in [0, n-1]\setminus\{k, k+1\}} r_i q^i = r, \end{equation} since $r_k > r_{k+1}$. Because $\varphi_\sigma(r)$ is obtained via a permutation of the digits of $r$, and $0 < r < q^n-1$, then $0 < \varphi_\sigma(r) < q^n-1$. Now note \begin{align}\label{eqn: simga 1} \varphi_\sigma(r) - r &= (r_k - r_{k+1})q^{k+1} - (r_k - r_{k+1})q^k \nonumber \\ &= (r_k - r_{k+1} - 1)q^{k+1} + (q - (r_k - r_{k+1}))q^{k}. 
\end{align} Since $1 \leq r_k - r_{k+1} \leq q-1$, it follows the above coefficients are contained in the set $[0, q-1]$; hence this is the $q$-adic form of $\varphi_\sigma(r) - r$ and one can see that $s_q(\varphi_\sigma(r) - r) = q-1$. Because $\delta_{n/2}$ is $q$-symmetric with $\epsilon_i(m) \leq 1$ for each $m \in \operatorname{supp}(\delta_{n/2}) = \Omega(n/2)$ and each $0 \leq i \leq n-1$, it follows from Lemma \ref{lem: convolution of q-sym is q-sym} that $\delta_{n/2}^{\otimes(q-1)}$ is $q$-symmetric. In particular $\delta_{n/2}^{\otimes(q-1)}(\varphi_\sigma(r)) = \delta_{n/2}^{\otimes(q-1)}(r)$. Since $\varphi_\sigma(r) \neq 0$, then $\Delta_{n/2, 0}(\varphi_\sigma(r)) = -\delta_{n/2}^{\otimes(q-1)}(\varphi_\sigma(r)) = -\delta_{n/2}^{\otimes(q-1)}(r) = 1$; hence $\varphi_\sigma(r) \in \operatorname{supp}(\Delta_{n/2, 0})$. Given that $\Delta_{n/2, 0}$ is $r$-periodic, then $\varphi_\sigma(r) - r \in \operatorname{supp}(\Delta_{n/2, 0})$. Since $0 < \varphi_\sigma(r) - r < q^n-1$, then $\varphi_\sigma(r) - r \in \operatorname{supp}(\delta_{n/2}^{\otimes(q-1)})$. It follows $s_q(\varphi_\sigma(r) - r) = (q-1)n/2$, contradicting $s_q(\varphi_\sigma(r) - r) = q-1$ with $n > 2$. Necessarily the $q$-digits $r_0, \ldots, r_{n-1}$ of $r$ must form a non-decreasing sequence. Since not all digits of $r$ are the same, in particular $r_{n-1} > r_0$. Since $\Delta_{n/2, 0}$ is $r$-periodic, it is $r'' := (qr \bmod(q^n-1))$-periodic. Note $0 < r'' < q^n-1$ and $r'' = (r_{n-1}, r_0, r_1, \ldots, r_{n-2})_q$. However observe that $r_{0} = \epsilon_1(r'') < \epsilon_0(r'') = r_{n-1}$. Then we can reproduce the previous arguments with $r$ and $k$ substituted with $r''$ and $0$, respectively, to obtain a contradiction. Thus for $n > 2$, it is impossible that $\Delta_{n/2, 0}$ is $r$-periodic if $0 < r < q^n-1$ and not all digits of $r$ are the same. In particular when $q$ is even and $n > 2$, $\Delta_{n/2, 0}$ must have maximum least period $q^n-1$. 
Note that at this point the proof of (i) is complete. In the case of (ii), with $q$ odd and $n > 2$, we have shown that either $r = (q^n-1)/2$ (all digits of $r$ are the same) or no such $r$ with $0 < r < q^n-1$ can be a period of $\Delta_{n/2, 0}$ (when not all digits of $r$ are the same), whence the least period of $\Delta_{n/2, 0}$ must be the maximum, $q^n-1$. Thus the proof of (ii) is complete as well. Consider now the case, (iii), with $n = 2$ and $q$ odd: Here $w = n/2 = 1$ and we need to show $r > q-1$. On the contrary, suppose $r \leq q-1$. Since $s_q(r) = (q-1)n/2 = q-1$, it follows that $r = q-1$. Note there is exactly one way to write $r = q-1$ as a sum of $q-1$ ordered elements in $\Omega(1) = \{1, q\}$, namely as $q - 1 = 1 + \cdots + 1$, a total of $q-1$ times. Thus $\delta_{1}^{\otimes(q-1)}(r) = 1$. This contradicts the fact (see the beginning of the proof of Case 2) that $\delta_{1}^{\otimes(q-1)}(r) = -1$ with $q$ odd. This completes the proof of (iii) and of Case 2 here. It remains to notice from (i), (ii), (iii), that the least period $r$ of $\Delta_{w,c}$ satisfies $r > (q^n-1)/\Phi_n(q)$ in every case. Indeed, both (i), (ii) follow immediately from the fact that $\Phi_n(q) > q-1$ for $n \geq 2$. In the case of (iii), we have $r > q-1 = (q^2-1)/(q + 1) = (q^2-1)/\Phi_2(q)$ as well. This concludes the proof of Lemma \ref{lem: period of delta and hansen-mullen}. \end{proof} \begin{proof}[{\bf Proof of Theorem \ref{thm: hansen-mullen}}] It is elementary to show that every element of $\mathbb{F}_{q}^*$ is the norm of an element of degree $n$ over $\mathbb{F}_{q}$; see for example \cite{hansen-mullen}. Thus we may assume $w < n$. In view of the symmetry between the coefficients of a polynomial and its reciprocal, as well as the fact that a polynomial is irreducible if and only if so is its reciprocal, we may further assume $1 \leq w \leq n/2$. 
Now the result follows from Lemma \ref{lem: period of delta and hansen-mullen} together with Lemma \ref{lem: delta function}. \end{proof} \end{document}
Pôle Automates, structures et vérification
Équipe thématique Automates et applications

Sessions are held on Fridays at 2:00 pm in room 3052.

Arthur Jaquard, Aliaume Lopez, Daniela Petrisan

Upcoming sessions

Friday, February 3, 2023, 2:00 pm, Room 3052
Florent Koechlin
Two criteria to prove the inherent ambiguity of bounded context-free languages

A context-free language is inherently ambiguous if any grammar recognizing it is ambiguous, i.e. some word is generated in two different ways. Deciding the inherent ambiguity of a context-free language is a difficult problem, undecidable in general. The first examples of inherently ambiguous languages were discovered in the 1960s, using iteration techniques on derivation trees. They belonged to a particular subfamily of context-free languages, called bounded context-free languages. Although these techniques made it possible to prove the inherent ambiguity of several languages, for example the language L = a^n b^m c^p with n=m or m=p, iteration techniques remain very laborious to implement, are very specific to the language under study, and sometimes even seem unsuitable. For instance, the relative simplicity of the proof of inherent ambiguity of L collapses completely when the constraint "n=m or m=p" is replaced by "n≠m or m≠p". In this talk, I will present two useful criteria based on generating series for easily proving the inherent ambiguity of some bounded context-free languages. These languages, which have a rational generating series, resisted both the classical iteration techniques developed in the 1960s and the analytic methods introduced by Philippe Flajolet in 1987.
Friday, February 10, 2023, 2:00 pm, Room 3052
Uli Fahrenberg
An invitation to higher-dimensional automata theory

Yann Ramuzat
The Semiring-Based Provenance Framework for Graph Databases

The growing amount of data collected by sensors or generated by human interaction has led to an increasing use of graph databases, an efficient model for representing intricate data. Techniques for keeping track of the history of computations applied to the data inside classical relational database systems are also topical because of their application to enforcing data-protection regulations (e.g., the GDPR). Our research work mixes the two by considering a semiring-based provenance model for navigational queries over graph databases. In the first part, we will focus on the model itself by introducing a toolkit of provenance-aware algorithms, each targeting specific properties of the semiring in use. We notably introduce a new method based on lattice theory permitting an efficient provenance computation for complex graph queries. We propose an open-source implementation of the above-mentioned algorithms, and we conduct an experimental study over real transportation networks of large size, witnessing the efficiency of our approach in practical scenarios. From the richness of the literature, we notably obtain a lower bound for the complexity of the full provenance computation in our setting. We finally consider how this framework is positioned compared to other provenance models such as the semiring-based Datalog provenance model. We make explicit how the methods we applied to graph databases can be extended to Datalog queries, and we show how they can be seen as an extension of the semi-naïve evaluation strategy. To leverage this fact, we extend the capabilities of Soufflé, a state-of-the-art Datalog solver, to design an efficient provenance-aware Datalog evaluator.
Experimental results based on our open-source implementation show that this approach remains competitive with dedicated graph solutions, despite being more general. In a final round, we discuss some research ideas for improving the model, and state open questions raised by this work. This is joint work with Silviu Maniu and Pierre Senellart.

Friday, March 10, 2023, 2:00 pm, Room 3052
Alexandra Rogova
Not yet announced.

Quentin Manière
Not yet announced.

Leon Bohn
Not yet announced.

Past sessions

Friday, January 6, 2023, 2:00 pm, Room 3052
Christian Choffrut
Le monoide grammique / The grammic monoid
Schensted showed how to insert a new element into a Young tableau. The insertion puts the element into the bottom row (a row being a nondecreasing sequence of elements) and iterates the process further up in case an element of the bottom row is bumped. I am interested in the action of the free monoid which assigns to a row the row obtained by Schensted insertion, but ignoring the possibly bumped element. Christophe Reutenauer considered in his May seminar the action on the set of columns. His (stylic) monoid is finite (there exist finitely many columns over a fixed alphabet) while mine, the grammic monoid, is infinite. The relation identifying two words having the same action on the set of rows is a congruence which is coarser than the plactic congruence (the one defining Young tableaux) and decidable. I consider the case of a 3-letter alphabet, for which the congruence is generated by the Knuth rules plus a unique simple rule. I will risk a conjecture for alphabets of more than 3 letters and say a few words on the (very nice and possibly related) work of Okniński et al. on the algebras of plactic monoids.

Friday, December 16, 2022, 2:00 pm, Room 3052
Carl-Fredrik Nyberg Brodda
Language-theoretic methods in combinatorial semigroup theory

Language-theoretic methods in combinatorial (semi)group theory go back to Anisimov in the 1970s, who proved the existence of a deep link between the set of words over a fixed generating set representing the identity element in a group – the word problem of the group – and the group-theoretic properties of the group. In particular, he proved that the word problem of a group is a regular language if and only if the group is finite. Muller & Schupp extended this in 1983, and proved their now famous theorem: a group has a context-free word problem if and only if it is virtually free.
In this talk, I'll give some background on the history and main ideas of combinatorial semigroup theory, and then focus on some of my own work on the language-theoretic properties of special monoids, including a generalisation of the Muller-Schupp theorem to special monoids and one-relation monoids with non-trivial idempotents.

Friday, December 9, 2022, 2:00 pm, Room 3052
Sarah Winter
A Regular and Complete Notion of Delay for Streaming String Transducers

The notion of delay between finite transducers is a core element of numerous fundamental results of transducer theory. The goal of this work is to provide a similar notion for more complex abstract machines: we introduce a new notion of delay tailored to measure the similarity between streaming string transducers (SSTs). We show that our notion is regular: we design a finite automaton that can check whether the delay between any two SST executions is smaller than some given bound. As a consequence, our notion enjoys good decidability properties: in particular, while equivalence between non-deterministic SSTs is undecidable, we show that equivalence up to a fixed delay is decidable. Moreover, we show that our notion has good completeness properties: we prove that two SSTs are equivalent if and only if they are equivalent up to some (computable) bounded delay. Together with the regularity of our delay notion, this provides an alternative proof that SST equivalence is decidable. Finally, the definition of our delay notion is machine-independent, as it only depends on the origin semantics of SSTs. As a corollary, the completeness result also holds for equivalent machine models such as deterministic two-way transducers or MSO transducers. This is joint work with Emmanuel Filiot, Ismaël Jecker, and Christof Löding.
Léo Exibard Runtime monitoring for Hennessy-Milner logic with recursion over systems with data Runtime verification consists in checking whether a program satisfies a given specification by observing the trace it produces during its execution. In the regular setting, Hennessy-Milner logic with recursion (recHML), a variant of the modal $\mu$-calculus, provides a versatile back-end for expressing linear- and branching-time specifications. In this talk, I will discuss an extension of this logic that allows one to express properties over data values (i.e. values from an infinite domain) and examine which fragments can be verified at runtime. Data values are manipulated through equality tests in modalities and through first-order quantification outside of them. They can also be stored using parameterised recursion variables. I then examine what kind of properties can be monitored at runtime, depending on the monitor model. A key aspect is that the logic has close links with register automata with non-deterministic reassignment, which yields a monitor synthesis algorithm, and allows one to derive impossibility results. In particular, contrary to the regular case, restricting to deterministic monitors strictly reduces the set of monitorable properties. This is joint work with the MoVeMnt team (Reykjavik University): Luca Aceto, Antonis Achilleos, Duncan Paul Attard, Adrian Francalanza, Karoliina Lehtinen. Vendredi 25 novembre 2022, 14 heures, Salle 3052 Moses Ganardi Expressiveness of Subword Constraints We study subword constraints expressed by existential first-order formulas over the (scattered) subword order. While it has been known that the truth problem is undecidable, little was known about definability, i.e. which languages and relations over words are definable by subword constraints.
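The (scattered) subword order referred to here is the subsequence relation: u is below v iff u can be obtained from v by deleting letters, not necessarily contiguous ones. A small sketch of ours of the standard greedy test:

```python
def is_scattered_subword(u, v):
    """True iff u embeds into v as a (not necessarily contiguous) subsequence."""
    it = iter(v)
    # Membership tests on an iterator consume it, so each letter of u is
    # greedily matched at the leftmost position of v still available.
    return all(c in it for c in u)

assert is_scattered_subword("ace", "abcde")
assert not is_scattered_subword("ea", "abcde")
```

The greedy leftmost matching is sound and complete for this relation, which is also why a single letter-by-letter pass suffices.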
As a first step towards understanding the expressiveness of subword constraints, we prove that for alphabets of size at least 3, a relation is definable by subword constraints if and only if it is recursively enumerable. Whether the same characterization holds for binary alphabets remains an open problem. This presentation is based on joint work with Pascal Baumann, Ramanathan S. Thinniyam, and Georg Zetzsche (MPI-SWS), which has been presented at STACS 2022. Anantha Padmanabha Databases and Predicate Modal Logics: A tale of two cities In this talk we will discuss two topics: Databases and Predicate Modal Logics. In the first case we will look at the consistent query answering problem, where a given database violates some specified constraints. We will see why such databases are interesting and how one would evaluate queries in these cases. We will discuss new algorithms that we have introduced and our attempts to solve an open conjecture in the field. This is work in collaboration with Diego Figueira, Luc Segoufin and Cristina Sirangelo. In the second case we will discuss First Order Modal Logic. These logics are notoriously undecidable (for instance, the restrictions to unary predicates, the guarded fragment, and the two-variable fragment are all undecidable). We will discuss some decidable fragments that we have identified. This is work in collaboration with R. Ramanujam, Yanjing Wang and Mo Liu. Finally we will discuss some possible directions to bring these two seemingly unrelated topics together. Vendredi 28 octobre 2022, 14 heures, Salle 3052 Pierre Vandenhove Characterizing Omega-Regularity through Finite-Memory Determinacy of Games on Infinite Graphs We consider zero-sum games on infinite graphs, with objectives specified as sets of infinite words over some alphabet of colors. A well-studied class of objectives is the one of omega-regular objectives, due to its relation to many natural problems in theoretical computer science.
We focus on the strategy complexity question: given an objective, how much memory does each player require to play as well as possible? A classical result is that finite-memory strategies suffice for both players when the objective is omega-regular. We show a reciprocal of that statement: when both players can play optimally with a chromatic finite-memory structure (i.e., whose updates can only observe colors) in all infinite game graphs, then the objective must be omega-regular. This provides a game-theoretic characterization of omega-regular objectives, and this characterization can help in obtaining memory bounds and representations for the objectives as deterministic parity automata. Moreover, a by-product of our characterization is a new one-to-two-player lift: to show that chromatic finite-memory structures suffice to play optimally in two-player games on infinite graphs, it suffices to show it in the simpler case of one-player games on infinite graphs. These results are based on joint work with Patricia Bouyer and Mickael Randour and have been published in the proceedings of STACS 2022. Howard Straubing (Boston College) A Problem about Automata and Logic We survey some recent (and some not-so-recent) progress on an old problem: In the 1990's, it was shown (Barrington, Compton, Straubing, and Thérien) that the regular languages definable by first-order sentences with no restrictions on the numerical predicates (i.e., the atomic formulas giving relations on the positions in a string) could be defined by sentences in which all these relations could themselves be computed by finite automata (the regular numerical predicates). More succinctly, regular languages require only regular numerical predicates in their logical definitions.
This simple-sounding characterization of first-order definable regular languages was proved by appeal to a famous result in circuit complexity, the theorem of Furst, Saxe and Sipser showing that the circuit complexity class AC0 cannot count the number of 1's in an input string modulo any n>1. In fact, the characterization in turn implies the circuit complexity result. This led to a number of questions, which for the most part remain unsolved: First, does the property hold for logics other than first-order logic? For example, does it continue to hold for Sigma-k formulas, or for Boolean combinations of Sigma-k formulas, or for generalized first-order sentences containing modular counting quantifiers? Second, a somewhat more vague question: is there an 'elementary' proof of this fact about logic and automata, one that does not depend on circuit lower bounds? Such a proof for modular quantifiers would settle a long-standing open question in circuit complexity. (It should be stressed here that 'elementary' is not the same thing as 'easy'!) Much of the talk will be devoted to a 2020 paper (Borlido, Gehrke, Krebs, Straubing) giving such an elementary proof of the property for Boolean combinations of Sigma-1 sentences. I will also mention recent progress (Barloy, Cadilhac, Paperman, Zeume, LICS 2022) on establishing the property for Sigma-2 sentences. Vendredi 7 octobre 2022, 14 heures, Salle 3052 Jacques Sakarovitch (IRIF, CNRS and LTCI, Télécom Paris, IPP) The Net Automaton of a Rational Expression In this talk, we present a new construction of a finite automaton associated with a rational (or regular) expression. It is very similar to that of the so-called Thompson automaton, but it overcomes the failure of the extension of that construction to the case of weighted rational expressions. At the same time, it preserves all of the properties of the Thompson automaton. This construction has two supplementary outcomes.
The first one is the reinterpretation in terms of automata of a data structure introduced by Champarnaud, Laugerotte, Ouardi, and Ziadi for the efficient computation of the position (or Glushkov) automaton of a rational expression, and which consists in a duplicated syntactic tree of the expression decorated with some additional links. The second one supposes that this construction, devised for the case of weighted expressions, is brought back to the domain of Boolean expressions. It then allows one to describe, in terms of automata, the construction of the Star Normal Form of an expression that was defined by Brüggemann-Klein, also with the purpose of an efficient computation of the position automaton. This is joint work with Sylvain Lombardy (Labri, U. Bordeaux) Vendredi 16 septembre 2022, 14 heures 30, Salle 3052 Alexander Rabinovich On Uniformization in the Full Binary Tree A function f uniformizes a relation R(X,Y) if R(X,f(X)) holds for every X in the domain of R. The uniformization problem for a logic L asks whether for every L-definable relation there is an L-definable function that uniformizes it. Gurevich and Shelah proved that no Monadic Second-Order (MSO) definable function uniformizes the relation "Y is a one-element subset of X" in the full binary tree. In other words, there is no MSO definable choice function in the full binary tree. The cross-section of a relation R(X,Y) at D is the set of all E such that R(D,E) holds. Hence, a function that uniformizes R chooses one element from every non-empty cross-section. The relation "Y is a one-element subset of X" has finite and countable cross-sections. We prove that in the full binary tree the following theorems hold: Theorem (Finite cross-section) If every cross-section of an MSO definable relation is finite, then it has an MSO definable uniformizer.
Theorem (Uncountable cross-section) There is an MSO definable relation R such that every MSO definable relation included in R and with the same domain as R has an uncountable cross-section. Vendredi 24 juin 2022, 14 heures 30, Salle 3052 Nikhil Balaji Identity Testing for Radical Expressions This talk is about the Radical Identity Testing problem (RIT): Given an algebraic circuit representing a polynomial $f \in \mathbb{Z}[x_1, \dots, x_k]$ and nonnegative integers $a_1, \dots, a_k$ and $d_1, \dots,$ $d_k$, written in binary, test whether the polynomial vanishes at the real radicals $\sqrt[d_1]{a_1}, \dots,\sqrt[d_k]{a_k}$, i.e., test whether $f(\sqrt[d_1]{a_1}, \dots, \sqrt[d_k]{a_k}) = 0$. We will talk about our recent result, placing the problem in coNP assuming the Generalised Riemann Hypothesis (GRH), improving on the straightforward PSPACE upper bound obtained by reduction to the existential theory of reals. Next we focus on a restricted version, called 2-RIT, where the radicals are square roots of prime numbers, written in binary. It was known since the work of Chen and Kao that 2-RIT is at least as hard as the polynomial identity testing problem; however, no better upper bound than PSPACE was known prior to our work. We show that 2-RIT is in coRP assuming GRH and in coNP unconditionally. This work is in collaboration with Klara Nosan, Mahsa Shirmohammadi and James Worrell. The results are going to be presented at LICS 2022, and the full version of the paper can be found here: https://arxiv.org/abs/2202.07961. Vendredi 20 mai 2022, 14 heures 30, Salle 3052 Aliaume Lopez Locality and Preservation Theorems This talk investigates the relativisation of the Łoś-Tarski Theorem in the finite through the study of existential local sentences.
This method yields new well-behaved classes of finite structures where preservation under extensions holds: namely, we show that under mild assumptions on the class of structures, preservation under extensions holds if and only if it holds locally. The robustness of this proof scheme is explained by its behavior over arbitrary structures, over which we show that existential local sentences match exactly the first-order sentences preserved under local elementary embeddings. Furthermore, we prove that existential local sentences are exactly those that can be written using a positive variant of the Gaifman normal form. Christophe Reutenauer (UQAM, Canada) The stylic monoid (joint Combinatorics and Automata seminar) The stylic monoid Styl(A) is a finite quotient of the plactic monoid of Lascoux and Schützenberger. It is obtained from the natural action (Schensted left insertion) of the free monoid A* on the set of column tableaux over A. It is in bijection with a set of particular semistandard tableaux, called N-tableaux; the bijection is a variant of Schensted's algorithm. From this one deduces a bijection with the set partitions of the subsets of A, and the cardinality of Styl(A) is the Bell number B_{n+1}, where n=|A|. A presentation of this monoid is obtained by adding to the Knuth relations the idempotency relations a^2=a, for each generator a in A. The natural involution of A*, which reverses words and reverses the order on the alphabet, induces an anti-automorphism of Styl(A); it can be computed directly on N-tableaux by a variant of Schützenberger evacuation. The stylic monoid arises as the syntactic monoid of the function which maps a word to the length of its longest decreasing subword. Vendredi 22 avril 2022, 14 heures 30, Salle 3058 ONLINE Wojtek Przybyszewski Definability of neighborhoods in graphs of bounded twin-width and its consequences.
During the talk, we will study set systems formed by neighborhoods in graphs of bounded twin-width. In particular, we will show how, given a graph from a class of graphs of bounded twin-width, to efficiently encode the neighborhood of a vertex within a given set A of vertices of the graph. For the encoding we will use only a constant number of vertices from A. The obtained encoding can be decoded using FO formulas. This will prove that the edge relation in graphs of bounded twin-width, seen as first-order structures, admits a definable distal cell decomposition, which is a notion from model theory. From this fact we will derive that strong combinatorial tools based on the distal cutting lemma and the distal regularity lemma (a stronger version of the Szemerédi regularity lemma) can be applied to such classes. Vendredi 15 avril 2022, 14 heures 30, Salle 3052 Nguyễn Lê Thành Dũng Polyregular functions: some recent developments The class of polyregular functions is composed of the string-to-string functions computed by pebble transducers. While this machine model (which extends two-way finite transducers) is two decades old, several alternative characterizations of polyregular functions have been discovered recently [Bojańczyk 2018; Bojańczyk, Kiefer & Lhote 2019], demonstrating their canonicity. The name comes from the polynomial bound on the growth rate of these functions: |f(w)| = |w|^O(1) where |w| is the length of the string w.
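A standard example of such a function (a sketch of ours, not specific to this talk) is "squaring", computable by a 2-pebble transducer and exhibiting the quadratic growth allowed by the polynomial bound:

```python
def squaring(w):
    """The classic 'squaring' polyregular function: the whole input is output
    once per input position, as a 2-pebble transducer would do (the outer
    pebble marks a position, the inner head re-reads the entire word)."""
    return "".join(w for _ in w)

w = "abc"
assert squaring(w) == "abcabcabc"
assert len(squaring(w)) == len(w) ** 2  # |f(w)| = |w|^2, within |w|^O(1)
```

Regular (one-pebble) string functions, by contrast, can only grow linearly in the input length.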
In this talk, after recalling this context, I will present some subsequent developments in which I have been involved: * the subclass of comparison-free polyregular (or "polyblind") functions, definable through a natural restriction of pebble transducers, which Pierre Pradic and I actually discovered while studying a linear λ-calculus; * some results that either relate the growth rate of a polyregular function (comparison-free or not) to the "resources" needed to compute it (number of pebbles or MSO-interpretation dimension), or show that there is no such relationship. This last item is joint work with Mikołaj Bojańczyk, Gaëtan Douéneau-Tabot, Sandra Kiefer and Pierre Pradic, and builds upon a previous work by Nathan Lhote [2020]. Vendredi 1 avril 2022, 13 heures 45, Salle 3052 Pierre Ohlmann Characterising half-positionality in infinite duration games over infinite arenas I will present a new result, asserting that a winning condition (or, more generally, a valuation) which admits a neutral letter is positional over arbitrary arenas if and only if for all cardinals there exists a universal graph which is monotone and well-founded. Here, "positional" refers only to the protagonist; this concept is sometimes also called "half-positionality". This is the first known characterization in this setting. I will explain the result, quickly survey existing related work, show how it is proved and try to argue why it is interesting. Note the unusual time: 13h45. Vendredi 25 mars 2022, 14 heures 30, Salle 3058 Nathan Grosshans Visibly pushdown languages in AC^0 One important research endeavour at the intersection of circuit complexity theory, algebraic automata theory and logic is the classification of regular languages according to their localisation within the internal structure of NC^1, the class of languages decided by Boolean circuits of polynomial size, logarithmic depth and with gates of constant fan-in. 
In some sense, the search for such a classification concentrates most of the open questions we have about the relationship between NC^1 and its well-studied subclasses. While many questions are still open, one of the greatest successes of this research endeavour has been the characterisation of the regular languages in AC^0, the subclass of NC^1 corresponding to Boolean circuits of polynomial size, constant depth and with gates of unbounded fan-in. This characterisation takes the form of a triple languages-algebra-logic correspondence: a regular language is in AC^0 if and only if its syntactic morphism is quasi-aperiodic if and only if it is definable in first-order logic over words with linear order and modular predicates. It is natural to try to extend such results to classes of formal languages larger than the class of regular languages. A well-studied and robust such class is given by visibly pushdown languages (VPLs): languages recognised by pushdown automata where the stack-height behaviour only depends on the letters read from the input. Over the previous decade, a series of works concentrated on the fine complexity of VPLs, with several achievements: one of those was a characterisation of the class of visibly counter languages (basically VPLs recognised by visibly pushdown automata with only one stack symbol) in AC^0 by Krebs, Lange and Ludwig. However, the characterisation of the VPLs in AC^0 still remains open. In this talk, I shall present a conjectural characterisation of the VPLs in AC^0 obtained with Stefan Göller at the Universität Kassel. It is inspired by the conjectural characterisation given by Ludwig in his Ph.D. thesis as a generalisation of the characterisation for visibly counter languages, but that is actually false.
In fact, we give a more precise general conjectural characterisation that builds upon recognisability by morphisms into Ext-algebras, an extension of recognisability by monoid-morphisms proposed by Czarnetzki, Krebs and Lange to suit the case of VPLs. This characterisation classifies the VPLs into three categories according to precise conditions on the Ext-algebra-morphisms that recognise them: - those that are TC^0-hard; - those that are in AC^0; - those that correspond to a well-identified class of "intermediate languages" that we believe to be neither in AC^0 nor TC^0-hard. Edwin Hamel-De Le Court Two-player Boundedness Counter Games We consider two-player zero-sum games with winning objectives beyond regular languages, expressed as a parity condition in conjunction with a Boolean combination of boundedness conditions on a finite set of counters which can be incremented, reset to 0, but not tested. A boundedness condition requires that a given counter is bounded along the play. Such games are decidable, though with non-optimal complexity, by an encoding into the logic WMSO with the unbounded and path quantifiers, which is known to be decidable over infinite trees. Our objective is to give tight or tighter complexity results for particular classes of counter games with boundedness conditions, and study their strategy complexity. In particular, counter games with a conjunction of boundedness conditions are easily seen to be equivalent to Streett games, so they are coNP-complete. Moreover, finite-memory strategies suffice for Eve and memoryless strategies suffice for Adam. For counter games with a disjunction of boundedness conditions, we prove that they are solvable in NP and in coNP, and in PTime if the parity condition is fixed. In that case, memoryless strategies suffice for Eve while infinite-memory strategies might be necessary for Adam. Finally, we consider an extension of those games with a max operation.
In that case, the complexity increases: for conjunctions of boundedness conditions, counter games are EXPTIME-complete. Arthur Jaquard A Complexity Approach to Tree Algebras: the Polynomial Case We consider infinitely sorted tree algebras recognising regular languages of finite trees. We pursue their analysis under the angle of their asymptotic complexity, i.e. the asymptotic size of the sorts as a function of the number of variables involved. Our main result establishes an equivalence between the languages recognised by algebras of polynomial complexity and the languages that can be described by nominal word automata that parse linearisations of the trees. On the way, we show that for such algebras, having polynomial complexity corresponds to having uniformly boundedly many orbits under permutation of the variables, or having a notion of bounded support (in a sense similar to the one in nominal sets). We also show that being recognisable by an algebra of polynomial complexity is a decidable property for a regular language of trees. This is joint work with Thomas Colcombet. Vendredi 18 février 2022, 14 heures 30, Salle 3052 Klara Nosan On computing the algebraic closure of matrix groups We consider the problem of computing the Zariski closure of a finitely generated group of matrices. Algorithms for this problem have been applied in automata theory and program analysis, e.g., for showing decidability of the language emptiness problem for quantum automata and for computing polynomial invariants for affine programs. In this talk we introduce the problem of computing the Zariski closure and describe an existing algorithm, due to Derksen, Jeandel and Koiran, before moving to our main result, which is to obtain an upper bound on the degree of the polynomials that define the Zariski closure. Having an a priori bound allows us to give a simple algorithm for the problem, via linear algebra, similar to Karr's algorithm for obtaining affine invariants for affine programs.
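To give a flavour of the "via linear algebra" step (a hypothetical sketch of ours, far simpler than the Derksen-Jeandel-Koiran algorithm and than the general degree bound): once a degree bound is fixed, invariants up to that degree are found by solving linear systems. The degree-1 case below computes, in exact arithmetic, the linear forms fixed by every generator as a nullspace of the stacked blocks (g^T - I).

```python
from fractions import Fraction

def nullspace(rows, n):
    """Basis of { v : M v = 0 } for the matrix with the given rows, over Q."""
    m = [[Fraction(x) for x in r] for r in rows]
    pivots, r = [], 0
    for c in range(n):
        p = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if p is None:
            continue  # no pivot in this column: c is a free variable
        m[r], m[p] = m[p], m[r]
        m[r] = [x / m[r][c] for x in m[r]]          # normalise pivot row
        for i in range(len(m)):
            if i != r and m[i][c] != 0:              # eliminate column c elsewhere
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    basis = []
    for free in (c for c in range(n) if c not in pivots):
        v = [Fraction(0)] * n
        v[free] = Fraction(1)
        for i, c in enumerate(pivots):
            v[c] = -m[i][free]
        basis.append(v)
    return basis

# A linear form v is invariant under g iff (g^T - I) v = 0; with several
# generators one stacks one such block per generator.  Single generator
# here: the swap matrix generating S_2 acting on coordinates.
g = [[0, 1], [1, 0]]
n = 2
stacked = [[g[j][i] - (1 if i == j else 0) for j in range(n)] for i in range(n)]
inv = nullspace(stacked, n)
assert len(inv) == 1 and inv[0][0] == inv[0][1]  # invariants = span of x + y
```

Higher-degree invariants work the same way after replacing coordinates by all monomials up to the degree bound, which is exactly where an a priori bound on the degree pays off.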
Soumyajit Paul Complexity of solving extensive form games with imperfect information In games with imperfect information, players only have partial knowledge about their position in the game. This makes the task of computing optimal strategies hard, especially when players forget previously gained information. To further substantiate this hardness, we consider two-player zero-sum games with imperfect information modeled in the extensive form and provide several new complexity results on computing the maxmin value for various classes of imperfect information. For these lower bound results we consider problems such as the Square-root sum problem and also complexity classes which involve computation over the reals, more precisely the Existential Theory of the Reals (ETR) and other fragments of the First Order Theory of the Reals (FOT(R)). This is joint work with Hugo Gimbert and B. Srivathsan. Vendredi 4 février 2022, 14 heures 30, Salle 3052 (Online) Bartek Klin Orbit-finite-dimensional vector spaces, with applications to weighted register automata I will discuss vector spaces spanned by orbit-finite sets. These spaces are infinite-dimensional, but their sets of dimensions are so highly symmetric that the spaces have many properties enjoyed by finite-dimensional spaces. Applications of this include a decision procedure for equivalence of weighted register automata, which are the common generalization of weighted automata and register automata for infinite alphabets.
Many algorithms are implemented, including most of the classical ones. Automata and transducers may be weighted over classical number sets (N,Z,Q,R,C,Z/nZ) but also over several other weightsets (such as the tropical semirings). Awali may be accessed in C++ (awalidyn, or directly using templates) or in Python (awalipy). Awali can also be used interactively from its command-line interface (Cora) or using awalipy together with Jupyter, a notebook interface for Python. Awali may be downloaded from http://vaucanson-project.org/Awali/2.1/ and I'll be happy to address possible installation issues after the presentation. Mercredi 5 janvier 2022, 16 heures 15, Salle 3052 Léo Exibard Extending Reactive Synthesis to Infinite Data Domains through Machines with Registers In reactive synthesis, the goal is to automatically generate an implementation from a specification of the reactive and non-terminating input/output behaviours of a system. Specifications are usually modelled as logical formulas or automata over infinite sequences of signals (omega‑words), while implementations are represented as transducers. In the classical setting, the set of signals is assumed to be finite. The aim of this talk is to investigate the case of infinite alphabets. Correspondingly, executions are modelled as data omega-words. In this context, we study specifications and implementations respectively given as automata and transducers extended with a finite set of registers, used to store and compare data values. We consider different instances, depending on whether the specification is nondeterministic, universal (a.k.a. co-nondeterministic) or deterministic: contrary to the finite-alphabet case, those classes are expressively distinct. When the number of registers of the target implementation is unbounded, the synthesis problem is undecidable, while decidability is recovered in the deterministic case.
In the bounded setting, undecidability still holds for non-deterministic specifications, but decidability is recovered for universal ones. The study was initially conducted over data domains with the equality predicate only, but the techniques can be lifted to the dense order (Q,<) and so-called oligomorphic data domains, over which register automata behave in an omega-regular way. A further exploration of the problem allows us to extend the results to the discrete order (N,<), where the behaviours can be regularly approximated. Finally, decidability can be transferred to the case of words with the prefix relation (A^*,<) through a notion of reducibility between domains. Note the unusual day and time! Vendredi 10 décembre 2021, 14 heures 30, Salle 3052 Marie Fortin (University of Liverpool) How undecidable are HyperLTL and HyperCTL*? Temporal logics for the specification of information-flow properties are able to express relations between multiple executions of a system. Two of the most important such logics are HyperLTL and HyperCTL*, which generalise LTL and CTL* by trace quantification. It is known that this expressiveness comes at a price, i.e., satisfiability is undecidable for both logics. We settle the exact complexity of these problems, showing that both are in fact highly undecidable: we prove that HyperLTL satisfiability is \Sigma_1^1-complete and HyperCTL* satisfiability is \Sigma_1^2-complete. To prove \Sigma_1^2 membership for HyperCTL*, we prove that every satisfiable HyperCTL* formula has a model that is equinumerous to the continuum, the first upper bound of this kind. We prove this bound to be tight. This is joint work with Louwe B. Kuijer, Patrick Totzke and Martin Zimmermann. Vendredi 3 décembre 2021, 14 heures 30, Salle 3052 (Online) Jan Otop (University of Wrocław) Active learning automata with syntactic queries Regular languages can be actively learned with membership and equivalence queries in polynomial time.
The learning algorithm, called the L^* algorithm, iteratively constructs the right congruence relation of a given regular language L, and returns the minimal DFA recognizing L. The L^* algorithm has been adapted to various types of automata: tree automata, weighted automata, nominal automata. However, an extension to infinite-word automata has been elusive. In this talk, I will present an extension of the active learning framework, in which the algorithm can ask syntactic queries about the automaton representing a given infinite-word language. First, I will discuss why extending L^*, which asks only semantic queries, to infinite-word languages is difficult. Next, I will present an alternative approach; instead of learning some automaton for a hidden language, we assume that there is a hidden automaton and the algorithm is supposed to learn an equivalent automaton. In this approach, the learning algorithm is allowed to ask standard semantic queries (membership and equivalence) and loop-index queries regarding the structure of the hidden automaton. These queries do not reveal the full structure of the automaton and hence do not trivialize the learning task. In the extended framework, there are polynomial-time learning algorithms for various types of infinite-word automata: deterministic Büchi automata, LimSup-automata, deterministic parity automata and limit-average automata. Finally, the idea of incorporating syntactic queries can be adapted to the pushdown framework; I will briefly discuss the learning algorithm for deterministic visibly pushdown automata. Vendredi 26 novembre 2021, 14 heures 30, Salle 3052 Stéphane Le Roux Extensive-form games with incentive stage-bidding To the classical way of playing in finite extensive-form games, we add a bidding mechanism: at each node of the game tree, each non-controlling player bids some amount of utility for one subgame.
When the controller chooses one subgame, the utilities that were bid for this subgame are transferred to her. The notion of subgame perfect equilibrium (SPE) is naturally extended to these bidding games, and they always exist, as in classical games. They also enjoy new properties: - If the game tree is binary-branching, payoff-sum-maximizing SPE always exist. - If the game involves only two players, all SPE are payoff-sum-maximizing with the same payoff-tuple, which is called the bidding value of the game. - This value is computable, whereas SPE payoff-tuples are not even continuous in classical games. This is joint work with Valentin Goranko Vendredi 29 octobre 2021, 14 heures 30, Salle 3052 Nofar Carmeli (ENS) The Fine-Grained Complexity of Answering Database Queries We wish to identify the queries that can be solved with close to optimal time guarantees over relational databases. Computing all query answers requires at least linear time before the first answer (to read the input and determine the answer's existence), and then we must allow enough time to print all answers (which may be many). Thus, we aspire to achieve linear preprocessing time and constant or logarithmic time per answer. A known dichotomy classifies Conjunctive Queries into those that admit such enumeration and those that do not: the main difficulty of query answering is joining tables, which can be done efficiently if and only if the join query is acyclic. However, the join query usually does not appear in a vacuum; for example, it may be part of a larger query, or it may be applied to a database with dependencies. We show how to use this context for more efficient computation and study how the complexity changes in these settings. Next, we aspire for an even more powerful solution for query answering: a structure that simulates an array containing the query answers.
Such a structure can be used for example to enumerate all answers in a statistically meaningful order or to efficiently compute a boxplot of query answers. We call this simulation random access and study for which queries random access can be achieved with near-optimal guarantees. Our results are accompanied by conditional lower bounds showing that our algorithms can be applied to all tractable queries in some cases. Among our results, we show that a union of tractable Conjunctive Queries may be intractable w.r.t. random access, but a union of intractable Conjunctive Queries may be tractable w.r.t. enumeration. Dietmar Berwanger (LSV) Telling Everything. Information Quotients in Games with Communication We present a model of games with imperfect information that features explicit communication actions, by which a player can send her entire observation history to another player. Such full-information protocols are common in asynchronous distributed systems; here we consider a synchronous setting and cast it as a game on word-automatic trees. The information structures arising from such games are again automatic trees, but their branching degree can be unbounded, and then the synthesis problem becomes challenging. We present a method for constructing a finite bisimulation quotient for a representative subcase, which solves the problem effectively. The construction is a guess; if time allows, we will speculate on how to find such quotients systematically. The talk is based on joint work (in progress) with Laurent Doyen; a part of the material is presented in [D. Berwanger, L.
Doyen (2019): Observation and distinction in infinite games, https://arxiv.org/abs/1809.05978]

Friday 15 October 2021, 2:30 pm, Room 3052 (Online) https://u-paris.zoom.us/j/87690991231?pwd=QjN4QUJKdExOMXp3a1MrQTNNL1RuZz09
Amaldev Manuel (Indian Institute of Technology Goa)
Algebraically characterising first-order logic with neighbour
We give an algebraic characterisation of first-order logic with the neighbour relation, on finite words. For this, we consider languages of finite words over alphabets with an involution on them. The natural algebras for such languages are involution semigroups. To characterise the logic, we define a special kind of semidirect product of involution semigroups, called the locally hermitian product. The characterisation theorem for FO with neighbour states that a language is definable in the logic if and only if it is recognised by a locally hermitian product of an aperiodic commutative involution semigroup and a locally trivial involution semigroup. We then define the notion of involution varieties of languages, namely classes of languages closed under Boolean operations, quotients, involution, and inverse images of involutory morphisms. An Eilenberg-type correspondence is established between involution varieties of languages and pseudovarieties of involution semigroups. This is joint work with Dhruv Nevatia.

Friday 8 October 2021, 2:30 pm, Room 3052
Thomas Colcombet
FO-separation of regular languages over words of ordinal length
We show that the existence of a first-order formula separating two monadic second order formulas over countable ordinal words is decidable. This extends the work of Henckell and Almeida on finite words, and of Place and Zeitoun on $\omega$-words.
This is a joint work with Rémi Morvan and Sam van Gool.

Jacques Sakarovitch (IRIF, CNRS & Télécom Paris)
Derived terms without derivation
The topic of this talk is, once again, the transformation of rational expressions into finite automata, a much laboured subject. In our last joint work, Sylvain Lombardy and I take a shifted perspective on the derivation of expressions method (due to Brzozowski and Antimirov) which reveals that there is indeed no derivation involved. This broadens the scope of the method to expressions over non-free monoids.

Friday 2 July 2021, 2:30 pm, Hybrid: Room 3052 and BBB (https://bbb-front.math.univ-paris-diderot.fr/recherche/ama-bgy-hx5-3rl)
Antonio Casares
Optimal Transformations of Games and Automata using Muller Conditions
Automata and games over infinite words are widely used in verification and synthesis of reactive systems. Several different kinds of acceptance conditions can be used in these systems, which may differ in their complexity and expressive power. For their simplicity and usefulness, parity conditions are of special relevance. However, in many applications such as LTL-synthesis, the automata that are obtained in the first place use more complex conditions (Muller conditions) and we have to transform them into parity ones. In this talk, I will present a construction that takes as input a Muller automaton and transforms it into a parity automaton in an optimal way. More precisely, the resulting parity automaton has minimal size and uses a minimal number of priorities among those automata that admit a locally bijective morphism to the original Muller automaton. This transformation and the optimality result can also be applied to games and other types of transition systems. We show two applications: an improvement on the determinisation of Büchi automata into deterministic parity automata and characterisations of automata that admit parity, Rabin or Streett conditions on top of them.
This is joint work with Thomas Colcombet and Nathanaël Fijalkow, and it will appear at ICALP 2021.

Friday 25 June 2021, 2:30 pm, https://bbb-front.math.univ-paris-diderot.fr/recherche/ama-bgy-hx5-3rl
Amina Doumane
Tree-to-tree functions
We study tree-to-tree transformations that can be defined in first-order logic or monadic second-order logic. We prove a decomposition theorem, which shows that every transformation can be obtained from prime transformations, such as tree-to-tree homomorphisms or pre-order traversal, by using combinators such as function composition.

Charles Paperman
Dynamic Membership for Regular Languages
We study the dynamic membership problem for regular languages: fix a language L, read a word w, build in time O(|w|) a data structure indicating if w is in L, and maintain this structure efficiently under substitution edits on w. We consider this problem on the unit-cost RAM model with logarithmic word length, where the problem always has a solution in O(log |w| / log log |w|). We show that the problem is in O(log log |w|) for languages in an algebraically-defined class QSG, and that it is in O(1) for another class QLZG. We show that languages not in QSG admit a reduction from the prefix problem for a cyclic group, so that they require Ω(log n / log log n) operations in the worst case; and that QSG languages not in QLZG admit a reduction from the prefix problem for the monoid U_1, which we conjecture cannot be maintained in O(1). This yields a conditional trichotomy. We also investigate intermediate cases between O(1) and O(log log n). Our results are shown via the dynamic word problem for monoids and semigroups, for which we also give a classification. We thus solve open problems of the paper of Skovbjerg Frandsen, Miltersen, and Skyum on the dynamic word problem, and additionally cover regular languages.
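As a concrete baseline for the problem in this abstract, the classical O(log |w|)-per-edit solution keeps a balanced tree whose nodes store the DFA's transition function composed over a segment of the word; a substitution edit touches one leaf and its ancestors. The sketch below (names like `DynamicMembership` and the two-state example DFA are ours, for illustration) is only this generic baseline, not the finer O(log log |w|) and O(1) structures for the QSG and QLZG classes discussed in the talk.

```python
# Dynamic membership, generic O(log n)-per-edit baseline: a segment tree whose
# nodes store the DFA's transition function (a tuple state -> state) composed
# over the segment. Substituting one letter updates one leaf and its ancestors.

class DynamicMembership:
    def __init__(self, delta, n_states, init, accepting, word):
        self.delta = delta                    # delta[letter] = tuple: state -> state
        self.init, self.accepting = init, accepting
        self.size = 1
        while self.size < len(word):
            self.size *= 2
        ident = tuple(range(n_states))
        self.tree = [ident] * (2 * self.size)
        for i, a in enumerate(word):
            self.tree[self.size + i] = self.delta[a]
        for i in range(self.size - 1, 0, -1):
            l, r = self.tree[2 * i], self.tree[2 * i + 1]
            self.tree[i] = tuple(r[l[q]] for q in range(n_states))  # apply left, then right

    def substitute(self, pos, letter):
        i = self.size + pos
        self.tree[i] = self.delta[letter]
        i //= 2
        while i:
            l, r = self.tree[2 * i], self.tree[2 * i + 1]
            self.tree[i] = tuple(r[l[q]] for q in range(len(l)))
            i //= 2

    def member(self):
        return self.tree[1][self.init] in self.accepting

# usage: DFA over {a, b} accepting words with an even number of a's
dm = DynamicMembership({'a': (1, 0), 'b': (0, 1)}, n_states=2, init=0,
                       accepting={0}, word="aab")
print(dm.member())      # True: "aab" has two a's
dm.substitute(2, 'a')   # word becomes "aaa"
print(dm.member())      # False: three a's
```

The reductions in the abstract show exactly which languages can beat this logarithmic baseline and by how much.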
Jan Dreier
Lacon- and Shrub-Decompositions: Characterizing First-Order Transductions of Bounded Expansion Classes
The concept of bounded expansion provides a robust way to capture sparse graph classes with interesting algorithmic properties. Most notably, every problem definable in first-order logic can be solved in linear time on bounded expansion graph classes. First-order interpretations and transductions of sparse graph classes lead to more general, dense graph classes that seem to inherit many of the nice algorithmic properties of their sparse counterparts. The leading question of this talk is: "How can we generalize the beautiful existing algorithmic results of sparse graphs to dense graphs?" We start with an overview of sparse and dense graph classes and then introduce lacon- and shrub-decompositions. We show that dense graph classes can be exactly characterized by having a sparse lacon- or shrub-decomposition. If one could efficiently compute such a decomposition, then one could solve every problem definable in first-order logic in linear time on these classes.

Friday 4 June 2021, 2:30 pm, https://bbb-front.math.univ-paris-diderot.fr/recherche/ama-bgy-hx5-3rl
Deacon Linkhorn
The pseudofinite monadic second order theory of linear order (and a connection to profinite algebra).
The monadic second order theory of a linear order (A,<) can be understood using the first order structure M(A,<) = (P(A),⊆,<) where P(A) is the powerset of A, ⊆ is the usual set-theoretic inclusion, and < is the ordering of (A,<) given on singleton subsets. By the pseudofinite monadic second order theory of linear order we mean the intersection of the first order theories of the structures M(A,<) across all finite linear orders (A,<). I will present an explicit axiomatisation of this shared theory, and characterise the non-standard completions (i.e. those admitting infinite models) in terms of residue functions.
I will then talk about a connection with profinite monoids using extended Stone duality. In particular I will discuss a special case of a theorem due to Gehrke, Grigorieff, and Pin saying that the free profinite monoid on one generator is the extended Stone dual of the Boolean algebra of regular languages over a singleton alphabet (together with the binary operation of concatenation).

Friday 28 May 2021, 2:30 pm, https://bbb-front.math.univ-paris-diderot.fr/recherche/ama-bgy-hx5-3rl
Jonathan Tanner
On the Size of Finite Rational Matrix Semigroups
Let n be a positive integer and M a set of rational n×n-matrices such that M generates a finite multiplicative semigroup. We show that any matrix in the semigroup is a product of matrices in M whose length is at most 2^{n(2n+3)} g(n)^{n+1} ∈ 2^{O(n² log n)}, where g(n) is the maximum order of finite groups over rational n×n-matrices. This result implies algorithms with an elementary running time for deciding finiteness of weighted automata over the rationals and for deciding reachability in affine integer vector addition systems with states with the finite monoid property.

Friday 21 May 2021, 2:30 pm, https://u-paris.zoom.us/rec/share/CBacDMMIJL2XuVNP7bx9V23Y1lpOsU0Dql1SwglYizke_yn6MOTtQEwXgFOVqZs.4RJmUCKgDVogKWAj Passcode: k$o$L92E6J
Enrico Formenti
On the decidability of dynamical properties of additive cellular automata
In this talk we are going to prove the decidability of several properties of the dynamics of additive cellular automata over finite abelian groups. In the first part, we prove the results for a restricted class of additive CA, namely the linear CA, by using a new 'ad hoc' result from algebra. In the second part, we show how to lift all the properties to the whole class of additive CA over finite abelian groups.
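To make "additive" concrete: over the cyclic group Z_m, a linear CA updates every cell by a fixed Z_m-linear combination of its neighbours, so the global map commutes with cellwise sums. A minimal sketch of one evolution step on a periodic configuration (the function name and encoding are ours; the decision procedures from the talk are of course not reproduced), using rule 90 as the standard linear example over Z_2:

```python
# One-step evolution of a linear (hence additive) CA over Z_m on a periodic
# configuration: the new value of cell i is a fixed Z_m-linear combination of
# its neighbourhood. Rule 90 is the linear CA x_{i-1} + x_{i+1} (mod 2).

def linear_ca_step(config, coeffs, m):
    """coeffs maps a relative offset to its coefficient in Z_m."""
    n = len(config)
    return [sum(c * config[(i + off) % n] for off, c in coeffs.items()) % m
            for i in range(n)]

rule90 = {-1: 1, 1: 1}            # x_{i-1} + x_{i+1} over Z_2
print(linear_ca_step([0, 0, 1, 0, 0], rule90, 2))   # -> [0, 1, 0, 1, 0]
```

Additivity here means `linear_ca_step(a + b) = linear_ca_step(a) + linear_ca_step(b)` cellwise mod m, which is the structural property the decidability results exploit.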
Friday 7 May 2021, 2:30 pm, https://bbb-front.math.univ-paris-diderot.fr/recherche/ama-bgy-hx5-3rl
Sadegh Soudjani
On Decidability of Time-Bounded Reachability in CTMDPs
In this talk, I discuss the time-bounded reachability problem for continuous-time Markov decision processes. I show that the problem is decidable subject to Schanuel's conjecture. The decision procedure relies on the structure of optimal policies and the conditional decidability (under Schanuel's conjecture) of the theory of reals extended with exponential and trigonometric functions over bounded domains. I further discuss that any unconditional decidability result would imply unconditional decidability of the bounded continuous Skolem problem, or equivalently, the problem of checking if an exponential polynomial has a non-tangential zero in a bounded interval. The latter problems are also decidable subject to Schanuel's conjecture, but finding unconditional decision procedures remains a longstanding open problem. Time permitting, I can also discuss some algorithmic approximate computations using Lyapunov theory of dynamical systems. This talk is based on an ICALP 2020 paper joint with Rupak Majumdar and Mahmoud Salamati at MPI-SWS.

Friday 30 April 2021, 2:30 pm, https://bbb-front.math.univ-paris-diderot.fr/recherche/ama-bgy-hx5-3rl
Denis Kuperberg (CNRS, ENS de Lyon)
Positive first-order logic on words
I will present FO+, a restriction of first-order logic where letters are required to appear positively, and the alphabet is partially ordered, for instance by inclusion order if letters are sets of atoms. Restricting predicates to appear positively is very natural when considering for instance logics with fixed points, or various extensions of regular languages. Here we will ask a syntax versus semantics question: FO+-definable languages are monotone in the letters, but can every FO-definable monotone language be defined in FO+?
On general structures, Lyndon's theorem gives such a syntax/semantics equivalence for monotone first-order formulas, but it is known to fail on finite structures. We will see that it also fails on finite words, giving a much simpler proof for the failure of Lyndon's theorem on finite structures. Finally, we will investigate whether FO+-definability is decidable for regular languages on ordered alphabets.

Misha Vyalyi
Re-pairing brackets and commutative automata.

Arthur Jaquard
A Complexity Approach to Tree Algebras: the Bounded Case
The talk is based on joint work with Thomas Colcombet. We initiate a study of the expressive power of tree algebras, and more generally infinitely sorted algebras, based on their asymptotic complexity. Tree algebras in many of their forms, such as clones, hyperclones, operads, etc, as well as other kinds of algebras, are infinitely sorted: the carrier is a multi-sorted set indexed by a parameter that can be interpreted as the number of variables or hole types. Finite such algebras - meaning when all sorts are finite - can be classified depending on the asymptotic size of the carrier sets as a function of the parameter, which we call the complexity of the algebra. This naturally defines the notions of algebras of bounded, linear, polynomial, exponential or doubly exponential complexity. Our main result precisely characterizes the tree algebras of bounded complexity, based on the languages that they recognize, as Boolean closures of simple languages. Along the way, we prove that such algebras that are syntactic are exactly those in which, as soon as there are sufficiently many variables, the elements are invariant under permutation of the variables.
Friday 2 April 2021, 2:30 pm, https://bbb-front.math.univ-paris-diderot.fr/recherche/ama-bgy-hx5-3rl
Amaury Pouly (IRIF)
On the Decidability of Reachability in Continuous Time Linear Time-Invariant Systems
We consider the decidability of state-to-state reachability in linear time-invariant control systems over continuous time. We analyze this problem with respect to the allowable control sets, which are assumed to be the image under a linear map of the unit hypercube (i.e. zonotopes). This naturally models bounded (sometimes called saturated) controls. Decidability of the version of the reachability problem in which control sets are affine subspaces of R^n is a fundamental result in control theory. We obtain some decidability results in low dimension or when the spectrum of the matrix is special. We also obtain some decidability conditioned on the decidability of the first-order theory of the reals with the exponential function. Finally, we obtain a hardness result for a mild generalization of the problem. In this case, we show that the problem is at least as hard as the Continuous Positivity problem if the control set is a singleton, or the Nontangential Continuous Positivity problem if the control set is $[-1,1]$.

Friday 19 March 2021, 2:30 pm, http://perso.ens-lyon.fr/pierre.pradic/slides/2021-03-irif.pdf
Pierre Pradic
Star-free languages, first-order transductions and the non-commutative λ-calculus
The talk will be based on joint work with Lê Thành Dũng (Tito) Nguyễn. This work is part of an exploration of the expressiveness of the simply-typed λ-calculus (STLC) and related substructural variants (linear, affine, planar) using Church encodings of datatypes. More specifically, we are interested in the connection with automata theory for string transductions and languages. I will first introduce the setting and motivate the problems using Hillebrand and Kanellakis' result stating that the classes of STLC-definable and regular languages coincide.
I will then focus on a result stating that star-free languages correspond exactly to those obtained in a non-commutative refinement of STLC based on linear logic. I will sketch an alternative proof of this result using a semantic evaluation argument and discuss related work-in-progress concerning characterizations in the non-commutative λ-calculus of first-order regular string transductions using planar reversible 2DFTs and tree-walking automata. (The results I will present are based on https://hal.archives-ouvertes.fr/hal-02476219 and http://nguyentito.eu/2021-01-links.pdf)

Friday 12 March 2021, 2:30 pm, https://u-paris.zoom.us/rec/share/CF6tuTHp2Y6P2vtynWBE_dDKsv93CiJOIBtvg3ujsYCsqvPpjMS6DCY3Wf_BzmUx.GtIj9JDQznwAW_6w Passcode: F504d+s8@?
Nathanaël Fijalkow
Search algorithms for Probabilistic Context-Free Grammars
A probabilistic context-free grammar (PCFG) defines a distribution over trees: each (finite) tree has some probability of being generated. We consider the following game: a (secret) tree is generated by a PCFG. The PCFG is known and given as input, and the goal is to find the tree. How many equality queries are needed? We'll discuss algorithms for solving this problem, using either enumeration or sampling, and applications to program synthesis. Joint (ongoing) work with Pierre Ohlmann and Guillaume Lagarde.

Friday 5 March 2021, 2:30 pm, Online at https://bbb-front.math.univ-paris-diderot.fr/recherche/ama-bgy-hx5-3rl
Manfred Madritsch (Université de Lorraine, CNRS)
Three views on numeration systems
A numeration system associates to each element of a given set a finite word. The best known of these systems is the decimal system, which associates each positive integer with a word over the alphabet $\{0,1,\ldots,9\}$. This idea can easily be generalised to other positive integers as base, such as the binary system or the hexadecimal system. The first part deals with signed numeration systems.
In these systems, we add digits to the alphabet, such as the digit $-1$ in the binary system. Under certain conditions on consecutive digits, we obtain unique representations. This is related to the concept of abstract numeration systems. We will study the shift and odometer from the point of view of dynamical systems. Digital restrictions also play an important role in another numeration system: the Zeckendorf expansion. This is an example of the larger class of numeration systems based on linear recurrent sequences, which we discuss in the second part. A way to analyse a numeration system is to examine functions operating on the digital representation. The most famous of these functions is the sum-of-digits function, and we investigate it from an analytic point of view. In the expansion of a randomly chosen real, we expect each block of digits to occur with the same frequency. This leads to the concept of normal numbers and the related notion of uniformly distributed sequences. In the last part, we adopt a probabilistic point of view and construct normal numbers and uniformly distributed sequences related to numeration systems.

Friday 19 February 2021, 2:30 pm, https://bbb-front.math.univ-paris-diderot.fr/recherche/ama-bgy-hx5-3rl
Liat Peterfreund
2-Valued Logic for SQL on Incomplete Information
The design of SQL is based on a three-valued logic (3VL), rather than the usual two-valued Boolean logic. This 3VL accommodates the additional truth value "unknown" for handling nulls, representing missing values. This third truth value is viewed as indispensable for SQL expressiveness, but at the same time it is much criticized for leading to unintuitive behavior of queries and for being a source of programmer mistakes. In a joint work with Leonid Libkin, we show that, contrary to the widely held view, SQL could have been designed based on the standard two-valued logic, without any loss of expressiveness and without giving up nulls.
We show that conflating unknown, resulting from conditions referring to nulls, with false leads to an equally expressive version of SQL. Queries written under the two-valued semantics can be efficiently translated into the standard SQL and thus executed on any existing RDBMS. Our results cover the core of the SQL 1999 Standard, including SELECT-FROM-WHERE-GROUP BY-HAVING queries extended with subqueries and IN/EXISTS/ANY/ALL conditions, and recursive queries. In addition, we show that no other many-valued logic for treating nulls could have possibly led to a more expressive language.

Julien Grange
Successor-Invariant First-Order Logic on Classes of Bounded Degree
Successor-invariant first-order logic is the extension of first-order logic where one has access to an additional successor relation on the elements of the structure, as long as the validity of formulas doesn't depend on the choice of a particular successor. It has been shown by Rossman that this formalism allows one to express properties that are not FO-definable. However, his separating example involves dense classes of structures, and the expressive power of successor-invariant first-order logic is an open question for sparse classes of structures. We prove that when the degree is bounded, successor-invariant first-order logic is no more expressive than first-order logic.

Stefan Göller (University of Kassel)
Bisimulation Finiteness of Pushdown Systems Is Elementary
It is shown that if a pushdown system is bisimulation equivalent to a finite system, there is already such a finite system whose size is elementary in the description size of the pushdown system. As a consequence, it is elementarily decidable if a pushdown system is bisimulation-finite. This is joint work with Pawel Parys.
Friday 22 January 2021, 2:30 pm, Online
Daniela Petrisan
Learning automata and transducers: a categorical approach
In this talk we present a categorical approach to learning automata over words, in the sense of the $L^*$-algorithm of Angluin. This yields a new generic $L^*$-like algorithm which can be instantiated for learning deterministic automata, automata weighted over fields, as well as subsequential transducers. The generic nature of our algorithm is obtained by adopting an approach in which automata are simply functors from a particular category representing words to a "computation category". We establish that the sufficient properties for yielding the existence of minimal automata, in combination with some additional hypotheses relative to termination, ensure the correctness of our generic algorithm. This is joint work with Thomas Colcombet and Riccardo Stabile.

Ayrat Khalimov
Church Synthesis on Register Automata over Infinite Ordered Domains
Register automata are finite automata equipped with a finite set of registers in which they can store data, i.e. elements from an infinite alphabet, and compare this data for equality. They provide a simple formalism to specify the behaviour of reactive systems operating on data. Initially defined with the equality predicate only, they can be extended to allow for comparison of data with regard to a linear order, like (N,<) or (Q,<). We study the synthesis problem for those specifications. To that end, we extend the classical Church synthesis game to infinite alphabets: two players, Adam and Eve, alternately pick some data, and Eve wins whenever their interaction satisfies the specification, which is a language of infinite words over an infinite data alphabet. Unfortunately, such games are undecidable already for specifications described by deterministic register automata. Therefore, we study one-sided Church games, where Eve uses a finite alphabet but Adam still manipulates data.
We show that such games are decidable in exponential time in the number of registers in the specification, both for Q and N, are determined, and that strategies describable by finite-state register transducers suffice for Eve to win. To obtain this result we study constraint sequences, which abstract the behaviour of register automata and allow for a reduction of Church games to omega-regular games. Finally, we apply these results to the transducer-synthesis problem. I will end the talk with a discussion of bounded synthesis. There, specification automata are universal (aka co-nondeterministic), and we search for a realizing transducer with an a-priori given number of registers (hence the term 'bounded synthesis'). This problem is known to be decidable for register automata comparing data for equality only, and we will look at the challenges arising for the case of (N,<). (This is joint work with Léo Exibard, Emmanuel Filiot, and Ayrat Khalimov.)

Damien Pous
Cyclic proofs, System T and the power of contraction
We study a cyclic proof system C over regular expression types, inspired by linear logic and non-wellfounded proof theory. Proofs in C can be seen as strongly typed goto programs. We show that they denote computable total functions and we analyse the relative strength of C and Gödel's system T. In the general case, we prove that the two systems capture the same functions on natural numbers. In the affine case, i.e., when contraction is removed, we prove that they capture precisely the primitive recursive functions, providing an alternative and more general proof of a result by Dal Lago about an affine version of system T.

Joël Ouaknine (MPI-SWS)
Holonomic Techniques, Periods, and Decision Problems
Holonomic techniques have deep roots going back to Wallis, Euler, and Gauss, and have evolved in modern times as an important subfield of computer algebra, thanks in large part to the work of Zeilberger and others over the past three decades.
In this talk, I will give an overview of the area, and in particular will present a select survey of known and original results on decision problems for holonomic sequences and functions. I will also discuss some surprising connections to the theory of periods and exponential periods, which are classical objects of study in algebraic geometry and number theory; in particular, I will relate the decidability of certain decision problems for holonomic sequences to deep conjectures about periods and exponential periods, notably those due to Kontsevich and Zagier. Parts of this talk will be based on the paper "On Positivity and Minimality for Second-Order Holonomic Sequences", https://arxiv.org/abs/2007.12282 .

Friday 4 December 2020, 2:30 pm, Room 3052
Georg Zetzsche
Rational subsets of Baumslag-Solitar groups
We consider the rational subset membership problem for Baumslag-Solitar groups. These groups form a prominent class in the area of algorithmic group theory, and they were recently identified as an obstacle for understanding the rational subsets of GL(2,ℚ). We show that rational subset membership for Baumslag-Solitar groups BS(1,q) with q ≥ 2 is decidable and PSPACE-complete. To this end, we introduce a word representation of the elements of BS(1,q): their pointed expansion (PE), an annotated q-ary expansion. Seeing subsets of BS(1,q) as word languages, this leads to a natural notion of PE-regular subsets of BS(1,q): these are the subsets of BS(1,q) whose sets of PEs are regular languages. Our proof shows that every rational subset of BS(1,q) is PE-regular. Since the class of PE-regular subsets of BS(1,q) is well-equipped with closure properties, we obtain further applications of these results. Our results imply that (i) emptiness of Boolean combinations of rational subsets is decidable, (ii) membership in each fixed rational subset of BS(1,q) is decidable in logarithmic space, and (iii) it is decidable whether a given rational subset is recognizable.
In particular, it is decidable whether a given finitely generated subgroup of BS(1,q) has finite index. This is joint work with Michaël Cadilhac and Dmitry Chistikov.

Nathan Lhote (LaBRI)
Pebble Minimization of Polyregular Functions.
We show that a polyregular word-to-word function is regular if and only if its output size is at most linear in its input size. Moreover, a polyregular function can be realized by a transducer with two pebbles if and only if its output has quadratic size in its input, by a transducer with three pebbles if and only if its output has cubic size in its input, etc. Moreover, the characterization is decidable and, given a polyregular function, one can compute a transducer realizing it with the minimal number of pebbles. We apply the result to MSO interpretations from words to words. We show that MSO interpretations of dimension k exactly coincide with k-pebble transductions.

Victor Lutfalla (LIPN)
Substitution planar tilings with n-fold rotational symmetry
I will present the tools we used to study tilings that are both substitutive and planar (a relaxed version of cut-and-project) in order to present our main result: there exist planar substitution tilings with 2n-fold rotational symmetry for any odd n.

Thursday 12 November 2020, 3:30 pm, Room 3052
Guillermo Alberto Perez (University of Antwerp)
Coverability in 1-VASS with Disequality Tests
In this talk we will focus on the so-called control-state reachability problem (also called the coverability problem) for 1-dimensional vector addition systems with states (VASS). We show that this problem lies in NC: the class of problems solvable in polylogarithmic parallel time. We will also generalize the problem to allow disequality constraints on transitions (i.e., we allow transitions to be disabled if the accumulated weight is equal to a specific value).
For this generalization, we show that the coverability problem is solvable in polynomial time even though a shortest run may have exponential length. Unusual time!

Friday 6 November 2020, 2:30 pm, Room 3052
Denis Kuperberg (LIP, ENS Lyon, CNRS)
Recognizing Good-for-Games automata: the G2 conjecture
In the setting of regular languages of infinite words, Good-for-Games (GFG) automata can be seen as an intermediate formalism between determinism and nondeterminism, with advantages from both worlds. Indeed, like deterministic automata, GFG automata enjoy good compositional properties (useful for solving games and composing automata and trees) and easy inclusion checks. Like nondeterministic automata, they can be exponentially more succinct than deterministic automata. I will focus in this talk on the following problem: given a nondeterministic parity automaton on infinite words, is it GFG? The complexity of this problem is one of the main remaining open questions concerning GFG automata, motivated by the potential applications that would come with an efficient algorithm. After giving the necessary context, I will explain the current understanding of this question, and describe a simple polynomial-time algorithm that is conjectured to solve the problem, but has only been proven correct if the input is a Büchi or a co-Büchi automaton.

Wojciech Czerwiński (University of Warsaw)
Universality problem for unambiguous Vector Addition Systems with States
I will show that the universality problem is ExpSpace-complete for unambiguous VASS, which is in strong contrast with Ackermann-completeness of the same problem for nondeterministic VASS. I also plan to present some more results concerning the interplay between unambiguity and VASS. (Joint work with Diego Figueira and Piotr Hofman.)

Lorenzo Clemente (Faculty of Mathematics, Informatics and Mechanics, University of Warsaw)
Bidimensional linear recursive sequences and universality of unambiguous register automata
We study the universality and inclusion problems L(A)⊆L(B) for register automata over equality data. We show that the universality and inclusion problems can be solved in 2-EXPTIME when both automata A, B are without guessing and B is unambiguous, which improves on the recent 2-EXPSPACE upper bound by Mottet and Quaas. We proceed by reducing inclusion to universality, and then universality to the problem of counting the number of orbits of runs of the automaton. We show that the orbit-counting function satisfies a system of bidimensional linear recursive equations with polynomial coefficients (linrec), which generalises analogous recurrences for the Stirling numbers of the second kind, and then we show that universality reduces to the zeroness problem for linrec sequences. While such a counting approach is classical and has successfully been applied to unambiguous finite automata and grammars over finite alphabets, its application to register automata over infinite alphabets is novel. We provide two algorithms to decide the zeroness problem for the linrec sequences arising from orbit-counting functions. Both algorithms rely on skew polynomials. The first algorithm performs variable elimination and has elementary complexity. The second algorithm relies on the computation of the Hermite normal form of matrices over a skew polynomial field. This yields an EXPTIME decision procedure for the zeroness problem, which in turn yields the claimed bounds for the universality and inclusion problems of register automata.

Friday 9 October 2020, 2:30 pm, Room 3052 and online on BigBlueButton
Olivier Bournez (LIX)
Characterization of computability and complexity classes with difference equations
We will discuss the expressive and computational power of Ordinary Differential Equations (ODEs).
We present the general theory of discrete ODEs for computation theory, illustrate it with various examples of algorithms, and provide several implicit characterizations of complexity and computability classes. Vendredi 26 juin 2020, 14 heures 30, Held online, on BigBlueButton Laure Daviaud (City University of London) About learning automata and weighted automata In this talk, I will present algorithms to learn deterministic finite automata (due to Angluin) and weighted automata over the usual semiring R with addition and multiplication (due to Beimel, Bergadano, Bshouty, Kushilevitz and Varricchio). I will then present some related open questions and pinpoint the difficulties that arise when trying to generalise these algorithms to any semiring. Vendredi 19 juin 2020, 14 heures 30, Online on BigBlueButton Sven Dziadek Weighted Logics and Weighted Simple Automata for Context-Free Languages of Infinite Words We investigate weighted context-free languages of infinite words, a generalization of ω-context-free languages (Cohen, Gold 1977) and an extension of weighted context-free languages of finite words (Chomsky, Schützenberger 1963). As in the theory of formal grammars, these weighted languages, or ω-algebraic series, can be represented as solutions of ω-algebraic systems of equations and by weighted ω-pushdown automata. Our results are threefold. We show that ω-algebraic systems can be transformed into Greibach normal form. Our second result proves that simple ω-pushdown automata recognize all ω-algebraic series. Simple pushdown automata do not use ε-transitions and can change the stack only by at most one symbol. We use these results to prove a logical characterization of weighted ω-context-free languages in the sense of Büchi, Elgot and Trakhtenbrot. This is joint work with Manfred Droste and Werner Kuich.
Vendredi 12 juin 2020, 14 heures 30, Online (BigBlueButton) Kuize Zhang On detectability of finite automata and labeled Petri nets Detectability is a basic property of partially observed dynamic systems. If a system satisfies such a property, then one can use an observed output sequence generated by the system to determine its internal states after some time. This property plays an important role in many problems such as state estimation and controller synthesis. Finite automata and labeled Petri nets are two widely-studied models in discrete-event systems, which consist of transitions between discrete states driven by spontaneous occurrences of events, and can be seen as abstractions of many practical systems. (A supervisory control framework for synthesising controllers in discrete-event systems was initiated by Ramadge and Wonham in the late 1980s.) In this talk, we introduce recent verification results on a particular property called strong detectability, for finite automata and labeled Petri nets, and several related further topics. Vendredi 5 juin 2020, 14 heures 30, Online K. S. Thejaswini (University of Warwick) The Strahler Number of a Parity Game The Strahler number is a measure of branching complexity of rooted trees. We define the Strahler number of a parity game to be the smallest Strahler number of the tree of any of its attractor decompositions. In this talk, we will argue that the Strahler number of a parity game is a robust, and hence arguably natural, parameter: it coincides with its alternative version based on trees of progress measures and, remarkably, with the register number defined by Lehtinen (2018). We will also look at how parity games can be solved in quasi-linear space and in time that is polynomial in the number of vertices n and linear in (d/2k)^k, where d is the number of priorities and k is the Strahler number.
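The branching measure behind this abstract is easy to compute bottom-up: a leaf has Strahler number 1, and an internal node takes the maximum of its children's numbers, incremented when at least two children attain that maximum. A minimal sketch (the encoding of trees as nested lists is illustrative, not from the talk):

```python
def strahler(tree):
    """Strahler number of a rooted tree given as nested lists.

    A leaf (empty list) has Strahler number 1. An internal node has
    the maximum m of its children's numbers, bumped to m + 1 when at
    least two children attain that maximum.
    """
    numbers = [strahler(child) for child in tree]
    if not numbers:
        return 1
    m = max(numbers)
    return m + 1 if numbers.count(m) >= 2 else m

leaf = []
assert strahler(leaf) == 1
assert strahler([leaf, leaf]) == 2            # two branches of equal rank
assert strahler([[leaf, leaf], leaf]) == 2    # unbalanced: rank stays at 2
assert strahler([[leaf, leaf], [leaf, leaf]]) == 3
```

Since the number only grows when two subtrees tie, a tree must at least double in size for each increment, which is why the Strahler number is logarithmically bounded in the number of vertices.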
This complexity is quasi-polynomial because the Strahler number is at most logarithmic in the number of vertices. This significantly improves the running times and space achieved for parity games of bounded register number by Lehtinen (2018) and by Parys (2020). Vendredi 29 mai 2020, 14 heures 30, Online Liat Peterfreund (IRIF) Weight Annotation in Information Extraction The framework of document spanners abstracts the task of information extraction from text as a function that maps every document (a string) into a relation over the document's spans (intervals identified by their start and end indices). For instance, the regular spanners are the closure under the Relational Algebra (RA) of the regular expressions with capture variables, and the expressive power of the regular spanners is precisely captured by the class of vset-automata - a restricted class of transducers that mark the endpoints of selected spans. In this work, we embark on the investigation of document spanners that can annotate extractions with auxiliary information such as confidence, support, and confidentiality measures. To this end, we adopt the abstraction of provenance semirings by Green et al., where tuples of a relation are annotated with the elements of a commutative semiring, and where the annotation propagates through the (positive) RA operators via the semiring operators. Hence, the proposed spanner extension, referred to as an annotator, maps every string into an annotated relation over the spans. As a specific instantiation, we explore weighted vset-automata that, similarly to weighted automata and transducers, attach semiring elements to transitions. We investigate key aspects of expressiveness, such as the closure under the positive RA, and key aspects of computational complexity, such as the enumeration of annotated answers and their ranked enumeration in the case of numeric semirings. 
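The semiring propagation described above can be sketched concretely for the positive RA operators: union adds annotations, while product multiplies them, so instantiating the semiring as (N, +, ×) counts derivations and a semiring like ([0, 1], max, ×) tracks confidences. A minimal sketch (the encoding of annotated relations as dicts from tuples to annotations is mine, not from the paper):

```python
def union(r, s, plus):
    """Union of annotated relations: annotations of shared tuples are summed."""
    out = dict(r)
    for t, a in s.items():
        out[t] = plus(out[t], a) if t in out else a
    return out

def product(r, s, times):
    """Cartesian product of annotated relations: annotations are multiplied."""
    return {t1 + t2: times(a, b) for t1, a in r.items() for t2, b in s.items()}

# Counting semiring (N, +, *): annotations count derivations of each tuple.
r = {("alice",): 2, ("bob",): 1}
s = {("alice",): 3}
assert union(r, s, lambda x, y: x + y) == {("alice",): 5, ("bob",): 1}
assert product(r, s, lambda x, y: x * y) == {("alice", "alice"): 6,
                                             ("bob", "alice"): 3}
```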
For a number of these problems, fundamental properties of the underlying semiring, such as positivity, are crucial for establishing tractability. This is a joint work with Johannes Doleschal, Benny Kimelfeld and Wim Martens. Vendredi 22 mai 2020, 14 heures 30, Virtual seminar on BigBlueButton Mikołaj Bojańczyk (MIMUW) Single use transducers over infinite alphabets Automata for infinite alphabets, despite undeniable appeal, are a bit of a theoretical mess. Almost all models are non-equivalent as language recognisers: deterministic/nondeterministic/alternating, one-way/two-way, etc. Also monoids give a different class of languages, and mso gives yet another. In this talk, I will describe how the single-use restriction can bring some order into this zoo. The single-use restriction says that once an atom from a register is queried, then that atom disappears. Among our results: a Factorisation Forest Theorem, a Krohn-Rhodes decomposition, and a class of "regular" transducers which admits four equivalent characterisations. Joint work with Rafał Stefański. Vendredi 15 mai 2020, 14 heures 30, Online, on BigBlueButton (usual link, available on the mailing list) Thomas Colcombet (IRIF) Unambiguous Separators for Tropical Tree Automata In this paper we show that given a max-plus automaton (over trees, and with real weights) computing a function f and a min-plus automaton (similar) computing a function g such that f ⩽ g, there exists effectively an unambiguous tropical automaton computing h such that f ⩽ h ⩽ g. This generalizes a result of Lombardy and Mairesse of 2006 stating that series which are both max-plus and min-plus rational are unambiguous. This generalization goes in two directions: trees are considered instead of words, and separation is established instead of characterization (separation implies characterization). The techniques in the two proofs are very different. 
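The talk above concerns automata over trees, but the tropical semantics is already visible in the word case: a max-plus automaton assigns to a word the maximum, over all runs, of the summed transition weights, which is exactly a matrix product over the (max, +) semiring (a min-plus automaton is dual). A minimal sketch for words (the toy automaton and all names are illustrative, not from the paper):

```python
NEG_INF = float("-inf")  # semiring zero: "no run"

def maxplus_step(vec, mat):
    """Multiply a row vector by a matrix over the (max, +) semiring."""
    cols = range(len(mat[0]))
    return [max(vec[i] + mat[i][j] for i in range(len(vec))) for j in cols]

def maxplus_weight(init, matrices, final, word):
    """Weight of `word`: max over runs of the summed transition weights."""
    vec = init
    for letter in word:
        vec = maxplus_step(vec, matrices[letter])
    return max(v + f for v, f in zip(vec, final))

# Toy two-state automaton over {a, b}: state 0 loops on 'a' with weight 1,
# moves to state 1 on 'b' with weight 0; state 1 loops on 'b' with weight 2.
matrices = {
    "a": [[1, NEG_INF], [NEG_INF, NEG_INF]],
    "b": [[NEG_INF, 0], [NEG_INF, 2]],
}
init, final = [0, NEG_INF], [0, 0]
assert maxplus_weight(init, matrices, final, "aab") == 2  # 1 + 1 + 0
assert maxplus_weight(init, matrices, final, "abb") == 3  # 1 + 0 + 2
```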
Jeudi 7 mai 2020, 14 heures 30, Online, on BigBlueButton (usual link, available on the mailing list) Florent Koechlin Weakly-unambiguous Parikh automata and their link to holonomic series We investigate the connection between properties of formal languages and properties of their generating series, with a focus on the class of holonomic power series. It is a classical result that regular languages have rational generating series and that the generating series of unambiguous context-free languages are algebraic. This connection between automata theory and analytic combinatorics has been successfully exploited. For instance, Flajolet used it in the eighties to prove the inherent ambiguity of some context-free languages using criteria from complex analysis. Settling a conjecture of Castiglione and Massazza, we establish an interesting link between unambiguous Parikh automata and holonomic power series, which also yields characterizations of inherent ambiguity and algorithmic byproducts for these automata models. This is a joint work with Alin Bostan, Arnaud Carayol and Cyril Nicaud. Vendredi 17 avril 2020, 14 heures 30, Online Jan Philipp Wächter (Universität Stuttgart) An Automaton Group with PSPACE-Complete Word Problem Finite automata pose an interesting alternative way to present groups and semigroups. Some of these automaton groups became famous for their peculiar properties and have been extensively studied. In addition to that, there exists also a line of research on the general properties of the class of automaton groups. One aspect of this research is the study of algorithmic properties of automaton groups and semigroups. While many natural algorithmic decision problems have been proven or are generally suspected to be undecidable for these classes, the word problem forms a notable exception. 
In the group case, it asks whether a given word in the generators is equal to the neutral element in the group in question and is well-known to be decidable for automaton groups. In fact, it was observed in a work by Steinberg published in 2015 that it can be solved in nondeterministic linear space using a straightforward guess-and-check algorithm. In the same work, he conjectured that there is an automaton group with a PSPACE-complete word problem. In a recent paper presented at STACS 2020, Armin Weiß and I could prove that there indeed is such an automaton group. To achieve this, we combined two ideas. The first one is a construction introduced by Daniele D'Angeli, Emanuele Rodaro and me to show that there is an inverse automaton semigroup with a PSPACE-complete word problem, and the second one is an idea already used by Barrington in 1989 to encode NC¹ circuits in the group of even permutations of five elements. In the talk, we will discuss how Barrington's idea can be applied in the context of automaton groups, which will allow us to prove that the uniform word problem for automaton groups (where the generating automaton, and thus the group, is part of the input) is PSPACE-complete. Afterwards, we will also discuss the ideas underlying the construction to simulate a PSPACE-machine with an invertible automaton, which allow for extending the result to the non-uniform case. Finally, we will briefly look at related problems such as the compressed word problem for automaton groups. Javier Esparza An Efficient Normalisation Procedure for Linear Temporal Logic Joint work with Salomon Sickert In the mid 80s, Lichtenstein, Pnueli, and Zuck proved a classical theorem stating that every formula of LTL with past operators is equivalent to a formula of the form $\bigwedge_{i=1}^n (\mathrm{GF}\varphi_i \vee \mathrm{FG}\psi_i)$, where $\varphi_i$ and $\psi_i$ contain only past operators.
Some years later, Chang, Manna, and Pnueli built on this result to derive a similar normal form for the future fragment of LTL. Both normalisation procedures had a non-elementary worst-case blow-up, and followed an involved path from LTL formulas to counter-free automata to star-free regular expressions and back to LTL. We improve on both points. We present a purely syntactic normalisation procedure from LTL to LTL, with single exponential blow-up, that can be implemented in a few dozen lines of Standard ML code. As an application, we derive a simple algorithm to translate LTL into deterministic Rabin automata. The algorithm normalises the formula, translates it into a special very weak alternating automaton, and applies a simple determinisation procedure, valid only for these special automata. Online seminar on BigBlueButton Vendredi 3 avril 2020, 14 heures 30, Online Nathanaël Fijalkow (LaBRI) Assume Guarantee Synthesis for Prompt Linear Temporal Logic An assume guarantee (AG) specification is of the form "Assumption implies Guarantee". AG Synthesis is the following problem: given an AG specification, construct a system satisfying it. In this talk I will discuss the case where both Assumptions and Guarantees are given by Prompt Linear Temporal Logic (Prompt LTL), which is a logic extending LTL by adding bound requirements such as "every request is answered in bounded time". The solution to the AG problem for Prompt LTL will be an invitation to the theory of regular cost functions. Joint work with Bastien Maubert and Moshe Y. Vardi. Virtual seminar on BigBlueButton Edwin Hamel-De Le Court Title not yet announced. Vendredi 20 mars 2020, 14 heures 30, Online Pierre Ohlmann (IRIF) Controlling a random population Bertrand et al.
(2017) introduced a model of parameterised systems, where each agent is represented by a finite state system, and studied the following control problem: for any number of agents, does there exist a controller able to bring all agents to a target state? They showed that the problem is decidable and EXPTIME-complete in the adversarial setting, and posed as an open problem the stochastic setting, where the agent is represented by a Markov decision process. In this paper, we show that the stochastic control problem is decidable. Our solution makes significant use of well-quasi-orders, of the max-flow min-cut theorem, and of the theory of regular cost functions. The seminar will take place virtually using the software BigBlueButton (see intranet). Detailed instructions will follow by email at 14:00. Vendredi 6 mars 2020, 10 heures 30, Salle 3052 Stefan Milius (Friedrich-Alexander Universität Erlangen-Nürnberg) From Equational Specifications of Algebras with Structure to Varieties of Data Languages We present a new category theoretic approach to equationally axiomatizable classes of algebras. This approach is well-suited for the treatment of algebras equipped with additional computationally relevant structure, such as ordered algebras, continuous algebras, quantitative algebras, nominal algebras, or profinite algebras. We present a generic HSP theorem and a sound and complete equational logic, which encompass numerous flavors of equational axiomatizations studied in the literature. In addition, we use the generic HSP theorem as a key ingredient to obtain Eilenberg-type correspondences yielding algebraic characterizations of properties of regular machine behaviours. When instantiated for orbit-finite nominal monoids, the generic HSP theorem yields a crucial step for the proof of the first Eilenberg-type variety theorem for data languages. Note the unusual time!
Henning Urbat (FAU Erlangen-Nürnberg) Automata Learning: An Algebraic Approach We propose a generic framework for learning unknown formal languages of various types (e.g. finite or infinite words, weighted and nominal languages). Our approach is parametric in a monad T that represents the given type of languages and their recognizing algebraic structures. Using the concept of an automata presentation of T-algebras, we demonstrate that the task of learning a T-recognizable language can be reduced to learning an abstract form of algebraic automaton whose transitions are modeled by a functor. For the important case of adjoint automata, we devise a learning algorithm generalizing Angluin's L*. The algorithm is phrased in terms of categorically described extension steps; we provide a termination and complexity analysis based on a dedicated notion of finiteness. Our framework applies to structures like ω-regular languages that were not within the scope of existing categorical accounts of automata learning. In addition, it yields new generic learning algorithms for several types of languages for which no such algorithms were previously known at all, including nominal languages with name binding, and cost functions. This talk is based on joint work with Lutz Schröder. Marie Van Den Bogaard (ULB) Subgame Perfect Equilibria in Quantitative Reachability Games In this talk, we consider multiplayer games on graphs. In such games, each player has his own objective, that does not necessarily clash with the objectives of the other players. In this "non zero-sum" context, equilibria are a better suited solution concept than the classical winning strategy notion. We will focus on a refinement of the well-known Nash Equilibrium concept: Subgame Perfect Equilibrium (SPE for short), where players have to play rationally in every scenario, even the ones that deviate from the planned outcome.
We will explain why this refinement is a relevant solution concept in multiplayer games and show how to handle them in quantitative reachability games, where each player wants to minimize the number of steps to reach its own target set of vertices. Mardi 25 février 2020, 14 heures, Salle 3052 Georg Zetsche (MPI SWS) Extensions of $\omega$-Regular Languages We consider extensions of monadic second order logic over $\omega$-words, which are obtained by adding one language that is not $\omega$-regular. We show that if the added language $L$ has a neutral letter, then the resulting logic is necessarily undecidable. A corollary is that the $\omega$-regular languages are the only decidable Boolean-closed full trio over $\omega$-words. (Joint work with Mikołaj Bojańczyk, Edon Kelmendi, and Rafał Stefański) Note the unusual time (14:00). Luc Dartois (LACL) Reversible Transducers Transducers extend automata by adding outputs to the transition, thus computing functions over words instead of recognizing languages. Deterministic two-way transducers define the robust class of regular functions which is, among other good properties, closed under composition. However, the best known algorithms for composing two-way transducers are rather involved and cause a double exponential blow-up in the size of the input machines. This contrasts with the rather direct and polynomial construction for composing one-way machines. In this talk, I will present the class of reversible transducers, which are machines that are both deterministic and co-deterministic. This class enjoys polynomial composition complexity, even in the two-way case. Although this class is not very expressive in the one-way scenario, I will show that any two-way transducer can be made reversible through a single exponential blow-up. As a consequence, the composition of two-way transducers can be done with a single exponential blow-up in the number of states, enhancing the best known algorithm from the 60s. 
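The direct polynomial construction for composing one-way machines mentioned in this abstract is a product construction: the composite state is a pair, and each input letter is first fed to the first machine, whose output letter drives the second. A minimal sketch for letter-to-letter (Mealy) transducers, which are a simpler special case than the sequential transducers of the talk (all names and the toy machines are mine):

```python
def compose_mealy(delta1, delta2):
    """Product construction composing two Mealy machines.

    Each delta maps (state, letter) -> (next_state, output_letter).
    The composite has pair states, so its size is polynomial
    (here: quadratic) in the sizes of the inputs.
    """
    states1 = {q for (q, _) in delta1}
    states2 = {q for (q, _) in delta2}
    letters = {a for (_, a) in delta1}
    delta = {}
    for q1 in states1:
        for q2 in states2:
            for a in letters:
                if (q1, a) in delta1:
                    p1, b = delta1[(q1, a)]      # run the first machine
                    if (q2, b) in delta2:
                        p2, c = delta2[(q2, b)]  # feed its output to the second
                        delta[((q1, q2), a)] = ((p1, p2), c)
    return delta

def run(delta, state, word):
    """Run a Mealy machine from `state` and return the output word."""
    out = []
    for a in word:
        state, c = delta[(state, a)]
        out.append(c)
    return "".join(out)

# Toy example: the first machine swaps a/b, the second uppercases.
swap = {(0, "a"): (0, "b"), (0, "b"): (0, "a")}
upper = {(0, "a"): (0, "A"), (0, "b"): (0, "B")}
comp = compose_mealy(swap, upper)
assert run(comp, (0, 0), "abba") == "BAAB"
```

For two-way machines no such direct product works, which is why the compositions discussed in the talk are so much harder.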
Maintained despite the holidays, since around ten attendees are expected (following a poll) Vendredi 7 février 2020, 14 heures 30, Salle 3052 Youssouf Oualhadj (LACL) Life is random, time is not: Markov decision processes with window objectives The window mechanism was introduced by Chatterjee et al. to strengthen classical game objectives with time bounds. It makes it possible to synthesize system controllers that exhibit acceptable behaviors within a configurable time frame, all along their infinite execution, in contrast to the traditional objectives that only require correctness of behaviors in the limit. The window concept has proved its interest in a variety of two-player zero-sum games, thanks to the ability to reason about such time bounds in system specifications, but also the increased tractability that it usually yields. In this work, we extend the window framework to stochastic environments by considering the fundamental threshold probability problem in Markov decision processes for window objectives. That is, given such an objective, we want to synthesize strategies that guarantee satisfying runs with a given probability. We solve this problem for the usual variants of window objectives, where either the time frame is set as a parameter, or we ask if such a time frame exists. We develop a generic approach for window-based objectives and instantiate it for the classical mean-payoff and parity objectives, already considered in games. Our work paves the way to a wide use of the window mechanism in stochastic models. Joint work with Thomas Brihaye, Florent Delgrange, and Mickael Randour. Arnaud Sangnier (IRIF) Deciding the existence of cut-off in parameterized rendez-vous networks We study networks of processes which all execute the same finite-state protocol and communicate thanks to a rendez-vous mechanism.
Given a protocol, we are interested in checking whether there exists a number, called a cut-off, such that in any network with a larger number of participants, there is an execution where all the entities end in some final states. We provide decidability and complexity results of this problem under various assumptions, such as absence/presence of a leader or symmetric/asymmetric rendez-vous. This is a joint work with Florian Horn. Marc Zeitoun (LaBRI) The star-free closure A language of finite words is star-free when it can be built from letters using Boolean operations and concatenation. A well-known theorem of Schützenberger characterizes star-free languages as those recognized by an aperiodic monoid. Another theorem of Schützenberger gives an alternate definition: these are the languages that can be built using product, union, and, in a limited way, Kleene star (but complement is now disallowed). These definitions can be rephrased using closure operators operating on classes of languages. In this talk, we investigate these operators and generalize the results of Schützenberger. This is joint work with Thomas Place. Karoliina Lehtinen Parity Games – the quasi-polynomial era Parity games are central to the verification and synthesis of reactive systems: various model-checking, realisability and synthesis problems reduce to solving these games. Solving parity games – that is, deciding which player has a winning strategy – is one of the few problems known to be in both UP and co-UP yet not known to be in P. So far, the quest for a polynomial algorithm has lasted over 25 years. In 2017 a major breakthrough occurred: parity games are solvable in quasi-polynomial time. Since then, several seemingly very distinct quasi-polynomial algorithms have been published, both by myself and others, and some of the novel ideas behind them have been applied to address other problems in automata theory.
In this talk, I will give an overview of these developments, including my own contribution to them, and the state of the art, with a slight automata-theoretic bias. Mardi 17 décembre 2019, 14 heures 30, Salle 0010 Achim Blumensath (Masaryk University) Regular Tree Algebras I present recent developments concerning a very general algebraic theory for languages of infinite trees which is based on the category-theoretical notion of a monad. The main result isolates a class of algebras that precisely captures the notion of regularity for such languages. In particular, we show that these algebras form a pseudo-variety and that syntactic algebras exist. If time permits I will conclude the talk with a few simple characterisation results obtained using this framework. Note the unusual room and time. Wesley Fussner Residuation: Origins and Open Problems Residuated lattices are a variety of ordered monoids whose study arises from three directions: algebras of ideals of rings, algebras of binary relations, and the semantics of substructural logics. This talk provides a survey of residuated lattices, discussing both their historical origins and current threads of research. We also offer an introduction to some difficult problems that arise in their study, in particular connected to structure theorems for special classes of residuated lattices and their duality theory. Dmitry Chistikov (University of Warwick) On the complexity of linear arithmetic theories over the integers Given a system of linear Diophantine equations, how difficult is it to determine whether it has a solution? What changes if equations are replaced with inequalities? If some of the variables are quantified universally? These and similar questions relate to the computational complexity of deciding the truth value of statements in various logics. This includes in particular Presburger arithmetic, the first-order logic over the integers with addition and order.
In this talk, I will survey constructions and ideas that underlie known answers to these questions, from classical results to recent developments, and open problems. First, we will recall the geometry of integer linear programming and how it interacts with quantifiers. This will take us from classical results due to von zur Gathen and Sieveking (1978), Papadimitriou (1981), and others to the geometry of the set of models of quantified logical formulas. We will look at rational convex polyhedra and their discrete analogue, hybrid linear sets (joint work with Haase (2017)), and see, in particular, how the latter form a proper sub-family of ultimately periodic sets of integer points in several dimensions (the semi-linear sets, introduced by Parikh (1961)). Second, we will discuss "sources of hardness": which aspects of the expressive power make decision problems for logics over the integers hard. Addition and multiplication combined enable simulation of arbitrary Turing machines, and restriction of multiplication to bounded integers corresponds to resource-bounded Turing machines. How big can these bounded integers be in Presburger arithmetic? This leads to the problem of representing big numbers with small logical formulae, and we will see constructions by Fischer and Rabin (1974) and by Haase (2014). We will also look at the new "route" for expressing arithmetic progressions (in the presence of quantifier alternation) via continued fractions, recently discovered by Nguyen and Pak (2017). Alexis Bes Deciding (R,+,<,1) within (R,+,<,Z) The structure (R,+,<,Z), where R denotes the set of reals and Z the unary predicate "being an integer", admits quantifier elimination and is decidable. It arises in particular in the specification and verification of hybrid systems. It can be studied via automata, by considering Büchi automata that read reals represented in a fixed integer base. Boigelot et al.
proved in particular that the class of relations definable in (R,+,<,Z) coincides with that of the relations recognizable by automata in every base. Another interesting structure is (R,+,<,1), which is less expressive than (R,+,<,Z) but defines the same bounded relations. We present a topological characterization of the relations definable in (R,+,<,Z) that are also definable in (R,+,<,1), and we deduce that the problem of deciding whether a relation definable in (R,+,<,Z) is definable in (R,+,<,1) is decidable. Joint work with Christian Choffrut. Patrick Totzke Timed Basic Parallel Processes I will talk about two fun constructions for reachability analysis of one-clock timed automata, which lead to concise logical characterizations in existential Linear Arithmetic. The first one describes "punctual" reachability relations: reachability in exact time t. It uses a coarse interval abstraction and counting of resets via Parikh-Automata. The other is a "sweep line" construction to compute optimal time to reach in reachability games played on one-clock TA. Together, these can be used to derive a (tight) NP complexity upper bound for the coverability and reachability problems in an interesting subclass of Timed Petri Nets, which naturally lends itself to parametrised safety checking of concurrent, real-time systems. This contrasts with known super-Ackermannian completeness and undecidability results for unrestricted Timed Petri nets. This is joint work with Lorenzo Clemente and Piotr Hofman, and was presented at CONCUR'19. Full details are available at https://arxiv.org/abs/1907.01240. Daniel Smertnig (University of Waterloo) Noncommutative rational Pólya series A rational series is a noncommutative formal power series whose coefficients are recognized by a weighted finite automaton (WFA).
A rational series with coefficients in a field $K$ is a Pólya series if all nonzero coefficients are contained in a finitely generated subgroup of $K^\times$. Generalizing results of Pólya (1921), Benzaghou (1970), and Bézivin (1987) for the univariate case, we show that Pólya series are precisely the ones recognized by unambiguous WFAs. This is joint work with Jason Bell. arXiv:1906.07271 Lundi 28 octobre 2019, 11 heures, Salle 1007 Pierre Ganty (IMDEA Software Institute) Deciding language inclusion problems using quasiorders We study the language inclusion problem L1 ⊆ L2 where L1 is regular or context-free. Our approach checks whether an overapproximation of L1 is included in L2. Such overapproximations are obtained using quasiorder relations on words where the abstraction gives the language of all words "greater than or equal to" a given input word for that quasiorder. We put forward a range of quasiorders that allow us to systematically design decision procedures for different language inclusion problems such as context-free languages into regular languages and regular languages into trace sets of one-counter nets. Luca Reggio (Mathematical Institute, University of Bern) Limits of finite structures: a duality theoretic perspective A systematic approach to the study of limits of finite structures, motivated by investigations in graph theory, has been developed by Nešetřil and Ossona de Mendez starting in 2012. The basic idea consists in embedding the set of finite structures into a space of measures which is complete, so that every converging sequence of finite structures admits a limit. This limit point can be always realized as a measure. I will explain how this embedding into a space of measures dually corresponds to enriching First-Order Logic with certain probability operators. Further, I will relate this construction to first-order quantification in logic on words. This talk is based on joint work with M. Gehrke and T. Jakl. 
Gaëtan Douéneau-Tabot (IRIF) Pebble transducers for modeling simple programs Several models of automata with outputs (known as transducers) have been defined over the years to describe various classes of "regular-like" functions. Such classes generally have good decidability properties, and they have been shown especially relevant for program verification or synthesis. In this talk, we shall investigate pebble transducers, i.e. finite-state machines that can drop nested marks on their input. We provide various correspondences between these models and transducers that use registers, and we solve related membership problems. These results can be understood as techniques for program optimization, that can be useful in practice. This talk is based on joint work with P. Gastin and E. Filiot. Vendredi 5 juillet 2019, 14 heures 30, Salle 1001 Mahsa Shirmohammadi (CNRS) Büchi Objectives in Countable MDPs We study countably infinite Markov decision processes with Büchi objectives, which ask to visit a given subset of states infinitely often. A question left open by T.P. Hill in 1979 is whether there always exist ε-optimal Markov strategies, i.e., strategies that base decisions only on the current state and the number of steps taken so far. We provide a negative answer to this question by constructing a non-trivial counterexample. On the other hand, we show that Markov strategies with only 1 bit of extra memory are sufficient. This work is in collaboration with Stefan Kiefer, Richard Mayr and Patrick Totzke, and is going to be presented in ICALP 2019. A full version is at https://arxiv.org/abs/1904.11573 Engel Lefaucheux (Max-Planck Institute for Software Systems, Saarbrucken) Simple Priced Timed Games are not That Simple Priced timed games are two-player zero-sum games played on priced timed automata (whose locations and transitions are labeled by weights modeling the price of spending time in a state and executing an action, respectively). 
The goals of the players are to minimise or maximise the price to reach a target location. While one can compute the optimal values that the players can achieve (and their associated optimal strategies) when the weights are all positive, this problem with arbitrary integer weights remains open. In this talk, I will explain what makes this case more difficult and show how to solve the problem for a subclass of priced timed games (the so-called simple priced timed games). Vendredi 7 juin 2019, 14 heures 30, Salle 3052 Jean-Éric Pin (IRIF) A Mahler theorem for word functions (Jean-Éric Pin and Christophe Reutenauer) Let $p$ be a prime number and let $G_p$ be the variety of all languages recognized by a finite $p$-group (i.e. a group whose order is a power of $p$). We give two ways to construct all functions $f$ from $A^*$ to $B^*$ (and even to $F(B)$, the free group with basis $B$) with the following property: if $L$ is a subset of $F(B)$ recognized by a finite $p$-group, then $f^{-1}(L)$ has the same property. This result follows from a non-commutative version of Newton series and from a celebrated theorem of Mahler in $p$-adic analysis. Jeremy Sproston (Université de Turin) Probabilistic Timed Automata with Clock-Dependent Probabilities Probabilistic timed automata are classical timed automata extended with discrete probability distributions over edges. In this talk, clock-dependent probabilistic timed automata, a variant of probabilistic timed automata in which transition probabilities can depend on clock values, will be described. Clock-dependent probabilistic timed automata allow the modelling of a continuous relationship between time passage and the likelihood of system events. We show that the problem of deciding whether the maximum probability of reaching a certain location is above a threshold is undecidable for clock-dependent probabilistic timed automata. 
On the other hand, we show that the maximum and minimum probability of reaching a certain location in clock-dependent probabilistic timed automata can be approximated using a region-graph-based approach. Vendredi 3 mai 2019, 14 heures 30, Salle 3052 Sam Van Gool (Utrecht University) Separation and covering for varieties determined by groups The separation problem for a variety of regular languages V asks to decide whether two disjoint regular languages can be separated by a language in V. The covering problem is a generalization of the separation problem to an arbitrary finite list of regular languages. The covering problem for the variety of star-free languages was shown to be decidable by Henckell. In fact, he gave an algorithm for an equivalent problem, namely, computing the pointlike subsets of a finite semigroup with respect to the variety of aperiodic semigroups, i.e., semigroups all of whose subgroups are trivial. In this talk, I will present the following wide generalization of Henckell's result. Let H be any decidable variety of groups. I will describe an algorithm for computing pointlike sets for the variety of semigroups all of whose subgroups are in H. The correctness proof for the algorithm uses asynchronous transducers, Schützenberger groups, and self-similarity. An application of our result is the decidability of the covering and separation problems for the variety of languages definable in first order logic with modular counting quantifiers. This talk is based on our paper S. v. Gool & B. Steinberg, Adv. in Math. 348, 18-50 (2019). Anaël Grandjean Aperiodic points in two-dimensional subshifts The theory of tiling spaces (subshifts) has been deeply shaped by Berger's historic result: a finite tile set may tile the plane, yet only aperiodically. These aperiodic points are at the heart of many research directions in the field, in mathematics as well as in computer science. 
In this talk, we answer the following questions in dimension 2: What is the computational complexity of deciding whether a tile set (subshift of finite type) has an aperiodic point? How do tiling spaces with no aperiodic point behave? We show that a 2D tiling space without aperiodic points has a very strong structure: it is "equivalent" (almost conjugate) to a 1D tiling space, and this result applies to subshifts of finite type as well as general subshifts. We deduce that the problem of having an aperiodic point is co-recursively-enumerable-complete, and that most of the properties and methods specific to the 1D case apply to 2D tiling spaces without aperiodic points. The situation in higher dimensions seems much less clear. This talk comes from a collaboration with Benjamin Hellouin de Menibus and Pascal Vanier. Mardi 26 mars 2019, 13 heures, Salle 3052 Francesco Dolce (Université Paris Diderot, IRIF) Generalized Lyndon words A generalized lexicographical order on infinite words is defined by choosing for each position a total order on the alphabet. This allows one to define generalized Lyndon words. Every word in the free monoid can be factorized in a unique way as a non-increasing product of generalized Lyndon words. We give new characterizations of the first and the last factor in this factorization, as well as new characterizations of generalized Lyndon words. We also give more specific results on two special cases: the classical one and the one arising from the alternating lexicographical order. This is a joint work with Antonio Restivo and Christophe Reutenauer. 
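Dolce's talk above generalizes the classical Chen-Fox-Lyndon factorization. As a point of comparison, here is a minimal sketch of the classical case (ordinary lexicographic order) via Duval's algorithm; the function name and test word are illustrative choices, not taken from the talk:

```python
def lyndon_factorization(s):
    """Duval's algorithm: factor s into a non-increasing product of
    (classical) Lyndon words, s = l1 l2 ... lk with l1 >= l2 >= ... >= lk."""
    factors, i, n = [], 0, len(s)
    while i < n:
        j, k = i + 1, i
        # extend the run while it remains a prefix of a power of a Lyndon word
        while j < n and s[k] <= s[j]:
            k = i if s[k] < s[j] else k + 1
            j += 1
        # emit the Lyndon word of length j - k as many times as it fits
        while i <= k:
            factors.append(s[i:i + j - k])
            i += j - k
    return factors
```

For instance, `lyndon_factorization("banana")` yields the non-increasing product `["b", "an", "an", "a"]`.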
Reem Yassawi (CNRS, Institut Camille Jordan - Université Lyon 1 - Claude Bernard) Quantitative versions of Christol's theorem For a sequence $\mathbf{a} = (a_n)_{n\geq 0}$ with values in a finite field $\mathbb{F}_q$, Christol's theorem establishes an equivalence between the $q$-automaticity of $\mathbf{a}$ ($\mathbf{a}$ being computable by an automaton) and the algebraicity of the formal power series $f(x) = \sum a_n x^n$. In this work we study the number of states of the automaton as a function of the parameters of the minimal annihilating polynomial of $f(x)$. Andrew Bridy recently gave a proof of Christol's theorem using tools from algebraic geometry. With this proof he bounds the number of states by a bound that is optimal. We obtain almost identical bounds via an elementary proof, and we trace the links between our proof and Bridy's. This is joint work with Boris Adamczewski. Mateusz Skomra (ÉNS Lyon) Condition numbers of stochastic mean payoff games and what they say about nonarchimedean convex optimization In this talk, we introduce a condition number of stochastic mean payoff games. To do so, we interpret these games as feasibility problems over tropically convex cones. In this setting, the condition number is defined as the maximal radius of a ball in Hilbert's projective metric that is included in the (primal or dual) feasible set. We show that this conditioning controls the number of value iterations needed to decide whether a mean payoff game is winning. In particular, we obtain a pseudopolynomial bound for the complexity of value iteration provided that the number of random positions is fixed. We also discuss the implications of these results for convex optimization problems over nonarchimedean fields and present possible directions for future research. The talk is based on joint works with X. Allamigeon, S. Gaubert, and R. D. Katz. 
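Christol's theorem, the subject of Yassawi's talk above, can be seen concretely on the Thue-Morse sequence: it is 2-automatic (a two-state automaton reading the binary digits of n computes t_n), and over $\mathbb{F}_2$ its generating series satisfies the classical algebraic equation $(1+x)^3 f^2 + (1+x)^2 f + x = 0$. A small sketch checking both facts on a truncated series (the helper names are mine):

```python
def thue_morse(n):
    # 2-state automaton: the state is the parity of the 1-bits read so far
    state = 0
    for bit in bin(n)[2:]:
        state ^= int(bit)
    return state

N = 64
t = [thue_morse(n) for n in range(N)]

def mul(a, b, N):
    """Truncated product of F_2[x] polynomials given as coefficient lists."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj and i + j < N:
                    c[i + j] ^= 1
    return c

f = t[:]                      # f = sum t_n x^n, truncated at degree N
f2 = [0] * N
for n in range(N // 2):
    f2[2 * n] = t[n]          # over F_2, f(x)^2 = f(x^2)
lhs = [a ^ b for a, b in zip(mul([1, 1, 1, 1], f2, N),   # (1+x)^3 f^2
                             mul([1, 0, 1], f, N))]       # (1+x)^2 f
lhs[1] ^= 1                                               # ... + x
# lhs is the truncation of (1+x)^3 f^2 + (1+x)^2 f + x: identically zero
```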
Lama Tarsissi (Université Marne-la-Vallée, Paris Est) Christoffel words and applications. It is known that Christoffel words are balanced words on a two-letter alphabet; these words are exactly the discretizations of line segments of rational slope. Christoffel words also appear in the synchronization of k processes by a word on a k-letter alphabet with a balance property in each letter. For k = 2, we retrieve the usual Christoffel words, while for k > 2 the situation is more complicated and leads to Fraenkel's conjecture, which has been open for more than 40 years. In this talk, we show some tools that bring us close to this conjecture. As another application of this family of words, we define a second order of balance by using some particular matrices, and we prove a recursive relation for constructing them. An interesting property can be deduced from these matrices, allowing us to give a supplementary characterization of the Fibonacci sequence. One more application of Christoffel words discussed in this talk is the reconstruction of digital convex polyominoes, since the boundary word of a digital convex polyomino is made of Christoffel words with decreasing slopes. We introduce a split operator that respects the decreasing order of the slopes, so that convexity is always preserved; this is the first step toward the reconstruction. Alexandre Vigny (Université Paris Diderot) Query enumeration and nowhere dense classes of graphs Given a query q and a relational structure D, the enumeration of q over D consists in computing, one element at a time, the set q(D) of all solutions to q on D. The delay is the maximal time between two consecutive outputs and the preprocessing time is the time needed to produce the first solution. Ideally, we would like to have constant delay enumeration after linear preprocessing. 
Since this is not always possible to achieve, we need to add restrictions to the classes of structures and/or queries we consider. In this talk I will talk about some restrictions for which such algorithms exist: graphs with bounded degree, tree-like structures, conjunctive queries… We will more specifically consider nowhere dense classes of graphs: What are they? Why is this notion relevant? How to make algorithms from these graph properties? Paul-André Melliès (IRIF) Higher-order parity automata In this talk, I will introduce a notion of higher-order parity automaton which extends the traditional notion of parity tree automaton on infinitary ranked trees to the infinitary simply-typed lambda-calculus. Our main result is that the acceptance of an infinitary lambda-term by a higher-order parity automaton A is decidable, whenever the infinitary lambda-term is generated by a finite and simply-typed lambda-Y-term. The decidability theorem is established by combining ideas coming from automata theory, denotational semantics and infinitary rewriting theory. You will find the extended abstract of the talk here: https://www.irif.fr/~mellies/papers/higher-order-parity-automata.pdf Elise Vandomme (Université Technique Tchèque de Prague) New notions of recurrence in a multidimensional setting In one dimension, an infinite word is said to be recurrent if every prefix occurs at least twice. A straightforward extension of this definition in higher dimensions turns out to be rather unsatisfying. In this talk, we present several notions of recurrence in the multidimensional case. In particular, we are interested in words having the property to be strongly uniformly recurrent: for each direction q, every prefix occurs in that direction (i.e. in positions iq) with bounded gaps. We will provide several constructions of such words and focus on the strongly uniform recurrence in the case of square morphisms. 
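The one-dimensional notions in Vandomme's abstract can be illustrated on the Fibonacci word, a standard uniformly recurrent word generated by the substitution 0 → 01, 1 → 0 (this example and the helper names are mine, not from the talk): every prefix reoccurs, and each factor occurs with bounded gaps.

```python
def fib_word(iters=18):
    """Iterate the substitution 0 -> 01, 1 -> 0 starting from '0'."""
    w = "0"
    for _ in range(iters):
        w = "".join("01" if c == "0" else "0" for c in w)
    return w

def gaps(word, factor):
    """Distances between consecutive occurrences of factor in word."""
    pos = [i for i in range(len(word) - len(factor) + 1)
           if word.startswith(factor, i)]
    return [b - a for a, b in zip(pos, pos[1:])]

w = fib_word()
# recurrent: every prefix occurs at least twice
recurrent = all(w.find(w[:k], 1) != -1 for k in range(1, 13))
# uniformly recurrent: occurrences of a given factor have bounded gaps
max_gap = max(gaps(w, "0100"))
```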
Nathan Grosshans The power of programs over monoids taken from some small varieties of finite monoids The computational model of programs over monoids, introduced by Barrington and Thérien in the late 1980s, gives a way to generalise the notion of (classical) recognition through morphisms into monoids, in such a way that almost all open questions about the internal structure of the complexity class NC^1 can be reformulated as understanding what languages (and, in fact, even regular languages) can be program-recognised by monoids taken from some given variety of finite monoids. Unfortunately, so far this finite-semigroup-theoretic approach has not helped to prove any new result about the internal structure of NC^1 and, even worse, every attempt to reprove well-known results about this internal structure (like the fact that the language of words over the binary alphabet containing a number of 1s not divisible by some fixed integer greater than 1 is not in AC^0) using techniques stemming from algebraic automata theory has failed. In this talk, I shall present the model of programs over monoids, explain how it relates to "small" circuit complexity classes and present some of the contributions I made during my Ph.D. thesis to the understanding of the computational power of programs over monoids, focusing on the well-known varieties of finite monoids DA and J (giving rise to "small" circuit complexity classes well within AC^0). I shall conclude with a word about ongoing work and future research directions. Adrien Boiret Learning Top-Down Tree Transducers using Myhill-Nerode or Lookahead We consider the problem of passive symbolic learning in the case of deterministic top-down tree transducers (DTOP). The passive learning problem deals with identifying a specific transducer in normal form from a finite set of behaviour examples. 
This problem is solved in word languages using the RPNI algorithm, which relies heavily on the Myhill-Nerode characterization of a minimal normal form on DFA. Its extensions to word transformations and tree languages follow the same pattern: first, a Myhill-Nerode theorem is identified, then the normal form it induces can be learnt from examples. To adapt this result to tree transducers, the Myhill-Nerode theorem requires that DTOP are considered with an inspection, i.e. an automaton that recognizes the domain of the transformation. In its original form, the normalization (minimal earliest compatible normal form) and learning of DTOP is limited to deterministic top-down tree automata as inspections. In this talk, we show the challenges that an extension to regular inspections presents, and present two concurrent ways to deal with them: first, an extension of the Myhill-Nerode theorem on DTOP to the regular case, defining a minimal *leftmost* earliest compatible normal form; second, a reduction of the problem to top-down domains, using the regular inspection as a lookahead. The merits of these methods will be discussed, along with their possible extensions to data trees. Olivier Carton (IRIF) Discrepancy and nested perfect necklaces M. B. Levin constructed a real number x such that the first N terms of the sequence b^n x mod 1 for n >= 1 have discrepancy $O((\log N)^2/N)$. This is the lowest discrepancy known for this kind of sequences. In this talk, we present Levin's construction in terms of nested perfect necklaces, which are a variant of the classical de Bruijn sequences. For base 2 and the order being a power of 2, we give the exact number of nested perfect necklaces and an explicit method based on matrices to construct each of them. Jérôme Leroux (LaBRI) The Reachability Problem for Petri Nets is Not Elementary Petri nets, also known as vector addition systems, are a long established and widely used model of concurrent processes. 
The complexity of their reachability problem is one of the most prominent open questions in the theory of verification. That the reachability problem is decidable was established by Mayr in his seminal STOC 1981 work, and the currently best upper bound is non-primitive recursive cubic-Ackermannian of Leroux and Schmitz from LICS 2015. We show that the reachability problem is not elementary. Until this work, the best lower bound has been exponential space, due to Lipton in 1976. Joint work with Wojciech Czerwinski, Slawomir Lasota, Ranko Lazic, Filip Mazowiecki. Colin Riba (École Normale Supérieure de Lyon) A Curry-Howard approach to tree automata Rabin's Tree Theorem proceeds by effective translations of MSO-formulae to tree automata. We show that the operations on automata used in these translations can be organized in a deduction system based on intuitionistic linear logic (ILL). We propose a computational interpretation of this deduction system along the lines of the Curry-Howard proofs-as-programs correspondence. This interpretation, relying on the usual technology of game semantics, maps proofs to strategies in categories of two-player games generalizing the usual acceptance games of tree automata. Antoine Amarilli (Télécom ParisTech) Topological Sorting under Regular Constraints We present our work on what we call the constrained topological sorting problem (CTS): given a regular language K and a directed acyclic graph G with labeled vertices, determine if G has a topological sort that forms a word in K. This natural problem applies to several settings, e.g., scheduling with costs or verifying concurrent programs. We consider the problem CTS[K] where the target language K is fixed, and study its complexity depending on K. Our work shows that CTS[K] is tractable when K falls in several language families, e.g., unions of monomials, which can be used for pattern matching. 
However, we can show that CTS[K] is NP-hard for K = (ab)^* using a shuffle reduction technique that we can use to show hardness for more languages. We also study the special case of the constrained shuffle problem (CSh), where the input graph is a disjoint union of strings, and show that CSh[K] is additionally tractable when K is a group language or a union of distinct group monomials. We conjecture that a dichotomy should hold on the complexity of CTS[K] or CSh[K] depending on K, and substantiate this by proving a coarser dichotomy under a different problem phrasing which ensures that tractable languages are closed under common operators. Dominique Perrin (Université Paris-Est Marne-la-Vallée) Groups, languages and dendric shifts We present a survey of results obtained on symbolic dynamical systems called dendric shifts. We state and sketch the proofs (sometimes new ones) of the main results obtained on these shifts. This includes the Return Theorem and the Finite Index Basis Theorem, which both highlight the central role played by free groups in these systems. We also present a series of applications of these results, including some on profinite semigroups and some on dimension groups. Sébastien Labbé (IRIF) Substitutive structure of the aperiodic Jeandel-Rao tilings In 2015, Jeandel and Rao proved by exhaustive computer calculations that every set of Wang tiles of cardinality ≤ 10 either admits a periodic tiling of the plane Z² or admits no tiling of the plane at all. Moreover, they found a set of 11 Wang tiles that tile the plane, but never periodically. In this talk, we will present an alternative definition of the aperiodic Jeandel-Rao tilings as the coding of a Z²-action on the torus, and we will describe the substitutive structure of these tilings. 
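On small instances, the constrained topological sorting problem from Amarilli's talk above can be checked by brute force over all topological sorts; a naive sketch (the instance is a made-up toy example, not from the talk):

```python
from itertools import permutations
import re

def has_constrained_topsort(labels, edges, regex):
    """Naive CTS check: does some topological sort of the vertex-labeled DAG
    spell a word of the regular language given by regex?"""
    pat = re.compile(regex)
    for order in permutations(range(len(labels))):
        rank = {v: i for i, v in enumerate(order)}
        if all(rank[u] < rank[v] for u, v in edges):  # order respects the DAG
            if pat.fullmatch("".join(labels[v] for v in order)):
                return True
    return False

# toy instance: two disjoint labeled edges a->b, constraint K = (ab)^*
print(has_constrained_topsort(["a", "b", "a", "b"], [(0, 1), (2, 3)], "(ab)*"))
# prints True, e.g. via the sort 0,1,2,3 spelling "abab"
```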
Vendredi 16 novembre 2018, 14 heures 30, Salle 358 Manon Stipulanti (Université de Liège) A way to extend the Pascal triangle to words The Pascal triangle and the corresponding Sierpinski gasket are well-studied objects. They exhibit self-similarity features and have connections with dynamical systems, cellular automata, number theory and automatic sequences in combinatorics on words. In this talk, I will first recall the well-known link between those two objects. Then I will exploit it to define Pascal-like triangles associated with different numeration systems, and their analogues of the Sierpinski gasket. This is a work in collaboration with Julien Leroy and Michel Rigo (University of Liège, Belgium). Vendredi 9 novembre 2018, 14 heures 30, Salle 358 Fabian Reiter (LSV) Counter Machines and Distributed Automata: A Story about Exchanging Space and Time I will present the equivalence of two classes of counter machines and one class of distributed automata. The considered counter machines operate on finite words, which they read from left to right while incrementing or decrementing a fixed number of counters. The two classes differ in the extra features they offer: one allows to copy counter values, whereas the other allows to compute copyless sums of counters. The considered distributed automata, on the other hand, operate on directed path graphs that represent words. All nodes of a path synchronously execute the same finite-state machine, whose state diagram must be acyclic except for self-loops, and each node receives as input the state of its direct predecessor. These devices form a subclass of linear-time one-way cellular automata. This is joint work with Olivier Carton and Bruno Guillon. Andrew Ryzhikov (Université Paris-Est Marne-la-Vallée) Finding short synchronizing and mortal words for prefix codes We study approximation algorithms for two closely related problems: the problems of finding a short synchronizing and a short mortal word for a given prefix code. 
Roughly speaking, a synchronizing word is a word guaranteeing a unique interpretation, and a mortal word is a word guaranteeing no interpretation for any sequence of codewords. We concentrate on the case of finite prefix codes and consider both the cases where the code is defined by listing all its codewords and where the code is defined by an automaton recognizing the star of the code. This is a joint work with Marek Szykuła (University of Wroclaw). Sam Van Gool (University of Amsterdam, ILLC) Title not yet announced. Jacques Sakarovitch (IRIF/CNRS and Telecom ParisTech) The complexity of carry propagation for successor functions Given any numeration system, we call 'carry propagation' at a number N the number of digits that are changed when going from the representation of N to the one of N+1, and 'amortized carry propagation' the limit of the mean of the carry propagations at the first N integers, when N tends to infinity, if it exists. We address the problem of the existence of the amortized carry propagation and of its value in non-standard numeration systems of various kinds: abstract numeration systems, rational base numeration systems, greedy numeration systems and beta-numeration. We tackle the problem by means of techniques of three different types: combinatorial, algebraic, and ergodic. For each kind of numeration system that we consider, the relevant method allows us to establish sufficient conditions for the existence of the carry propagation, and examples show that these conditions are close to being necessary. This is a joint work with Valérie Berthé, Christiane Frougny, and Michel Rigo. Nathanaël Fijalkow (LABRI) Where the universal trees grow I will talk about parity games. There are at least three different recent algorithms which solve them in quasipolynomial time. In this talk, I will show that the three algorithms can be seen as solutions of one automata-theoretic problem. 
Using this framework, I will show tight upper and lower bounds, witnessing a quasipolynomial barrier. This is based on two joint works, the first with Wojtek Czerwinski, Laure Daviaud, Marcin Jurdzinski, Ranko Lazic, and Pawel Parys, and the second with Thomas Colcombet. Pierre Ohlmann (IRIF) Unifying non-commutative arithmetic circuit lower bounds We develop an algebraic lower bound technique in the context of non-commutative arithmetic circuits. To this end, we introduce polynomials for which the multiplication is also non-associative, and focus on their circuit complexity. We show a connection with multiplicity tree automata, leading to a general algebraic characterization. We use it to derive meta-theorems for the non-commutative case, and highlight numerous consequences in terms of lower bounds. Mercredi 13 juin 2018, 15 heures, Salle 3052 Joël Ouaknine (Max Planck Institute) Program Invariants Automated invariant generation is a fundamental challenge in program analysis and verification, going back many decades, and remains a topic of active research. In this talk I'll present a select overview and survey of work on this problem, and discuss unexpected connections to other fields including algebraic geometry, group theory, and quantum computing. (No previous knowledge of these fields will be assumed.) This is joint work with Ehud Hrushovski, Amaury Pouly, and James Worrell. Unusual date: Wednesday. Ines Klimann (IRIF) Groups generated by bireversible Mealy automata: a combinatorial explosion The study of how (semi)groups grow has been highlighted since Milnor's question on the existence of groups of intermediate growth (faster than any polynomial and slower than any exponential) in 1968. A very first example of such a group was given by Grigorchuk in 1983 in terms of an automaton group, and, until Nekrashevych's very recent work, all the known examples of intermediate growth groups were automaton groups or based on automaton groups. 
This talk originates in the following question: is it decidable if an automaton group has intermediate growth? I will show that in the case of bireversible automata, whenever there exists at least one element of infinite order, the growth of the group is necessarily exponential. (This work will be presented at ICALP'18.) Ulrich Ultes-Nitsche (University of Fribourg) A Simple and Optimal Complementation Algorithm for Büchi-Automata In my presentation, I am going to present joint work with Joel Allred on the complementation of Büchi automata. When constructing the complement automaton, a worst-case state-space growth of O((0.76n)^n) cannot be avoided. Experiments suggest that complementation algorithms perform better on average when they are structurally simple. We develop a simple algorithm for complementing Büchi automata, operating directly on subsets of states, structured into state-set tuples, and producing a deterministic automaton. Then a complementation procedure is applied that resembles the straightforward complementation algorithm for deterministic Büchi automata, the latter algorithm actually being a special case of our construction. Finally, we prove our construction to be optimal, i.e. having an upper bound in O((0.76n)^n), and furthermore calculate the 0.76 factor in a novel exact way. Irène Guessarian (IRIF) Congruence preservation, lattices and recognizability Looking at some monoids and (semi)rings (natural numbers, integers and p-adic integers), and more generally, residually finite algebras (in a strong sense), we prove the equivalence of two ways for a function on such an algebra to behave like the operations of the algebra. The first way is to preserve congruences or stable preorders. The second way is to demand that preimages of recognizable sets belong to the lattice or the Boolean algebra generated by the preimages of recognizable sets under "derived unary operations" of the algebra (such as translations, quotients, ...). 
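Guessarian's notion of congruence preservation on the natural numbers can be sampled numerically: any integer polynomial preserves every congruence mod n, while a function such as x ↦ 2^x does not. A finite-sample sketch (the checker below only tests a bounded range, so it gives evidence, not a proof; all names are mine):

```python
def preserves_congruence_mod(f, n, bound=200):
    """Sample whether x ≡ y (mod n) implies f(x) ≡ f(y) (mod n),
    by comparing f(x) and f(x + n) for x up to bound."""
    return all(f(x) % n == f(x + n) % n for x in range(bound))

# integer polynomials preserve all congruences of (N, +, *)
poly_ok = all(preserves_congruence_mod(lambda x: x**2 + 3*x + 1, n)
              for n in range(2, 20))
# x -> 2**x does not: 2**0 = 1 but 2**3 = 8 ≡ 2 (mod 3)
exp_ok = preserves_congruence_mod(lambda x: 2**x, 3)
```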
Davide Mottin (Hasso Plattner Institute) Graph Exploration: Graph Search made Easy The increasing interest in social networks, knowledge graphs, protein-interaction, and many other types of networks has raised the question how users can explore such large and complex graph structures easily. In this regard, graph exploration has emerged as a complementary toolbox for graph management, graph mining, or graph visualization in which the user is a first-class citizen. Graph exploration combines and expands database, data mining, and machine learning approaches with the user's eye on one side and the system perspective on the other. The talk shows how graph exploration can considerably support any analysis on graphs in a fresh and exciting manner, by combining interactive methods, personalized results, adaptive structures, and scalable algorithms. I describe the recent efforts for a graph exploration stack which supports interactivity, personalization, adaptivity, and scalability through intuitive and efficient techniques we recently proposed. The current methods show encouraging results in reducing the effort of experts and novice users in finding the information of interest through example-based approaches, personalized summaries, and active learning theories. Finally, I present the vision for the future in graph exploration research and show the chief challenges in databases, data analysis, and machine learning. Denis Kuperberg (ÉNS Lyon) Width of non-deterministic automata The issue of determinism versus non-determinism is central in computer science. In order to better understand this gap, the intermediary model of Good-for-Games (GFG) automata is currently being explored in its various aspects. A GFG automaton is a non-deterministic automaton on finite or infinite words, where accepting runs can be built on-the-fly on valid input words. I will recall recent advances on this model, and describe a newly introduced generalisation: width. 
The width of an automaton can be viewed as a measure of its amount of nondeterminism. Width generalises the notion of GFG automata, which correspond to NFAs of width 1. I will describe how GFG or deterministic automata can be built from non-deterministic automata, with width being a crucial parameter in the construction. I will finally mention results and open problems related to the computational complexity of computing GFGness or width of automata. Victor Marsault (LFCS, University of Edinburgh) Formal semantics of the query-language Cypher Cypher is a query-language for property-graphs. It was originally designed and implemented as part of the Neo4j graph database, and it is currently used by several commercial database products and researchers. The semantics of Cypher queries is currently described using natural language and, as a result, it is often not well defined. This work is part of a project to define a full denotational semantics of Cypher queries. The talk will first present the main features of Cypher through examples, including the core mechanism: graph pattern-matching, and then will describe the principle of the formal semantics. Bénédicte Legastelois (LIP6) Weighted extensions of modal logics in the setting of graded beliefs Formalizing and reasoning about non-truth-functional notions, such as belief, knowledge or certainty, are current challenges in artificial intelligence. These notions can lead to representing and evaluating subjective information and are, in particular, formalized in modal logic. Motivated by modelling reasoning about graded beliefs, whose expressiveness is richer than that of classical beliefs, my work concerns weighted extensions of modal logics. In the general setting of modal logics, I first propose a proportional semantics for weighted modal operators, based on classical Kripke models. 
I then study the definition of weighted modal axioms extending the classical axioms and propose a typology dividing them into four categories, according to the enrichment of the classical case they produce and their correspondence with the associated constraint on the accessibility relation. On the other hand, I am interested in a formalization of graded beliefs, based on the representationalist conception of beliefs and relying on a fuzzy set-theoretic model. I study several of its aspects, such as arithmetical properties and the application of negation. Javier Esparza (Technical University of Munich) One Theorem to Rule Them All: A Unified Translation of LTL into omega-Automata We present a unified translation of LTL formulas into deterministic Rabin automata, limit-deterministic Büchi automata, and nondeterministic Büchi automata. The translations yield automata of asymptotically optimal size (double or single exponential, respectively). All three translations are derived from one single Master Theorem of purely logical nature. The Master Theorem decomposes the language of a formula into a positive boolean combination of languages that can be translated into omega-automata by elementary means. In particular, the breakpoint, Safra, and ranking constructions used in other translations are not needed. Joint work with Jan Kretinsky and Salomon Sickert. Séminaire de pôle Prakash Panangaden (McGill University) A canonical form for weighted automata and applications to approximate minimization We study the problem of constructing approximations to a weighted automaton. Weighted finite automata (WFA) are closely related to the theory of rational series. A rational series is a function from strings to real numbers that can be computed by a WFA. This includes probability distributions generated by hidden Markov models and probabilistic automata. 
The relationship between rational series and WFA is analogous to the relationship between regular languages and ordinary automata. Associated with such rational series are infinite matrices called Hankel matrices, which play a fundamental role in the theory of minimal WFA. In this talk I describe: (1) an effective procedure for computing the singular value decomposition (SVD) of such infinite Hankel matrices based on their finite representation in terms of WFA; (2) a new canonical form for WFA based on this SVD; and (3) an algorithm to construct approximate minimizations of a given WFA. The goal of the approximate minimization algorithm is to start from a minimal WFA and produce a smaller WFA that is close to the given one in a certain sense. The desired size of the approximating automaton is given as input. I will give bounds describing how well the approximation emulates the behavior of the original WFA. This is joint work with Borja Balle and Doina Precup and was presented at LICS 2015 in Kyoto. Sylvain Schmitz (LSV) Algorithmic Complexity of Well-Quasi-Orders The talk will be based on my habilitation defense talk from Nov. 27, 2018, which was dedicated to the algorithmic complexity of well-quasi-orders. The latter find applications in verification, where they make it possible to tackle systems featuring an infinite state-space, representing for instance integer counters, the number of active threads in concurrent settings, real-time clocks, call stacks, cryptographic nonces, or the contents of communication channels. The talk gives an overview of the complexity questions arising from the use of well-quasi-orders, including the definition of complexity classes suitable for problems with non-elementary complexity and proof techniques for upper bounds. I will mostly focus on the ideas behind the first known complexity upper bound for reachability in vector addition systems and Petri nets. Preceded by a team meeting at 13:45.
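As a concrete illustration of the Hankel-matrix viewpoint from the weighted-automata abstract above, the following Python sketch builds a finite block of the Hankel matrix of a toy rational series and checks that its rank equals the number of states of a minimal WFA. This is my own illustration (the "count the a's" series and the block size are invented for the example), not the SVD-based algorithm of the talk:

```python
import numpy as np
from itertools import product

# A weighted finite automaton (WFA) computing the rational series
# f(w) = number of 'a's in w (an illustrative example, not from the talk).
alpha = np.array([1.0, 0.0])                       # initial weights
beta = np.array([0.0, 1.0])                        # final weights
A = {'a': np.array([[1.0, 1.0], [0.0, 1.0]]),      # transition matrices
     'b': np.array([[1.0, 0.0], [0.0, 1.0]])}

def f(word):
    """Value of the series on `word`: alpha^T . A_{w1} ... A_{wn} . beta."""
    m = np.eye(2)
    for c in word:
        m = m @ A[c]
    return float(alpha @ m @ beta)

# Finite block of the (infinite) Hankel matrix H[u, v] = f(uv),
# indexed by all words u, v of length <= 2.
words = [''] + [''.join(p) for n in (1, 2) for p in product('ab', repeat=n)]
H = np.array([[f(u + v) for v in words] for u in words])

# The rank of the full Hankel matrix equals the number of states of a
# minimal WFA; this finite block already reaches that rank (here 2).
print(int(np.linalg.matrix_rank(H)))
```

Here the rank already stabilises at 2, the minimal number of states; the algorithm of the talk instead works with an SVD of the infinite Hankel matrix, computed directly from the WFA representation.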
Szymon Toruńczyk (MIMUW) Sparsity and Stability Nowhere dense graph classes, introduced by Nesetril and Ossona de Mendez, are a broad family of sparse graph classes for which many algorithmic problems that are hard in general become tractable. In particular, model checking first-order logic is fixed-parameter tractable over such classes, as shown recently by Grohe, Kreutzer, and Siebertz. With the aim of finding generalizations of this result to dense graph classes, I will talk about some recent developments in the study of the connections between nowhere denseness and stability (developed by Shelah). Verónica Becher (Universidad de Buenos Aires and CONICET) Randomness and uniform distribution modulo one How is algorithmic randomness related to the classical theory of uniform distribution? In this talk we consider the definition of Martin-Löf randomness for real numbers in terms of uniform distribution of sequences. We present a necessary condition for a real number to be Martin-Löf random, and a strengthening of that condition which is sufficient for Martin-Löf randomness. For this strengthening we define a notion of uniform distribution relative to the computably enumerable open subsets of the unit interval. We call the notion Sigma^0_1-uniform distribution. This is joint work with Serge Grigorieff and Theodore Slaman. Camille Bourgaux (Télécom ParisTech) Computing and explaining ontology-mediated query answers over inconsistent data The problem of querying description logic knowledge bases using database-style queries (in particular, conjunctive queries) has been a major focus of recent description logic research. An important issue that arises in this context is how to handle the case in which the data is inconsistent with the ontology. Indeed, since in classical logic an inconsistent logical theory implies every formula, inconsistency-tolerant semantics are needed to obtain meaningful answers.
I will first present a practical approach for querying inconsistent DL-Lite knowledge bases using three natural semantics (AR, IAR, and brave) previously proposed in the literature, which rely on the notion of a repair, that is, an inclusion-maximal subset of the data consistent with the ontology. Since these three semantics provide answers with different levels of confidence, I will then present a framework for explaining query results, to help the user understand why a given answer was or was not obtained under one of the three semantics. Patricia Bouyer (LSV, CNRS and ENS Cachan) Nash equilibria in games on graphs with public signal monitoring We study Nash equilibria in games on graphs with an imperfect monitoring based on a public signal. In such games, deviations and the players responsible for those deviations can be hard to detect and track. We propose a generic epistemic game abstraction, which conveniently allows us to represent the knowledge of the players about these deviations, and give a characterization of Nash equilibria in terms of winning strategies in the abstraction. We then use the abstraction to develop algorithms for some payoff functions. Paul Brunet (University College London) Pomset languages and concurrent Kleene algebras Concurrent Kleene algebras (CKA) and bi-Kleene algebras support equational reasoning about computing systems with concurrent behaviours. Their natural semantics is given by series(-parallel) rational pomset languages, a standard true concurrency semantics, which is often associated with processes of Petri nets. In the first part of the talk, I will present an automaton model designed to describe such pomset languages, which satisfies a Kleene-like theorem. The main difference with previous constructions is that, from expressions to automata, we use Brzozowski derivatives. In the second part, I will use Petri nets to reduce the problem of containment of languages of pomsets to the equivalence of finite state automata.
In doing so, we prove decidability as well as provide tight complexity bounds. I will finish the presentation by briefly presenting a recent proof of completeness, showing that two series-rational expressions are equivalent according to the laws of CKA exactly when their pomset semantics are equal. Joint work with Damien Pous, Georg Struth, Tobias Kappé, Bas Luttik, Alexandra Silva, and Fabio Zanasi. Michał Skrzypczak (University of Warsaw) Deciding complexity of languages via games My presentation is about effective characterisations: given a representation of a regular language, decide if the language is "simple" in some specific sense. A classical example of such a characterisation is the result by Schützenberger, McNaughton, and Papert, saying that it is decidable if a given regular language of finite words can be defined in first-order logic. Over the years, such characterisations were provided for many other natural classes of languages, especially in the case of finite and infinite words. It is often assumed that a "gold standard" for such a characterisation is to provide equations that must be satisfied in a respective algebra representing the language. The aim of my talk is to survey a number of examples in which it is not possible to provide an algebraic representation of the considered languages, but characterisations can instead be obtained by a well-designed game of infinite duration. Using these examples, I will try to argue that the game-based approach is the natural replacement for the algebraic framework in the cases where algebraic representations are not available. Laure Daviaud (University of Warwick) Max-plus automata and tropical identities In this talk I will discuss the following natural question: given a class of computational models C, do there exist two distinct inputs that give the same output for all the models in the class? I will discuss this question more precisely for weighted automata in general and for max-plus automata in particular.
Weighted automata are a quantitative extension of automata which makes it possible to compute values such as costs and probabilities. Max-plus automata are a special case of weighted automata, particularly suitable for modelling gain optimisation problems. We will see that in this latter case, we end up with particularly intricate (and open) questions, related to finding identities in the semiring of tropical matrices. Mikhail V. Volkov (Ural Federal University, Russia) Completely reachable automata: an interplay between semigroups, automata, and trees We present a few results and several open problems concerning complete deterministic finite automata in which every non-empty subset of the state set occurs as the image of the whole state set under the action of a suitable input word. In particular, we give a complete description of such automata with minimal transition monoid size. Sylvain Perifel (IRIF) Lempel-Ziv: a "one-bit catastrophe" but not a tragedy The robustness of the famous compression algorithm of Lempel and Ziv is still not well understood: in particular, until now it was unknown whether the addition of one bit in front of a compressible word could make it incompressible. This talk will answer that question, advertised by Jack Lutz under the name "one-bit catastrophe" and which has been around since at least 1998. We will show that a "well" compressible word remains compressible when a bit is added in front of it, but some "few" compressible words indeed become incompressible. This is a joint work with Guillaume Lagarde. Nathanaël Fijalkow (University College London) Comparing the speed of semi-Markov decision processes A Markov decision process models the interactions between a controller giving inputs and a stochastic environment. In this well-studied model, transitions are fired instantaneously.
We study semi-Markov decision processes, where each transition takes some time to fire, determined by a given probability distribution (for instance, an exponential distribution). The question we investigate is how to compare two semi-Markov decision processes. We introduce and study the algorithmic complexity of two relations, "being faster than", and "being equally fast as". Monthly meeting of the automata team at 13:45 in the same room Thursday 13 July 2017, 14:30, Amphi Turing Thibault Godin (IRIF) Mealy machines, automaton (semi)groups, decision problems, and random generation (PhD defence) As part of the closing days of the MealyM project (https://mealym.sciencesconf.org/) Manuscript available here: https://www.irif.fr/_media/users/godin/these30-06-17.pdf Monday 10 July 2017, 14:30, Amphi Turing Matthieu Picantin (IRIF) Automata, (semi)groups and dualities (habilitation defence) Manuscript available here: https://mealym.sciencesconf.org/data/program/HdR.pdf Friday 7 July 2017, 14:00, room 0010 Bruno Karelović (IRIF) Quantitative Analysis of Stochastic Systems - Priority Games and Populations of Markov Chains (PhD defence) Thomas Garrity Classifying real numbers using continued fractions and thermodynamics. A new classification scheme for real numbers will be given, motivated by ideas from statistical mechanics in general and work of Knauf and of Fiala and Kleban in particular. Critical for this classification of real numbers will be the Diophantine properties of continued fraction expansions. Underneath this classification is a new partition function on the space of infinite sequences of zeros and ones. Pierre Ohlmann (ENS de Lyon) Invariant Synthesis for Linear Dynamical Systems The Orbit Problem consists of determining, given a linear transformation $A$ on $Q^d$, together with vectors $x$ and $y$, whether the orbit of $x$ under repeated applications of $A$ can ever reach $y$.
We will investigate this problem from a different point of view: is it possible to synthesise suitable invariants, that is, subsets of $Q^d$ that contain $x$ but not $y$? Such invariants provide natural certificates for negative instances of the Orbit Problem. We will show that semialgebraic invariants exist in all reasonable cases. A more recent (yet unpublished) result is that the existence of semilinear invariants is decidable. This is a joint work with Nathanaël Fijalkow, Joël Ouaknine, Amaury Pouly and James Worrell, published in STACS 2017. Michaël Cadilhac (U. Tübingen) Continuity & Transductions, a theory of composability Formal models for the computation of problems, say circuits, automata, Turing machines, can be naturally extended to compute word-to-word functions. But abstracting from the computation model, what does it mean to "lift" a language class to functions? We propose to address that question in a first step, developing a robust theory that incidentally revolves around the (topological) notion of continuity. In language-theoretic terms, a word-to-word function is V-continuous, for a class of languages V, if it preserves membership in V by inverse image. In a second step, we focus on transducers, i.e., automata with letter output. We study the problem of deciding whether a given transducer realizes a V-continuous function, for some classical classes V (e.g., aperiodic languages, group languages, piecewise-testable, …). If time allows, we will also see when there exists a correlation between the transducer structure (i.e., its transition monoid), and its computing a continuous function. Joint work with Olivier Carton, Andreas Krebs, Michael Ludwig, Charles Paperman. Anaël Grandjean (LIRMM) Small complexity classes for cellular automata, dealing with diamond and round neighborhoods We are interested in 2-dimensional cellular automata and more precisely in the recognition of languages in small time.
The time complexity we consider is called real-time and is rather classic in the study of cellular automata. It is the smallest amount of time needed to read the whole input. It has been shown that this set of languages depends on the neighborhood of the automaton. For example, the two most used neighborhoods (the Moore and von Neumann ones) are different with respect to this complexity. Our study deals with more generic sets of neighborhoods, round and diamond neighborhoods. We prove that all diamond neighborhoods can recognize the same languages in real time and that the round neighborhoods can recognize strictly less than the diamond ones. Paul-Elliot Anglès D'auriac (LACL) Higher computability and Randomness Several notions of computability had been defined before everyone agreed that Turing machines are the right model of computation, a statement now enshrined in the widely accepted Church-Turing Thesis. However, since then, many stronger computability notions have been defined and studied, both for the sake of mathematics and because they give us new insight into some existing fields. In this talk, we will see two ways to extend usual computability: by defining a more powerful model, or in a more set-theoretic fashion. The first method is used to define Infinite Time Turing Machines, a model where Turing machines are allowed to compute through infinite time (that is, through the ordinals instead of the integers). It has many links with admissibility theory. The second method is used to define alpha-recursion, where alpha is any admissible ordinal. It is an abstract and very general definition of computation. Even if it has a very set-theoretic basis, it reflects the idea of computation and contains the notions of Turing machine and Infinite Time Turing Machine computability. It also includes Higher Computability.
By investigating which properties of the extensions are needed to lift theorems to the new setting, we are able to isolate the important properties of the classical case. We also apply these generalized recursion theories to define randomness, in the same way as in the classical case: a string is said to be random if it has no exceptional properties, in a computable sense. Our new definitions of computation then give us new definitions of randomness. (No prior knowledge of set theory is assumed.) Sebastián Barbieri (ENS Lyon) Symbolic dynamics and simulation theorems In this talk I will give a gentle introduction to symbolic dynamics and motivate an open question in this field: on which structures can we construct aperiodic tilings using local rules? I will then introduce the notion of simulation of an effective dynamical system and show how these results can be used to produce aperiodic tilings in extremely complicated structures. We end the talk by presenting a novel simulation theorem which allows us to show the existence of such tilings in the Grigorchuk group. Wolfgang Steiner (IRIF) Recognizability for sequences of morphisms We investigate different notions of recognizability for a free monoid morphism $\sigma: A^* \to B^*$. Full recognizability occurs when each (aperiodic) two-sided sequence over $B$ admits at most one tiling with words $\sigma(a)$, $a \in A$. This is stronger than the classical notion of recognizability of a substitution $\sigma$, where the tiling must be compatible with the language of the substitution. We show that if $A$ is a two-letter alphabet, or if the incidence matrix of $\sigma$ has rank $|A|$, or if $\sigma$ is permutative, then $\sigma$ is fully recognizable. Next we investigate the classical notion of recognizability and improve earlier results of Mossé (1992) and Bezuglyi, Kwiatkowski and Medynets (2009), by showing that any substitution is recognizable for aperiodic points in its substitutive shift.
Finally we define (eventual) recognizability for sequences of morphisms which define an $S$-adic shift. We prove that a sequence of morphisms on alphabets of bounded size, such that compositions of consecutive morphisms are growing on all letters, is eventually recognizable for aperiodic points. We provide examples of eventually recognizable, but not recognizable, sequences of morphisms, and sequences of morphisms which are not eventually recognizable. As an application, for a recognizable sequence of morphisms, we obtain an almost everywhere bijective correspondence between the $S$-adic shift it generates and the measurable Bratteli-Vershik dynamical system that it defines. This is joint work with Valérie Berthé, Jörg Thuswaldner and Reem Yassawi. Alan J. Cain (U. Nova Lisbon) Automatic presentations for algebraic and relational structures An automatic presentation (also called an FA-presentation) is a description of a relational structure using regular languages. The concept of an FA-presentation arose in computer science, to fulfil a need to extend finite model theory to infinite structures. Informally, an FA-presentation consists of a regular language of abstract representatives for the elements of the structure, such that each relation (of arity $n$, say) can be recognized by a synchronous $n$-tape automaton. An FA-presentation is "unary" if the language of representatives is over a 1-letter alphabet. In this talk, I will introduce and survey automatic presentations, with particular attention to connections with decidability and logic. I will then discuss work with Nik Ruskuc (Univ. of St Andrews, UK) and Richard Thomas (Univ. of Leicester, UK) on algebraic and combinatorial structures that admit automatic presentations or unary automatic presentations. The main focus will be on results that characterize the structures of some type (for example, groups, trees, or partially ordered sets) that admit automatic presentations.
Cyril Nicaud (LIGM) Synchronisation of random automata Fifty years ago, Černý stated a combinatorial conjecture about automata that is still unresolved. An automaton is said to be synchronizing when there exist a word u and a state p such that, reading u from any state, one arrives at p. His conjecture is that if a synchronizing automaton has n states, then there exists such a word u of length at most (n-1)². In this talk, we consider the probabilistic version of the Černý conjecture: we show that a random automaton is not only synchronizing (a result already proved by Berlinkov), but also that the Černý conjecture holds for it with high probability. Martin Delacourt (U. Orléans) On permutive one-way cellular automata and the finiteness problem for automaton groups We are interested in a parallel between two problems on distinct models of automata. On the one hand, Mealy automata (complete letter-to-letter transducers) produce semigroups generated by the transformations on infinite words associated with their states. In 2013, Gillibert showed that the finiteness problem for these semigroups is undecidable; the question is open, however, in the case where the Mealy automaton produces a group. On the other hand, for one-way cellular automata the decidability of periodicity is an open question. One can show that these problems are equivalent. We take a step towards an undecidability proof by showing that it is possible to simulate Turing computation in a reversible one-way cellular automaton, thereby making prediction problems undecidable, as well as the question of periodicity starting from a given finite configuration.
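The Černý bound discussed above is easy to check experimentally on small instances. The Python sketch below is my own illustration (a standard power-set BFS, not a construction from the talk): it computes a shortest reset word for the Černý automaton C_n, the family known to attain the (n-1)² bound exactly:

```python
from collections import deque

# The Černý automaton C_n on states {0, ..., n-1}:
# letter 'a' is a cyclic shift, letter 'b' maps state 0 to 1 and fixes the rest.
def cerny(n):
    return {'a': {q: (q + 1) % n for q in range(n)},
            'b': {q: 1 if q == 0 else q for q in range(n)}}

def shortest_reset_word(delta, n):
    """BFS on the power-set automaton: shortest word collapsing the full
    state set to a single state (exponential in n, fine for small examples)."""
    start = frozenset(range(n))
    seen, queue = {start}, deque([(start, '')])
    while queue:
        S, w = queue.popleft()
        if len(S) == 1:
            return w
        for letter, f in delta.items():
            T = frozenset(f[q] for q in S)
            if T not in seen:
                seen.add(T)
                queue.append((T, w + letter))
    return None  # the automaton is not synchronizing

w = shortest_reset_word(cerny(3), 3)
print(w, len(w))  # a shortest reset word of length (3-1)^2 = 4
```

For C_3 this finds a reset word of length 4 and for C_4 one of length 9, matching (n-1)². Note that deciding whether an automaton is synchronizing at all only needs a polynomial check on pairs of states; it is the shortest reset word that requires the expensive subset search.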
Fabian Reiter (IRIF) Asynchronous Distributed Automata: A Characterization of the Modal Mu-Fragment I will present the equivalence between a class of asynchronous distributed automata and a small fragment of least fixpoint logic, when restricted to finite directed graphs. More specifically, the considered logic is (a variant of) the fragment of the modal μ-calculus that allows least fixpoints but forbids greatest fixpoints. The corresponding automaton model uses a network of identical finite-state machines that communicate in an asynchronous manner and whose state diagram must be acyclic except for self-loops. As a by-product, the connection with logic also entails that the expressive power of those machines is independent of whether or not messages can be lost. Victor Marsault (University of Liège) An efficient algorithm to decide the periodicity of $b$-recognisable sets using MSDF convention Given an integer base $b>1$, a set of integers is represented in base $b$ by a language over $\{0,1,\dots,b-1\}$. The set is said to be $b$-recognisable if its representation is a regular language. It is known that eventually periodic sets are $b$-recognisable in every base $b$, and Cobham's theorem implies the converse: no other set is $b$-recognisable in every base $b$. We are interested in deciding whether a $b$-recognisable set of integers (given as a finite automaton) is eventually periodic. Honkala showed in 1986 that this problem is decidable, and recent developments give efficient decision algorithms. However, they only work when the integers are written with the least significant digit first. In this work, we consider the natural order of digits (Most Significant Digit First) and give a quasi-linear algorithm to solve the problem in this case. Guillaume Lagarde (IRIF) Non-commutative lower bounds No knowledge of arithmetic complexity will be assumed. We still don't know an explicit polynomial that requires non-commutative circuits of superpolynomial size.
However, the non-commutative setting seems well suited to obtaining such lower bounds, because the rigidity of non-commutativity imposes many constraints on the possible ways of computing. It is in this context that Nisan, in 1991, proved an exponential lower bound for non-commutative algebraic branching programs computing the permanent, the very first such bound in arithmetic complexity. We show that this result can be naturally seen as a particular case of a theorem about circuits with a unique parse tree, and show some extensions to get closer to lower bounds for general NC circuits. Two joint works: with Guillaume Malod and Sylvain Perifel; with Nutan Limaye and Srikanth Srinivasan. Daniela Petrisan (IRIF) Quantifiers on languages and topological recognisers In the first part of the talk I will recall the duality approach to language recognition. To start with, I will explain the following simple fact. The elements of the syntactic monoid of a regular language $L$ over a finite alphabet $A$ are in one-to-one correspondence with the atoms of the finite sub-Boolean algebra of $P(A^*)$ generated by the quotients of $L$. This correspondence can be seen as an instance of Stone duality for Boolean algebras, and has led to a topological notion of recognition for non-regular languages, the so-called Boolean spaces with internal monoids. A fundamental tool in studying the connection between algebraic recognisers, say classes of monoids, and fragments of logics on words is the availability of constructions on monoids which mirror the action of quantifiers, such as block products or other kinds of semidirect products. In the second part of the talk I will discuss generalisations of these techniques beyond the case of regular languages and present a general recipe for obtaining constructions on the topological recognisers introduced above that correspond to operations on languages possibly specified by transducers.
This talk is based on joint work with Mai Gehrke and Luca Reggio. Svetlana Puzynina (IRIF) Additive combinatorics generated by uniformly recurrent words A subset of natural numbers is called an IP-set if it contains an infinite increasing sequence of numbers and all its finite sums. In the talk we show how certain families of uniformly recurrent words can be used to generate IP-sets, as well as sets possessing some related additive properties. Nadime Francis (University of Edinburgh) Schema Mappings for Data Graphs Schema mappings are a fundamental concept in data integration and exchange, and they have been thoroughly studied in different data models. For graph data, however, mappings have been studied in a very restricted context that, unlike real-life graph databases, completely disregards the data they store. Our main goal is to understand query answering under graph schema mappings - in particular, in exchange and integration of graph data - for graph databases that mix graph structure with data. We show that adding data querying alters the picture in a very significant way. As the model, we use data graphs: a theoretical abstraction of property graphs employed by graph database implementations. We start by showing a very strong negative result: using the simplest form of nontrivial navigation in mappings makes answering even simple queries that mix navigation and data undecidable. This result suggests that for the purposes of integration and exchange, schema mappings ought to exclude recursively defined navigation over target data. For such mappings and analogs of regular path queries that take data into account, query answering becomes decidable, although intractable. To restore tractability without imposing further restrictions on queries, we propose a new approach based on the use of null values that resemble usual nulls of relational DBMSs, as opposed to marked nulls one typically uses in integration and exchange tasks. 
If one moves away from path queries and considers more complex patterns, query answering becomes undecidable again, even for the simplest possible mappings. Nathanaël Fijalkow (Alan Turing Institute) Logical characterization of Probabilistic Simulation and Bisimulation. I will discuss a notion of equivalence between two probabilistic systems introduced by Larsen and Skou in 1989, called probabilistic bisimulation. In particular, I will look at logical characterizations for this notion: the goal is to describe a logic such that two systems are bisimilar if and only if they satisfy the same formulas. This question goes all the way back to Hennessy and Milner for non-probabilistic transition systems. I will develop topological tools and give very general logical characterization results for probabilistic simulation and bisimulation. Reem Yassawi (IRIF) Extended symmetries of some higher dimensional shift spaces. Let $(X,T)$ be a one-dimensional invertible subshift. The symmetry group of $(X,T)$ is the group of all shift-commuting homeomorphisms of $X$. In the larger reversing symmetry group of $(X,T)$, we also consider homeomorphisms $\Phi$ of $X$ where $\Phi \circ T= T^{-1}\circ \Phi$, also called flip conjugacies. We define a generalisation of the reversing symmetry group for higher dimensional shifts, namely the normaliser of the group generated by the shifts inside the group of homeomorphisms of $X$, and we find this extended symmetry group for two prototypical higher dimensional shifts, the chair substitution shift and the Ledrappier shift. Joint work with M. Baake and J.A.G Roberts.
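On finite systems, the probabilistic bisimulation mentioned above can be computed by partition refinement. The sketch below is a textbook version of the Larsen-Skou condition on a finite labelled Markov chain, written as my own illustration (state names, labels and probabilities are invented), not the topological development of the talk:

```python
def prob_bisimulation(states, label, trans):
    """Coarsest partition in which any two states of a block carry the same
    label and send the same probability mass into every block
    (the Larsen-Skou condition). trans[s] is a dict {successor: probability}."""
    # initial partition: group states by their label
    by_label = {}
    for s in states:
        by_label.setdefault(label[s], []).append(s)
    partition = list(by_label.values())
    while True:
        def signature(s):
            # probability of jumping from s into each current block
            return tuple(sum(trans[s].get(t, 0.0) for t in B) for B in partition)
        refined = []
        for block in partition:
            groups = {}
            for s in block:
                groups.setdefault(signature(s), []).append(s)
            refined.extend(groups.values())
        if len(refined) == len(partition):  # no block was split: fixpoint
            return partition
        partition = refined

# Toy chain: s0 and s1 have the same label and the same transition
# probabilities into every class, so they are probabilistically bisimilar.
label = {'s0': 'x', 's1': 'x', 's2': 'y', 's3': 'z'}
trans = {'s0': {'s2': 0.5, 's3': 0.5},
         's1': {'s2': 0.5, 's3': 0.5},
         's2': {'s2': 1.0},
         's3': {'s3': 1.0}}
blocks = prob_bisimulation(['s0', 's1', 's2', 's3'], label, trans)
print(sorted(sorted(b) for b in blocks))
```

Splitting s1's distribution (say, 0.6/0.4 instead of 0.5/0.5) would separate s0 and s1 in the first refinement round; the logical characterisations of the talk ask which formulas suffice to witness such separations.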
Friday 6 January 2017, 14:30, Room 1006 Alexandre Vigny (IMJ-PRG) Query enumeration and Nowhere-dense graphs The evaluation of queries is a central problem in database management systems. Given a query q and a database D, the evaluation of q over D consists in computing the set q(D) of all answers to q on D. An interesting case is when the query is boolean (aka the model checking problem, where the answer to the query is either a "yes" or a "no"). Even for boolean queries, the problem of computing the answer (with input q and D) is already PSpace-complete. For non-boolean queries, the size of the output can blow up to |D|^r, where r is the arity of q. It is therefore not always realistic to compute the entire set of solutions. Moreover, the time needed to construct the set might not reflect the difficulty of the task. In this talk we will discuss query enumeration, that is, outputting the solutions one by one. Two parameters come into play, the delay and the preprocessing time. The delay is the maximal time between two consecutive outputs and the preprocessing time is the time needed to produce the first solution. We will investigate cases where the delay is constant (does not depend on the size of the database) and the preprocessing is linear (in the size of the database), i.e. constant-delay enumeration after linear preprocessing. This is not always possible as this implies a linear model-checking. We will therefore add restrictions to the classes of databases and/or queries, such as bounded degree databases, tree-like structures, conjunctive queries… Benjamin Hellouin (IRIF) Computing the entropy of mixing tilings The entropy of a language is a measure of its complexity and a well-studied dynamical invariant. I consider two related questions: for a given class of languages, can this parameter be computed, and what values can it take?
In 1D tilings (subshifts) of finite type, we have known how to compute the entropy for 30 years, and the method gives an algebraic characterisation of possible values. In higher dimension, a surprise came in 2007: not only is the entropy not computable in general, but any upper-semi-computable real number appears as entropy - a weak computational condition. Since then, new works have shown that entropy becomes computable again with additional mixing hypotheses. We do not know yet where the border between computable and uncomputable lies. In this talk, I will explore the case of general subshifts (not of finite type) in any dimension, hoping to shed some light on the finite type case. I relate the computational difficulty of computing the entropy to the difficulty of deciding if a word belongs to the language. I exhibit a threshold in the mixing rate where the difficulty of the problem jumps suddenly, the very phenomenon that is expected in the finite type case. This is a joint work with Silvère Gangloff and Cristobal Rojas. Christian Choffrut (IRIF) Some equational theories of labeled posets Joint work with Zoltán Ésik (University of Szeged, Hungary). We equip the collection of labeled posets (partially ordered sets), abbreviated l.p., with different operations: series product (concatenation of l.p.), parallel product (disjoint union of posets), omega-power (concatenation of an omega sequence of the same poset) and omega-product (concatenation of an omega sequence of possibly different posets, which has therefore infinite arity). We select four subsets of these operations and show that in each case the equational theory is axiomatizable. We characterize the free algebras in the corresponding varieties, both algebraically as classes which are closed under the above operations as well as combinatorially as classes of partially ordered subsets. We also study the decidability issues when the question makes sense.
Benedikt Bollig (LSV, ENS de Cachan) One-Counter Automata with Counter Observability

In a one-counter automaton (OCA), one can produce a letter from some finite alphabet, increment and decrement the counter by one, or compare it with constants up to some threshold. It is well-known that universality and language inclusion for OCAs are undecidable. In this paper, we consider OCAs with counter observability: whenever the automaton produces a letter, it outputs the current counter value along with it. Hence, its language is now a set of words over an infinite alphabet. We show that universality and inclusion for that model are PSPACE-complete, thus no harder than the corresponding problems for finite automata. In fact, by establishing a link with visibly one-counter automata, we show that OCAs with counter observability are effectively determinizable and closed under all boolean operations.
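To make the observability idea concrete, here is a toy Python sketch (our illustration, not Bollig's formal model): a run emits each produced letter together with the current counter value, so the observable language sits over the infinite alphabet of letter-counter pairs.

```python
def observable_run(instructions):
    # Execute a straight-line run of a one-counter machine; under counter
    # observability, every produced letter is emitted together with the
    # current counter value.
    counter, trace = 0, []
    for op in instructions:
        if op == "inc":
            counter += 1
        elif op == "dec":
            if counter == 0:
                raise ValueError("counter must stay non-negative")
            counter -= 1
        else:                       # ("out", letter)
            trace.append((op[1], counter))
    return trace

program = ["inc", "inc", ("out", "a"), "dec", ("out", "b")]
trace = observable_run(program)    # [("a", 2), ("b", 1)]
```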
http://www.lsv.ens-cachan.fr/~bollig/

Nathan Lhote (LaBRI & ULB) Towards an algebraic theory of rational word functions

In formal language theory, several different models characterize regular languages, such as finite automata, congruences of finite index, or monadic second-order logic (MSO). Moreover, several fragments of MSO have effective characterizations based on algebraic properties, the most famous example being the Schützenberger-McNaughton-Papert theorem linking first-order logic with aperiodic congruences. When we consider transducers instead of automata, such characterizations are much more challenging, because many of the properties of regular languages do not generalize to regular word functions. In this paper we consider functions that are definable by one-way transducers (rational functions). We show that the canonical bimachine of Reutenauer and Schützenberger preserves certain algebraic properties of rational functions, similar to the syntactic congruence for languages. In particular, we give an effective characterization of functions that can be defined by an aperiodic one-way transducer.
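A one-way transducer (here even sequential, hence rational) can be sketched in a few lines of Python; the concrete machine below, which doubles every 'a' and copies every 'b', is our own toy example, not one from the talk:

```python
# Transition table of a deterministic one-way transducer:
# (state, letter) -> (next state, output word).
DELTA = {("q", "a"): ("q", "aa"),
         ("q", "b"): ("q", "b")}

def run_transducer(word, state="q"):
    out = []
    for letter in word:
        state, produced = DELTA[(state, letter)]
        out.append(produced)
    return "".join(out)

# run_transducer("abba") == "aabbaa"
```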
Friday, November 4, 2016, 9:20 am, Room 3052

LIA INFINIS Workshop
(09h20 - 09h30) Opening
(09h30 - 10h00) Serge Grigorieff: "Algorithmic randomness and uniform distribution modulo one"
(10h00 - 10h30) Stéphane Demri: "Reasoning about data repetitions with counter systems"
(10h30 - 11h00) Coffee Break
(11h00 - 11h30) Michel Habib: "A nice graph problem coming from biology: the study of read networks"
(11h30 - 12h00) Delia Kesner: "Completeness of Call-by-Need (A fresh view)"
(12h00 - 12h30) Pierre Vial: "Infinite Intersection Types as Sequences: a New Answer to Klop's Problem"
(12h30 - 14h00) Lunch (Buffon Restaurant - 17 rue Hélène Brion - Paris 13ème)
(14h00 - 14h30) Verónica Becher: "Finite-state independence and normal sequences"
(14h30 - 15h00) Brigitte Vallée: "Towards the random generation of arithmetical objects"
(15h00 - 15h30) Valérie Berthé: "Dynamical systems and their trajectories"
(16h00 - 16h30) Nicolás Alvarez: "Incompressible sequences on subshifts of finite type"
(16h30 - 17h00) Eugene Asarin: "Entropy Games"
(17h00 - 18h00) Discussion about the future of LIA INFINIS

Vincent Jugé (LSV, ENS de Cachan) Is the right relaxation normal form for braids automatic?

Representations of braids as isotopy classes of laminations of punctured disks are related with a family of normal forms, which we call relaxation normal forms. Roughly speaking, every braid is identified with a picture on a punctured disk, and reducing step-by-step the complexity of this picture amounts to choosing a relaxation normal form of the braid. We will study the right relaxation normal form, which belongs to this family of normal forms. We will show that it is regular, and that it is synchronously bi-automatic if and only if the braid group has 3 punctures or less.
Georg Zetzsche (LSV, ENS de Cachan) Subword Based Abstractions of Formal Languages

A successful idea in the area of verification is to consider finite-state abstractions of infinite-state systems. A prominent example is the fact that many language classes satisfy Parikh's theorem, i.e. for each language, there exists a finite automaton that accepts the same language up to the order of letters. Hence, provided that the abstraction preserves pertinent properties, this allows us to work with finite-state systems, which are much easier to handle. While Parikh-style abstractions have been studied very intensely over the last decades, recent years have seen an increasing interest in abstractions based on the subword ordering. Examples include the set of (not necessarily contiguous) subwords of members of a language (the downward closure), or their superwords (the upward closure). Whereas it is well-known that these closures are regular for any language, it is often not obvious how to compute them. Another type of subword-based abstraction is piecewise testable separators. Here, a separator acts as an abstraction of a pair of languages. This talk will present approaches to computing closures, deciding separability by piecewise testable languages, and a (perhaps surprising) connection between these problems. If time permits, complexity issues will be discussed as well.

http://zetzsche.xyz/

Léo Exibard Alternating Two-way Two-tape Automata

In this talk, we study a model computing relations over finite words, generalising one- and two-way transducers. The model, called two-way two-tape automaton, consists of a finite-state machine with two read-only tapes, each one with a reading head able to go both ways. We first emphasize its relation with 4-way automata, which recognize sets of two-dimensional arrays of letters called picture languages; such a correspondence provides a proof of the undecidability of the model, and an example separating determinism and non-determinism.
We then describe several techniques which, applied to our model, establish (non-)closure properties of the recognizable relations. Finally, the main result presented in this talk is that alternating two-way two-tape automata are not closed under complementation. The proof is a refinement of one of J. Kari for picture languages. Joint work with Olivier Carton and Olivier Serre. Hubie Chen One Hierarchy Spawns Another: Graph Deconstructions and the Complexity Classification of Conjunctive Queries We study the classical problem of conjunctive query evaluation. This problem admits multiple formulations and has been studied in numerous contexts; for example, it is a formulation of the constraint satisfaction problem, as well as the problem of deciding if there is a homomorphism from one relational structure to another (which transparently generalizes the graph homomorphism problem). We here restrict the problem according to the set of permissible queries; the particular formulation we work with is the relational homomorphism problem over a class of structures A, wherein each instance must be a pair of structures such that the first structure is an element of A. We present a comprehensive complexity classification of these problems, which strongly links graph-theoretic properties of A to the complexity of the corresponding homomorphism problem. In particular, we define a binary relation on graph classes and completely describe the resulting hierarchy given by this relation. This binary relation is defined in terms of a notion which we call graph deconstruction and which is a variant of the well-known notion of tree decomposition. We then use this graph hierarchy to infer a complexity hierarchy of homomorphism problems which is comprehensive up to a computationally very weak notion of reduction, namely, a parameterized form of quantifier-free reductions. 
We obtain a significantly refined complexity classification of left-hand side restricted homomorphism problems, as well as a unifying, modular, and conceptually clean treatment of existing complexity classifications, such as the classifications by Grohe-Schwentick-Segoufin (STOC 2001) and Grohe (FOCS 2003, JACM 2007). After presenting this new advance, we will compare this line of research with another that aims to classify the complexity of the homomorphism problem where the second (target) structure is fixed, and that is currently being studied using universal-algebraic methods. We will also make some remarks on two intriguing variants, injective homomorphism (also called embedding) and surjective homomorphism. This talk is mostly based on joint work with Moritz Müller that appeared in CSL-LICS '14. In theory, the talk will be presented in a self-contained fashion, and will not assume prior knowledge of any of the studied notions.

http://hubiechen.weebly.com/

Friday, September 30, 2016, 2:30 pm, Room 1006

Automata team: start-of-year day
9h30-9h45 Welcome
9h45 Svetlana Puzynina
10h15 Sebastian Schoener
10h30 Célia Borlido
11h Thibault Godin
11h45 Benjamin Hellouin
12h15 Thomas Garrity
14h Olivier Carton
14h30 Sylvain Lombardy (LaBRI): demonstration of the Vaucanson-R software
15h30 Pablo Rotondo

Sylvain Hallé (Université du Québec à Chicoutimi) Solving Equations on Words with Morphisms and Antimorphisms

Word equations are combinatorial equalities between strings of symbols, variables and functions, which can be used to model problems in a wide range of domains. While some complexity results for the solving of specific classes of equations are known, currently there does not exist any publicly available equation solver. Recently, we have proposed the implementation of such a solver based on Boolean satisfiability that leverages existing SAT solvers for this purpose.
In this paper, we propose a new representation of equations on words having fixed length, by using an enriched graph data structure. We discuss the implementation as well as experimental results obtained on a sample of equations.

Arthur Milchior (IRIF) Deterministic Automaton and FO[<,mod] integer set

We consider deterministic automata which accept vectors of d integers for a fixed positive integer d. A deterministic automaton is then a finite representation of the sets of vectors it accepts. Many operations are particularly efficient with this representation, such as intersection of sets, testing whether two sets are equal, or deciding whether such an automaton accepts a Presburger-definable set, that is, an FO[+,<]-definable set over integers. We consider a similar problem for less expressive logics such as FO[<,0,mod_a] or FO[+1,0,mod], where mod is the class of modular relations. We state that it is decidable in time O(n log n) whether a set of vectors accepted by a given finite deterministic automaton can be defined in the less expressive logic. The case of dimension 1 was already proven by Marsault and Sakarovitch. If the first algorithm gives a positive answer, the second one computes in time O(n^3 log n) an existential formula in this logic that defines the same set. This improves the 2EXP time algorithm that can easily be obtained by combining the results of Leroux and Choffrut. In this talk, it is intended to:
- introduce automata reading vectors of integers,
- present the logic FO[<,0,mod] over integers,
- introduce classical tools relating automata to numbers,
- give an idea of how they can be applied to the above-mentioned problem.

Bruno Karelovic (IRIF) Perfect-information Stochastic Priority Games

We present in this work an alternative solution to perfect-information stochastic parity games. Instead of using the framework of μ-calculus, which hides completely the algorithmic aspect, we solve it by induction on the number of absorbing states.
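The kind of integer-set automaton Milchior studies can be illustrated with a classical toy case (our example, not from the abstract): reading binary expansions most-significant bit first, the state tracks the value read so far modulo 3, so the automaton accepts exactly the binary representations of multiples of 3.

```python
# DFA over {0, 1}: the state is the value read so far mod 3; reading a bit b
# maps value r to 2*r + b, and state 0 is accepting.
TRANS = {(r, b): (2 * r + b) % 3 for r in range(3) for b in (0, 1)}

def accepts_multiple_of_3(bits):
    state = 0
    for b in bits:
        state = TRANS[(state, b)]
    return state == 0

def to_bits(n):
    return [int(c) for c in bin(n)[2:]]

# accepts_multiple_of_3(to_bits(21)) is True; to_bits(22) is rejected.
```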
Howard Straubing (Boston College) Two Variable Logic with a Between Predicate

We study an extension of FO^2[<], first-order logic interpreted in finite words, in which only two variables are used. We adjoin to this language two-variable atomic formulas that say, 'the letter a appears between positions x and y'. This is, in a sense, the simplest property that is not expressible using only two variables. We present several logics, both first-order and temporal, that have the same expressive power, and find matching lower and upper bounds for the complexity of satisfiability for each of these formulations. We also give an effective algebraic characterization of the properties expressible in this logic. This enables us to prove, among many other things, that our new logic has strictly less expressive power than full first-order logic FO[<]. This is joint work with Andreas Krebs, Kamal Lodaya, and Paritosh Pandya, and will be presented at LICS 2016.

Monday, May 30, 2016, 2 pm, Salle des thèses (Halle aux Farines)

Bruno Guillon (IRIF - Universitá degli Studi di Milano) PhD defense: Two-wayness: Automata and Transducers

This PhD is about two natural extensions of Finite Automata (FA): the 2-way FA (2FA) and the 2-way transducers (2T). The 2FA are computationally equivalent to FA, even in their nondeterministic (2NFA) variant. However, in the descriptional complexity area, some questions remain. Raised by Sakoda and Sipser in 1978, the question of the cost of the simulation of 2NFA by 2DFA is still open. In this manuscript I give an answer in a restricted case in which the nondeterministic choices of the 2NFA may occur at the border of the input only (2ONFA). I show that every 2ONFA can be simulated by a 2DFA of subexponential (but superpolynomial) size. Under the assumption L=NL, this cost is reduced to the polynomial level. Moreover, I prove that complementation, and the simulation by a halting 2ONFA, is polynomial.
Classical transducers (1-way) are well known and admit nice characterizations (rational relations, logic). But their 2-way variant (2T) is still not well understood, especially in the nondeterministic case. In this area, my manuscript gives a new contribution: an algebraic characterization of the relations accepted by 2NT when both the input and output alphabets are unary. It can be reformulated as follows: each unary 2NT is equivalent to a sweeping (and even rotating) 2T. I also show that the assumptions made on the size of the alphabets are required. The study of word relations, as algebraic objects, and their transitive closure is another subject considered in my PhD. When the relation belongs to some low-level class, we are able to settle the complexity of its transitive closure. This quickly becomes uncomputable when higher classes are considered.

Hall F, 5th floor; the thesis is available at https://www.irif.univ-paris-diderot.fr/~guillonb/phd_defense.html

Laure Daviaud (LIP – ENS Lyon) A Generalised Twinning Property for Minimisation of Cost Register Automata

Weighted automata (WA) extend finite-state automata by defining functions from the set of words to a semiring S. Recently, cost register automata (CRA) have been introduced as an alternative model to describe any function realised by a WA by means of a deterministic machine. Regarding unambiguous WA over a group G, they can equivalently be described by a CRA whose registers take their values in G, and are updated by operations of the form X:=Y.c, with c in G and X,Y registers. In this talk, I will give a characterisation of unambiguous weighted automata which are equivalent to cost register automata using at most k registers, for a given k. To this end, I will generalise two notions originally introduced by Choffrut for finite-state transducers: a twinning property and a bounded variation property, here parametrised by an integer k, that characterise WA/functions computable by a CRA using at most k registers.
This is a joint work with Pierre-Alain Reynier and Jean-Marc Talbot.

Igor Potapov (University of Liverpool) Matrix Semigroups and Related Automata Problems

Matrices and matrix products play a crucial role in the representation and analysis of various computational processes. Unfortunately, many simply formulated and elementary problems for matrices are inherently difficult to solve even in dimension two, and most of these problems become undecidable in general starting from dimension three or four. Suppose we are given a finite set of square matrices (known as a generator) forming a multiplicative semigroup S. The classical computational problems for matrix semigroups are:
- Membership (decide whether a given matrix M belongs to the semigroup S), with special cases such as the Identity (i.e. whether M is the identity matrix) and Mortality (i.e. whether M is the zero matrix) problems;
- Vector reachability (decide, for given vectors u and v, whether there exists a matrix M in S such that Mu=v);
- Scalar reachability (decide, for given vectors u, v and a scalar L, whether there exists a matrix M in S such that uMv=L);
- Freeness (decide whether every matrix product in S is unique, i.e. whether it is a code).
The undecidability proofs in matrix semigroups are mainly based on various techniques and methods for embedding universal computations into matrix products. The case of dimension two is the most intriguing since there is some evidence that if these problems are undecidable, then this cannot be proved using any previously known constructions. Due to a severe lack of methods and techniques, the status of decision problems for 2×2 matrices (like membership, vector reachability, freeness) remains a long-standing open problem.
More recently, a new approach of translating numerical problems on 2×2 integer matrices into a variety of combinatorial and computational problems on words and automata over a group alphabet, and studying their transformations as specific rewriting systems, has led to a few results on decidability and complexity for some subclasses.

Dong Han Kim (Dongguk University, South Korea) Sturmian colorings on regular trees

We introduce Sturmian colorings of regular trees, which are colorings of minimal unbounded factor complexity. We then classify Sturmian colorings into two families, namely cyclic and acyclic ones. We characterize acyclic Sturmian colorings in a way analogous to the continued fraction algorithm of Sturmian words. As for cyclic Sturmian colorings, we show that the coloring is a countable union of a periodic coloring, possibly in union with a regular subtree colored with one color. This is joint work with Seonhee Lim.

Emmanuel Jeandel (LORIA) An aperiodic set of 11 tiles

A Wang tile is a square with colored edges. Given a finite set of Wang tiles, we ask whether it is possible to tile the entire discrete plane with these tiles, placing one tile per cell so that two adjacent tiles carry the same color on the edge they share. We are particularly interested in aperiodic tile sets: those for which a tiling exists, but for which it is impossible to tile the plane periodically. Such tile sets are one of the basic building blocks of most results in multidimensional symbolic dynamics. The first aperiodic tile set, found by Berger, had 20426 tiles, and the number of tiles needed decreased progressively until Culik obtained a 13-tile set in 1996, using a method due to Kari. With Michael Rao, and with the help of several computers, we found an aperiodic set of 11 tiles. This number is optimal: there is no aperiodic set of fewer than 11 tiles. One of the main difficulties of this computer-guided search is that we are looking for a needle in an undecidable haystack: there is no algorithm that decides whether a tile set is aperiodic. After a brief introduction to the problem, I will present the set of 11 tiles, as well as the automata-theoretic and transition-system techniques that allowed us to prove (a) that it is aperiodic, and (b) that it is the smallest.

Tim Smith (LIGM Paris Est) Determination and Prediction of Infinite Words by Automata

An infinite language L determines an infinite word α if every string in L is a prefix of α. If L is regular, it is known that α must be ultimately periodic; conversely, every ultimately periodic word is determined by some regular language. We investigate other classes of languages to see what infinite words they determine, focusing on languages recognized by various kinds of automata. Next, we consider prediction of infinite words by automata. In the classic problem of sequence prediction, a predictor receives a sequence of values from an emitter and tries to guess the next value before it appears. The predictor masters the emitter if there is a point after which all of the predictor's guesses are correct. We study the case in which the predictor is an automaton and the emitted values are drawn from a finite set; i.e., the emitted sequence is an infinite word. The automata we consider are finite automata, pushdown automata, stack automata (a generalization of pushdown automata), and multihead finite automata, and we relate them to purely periodic words, ultimately periodic words, and multilinear words.
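The prediction setting can be sketched with a learning-by-enumeration predictor (a standard idea, phrased here in Python as our own illustration, not Smith's construction): enumerate candidate (preperiod, period) pairs in a fixed order, commit to the first one consistent with the whole history, and guess the symbol it forces. On an ultimately periodic word, every wrong candidate is eventually refuted, so only finitely many guesses are mistaken.

```python
def predict_next(history):
    # Learning by enumeration: try candidate (preperiod s, period p) pairs
    # ordered by s + p, commit to the first one consistent with the whole
    # history, and guess the symbol that candidate forces.
    if not history:
        return None
    n = len(history)
    for total in range(1, n + 1):
        for p in range(1, total + 1):
            s = total - p
            if all(history[i] == history[i + p] for i in range(s, n - p)):
                return history[n - p]

word = "abb" * 12    # an ultimately (here purely) periodic emitter
mistakes = [n for n in range(1, 30) if predict_next(word[:n]) != word[n]]
# Only a short learning phase produces wrong guesses; afterwards the
# predictor masters the emitter.
```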
Monday, March 21, 2016, 10 am, LaBRI

Colloquium in honor of Marcel-Paul Schützenberger (21-25/03/2016). Program.

Eugene Asarin (IRIF) Entropy games and matrix multiplication games

Two intimately related new classes of games are introduced and studied: entropy games (EGs) and matrix multiplication games (MMGs). An EG is played on a finite arena by two-and-a-half players: Despot, Tribune and the non-deterministic People. Despot wants to make the set of possible People's behaviors as small as possible, while Tribune wants to make it as large as possible. An MMG is played by two players that alternately write matrices from some predefined finite sets. One wants to maximize the growth rate of the product, and the other to minimize it. We show that in general MMGs are undecidable in quite a strong sense. On the positive side, EGs correspond to a subclass of MMGs, and we prove that such MMGs and EGs are determined, and that the optimal strategies are simple. The complexity of solving such games is in NP ∩ coNP. Joint work with Julien Cervelle, Aldric Degorre, Cătălin Dima, Florian Horn, and Victor Kozyakin.

Anna-Carla Rousso (IRIF) Not yet announced.
Thierry Bousch (Paris Sud) The Tower of Hanoi, as revisited by Dudeney
Laurent Bartholdi (ENS) Not yet announced.
Viktoriya Ozornova (Universität Bremen) Factorability structures
Antoine Amarilli (Télécom ParisTech) Provenance Circuits for Trees and Treelike Instances
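The quantity at stake in Asarin's matrix multiplication games, the growth rate of a product of matrices chosen from finite sets, can be approximated by brute force for a single maximizing player (our toy sketch, not from the talk): take the best product of length k and normalize by the k-th root.

```python
import itertools
import numpy as np

def max_growth_rate(matrices, k):
    # Best achievable norm over all products of length k drawn from the
    # given set, normalized to a per-step growth rate.
    best = 0.0
    for choice in itertools.product(matrices, repeat=k):
        prod = np.eye(matrices[0].shape[0])
        for m in choice:
            prod = prod @ m
        best = max(best, np.linalg.norm(prod, 2))
    return best ** (1.0 / k)

A = np.array([[2.0, 0.0], [0.0, 0.5]])
B = np.array([[0.5, 0.0], [0.0, 2.0]])
# With one maximizing player choosing freely, the rate is 2 (repeat A or B).
```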
Bundle metric

In differential geometry, the notion of a metric tensor can be extended to an arbitrary vector bundle, and to some principal fiber bundles. This metric is often called a bundle metric, or fibre metric.

Definition

If M is a topological manifold and π : E → M a vector bundle on M, then a metric on E is a bundle map k : E ×M E → M × R from the fiber product of E with itself to the trivial bundle with fiber R such that the restriction of k to each fibre over M is a nondegenerate bilinear map of vector spaces.[1] Roughly speaking, k gives a kind of dot product (not necessarily symmetric or positive definite) on the vector space above each point of M, and these products vary smoothly over M.

Properties

Every vector bundle with paracompact base space can be equipped with a bundle metric.[1] For a vector bundle of rank n, this follows from the bundle charts $\phi : \pi^{-1}(U) \to U \times \mathbb{R}^n$: the bundle metric can be taken as the pullback of the inner product of a metric on $\mathbb{R}^n$; for example, the orthonormal charts of Euclidean space. The structure group of such a metric is the orthogonal group O(n).

Example: Riemann metric

If M is a Riemannian manifold, and E is its tangent bundle TM, then the Riemannian metric gives a bundle metric, and vice versa.[1]

Example: on vertical bundles

If the bundle π:P → M is a principal fiber bundle with group G, and G is a compact Lie group, then there exists an Ad(G)-invariant inner product k on the fibers, taken from the inner product on the corresponding compact Lie algebra. More precisely, there is a metric tensor k defined on the vertical bundle E = VP such that k is invariant under left-multiplication: $k(L_{g*}X,L_{g*}Y)=k(X,Y)$ for vertical vectors X, Y, where Lg is left-multiplication by g along the fiber and Lg* is the pushforward. That is, E is the vector bundle that consists of the vertical subspace of the tangent of the principal bundle.
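The invariance property $k(L_{g*}X,L_{g*}Y)=k(X,Y)$ has a simple finite-dimensional toy analogue (our illustration, not from the article): averaging an arbitrary inner product over a finite rotation group, a discrete stand-in for the compact group and its Haar measure, produces a form invariant under every rotation in that group.

```python
import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def averaged_form(H, n):
    # H is the Gram matrix of an arbitrary inner product h on R^2; average
    # the pullbacks R^T H R over the cyclic rotation group C_n.
    Rs = [rotation(2 * np.pi * j / n) for j in range(n)]
    return sum(R.T @ H @ R for R in Rs) / n

H = np.array([[2.0, 0.3], [0.3, 1.0]])   # an arbitrary (non-invariant) inner product
K = averaged_form(H, 8)
R = rotation(2 * np.pi / 8)
# Invariance: R.T @ K @ R equals K up to rounding, and K stays positive definite.
```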
More generally, whenever one has a compact group with Haar measure μ, and an arbitrary inner product h(X,Y) defined at the tangent space of some point in G, one can define an invariant metric simply by averaging over the entire group, i.e. by defining $k(X,Y)=\int _{G}h(L_{g*}X,L_{g*}Y)d\mu _{g}$ as the average. The above notion can be extended to the associated bundle $P\times _{G}V$ where V is a vector space transforming covariantly under some representation of G.

In relation to Kaluza–Klein theory

If the base space M is also a metric space, with metric g, and the principal bundle is endowed with a connection form ω, then π*g+kω is a metric defined on the entire tangent bundle E = TP.[2] More precisely, one writes π*g(X,Y) = g(π*X, π*Y) where π* is the pushforward of the projection π, and g is the metric tensor on the base space M. The expression kω should be understood as (kω)(X,Y) = k(ω(X),ω(Y)), with k the metric tensor on each fiber. Here, X and Y are elements of the tangent space TP. Observe that the lift π*g vanishes on the vertical subspace TV (since π* vanishes on vertical vectors), while kω vanishes on the horizontal subspace TH (since the horizontal subspace is defined as that part of the tangent space TP on which the connection ω vanishes). Since the total tangent space of the bundle is a direct sum of the vertical and horizontal subspaces (that is, TP = TV ⊕ TH), this metric is well-defined on the entire bundle.

This bundle metric underpins the generalized form of Kaluza–Klein theory due to several interesting properties that it possesses. The scalar curvature derived from this metric is constant on each fiber;[2] this follows from the Ad(G)-invariance of the fiber metric k.
The scalar curvature on the bundle can be decomposed into three distinct pieces: $R_E = R_M(g) + \mathcal{L}(g,\omega) + R_G(k)$, where $R_E$ is the scalar curvature on the bundle as a whole (obtained from the metric π*g+kω above), $R_M(g)$ is the scalar curvature on the base manifold M (the Lagrangian density of the Einstein–Hilbert action), $\mathcal{L}(g,\omega)$ is the Lagrangian density for the Yang–Mills action, and $R_G(k)$ is the scalar curvature on each fibre (obtained from the fiber metric k, and constant, due to the Ad(G)-invariance of the metric k). The arguments indicate that $R_M(g)$ only depends on the metric g on the base manifold, but not on ω or k, and likewise, that $R_G(k)$ only depends on k, and not on g or ω, and so on.

References

1. Jost, Jürgen (2011), Riemannian geometry and geometric analysis, Universitext (Sixth ed.), Springer, Heidelberg, p. 46, doi:10.1007/978-3-642-21298-7, ISBN 978-3-642-21297-0, MR 2829653.
2. David Bleecker, "Gauge Theory and Variational Principles" (1982), D. Reidel Publishing (see chapter 9).
\begin{document} \title{Quantum Optical Version of Classical Optical Transformations and Beyond } \author{Hong-yi Fan$^{1}$ and Li-yun Hu$^{2}$\thanks{{\small Corresponding author. E-mail address: [email protected] (L.Y. Hu)}}\\$^{1}${\small Department of Physics, Shanghai Jiao Tong University, Shanghai 200030, China; }\\{\small Department of Material Science and Engineering, University of Science and Technology of China, Hefei, Anhui 230026, China}\\$^{2}${\small College of Physics \& Communication Electronics, Jiangxi Normal University, Nanchang 330022, China}} \maketitle \begin{abstract} {\small By virtue of the newly developed technique of integration within an ordered product (IWOP) of operators, we explore the quantum optical version of classical optical transformations such as the optical Fresnel transform, Hankel transform, fractional Fourier transform, Wigner transform, wavelet transform and Fresnel-Hadamard combinatorial transform, etc. In this way one may gain benefit for developing classical optics theory from research in quantum optics, or vice versa. We can not only find some new quantum mechanical unitary operators which correspond to the known optical transformations, deriving a new theorem for calculating the quantum tomogram of density operators, but also reveal some new classical optical transformations. For example, we find the generalized Fresnel operator (GFO) to correspond to the generalized Fresnel transform (GFT) in classical optics. We derive the GFO's normal product form and its canonical coherent state representation and find that the GFO is a faithful representation of the symplectic group multiplication rule. We show that the GFT is just the transformation matrix element of the GFO in the coordinate representation, such that the composition of two successive GFTs is still a GFT. The ABCD rule of Gaussian beam propagation is directly demonstrated in the context of quantum optics.
Especially, the introduction of quantum mechanical entangled state representations opens up a new avenue for finding new classical optical transformations. The complex wavelet transform and the condition for a mother wavelet are studied in the context of quantum optics too. Throughout our discussions, the coherent state, the entangled state representation of the two-mode squeezing operators and the technique of integration within an ordered product (IWOP) of operators are fully used. All this confirms Dirac's assertion: \textquotedblleft \ }$...${\small for a quantum dynamic system that has a classical analogue, unitary transformation in the quantum theory is the analogue of contact transformation in the classical theory\textquotedblright.} {\small Keywords: Dirac's symbolic method; IWOP technique; entangled state of continuum variables; entangled Fresnel transform; Collins formula; Generalized Fresnel operator; complex wavelet transform; complex Wigner transform; complex fractional Fourier transform; symplectic wavelet transform; entangled symplectic wavelet transform; Symplectic-dilation mixed wavelet transform; fractional Radon transform; new eigenmodes of fractional Fourier transform} \end{abstract} \tableofcontents \section{Introduction} The history of quantum mechanics records that from the very beginning the founders of the quantum theory realized that there might exist a formal connection between classical optics and quantum mechanics.
For example, Schr\"{o}dinger considered that the classical dynamics of a point particle should be the \textquotedblleft geometrical optics\textquotedblright\ approximation of a linear wave equation, in the same way as ray optics is a limiting approximation of wave optics; Schr\"{o}dinger also searched for some quantum mechanical state which behaves like a classical `particle', and this state was later recognized as the coherent state \cite{Glauber,Klauder,Schro}, which plays an essential role in quantum optics theory and laser physics; As Dirac wrote in his famous book $<$ Principles of Quantum Mechanics $>$ \cite{Dirac}: \textquotedblleft$\cdot \cdot \cdot$ \textbf{for a quantum dynamic system that has a classical analogue, unitary transformation in the quantum theory is the analogue of contact transformation in the classical theory}". According to Dirac, there should exist a formal correspondence between quantum optics unitary-transform operators and classical optics transformations. Indeed, in the last century physicists also found some rigorous mathematical analogies between classical optics and quantum mechanics, i.e. the similarity between the optical Helmholtz equation and the time-independent Schr\"{o}dinger equation; Since the 1960s, with the advent of the laser and the appearance of the coherent state theory of the radiation field \cite{Glauber,Klauder,glau1}, quantum optics has experienced rapid development and achieved great success in revealing and explaining the quantum mechanical features of the optical field and the non-classical behavior (for instance, the Hanbury-Brown-Twiss effect, photon antibunching, squeezing, sub-Poissonian photon statistics) of photons in various photon-atom interactions \cite{mandel}. The relationship between classical and quantum coherence has been discussed in the book of Mandel and Wolf \cite{mandel}; The Hermite-Gauss or the Laguerre-Gauss modes of a laser beam are described using the bosonic operator algebra by Nienhuis and Allen \cite{Nienhuis}.
In addition, displaced light beams refracted by lenses according to the laws of geometrical optics were found to be the paraxial-optics analog of coherent states. Besides, the phase space correspondence between classical optics and quantum mechanics, for example the Wigner function theory, has been inspected in the literature \cite{Dragoman}. On the other hand, classical optics, which tackles the vast majority of physical-optics experiments and is based on Maxwell's equations, has never ceased to evolve: physicists have developed various optical transforms for light propagation through lens systems and various continuous media. The two research fields, quantum optics and classical optics, have their own physical objects and conceptions. From the point of view of mathematics, classical optics is framed in group transforms and the associated representations on an appropriate function space, while quantum optics deals with operators and state vectors, so their overlap seems small at first glance. It seems to us that if one wants to relate them further to each other, one needs some new theoretical method to \textquotedblleft bridge\textquotedblright\ them. For example, what is the quantum mechanical unitary operator corresponding to the Fresnel transform in Fourier optics? Is there a so-called Fresnel operator as the image of the classical generalized Fresnel transform? Since generalized Fresnel transforms are very popular in optical instrument design and in optical propagation through lenses and various media, it is worth studying these transforms in the context of quantum optics theory, especially based on the coherent state, the squeezed state \cite{squeezed1,squeezed2} and the newly invented entangled state theory \cite{entangle,entangle1,entangle2,entangle3}. Fortunately, the recently developed technique of integration within an ordered product (IWOP) of operators \cite{IWOP1,IWOP2,IWOP3} is of great aid in studying the quantum optical version of classical optical transformations.
Using the IWOP technique one may gain benefit for classical optics from quantum optics' research, or vice versa. Our present Review is arranged as follows. In section 2 we briefly recall the classical diffraction theory \cite{Born,Goodman}, in preparation for the later sections, in which we shall show that the most frequently employed classical optical transforms have their counterparts in quantum optics theory. In section 3 we introduce the IWOP technique and demonstrate that the completeness relations of fundamental quantum mechanical representations can be recast into normally ordered Gaussian operator form. Using the IWOP technique we can directly perform the asymmetric ket-bra integration $\mu^{-1/2} \int_{-\infty}^{\infty}dq\left \vert q/\mu \right \rangle \left \langle q\right \vert $ in the coordinate representation, which leads to the normally ordered single-mode squeezing operator; this seems to be a direct way to understand the squeezing mechanism as a mapping from the classical scaling $q\rightarrow q/\mu$. In section 4, with the help of the IWOP technique and based on the concept of quantum entanglement of Einstein-Podolsky-Rosen \cite{EPR}, we construct two mutually conjugate entangled states of continuum variables, $\left \vert \eta \right \rangle $ versus $\left \vert \xi \right \rangle ,$ and their \textit{deduced entangled states} (also named \textbf{correlated-amplitude---number-difference entangled states}); they are all qualified to make up quantum mechanical representations. It is remarkable that using the IWOP technique to perform the asymmetric ket-bra integration $\mu^{-1}\int d^{2}\eta \left \vert \eta/\mu \right \rangle \left \langle \eta \right \vert $ leads to the normally ordered two-mode squeezing operator, which implies that the two-mode squeezed state is simultaneously an entangled state.
We point out that the entangled state $\left \vert \eta \right \rangle $ also embodies entanglement in the aspect of correlated amplitude and phase. We are also encouraged that the overlap between two mutually conjugate \textit{deduced entangled states} is just the Bessel function---the optical Hankel transform kernel \cite{Fanpla1}, which again shows that the new representations in the context of physics theory match beautiful mathematical formalism exactly. We then employ the \textit{deduced} entangled states to derive \textbf{the quantum optical version of classical circular harmonic correlation.} Section 5 is devoted to finding a quantum operator which corresponds to the optical Fresnel transform: with use of the coherent state representation and by projecting the classical symplectic transform $z\rightarrow sz-rz^{\ast}$ ($\left \vert s\right \vert ^{2} -\left \vert r\right \vert ^{2}=1)$ in phase space onto the quantum mechanical Hilbert space, we are able to recognize which operator is the single-mode Fresnel operator (FO). It turns out that the 1-dimensional optical Fresnel transform is just the matrix element of the Fresnel operator $F$ in the coordinate eigenstates. Besides, the coherent state projection operator representation of the FO constitutes a faithful realization of the symplectic group, which coincides with the fact that two successive optical Fresnel transforms make up a new Fresnel transform. Then in Section 6, based on the coherent state projection representation of the FO, we prove the $ABCD$ rule for optical propagation in the context of quantum optics. In section 7 the quadratic operator form of the FO is also presented and the four fundamental optical operators are derived by decomposing the FO.
In section 8 we discuss how to apply the Fresnel operator to quantum tomography theory. By introducing the Fresnel quadrature phase $FXF^{\dagger}=X_{F},$ we point out that the Wigner operator's Radon transformation is just the pure state projection operator $\left \vert x\right \rangle _{s,r}\,{}_{s,r}\left \langle x\right \vert $, where $\left \vert x\right \rangle _{s,r}=F\left \vert x\right \rangle $ and $\left \vert x\right \rangle $ is the position eigenstate, so the probability distribution for the Fresnel quadrature phase is the Radon transform of the Wigner function. Moreover, the tomogram of a quantum state $\left \vert \psi \right \rangle $ is just the squared modulus of the wave function $_{s,r}\left \langle x\right \vert \left. \psi \right \rangle .$ This new relation between quantum tomography and the optical Fresnel transform may help experimentalists figure out new approaches for testing tomography. In addition, we propose another new theorem for calculating tomograms, i.e., the tomogram of a density operator $\rho$ is equal to the marginal integration of the classical Weyl correspondence function of $F^{\dagger}\rho F$. In section 9, by virtue of the coherent state and the IWOP method, we propose the two-mode generalized Fresnel operator (GFO); in this case we employ the entangled state representation to relate the 2-mode GFO to classical transforms, since the 2-mode GFO is not simply the direct product of two 1-mode GFOs. The corresponding quantum optics $ABCD$ rule for the two-mode case is also proved. The 2-mode GFO can also be expressed in quadratic operator form in an entangled way. The relation between the optical FT and quantum tomography in the two-mode case is also revealed. In section 10 we propose a kind of integration transformation, $\iint_{-\infty}^{\infty}\frac{dpdq}{\pi}e^{2i\left( p-x\right) \left( q-y\right) }h(p,q)\equiv f\left( x,y\right) ,$ which is invertible and obeys the Parseval theorem.
Remarkably, it can convert a chirplet function into the kernel of the fractional Fourier transform (FrFT). This transformation can also serve for solving some operator ordering problems. In section 11 we employ the entangled state representation to introduce the complex FrFT (CFrFT), which is not the direct product of two independent 1-dimensional FrFTs. The eigenmodes of the CFrFT are derived. New eigenmodes for light propagation in graded-index media and for the fractional Hankel transform are presented. The Wigner transform theory is extended to the complex form and its relation to the CFrFT is shown; the integration transformation of section 10 is also extended to the entangled case. In section 12 we treat the adaptation of the Collins diffraction formula to the CFrFT with the use of the two-mode (3-parameter) squeezing operator and the entangled state representation of continuous variables; in so doing, the quantum mechanical version of the associated theory of classical diffraction and of the classical CFrFT is obtained, which connects classical optics and quantum optics in this aspect. In section 13 we introduce a convenient way of constructing the fractional Radon transform; the complex fractional Radon transform is also proposed. In sections 14 and 15 we discuss the quantum optical version of classical wavelet transforms (WTs), including how to recast the condition on a mother wavelet into the context of quantum optics and how to introduce the complex wavelet transform with use of the entangled state representations. Some properties, such as the Parseval theorem, the inversion formula and the orthogonality property, as well as the relation between the WT and the Wigner-Husimi distribution function, are also discussed.
In section 16, we generalize the usual wavelet transform to the symplectic wavelet transformation (SWT) by using the coherent state representation and making the transformation $z\rightarrow s\left( z-\kappa \right) -r\left( z^{\ast}-\kappa^{\ast }\right) $ ($\left \vert s\right \vert ^{2}-\left \vert r\right \vert ^{2}=1)$ in phase space. The relation between the SWT and the optical Fresnel transformation is revealed. Then the SWT is extended to the entangled case by mapping the classical mixed transformation $\left( z,z^{\prime}\right) \rightarrow \left( sz+rz^{\prime \ast},sz^{\prime}+rz^{\ast}\right) $ in the 2-mode coherent state $\left \vert z,z^{\prime}\right \rangle $ representation. At the end of this section, we introduce a new symplectic-dilation mixed WT by employing a new entangled-coherent state representation $\left \vert \alpha,x\right \rangle $. The corresponding classical optical transform is also presented. In the last section, we introduce the Fresnel-Hadamard combinatorial operator by virtue of the IWOP technique and $\left \vert \alpha,x\right \rangle $. This unitary operator plays the role of a Fresnel transformation for the mode $\frac{a_{1}-a_{2}}{\sqrt{2}}$ and of a Hadamard transformation for the mode $\frac{a_{1}+a_{2}}{\sqrt{2}},$ and the two transformations are combinatorial. All these sections serve to prove the existence of a one-to-one correspondence between the quantum optical operators that transform state vectors in Hilbert space and the classical optical transforms that change the distribution of the optical field.

\section{Some typical classical optical transformations}

Here we briefly review some typical optical transforms based on light diffraction theory. These transformations, as one can see in later sections, are just the classical correspondents of representation transformations between certain quantum mechanical states, some of which are newly constructed.
It was Huygens who gave a first illustrative explanation of wave theory by proposing that every point in the propagating space acts as a sub-excitation source of a new sub-wave. An intuitive theory mathematically supporting Huygens' principle is the scalar diffraction approximation, so named because optical (electromagnetic) fields are actually vector fields, whereby the theory is only approximately valid. This theory is based on the superposition of the combined radiation field of the multiple re-emission sources initiated by Huygens. Light diffraction phenomena have played an important role in the development of the wave theory of light, and now underlie Fourier optics and information optics. The formulation of a diffraction problem essentially considers an incident free-space wave whose propagation is interrupted by an obstacle or mask which changes the phase and/or amplitude of the wave locally by a well determined factor \cite{opticalPhyiscs}. A more rigorous derivation, though still within the scalar-wave scheme, was given by Kirchhoff, who reformulated the diffraction problem as a boundary-value problem, which essentially justifies the use of Huygens' principle. The Fresnel-Kirchhoff (or Rayleigh-Sommerfeld) diffraction formula practically reduces to the Fresnel integral formula in the paraxial and far-field approximation \cite{Born,Goodman}, which reads:
\begin{equation}
U_{2}\left( x_{2},y_{2}\right) =\frac{\exp \left( ikz\right) }{i\lambda z}\int \int_{-\infty}^{\infty}U_{1}\left( x_{1},y_{1}\right) \exp \left \{ i\frac{k}{2z}\left[ \left( x_{2}-x_{1}\right) ^{2}+\left( y_{2} -y_{1}\right) ^{2}\right] \right \} dx_{1}dy_{1}, \label{2.1}
\end{equation}
where $U_{1}\left( x_{1},y_{1}\right) $ is the optical distribution of a 2-dimensional light source and $U_{2}\left( x_{2},y_{2}\right) $ is its image on the observation plane, $\lambda$ is the optical wavelength, $k=\frac{2\pi}{\lambda}$ is the wave number in the vacuum and $z$ is the propagation distance.
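As a numerical illustration (our own sketch, not taken from the text), a one-dimensional analogue of the Fresnel integral (\ref{2.1}) can be applied to a Gaussian input; the output intensity should then follow the textbook Gaussian beam-spreading law $w(z)=w\sqrt{1+(z/kw^{2})^{2}}$. The $1/\sqrt{i\lambda z}$ normalization is the standard 1-D convention (an assumption here, since the text only gives the 2-D formula), and all parameter values are arbitrary choices for the check.

```python
import numpy as np

# 1-D analogue of the Fresnel integral, Eq. (2.1):
#   U2(x2) = e^{ikz}/sqrt(i*lam*z) * Int U1(x1) exp[ik(x2 - x1)^2/(2z)] dx1,
# applied to a Gaussian input exp(-x^2/(2 w^2)).  All parameters below are
# arbitrary choices for the numerical check.
k, z, w = 10.0, 2.0, 1.0
lam = 2 * np.pi / k

x = np.linspace(-6.0, 6.0, 1501)
dx = x[1] - x[0]
U1 = np.exp(-x**2 / (2 * w**2))

# direct quadrature of the diffraction integral on the grid
X2, X1 = np.meshgrid(x, x, indexing="ij")
U2 = (np.exp(1j * k * z) / np.sqrt(1j * lam * z)
      * np.exp(1j * k * (X2 - X1)**2 / (2 * z)) @ U1 * dx)

# second moment of the output intensity; for a Gaussian profile
# exp(-x^2/w(z)^2) one has <x^2> = w(z)^2 / 2
I2 = np.abs(U2)**2
x2_mean = np.sum(x**2 * I2) / np.sum(I2)
w_z = w * np.sqrt(1 + (z / (k * w**2))**2)
```

The quadrature also conserves the total power, reflecting the unitarity of the Fresnel kernel with this normalization.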
When
\begin{equation}
z\gg \frac{k}{2}\left( x_{1}^{2}+y_{1}^{2}\right) _{\max}, \label{2.2}
\end{equation}
is satisfied, Eq. (\ref{2.1}) reduces to
\begin{align}
U_{2}\left( x_{2},y_{2}\right) & =\frac{\exp \left( ikz\right) \exp \left[ i\frac{k}{2z}\left( x_{2}^{2}+y_{2}^{2}\right) \right] }{i\lambda z}\nonumber \\
& \times \int \int_{-\infty}^{\infty}U_{1}\left( x_{1},y_{1}\right) \exp \left[ -i\frac{2\pi}{\lambda z}\left( x_{1}x_{2}+y_{1}y_{2}\right) \right] dx_{1}dy_{1}, \label{2.3}
\end{align}
which is named the Fraunhofer diffraction formula. The Fresnel integral is closely related to the fractional Fourier transform (FrFT); actually, it has been proved that the Fresnel transform can be interpreted as a scaled FrFT with a residual phase curvature \cite{Torre}. The FrFT is a very useful tool in Fourier optics and information optics. This concept was first introduced in 1980 by Namias \cite{Namias} but attracted little attention until the FrFT was defined physically, based on propagation in quadratic graded-index media (GRIN media). Mendlovic and Ozaktas \cite{Mendlovic,Ozakatas} defined the $\alpha$th FrFT as follows: let the original function be input from one side of a quadratic GRIN medium at $z=0$; then the light distribution observed at the plane $z=z_{0}$ corresponds to the ($z_{0}/L$)th fractional Fourier transform of the input function, where $L\equiv(\pi/2)(n_{1}/n_{2})^{1/2}$ is a characteristic distance. The FrFT can also be implemented by lenses. Another approach to introducing the FrFT was taken by Lohmann, who pointed out the algorithmic isomorphism among image rotation, rotation of the Wigner distribution function \cite{Wigner}, and fractional Fourier transforming \cite{Lohmann}.
Lohmann proposed the FrFT as the transform performed on a function that rotates the associated Wigner distribution function by an angle; in this sense, the FrFT bridges the gap between classical optics and optical Wigner distribution theory. Recently, the FrFT has received more and more attention within different contexts of both mathematics and physics \cite{Namias,Mendlovic,Ozakatas,Lohmann,Bernardo}. The FrFT is defined as
\begin{align}
& \mathcal{F}_{\alpha}\left[ U_{1}\right] \left( x_{2},y_{2}\right) =\frac{e^{i(1-\alpha)\frac{\pi}{2}}}{2\sin \left( \frac{\pi}{2}\alpha \right) }\exp \left[ -\frac{i\left( x_{2}^{2}+y_{2}^{2}\right) }{2\tan \left( \frac{\pi}{2}\alpha \right) }\right] \nonumber \\
& \times \int \int_{-\infty}^{\infty}\frac{dx_{1}dy_{1}}{\pi}\exp \left[ -\frac{i\left( x_{1}^{2}+y_{1}^{2}\right) }{2\tan \left( \frac{\pi}{2} \alpha \right) }\right] \exp \left[ \frac{i\left( x_{2}x_{1}+y_{2} y_{1}\right) }{\sin \left( \frac{\pi}{2}\alpha \right) }\right] U_{1}\left( x_{1},y_{1}\right) . \label{2.4}
\end{align}
We can see that $\mathcal{F}_{0}$ is the identity operator and $\mathcal{F}_{1}$ is just the Fourier transform. The most important property of the FrFT is that it obeys the semigroup property, i.e. two successive FrFTs of orders $\alpha$ and $\beta$ make up the FrFT of order $\alpha+\beta.$ A more general form describing the light propagation in an optical system characterized by the $\left[ A,B;C,D\right] $ ray transfer matrix is the Collins diffraction integral formula \cite{Collins1},
\begin{align}
U_{2}\left( x_{2},y_{2}\right) & =\frac{k\exp \left( ikz\right) }{2\pi Bi}\int \int_{-\infty}^{\infty}dx_{1}dy_{1}\,U_{1}\left( x_{1},y_{1}\right) \nonumber \\
& \times \exp \left \{ \frac{ik}{2B}\left[ A\left( x_{1}^{2}+y_{1} ^{2}\right) -2\left( x_{1}x_{2}+y_{1}y_{2}\right) +D\left( x_{2}^{2} +y_{2}^{2}\right) \right] \right \} , \label{2.5}
\end{align}
where $AD-BC=1$ if the system is lossless.
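The semigroup property just stated can be checked numerically. The sketch below (our own check, assuming the standard one-dimensional FrFT kernel $K_{a}(u,x)=\sqrt{(1-i\cot a)/2\pi}\,\exp[i\cot a\,(u^{2}+x^{2})/2-iux/\sin a]$, whose normalization differs from the two-dimensional convention of Eq. (\ref{2.4})) discretizes the kernel on a grid and verifies both the invariance of the ground Gaussian and the index additivity; the grid and order parameters are arbitrary choices.

```python
import numpy as np

# Standard 1-D FrFT kernel (a common convention, assumed here; the 2-D
# normalization of Eq. (2.4) differs):
#   K_a(u, x) = sqrt((1 - i cot a)/(2 pi)) exp[i cot(a)(u^2+x^2)/2 - i u x/sin a]
def frft_matrix(a, x, dx):
    cot, csc = 1.0 / np.tan(a), 1.0 / np.sin(a)
    U, X = np.meshgrid(x, x, indexing="ij")
    norm = np.sqrt((1 - 1j * cot) / (2 * np.pi))
    return norm * np.exp(1j * cot * (U**2 + X**2) / 2 - 1j * csc * U * X) * dx

x = np.linspace(-8.0, 8.0, 1001)
dx = x[1] - x[0]

# 1) the ground Gaussian is invariant under a FrFT of any order
g0 = np.exp(-x**2 / 2)
err_eig = np.max(np.abs(frft_matrix(0.7, x, dx) @ g0 - g0))

# 2) index additivity F_b F_a = F_{a+b}, tested on a displaced Gaussian
f = np.exp(-(x - 1.0)**2 / 2)
g_two = frft_matrix(0.9, x, dx) @ (frft_matrix(0.7, x, dx) @ f)
g_one = frft_matrix(1.6, x, dx) @ f
err_semi = np.max(np.abs(g_two - g_one))
```

Both errors are at the level of the quadrature accuracy, confirming the composition rule for orders with $0<a,b,a+b<\pi$.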
One can easily find the similarity between the Collins formula and the FrFT by a scaling transform and by relating the $\left[ A,B,C,D\right] $ matrix to $\alpha$ in the FrFT \cite{MatrixOptics}. Note that $M=\left( \begin{array} [c]{cc} A & B\\ C & D \end{array} \right) $ is a ray transfer matrix describing optical systems and belonging to the unimodular symplectic group. When treating light propagation through optical elements in the near-axis approximation, the matrices $M$ representing linear transformations are a convenient mathematical tool for calculating the fundamental properties of optical systems, which is the origin of the name \textquotedblleft matrix optics\textquotedblright. In cylindrical coordinates the Collins formula is expressed as \cite{Collins1,Collins2}
\begin{equation}
U_{2}\left( r_{2},\varphi \right) =\frac{i}{\lambda B}{\int}_{0}^{\infty }{\int}_{0}^{2\pi}\exp \left \{ -\frac{i\pi}{\lambda B}\left[ Ar_{1} ^{2}+Dr_{2}^{2}-2r_{1}r_{2}\cos \left( \theta-\varphi \right) \right] \right \} U_{1}\left( r_{1},\theta \right) r_{1}dr_{1}d\theta \label{2.6}
\end{equation}
where $x_{1}=r_{1}\cos \theta,$ $y_{1}=r_{1}\sin \theta,$ $x_{2}=r_{2} \cos \varphi$ and $y_{2}=r_{2}\sin \varphi.$ When $U_{1}\left( r_{1} ,\theta \right) $ has rotational symmetry,
\begin{equation}
U_{1}\left( r_{1},\theta \right) =u_{1}\left( r_{1}\right) \exp \left( im\theta \right) ,\text{ }U_{2}\left( r_{2},\varphi \right) =u_{2}\left( r_{2}\right) \exp \left( im\varphi \right) , \label{2.7}
\end{equation}
then (\ref{2.6}) becomes
\begin{equation}
u_{2}\left( r_{2}\right) =\frac{2\pi}{\lambda B}\exp \left[ i\left( 1+m\right) \frac{\pi}{2}\right] {\int}_{0}^{\infty}\exp \left[ -\frac{i\pi }{\lambda B}\left[ Ar_{1}^{2}+Dr_{2}^{2}\right] \right] J_{m}\left( \frac{2\pi r_{1}r_{2}}{\lambda B}\right) u_{1}\left( r_{1}\right) r_{1}dr_{1}, \label{2.8}
\end{equation}
where we have used the $m$th-order Bessel function
\begin{equation}
J_{m}\left( x\right) =\frac{1}{2\pi}{\int}_{0}^{2\pi}\exp \left[ ix\cos \theta+im\left(
\theta-\frac{\pi}{2}\right) \right] d\theta. \label{2.9}
\end{equation}
When $A=0$, (\ref{2.8}) reduces to the standard Hankel transform (up to a phase factor)
\begin{equation}
u_{2}\left( r_{2}\right) \rightarrow \frac{2\pi}{\lambda B}{\int}_{0} ^{\infty}J_{m}\left( \frac{2\pi r_{1}r_{2}}{\lambda B}\right) u_{1}\left( r_{1}\right) r_{1}dr_{1}. \label{2.10}
\end{equation}
The compact form of the one-dimensional Collins formula is
\begin{equation}
g\left( x_{2}\right) =\int_{-\infty}^{\infty}\mathcal{K}^{M}\left( x_{2},x_{1}\right) f\left( x_{1}\right) dx_{1}, \label{2.11}
\end{equation}
where the transform kernel is
\begin{equation}
\mathcal{K}^{M}\left( x_{2},x_{1}\right) =\frac{1}{\sqrt{2\pi iB}} \exp \left[ \frac{i}{2B}\left( Ax_{1}^{2}-2x_{2}x_{1}+Dx_{2}^{2}\right) \right] , \label{2.12}
\end{equation}
and $M$ is the parameter matrix $\left[ A,B,C,D\right] $. The kernel (\ref{2.12}) is called the generalized Fresnel transform \cite{GFT,GFT1,Alieva1,agarwal}. In the following sections we will show how to find the quantum optical counterparts of these classical optical transformations. For this purpose, in the next section we introduce the IWOP technique and demonstrate how Dirac's symbolic method can be developed and applied to quantum optics theory. We also briefly review some properties of the entangled state \cite{entangle,entangle1,entangle2,entangle3} and reveal the connection between the mutual transform generated by these entangled states and the Hankel transform of classical optics.

\section{The IWOP technique and two mutually conjugate entangled states}

\subsection{The IWOP technique}

The history of mathematics tells us that whenever an important new mathematical symbol appears, certain operational rules coexist with it; the quantum mechanical operators in ket-bra projective form (the core of Dirac's symbolic method) also need their own operational rules.
The terminology \textquotedblleft symbolic method\textquotedblright\ first appeared in the preface of Dirac's book \textit{The Principles of Quantum Mechanics} \cite{Dirac}: \textquotedblleft\textit{The symbolic method, which deals directly in an abstract way with the quantities of fundamental importance}$\cdots$\textit{, however, seems to go more deeply into the nature of things. It enables one to express the physical law in a neat and concise way, and will probably be increasingly used in the future as it becomes better understood and its own special mathematics gets developed.}\textquotedblright\ Two questions then naturally arise: how can the symbolic method be better understood? How can Dirac's symbolic method, especially its mathematics, be developed? We noticed that the Newton-Leibniz integration rule only applies to commuting functions of continuum variables, while operators made of Dirac's symbols (ket versus bra, e.g., $\left \vert q/\mu \right \rangle \left \langle q\right \vert$ of continuous parameter $q$) in quantum mechanics are usually not commutative. Therefore integrations over operators of the type $\left \vert \ \right \rangle \left \langle \ \right \vert$ (where the ket and bra need not be Hermitian conjugates of each other) cannot be performed directly by the Newton-Leibniz rule. We therefore invented the technique of integration within an ordered product (IWOP) of operators, which makes the integration of non-commutative operators possible. The core of the IWOP technique is to arrange non-commutable quantum operators within an ordered product (say, normal ordering), inside which they become commutable, so that the gap between $q$-numbers and $c$-numbers is \textquotedblleft narrowed\textquotedblright. However, the nature of the operators within $\colon$ $\colon$ is not changed: they are still $q$-numbers, not $c$-numbers. After the integration over $c$-numbers within the ordered product is performed, we can get rid of the normal ordering symbol by putting the integration result in normally ordered form \cite{Weyl}. The IWOP technique thus bridges this mathematical gap between classical mechanics and quantum mechanics, and further reveals the beauty and elegance of Dirac's symbolic method and transformation theory. It develops the symbolic method significantly, i.e. it makes Dirac's representation theory and the transformation theory more plentiful and, consequently, better understood. Various applications of the IWOP technique have been found, including constructing entangled states, developing nonlinear coherent state theory, Wigner function theory, etc.; many new unitary operators, operator identities and quantum mechanical representations can be derived as well, and these are partly summarized in the review article \cite{entangle1}. We begin by listing some properties of the normal product of operators, in which all bosonic creation operators $a^{\dagger}$ stand on the left of the annihilation operators $a$ in a monomial of $a^{\dagger}$ and $a$.

1. The order of Bose operators $a$ and $a^{\dagger}$ within a normally ordered product can be permuted. That is to say, even though $\left[ a,a^{\dagger }\right] =1$, we have $\colon aa^{\dagger}\colon=\colon a^{\dagger }a\colon=a^{\dagger}a,$ where $\colon$ $\colon$ denotes normal ordering.

2. $c$-numbers can be taken out of the symbol $\colon$ $\colon$ as one wishes.

3. A symbol $\colon$ $\colon$ nested within another symbol $\colon$ $\colon$ can be deleted.

4. The vacuum projection operator $|0\rangle \langle0|$ has the normal product form
\begin{equation}
|0\rangle \langle0|=\colon e^{-a^{\dagger}a}\colon. \label{3.1}
\end{equation}

5.
A normally ordered product can be integrated or differentiated with respect to a $c$-number provided the integration is convergent.

\subsection{The IWOP technique for deriving the normally ordered Gaussian form of the completeness relations of fundamental quantum mechanical representations}

As a first application of the IWOP technique (in the following, unless otherwise stated, we take $\hbar=\omega=m=1$ for convenience), we use the Fock representation of the coordinate eigenvector $Q|q\rangle=q|q\rangle,$ ($Q=(a+a^{\dagger })/\sqrt{2}$)
\begin{equation}
|q\rangle=\pi^{-1/4}e^{-\frac{q^{2}}{2}+\sqrt{2}qa^{\dagger}-\frac {a^{\dagger2}}{2}}|0\rangle, \label{3.7}
\end{equation}
to perform the integration below
\begin{align}
S_{1} & \equiv \int_{-\infty}^{\infty}\frac{dq}{\sqrt{\mu}}|\frac{q}{\mu }\rangle \langle q|\nonumber \\
& =\int_{-\infty}^{\infty}\frac{dq}{\sqrt{\pi \mu}}e^{-\frac{q^{2}}{2\mu^{2} }+\sqrt{2}\frac{q}{\mu}a^{\dagger}-\frac{a^{\dagger2}}{2}}|0\rangle \langle0|e^{-\frac{q^{2}}{2}+\sqrt{2}qa-\frac{a^{2}}{2}}. \label{3.8a}
\end{align}
Substituting (\ref{3.1}) into (\ref{3.8a}) we see
\begin{equation}
S_{1}=\int_{-\infty}^{\infty}\frac{dq}{\sqrt{\pi \mu}}e^{-\frac{q^{2}}{2\mu ^{2}}+\sqrt{2}\frac{q}{\mu}a^{\dagger}-\frac{a^{\dagger2}}{2}}\colon e^{-a^{\dagger}a}\colon e^{-\frac{q^{2}}{2}+\sqrt{2}qa-\frac{a^{2}}{2}}. \label{3.9}
\end{equation}
Note that on the left of $\colon e^{-a^{\dagger}a}\colon$ are all creation operators, while on its right are all annihilation operators, so the whole integral is in normal ordering; thus, using property 1, we have
\begin{equation}
S_{1}=\int_{-\infty}^{\infty}\frac{dq}{\sqrt{\pi \mu}}\colon e^{-\frac{q^{2} }{2}(1+\frac{1}{\mu^{2}})+\sqrt{2}q(\frac{a^{\dagger}}{\mu}+a)-\frac{1} {2}(a+a^{\dagger})^{2}}\colon. \label{3.10}
\end{equation}
Since $a$ commutes with $a^{\dagger}$ within $\colon$ $\colon$, $a^{\dagger}$ and $a$ can be treated as if they were parameters while the integration is performed.
Therefore, by setting $\mu=e^{\lambda}$, so that $\operatorname{sech}\lambda=\frac{2\mu}{1+\mu^{2}}$ and $\tanh \lambda=\frac{\mu^{2}-1}{\mu^{2}+1},$ we are able to perform the integration and obtain
\begin{align}
S_{1} & =\sqrt{\frac{2\mu}{1+\mu^{2}}}\colon \exp \left \{ \frac{\left( \frac{a^{\dagger}}{\mu}+a\right) ^{2}}{1+\frac{1}{\mu^{2}}}-\frac{1} {2}\left( a+a^{\dagger}\right) ^{2}\right \} \colon \nonumber \\
& =\left( \operatorname*{sech}\lambda \right) ^{1/2}e^{-\frac{a^{\dagger2} }{2}\tanh \lambda}\colon e^{\left( \operatorname*{sech}\lambda-1\right) a^{\dagger}a}\colon e^{\frac{a^{2}}{2}\tanh \lambda}, \label{3.01}
\end{align}
which is just the single-mode squeezing operator in normal ordering appearing in many references. It is worth mentioning that we have not used the SU(1,1) Lie algebra method in the derivation: the integration automatically arranges the squeezing operator in normal ordering. Using
\begin{align}
e^{\lambda a^{\dagger}a} & =\sum_{n=0}^{\infty}e^{\lambda n}|n\rangle \langle n|=\sum_{n=0}^{\infty}e^{\lambda n}\frac{a^{\dagger n}}{n!}\colon e^{-a^{\dagger}a}\colon a^{n}\nonumber \\
& =\colon \exp[\left( e^{\lambda}-1\right) a^{\dagger}a]\colon, \label{3.02}
\end{align}
Eq. (\ref{3.01}) becomes
\begin{equation}
\int_{-\infty}^{\infty}\frac{dq}{\sqrt{\mu}}|\frac{q}{\mu}\rangle \langle q|=e^{-\frac{a^{\dagger2}}{2}\tanh \lambda}e^{(a^{\dagger}a+\frac{1}{2})\ln \operatorname*{sech}\lambda}e^{\frac{a^{2}}{2}\tanh \lambda}. \label{3.03}
\end{equation}
This shows manifestly that the classical dilation $q\rightarrow \frac{q}{\mu}$ maps into the normally ordered squeezing operator. It also exhibits that the fundamental representation theory can be formulated in a less abstract way, since we can now directly perform integrals over ket-bra projection operators. Moreover, the IWOP technique can be employed to perform many complicated integrations over ket-bra projection operators.
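Eq. (\ref{3.03}) can be verified independently in a truncated Fock space. The sketch below (our own numerical check; the helper names `expm_nilpotent` and `ho_wavefunctions`, the cutoff $D$ and the grid are arbitrary choices) computes the matrix elements $\langle m|S_{1}|n\rangle$ of the quadrature definition $S_{1}=\int dq\,\mu^{-1/2}|q/\mu\rangle\langle q|$ using harmonic-oscillator wave functions, and compares them with the operator product on the right-hand side of (\ref{3.03}):

```python
import numpy as np

lam_s = 0.3                 # squeezing parameter lambda (arbitrary)
mu = np.exp(lam_s)
D = 60                      # Fock-space cutoff
a = np.diag(np.sqrt(np.arange(1.0, D)), 1)   # truncated annihilation operator
ad = a.T
t = np.tanh(lam_s)
sech = 1.0 / np.cosh(lam_s)

def expm_nilpotent(X, kmax=40):
    # exact matrix exponential for a nilpotent matrix (the series terminates)
    E, T = np.eye(len(X)), np.eye(len(X))
    for k in range(1, kmax):
        T = T @ X / k
        E = E + T
    return E

# right-hand side of Eq. (3.03); the middle factor is diagonal in Fock space
rhs = (expm_nilpotent(-ad @ ad * t / 2)
       @ np.diag(sech ** (np.arange(D) + 0.5))
       @ expm_nilpotent(a @ a * t / 2))

def ho_wavefunctions(nmax, q):
    # psi_n(q) via the standard three-term recurrence
    psi = np.zeros((nmax, len(q)))
    psi[0] = np.pi ** -0.25 * np.exp(-q**2 / 2)
    psi[1] = np.sqrt(2.0) * q * psi[0]
    for n in range(1, nmax - 1):
        psi[n + 1] = (np.sqrt(2.0 / (n + 1)) * q * psi[n]
                      - np.sqrt(n / (n + 1.0)) * psi[n - 1])
    return psi

q = np.linspace(-10.0, 10.0, 2001)
dq = q[1] - q[0]
nmax = 12
# <m| S1 |n> = Int dq/sqrt(mu) psi_m(q/mu) psi_n(q)
lhs = ho_wavefunctions(nmax, q / mu) @ ho_wavefunctions(nmax, q).T * dq / np.sqrt(mu)

err = np.max(np.abs(lhs - rhs[:nmax, :nmax]))
```

Within the retained block the two constructions agree to quadrature accuracy, e.g. $\langle0|S_{1}|0\rangle=\operatorname{sech}^{1/2}\lambda$.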
There is a deep ditch between the theory of quantum mechanical operators ($q$-numbers) and that of classical numbers ($c$-numbers). The IWOP technique arranges non-commutable operators within an ordered product symbol in a way that they become commutable, so the \textquotedblleft ditch\textquotedblright\ between $q$-numbers and $c$-numbers is shoaled. However, the nature of the operators is not changed: they are still $q$-numbers, not $c$-numbers. After the integration over $c$-numbers within the ordered product is performed, we can finally get rid of the normal ordering symbol by using (\ref{3.02}). When $\mu=1,$ Eq. (\ref{3.03}) becomes
\begin{align}
\int_{-\infty}^{\infty}dq|q\rangle \langle q| & =\int_{-\infty}^{\infty} \frac{dq}{\sqrt{\pi}}\colon e^{-q^{2}+2q(\frac{a+a^{\dagger}}{\sqrt{2}} )-\frac{1}{2}(a+a^{\dagger})^{2}}\colon \nonumber \\
& =\int_{-\infty}^{\infty}\frac{dq}{\sqrt{\pi}}\colon e^{-\left( q-Q\right) ^{2}}\colon=1,\; \; \label{3.04}
\end{align}
a simple real Gaussian integration! This immediately leads us to put the completeness relation of the momentum representation into the normally ordered Gaussian form
\begin{equation}
\int_{-\infty}^{\infty}dp|p\rangle \langle p|=\int_{-\infty}^{\infty}\frac {dp}{\sqrt{\pi}}\colon e^{-\left( p-P\right) ^{2}}\colon=1, \label{3.05}
\end{equation}
where $P=\left( a-a^{\dagger}\right) /(i\sqrt{2})$ and $\left \vert p\right \rangle $ is the momentum eigenvector, $P\left \vert p\right \rangle =p\left \vert p\right \rangle $,
\begin{equation}
\left \vert p\right \rangle =\pi^{-\frac{1}{4}}\exp \left[ -\frac{1}{2} p^{2}+i\sqrt{2}pa^{\dagger}+\frac{1}{2}a^{\dagger2}\right] \left \vert 0\right \rangle . \label{3.8}
\end{equation}
In addition, we should notice that $\left \vert q\right \rangle $ and $\left \vert p\right \rangle $ are related by the Fourier transform (FT), i.e. $\left \langle p\right \vert \left.
q\right \rangle =\frac{1}{\sqrt{2\pi}} \exp \left( -iqp\right) ,$ and the integral kernel of the Fraunhofer diffraction formula in one dimension is just such a FT, so the FT of classical optics has its correspondence in a transform between quantum mechanical representations. This enlightens us that, in order to find a more general analogy between unitary operators in quantum optics and transformations in classical optics, we should construct new representations for quantum optics theory; these are the bipartite entangled state and the many-particle entangled state. These ideal states can be implemented by optical devices and optical networks \cite{JPA1}. In the following we focus on the bipartite entangled state.

\subsection{Single-mode Wigner operator}

Combining (\ref{3.04}) and (\ref{3.05}) we can obtain
\begin{equation}
\pi^{-1}\colon e^{-\left( q-Q\right) ^{2}-\left( p-P\right) ^{2}} \colon \equiv \Delta \left( q,p\right) , \label{3.06}
\end{equation}
which is just the normally ordered Wigner operator, since its marginal integrations give $|p\rangle \langle p|$ and $|q\rangle \langle q|$ respectively, i.e.,
\begin{align}
\int_{-\infty}^{\infty}dq\Delta \left( q,p\right) & =\frac{1}{\sqrt{\pi} }\colon e^{-\left( p-P\right) ^{2}}\colon=|p\rangle \langle p|,\label{3.07}\\
\int_{-\infty}^{\infty}dp\Delta \left( q,p\right) & =\frac{1}{\sqrt{\pi} }\colon e^{-\left( q-Q\right) ^{2}}\colon=|q\rangle \langle q|. \label{3.08}
\end{align}
Thus the Wigner function of a quantum state $\rho$ can be calculated as $W(q,p)=$Tr$[\rho \Delta \left( q,p\right) ]$. On the other hand, the Wigner operator (\ref{3.06}) can be recast into the coherent state representation,
\begin{equation}
\Delta \left( q,p\right) \rightarrow \Delta \left( \alpha,\alpha^{\ast }\right) =\int \frac{d^{2}z}{\pi}\left \vert \alpha+z\right \rangle \left \langle \alpha-z\right \vert e^{\alpha z^{\ast}-\alpha^{\ast}z}, \label{3.09}
\end{equation}
where $\left \vert z\right \rangle $ is a coherent state.
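As a brief numerical aside, the marginal property (\ref{3.07})--(\ref{3.08}) can be checked on a concrete state. The sketch below uses the textbook Wigner function of the one-photon Fock state $|1\rangle$ (a standard result assumed here, not derived in the text) and verifies that its $p$-marginal reproduces the coordinate probability density $|\psi_{1}(q)|^{2}$:

```python
import numpy as np

# Wigner function of |1> (textbook result, assumed):
#   W(q, p) = (1/pi) (2(q^2 + p^2) - 1) exp(-(q^2 + p^2))
# Its p-marginal must equal |psi_1(q)|^2 = (2 q^2 / sqrt(pi)) exp(-q^2).
q = np.linspace(-6.0, 6.0, 601)
p = np.linspace(-6.0, 6.0, 601)
dp = p[1] - p[0]
Q, P = np.meshgrid(q, p, indexing="ij")

W = (2 * (Q**2 + P**2) - 1) * np.exp(-(Q**2 + P**2)) / np.pi
marginal = W.sum(axis=1) * dp             # integrate over p

psi1_sq = 2 * q**2 * np.exp(-q**2) / np.sqrt(np.pi)
err = np.max(np.abs(marginal - psi1_sq))
```

The agreement is at machine-quadrature level, as expected for a smooth, rapidly decaying integrand.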
In fact, using the IWOP technique we can obtain \begin{align} \Delta \left( \alpha,\alpha^{\ast}\right) & =\int \frac{d^{2}z}{\pi} \colon \exp \{-\left \vert z\right \vert ^{2}+\left( \alpha+z\right) a^{\dag }+\left( \alpha^{\ast}-z^{\ast}\right) a\nonumber \\ & +\alpha z^{\ast}-\alpha^{\ast}z-\left \vert \alpha \right \vert ^{2}\} \colon \nonumber \\ & =\frac{1}{\pi}\colon \exp \left \{ -2\left( a-\alpha \right) \left( a^{\dag}-\alpha^{\ast}\right) \right \} \colon, \label{3.010} \end{align} which is the same as (\ref{3.06}). \subsection{Entangled state $\left \vert \eta \right \rangle $ and its Fourier transform in complex form} The concept of quantum entanglement was first employed by Einstein, Podolsky and Rosen (EPR) in arguing that quantum mechanics is incomplete, when they observed that two particles' relative position $Q_{1}-Q_{2}$ and total momentum $P_{1}+P_{2}$ are commutable. Hinted by EPR, the bipartite entangled state $\left \vert \eta \right \rangle $ is introduced as \cite{fank,fanyue} \begin{equation} \left \vert \eta \right \rangle =\exp \left[ -\frac{1}{2}\left \vert \eta \right \vert ^{2}+\eta a_{1}^{\dagger}-\eta^{\ast}a_{2}^{\dagger} +a_{1}^{\dagger}a_{2}^{\dagger}\right] \left \vert 00\right \rangle . 
\label{3.11} \end{equation} $\left \vert \eta=\eta_{1}+\mathtt{i}\eta_{2}\right \rangle $ is the common eigenstate of the relative coordinate $Q_{1}-Q_{2}$ and the total momentum $P_{1}+P_{2}$, \begin{equation} \left( Q_{1}-Q_{2}\right) \left \vert \eta \right \rangle =\sqrt{2}\eta _{1}\left \vert \eta \right \rangle ,\text{ }\, \text{\ }\left( P_{1} +P_{2}\right) \left \vert \eta \right \rangle =\sqrt{2}\eta_{2}\left \vert \eta \right \rangle , \label{3.12} \end{equation} where $Q_{j}=(a_{j}+a_{j}^{\dagger})/\sqrt{2},\ P_{j}=(a_{j}-a_{j}^{\dagger })/(\mathtt{i}\sqrt{2}),$ $j=1,2.$ Using the IWOP technique, we can immediately prove that $\left \vert \eta \right \rangle $ possesses the completeness relation \begin{equation} \int \frac{d^{2}\eta}{\pi}\left \vert \eta \right \rangle \left \langle \eta \right \vert =\int \frac{d^{2}{\eta}}{\pi}\colon e^{-\left[ \eta^{\ast }-(a_{1}^{\dagger}-a_{2})\right] \left[ \eta-(a_{1}-a_{2}^{\dagger})\right] }\colon=1,\,d^{2}\eta=d\eta_{1}d\eta_{2}, \label{3.13} \end{equation} and the orthonormal relation \begin{equation} \left \langle \eta \right \vert \left. \eta^{\prime}\right \rangle =\pi \delta(\eta_{1}-\eta_{1}^{\prime})\delta(\eta_{2}-\eta_{2}^{\prime}). \label{3.14} \end{equation} The Schmidt decomposition of $\left \vert \eta \right \rangle $ is \begin{equation} \left \vert \eta \right \rangle =e^{-i\eta_{2}\eta_{1}}\int_{-\infty}^{\infty }dq\left \vert q\right \rangle _{1}\otimes \left \vert q-\sqrt{2}\eta _{1}\right \rangle _{2}e^{i\sqrt{2}\eta_{2}q}. \label{3.21} \end{equation} The $\left \vert \eta \right \rangle $ state can also be Schmidt-decomposed in the momentum eigenvector space as \begin{equation} \left \vert \eta \right \rangle =e^{i\eta_{1}\eta_{2}}\int_{-\infty}^{\infty }dp\left \vert p\right \rangle _{1}\otimes \left \vert \sqrt{2}\eta_{2} -p\right \rangle _{2}e^{-i\sqrt{2}\eta_{1}p}. 
\label{3.22} \end{equation} The state $\left \vert \eta \right \rangle $ is physically appealing in quantum optics theory, because the two-mode squeezing operator has its natural representation in the $\left \langle \eta \right \vert $ basis \cite{fanyue} \begin{equation} {\displaystyle \int} \frac{d^{2}\eta}{\pi \mu}\left \vert \eta/\mu \right \rangle \left \langle \eta \right \vert =e^{a_{1}^{^{\dagger}}a_{2}^{^{\dagger}}\tanh \lambda} e^{(a_{1}^{^{\dagger}}a_{1}+a_{2}^{^{\dagger}}a_{2}+1)\ln \operatorname*{sech} \lambda}e^{-a_{1}a_{2}\tanh \lambda},\; \mu=e^{\lambda}. \label{3.23} \end{equation} The proof of (\ref{3.23}) proceeds by virtue of the IWOP technique: \begin{align} {\displaystyle \int} \frac{d^{2}\eta}{\pi \mu}\left \vert \eta/\mu \right \rangle \left \langle \eta \right \vert & = {\displaystyle \int} \frac{d^{2}\eta}{\pi \mu}\colon \exp \left \{ -\frac{|\eta|^{2}}{2}\left( 1+\frac{1}{\mu^{2}}\right) +\eta \left( \frac{a_{1}^{^{\dagger}}}{\mu} -a_{2}\right) \right. \nonumber \\ & +\left. \eta^{\ast}\left( a_{1}-\frac{a_{2}^{^{\dagger}}}{\mu}\right) +a_{1}^{\dagger}a_{2}^{^{\dagger}}+a_{1}a_{2}-a_{1}^{\dagger}a_{1} -a_{2}^{\dagger}a_{2}\right \} \nonumber \\ & =\frac{2\mu}{1+\mu^{2}}\colon \exp \left \{ \frac{\mu^{2}}{1+\mu^{2}}\left( \frac{a_{1}^{^{\dagger}}}{\mu}-a_{2}\right) \left( a_{1}-\frac {a_{2}^{^{\dagger}}}{\mu}\right) -\left( a_{1}-a_{2}^{^{\dagger}}\right) \left( a_{1}^{^{\dagger}}-a_{2}\right) \right \} \colon \nonumber \\ & =e^{a_{1}^{^{\dagger}}a_{2}^{^{\dagger}}\tanh \lambda}e^{(a_{1}^{^{\dagger} }a_{1}+a_{2}^{^{\dagger}}a_{2}+1)\ln \operatorname*{sech}\lambda}e^{-a_{1} a_{2}\tanh \lambda}\equiv S_{2}, \label{3.24} \end{align} so the necessity of introducing $\left \vert \eta \right \rangle $ into quantum optics is clear. 
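Once inside the normal ordering symbol, integrations such as those in (\ref{3.13}) and (\ref{3.24}) all reduce to the elementary $c$-number Gaussian integral $\int \frac{d^{2}z}{\pi}e^{-\zeta|z|^{2}+\xi z+\eta z^{\ast}}=\zeta^{-1}e^{\xi \eta/\zeta}$ ($\operatorname{Re}\zeta>0$). Below is a minimal numerical sketch of this identity (our illustration; the parameter values are arbitrary test choices).

```python
import cmath
import math

def gaussian_int(zeta, xi, eta, L=6.0, n=240):
    # brute-force evaluation of \int d^2z/pi exp(-zeta|z|^2 + xi z + eta z*)
    # over the truncated square [-L, L]^2, z = x + iy, d^2z = dx dy
    h = 2*L/n
    total = 0.0 + 0.0j
    for i in range(n + 1):
        x = -L + i*h
        wx = 0.5 if i in (0, n) else 1.0
        for j in range(n + 1):
            y = -L + j*h
            wy = 0.5 if j in (0, n) else 1.0
            z = complex(x, y)
            total += wx*wy*cmath.exp(-zeta*(x*x + y*y) + xi*z + eta*z.conjugate())
    return total*h*h/math.pi

def gaussian_closed(zeta, xi, eta):
    # closed form zeta^{-1} exp(xi*eta/zeta), valid for Re(zeta) > 0
    return cmath.exp(xi*eta/zeta)/zeta
```

The same identity, applied entry-wise to the commuting symbols inside $\colon \; \colon$, is what collapses the four-line computation in (\ref{3.24}) to its final factorized form.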
$S_{2}$ squeezes $\left \vert \eta \right \rangle $ in the manifest way \begin{equation} S_{2}\left \vert \eta \right \rangle =\frac{1}{\mu}\left \vert \eta/\mu \right \rangle ,\text{ \ }\mu=e^{\lambda}\ , \label{3.15} \end{equation} and the two-mode squeezed state itself is an entangled state which entangles the idler mode and the signal mode as an outcome of a parametric-down-conversion process \cite{PDC}. We can also introduce the conjugate state of $\left \vert \eta \right \rangle $ \cite{PRA}, \begin{equation} \left \vert \xi \right \rangle =\exp \left[ -\frac{1}{2}\left \vert \xi \right \vert ^{2}+\xi a_{1}^{\dagger}+\xi^{\ast}a_{2}^{\dagger}-a_{1}^{\dagger} a_{2}^{\dagger}\right] \left \vert 00\right \rangle ,\text{ }\xi=\xi_{1} +i\xi_{2}, \label{3.17} \end{equation} which obeys the eigen-equations \begin{equation} \left( Q_{1}+Q_{2}\right) \left \vert \xi \right \rangle =\sqrt{2}\xi _{1}\left \vert \xi \right \rangle ,\, \left( P_{1}-P_{2}\right) \left \vert \xi \right \rangle =\sqrt{2}\xi_{2}\left \vert \xi \right \rangle . \label{3.18} \end{equation} Because $\left[ \left( Q_{1}-Q_{2}\right) ,\left( P_{1}-P_{2}\right) \right] =2\mathtt{i},$ we say that $\left \vert \xi \right \rangle $ and $\left \vert \eta \right \rangle $ are mutually conjugate. The completeness and orthonormal relations of $\left \vert \xi \right \rangle $ are \begin{align} \int \frac{d^{2}\xi}{\pi}\left \vert \xi \right \rangle \left \langle \xi \right \vert & =\int \frac{d^{2}{\xi}}{\pi}\colon e^{-\left[ \xi^{\ast }-(a_{1}^{\dagger}+a_{2})\right] \left[ \xi-(a_{1}+a_{2}^{\dagger})\right] }\colon=1,\label{3.19a}\\ \left \langle \xi \right \vert \left. \xi^{\prime}\right \rangle & =\pi \delta(\xi_{1}-\xi_{1}^{\prime})\delta(\xi_{2}-\xi_{2}^{\prime}),\text{ } d^{2}\xi=d\xi_{1}d\xi_{2}, \label{3.19} \end{align} respectively. 
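The commutation relations behind the EPR pairs, $[Q_{1}-Q_{2},P_{1}+P_{2}]=0$ and $[(Q_{1}-Q_{2}),(P_{1}-P_{2})]=2i$, can be checked directly in a truncated Fock space. The sketch below (our illustration, not from the original text) builds truncated matrices for $a$, $a^{\dagger}$ and compares matrix elements away from the truncation boundary, where $[a,a^{\dagger}]=1$ necessarily fails.

```python
import math

N = 6  # Fock-space truncation per mode (an arbitrary test choice)

def zeros(n, m):
    return [[0j]*m for _ in range(n)]

# truncated annihilation operator: a|n> = sqrt(n)|n-1>
a = zeros(N, N)
for n in range(1, N):
    a[n-1][n] = complex(math.sqrt(n))
ad = [[a[j][i].conjugate() for j in range(N)] for i in range(N)]  # creation op

def add(A, B, s=1):  # A + s*B
    return [[A[i][j] + s*B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def mul(A, B):
    C = zeros(len(A), len(B[0]))
    for i in range(len(A)):
        for k in range(len(B)):
            if A[i][k] != 0:
                for j in range(len(B[0])):
                    C[i][j] += A[i][k]*B[k][j]
    return C

def kron(A, B):
    n, m = len(A), len(B)
    C = zeros(n*m, n*m)
    for i in range(n):
        for j in range(n):
            for k in range(m):
                for l in range(m):
                    C[i*m + k][j*m + l] = A[i][j]*B[k][l]
    return C

I2 = [[1+0j if i == j else 0j for j in range(N)] for i in range(N)]
Q = [[(a[i][j] + ad[i][j])/math.sqrt(2) for j in range(N)] for i in range(N)]
P = [[(a[i][j] - ad[i][j])/(1j*math.sqrt(2)) for j in range(N)] for i in range(N)]

Q1mQ2 = add(kron(Q, I2), kron(I2, Q), -1)   # Q1 - Q2
P1pP2 = add(kron(P, I2), kron(I2, P))       # P1 + P2
P1mP2 = add(kron(P, I2), kron(I2, P), -1)   # P1 - P2

def comm(A, B):
    return add(mul(A, B), mul(B, A), -1)

C1 = comm(Q1mQ2, P1pP2)   # expected: 0 (the commuting EPR pair)
C2 = comm(Q1mQ2, P1mP2)   # expected: 2i * identity (the conjugate pair)

def max_dev(C, target):
    # largest deviation over matrix elements with all Fock indices < N-1,
    # i.e. away from the truncation boundary
    d = 0.0
    for n1 in range(N - 1):
        for n2 in range(N - 1):
            for m1 in range(N - 1):
                for m2 in range(N - 1):
                    t = target if (n1, n2) == (m1, m2) else 0j
                    d = max(d, abs(C[n1*N + n2][m1*N + m2] - t))
    return d
```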
$\left \vert \eta \right \rangle $ and $\left \vert \xi \right \rangle $ can be related to each other by \begin{equation} \left \langle \eta|\xi \right \rangle =\frac{1}{2}\exp \left( \frac{\xi \eta ^{\ast}-\xi^{\ast}\eta}{2}\right) . \label{3.20} \end{equation} Since $\xi^{\ast}\eta-\xi \eta^{\ast}$ is a purely imaginary number, Eq. (\ref{3.20}) is the Fourier transform kernel in complex form (also named the entangled Fourier transform; this concept can be extended to multipartite entangled states). It will be shown in later sections that, departing from the entangled states $\left \vert \eta \right \rangle $ and $\left \vert \xi \right \rangle $ and the generalized Fresnel operator, new entangled Fresnel transforms in classical optics can be found. \subsection{Two-mode Wigner operator in the $\left \vert \eta \right \rangle $ representation} Combining (\ref{3.13}) and (\ref{3.19a}) we can construct the following operator \begin{align} & \frac{1}{\pi^{2}}\colon e^{-\left[ \sigma^{\ast}-(a_{1}^{\dagger} -a_{2})\right] \left[ \sigma-(a_{1}-a_{2}^{\dagger})\right] -\left[ \gamma^{\ast}-(a_{1}^{\dagger}+a_{2})\right] \left[ \gamma-(a_{1} +a_{2}^{\dagger})\right] }\colon \nonumber \\ & =\Delta \left( \alpha,\alpha^{\ast}\right) \otimes \Delta \left( \beta,\beta^{\ast}\right) \equiv \Delta \left( \sigma,\gamma \right) , \label{3.25} \end{align} where \begin{equation} \sigma=\alpha-\beta^{\ast},\; \gamma=\alpha+\beta^{\ast}. \label{3.26} \end{equation} Eq. (\ref{3.25}) is just equal to the direct product of two single-mode Wigner operators. It is convenient to express the Wigner operator in the $\left \vert \eta \right \rangle $ representation as \cite{R1} \begin{equation} \Delta \left( \sigma,\gamma \right) =\int \frac{d^{2}\eta}{\pi^{3}}\left \vert \sigma-\eta \right \rangle \left \langle \sigma+\eta \right \vert e^{\eta \gamma^{\ast}-\eta^{\ast}\gamma}. 
\label{3.27} \end{equation} For a two-mode correlated system, it is preferable to use $\Delta \left( \sigma,\gamma \right) $ to calculate the Wigner functions of quantum states. For example, noticing $\left \langle \eta \right \vert \left. 00\right \rangle =\exp \{-\left \vert \eta \right \vert ^{2}/2\},$ the two-mode squeezed state's Wigner function is \begin{align} & \left \langle 00\right \vert S_{2}^{\dag}\left( \mu \right) \Delta \left( \sigma,\gamma \right) S_{2}\left( \mu \right) \left \vert 00\right \rangle \nonumber \\ & =\left \langle 00\right \vert \mu^{2}\int \frac{d^{2}\eta}{\pi^{3}}\left \vert \mu \left( \sigma-\eta \right) \right \rangle \left \langle \mu \left( \sigma+\eta \right) \right \vert e^{\eta \gamma^{\ast}-\eta^{\ast}\gamma }\left \vert 00\right \rangle \nonumber \\ & =\pi^{-2}\exp \left[ -\mu^{2}\left \vert \sigma \right \vert ^{2}-\left \vert \gamma \right \vert ^{2}/\mu^{2}\right] . \label{3.28} \end{align} \section{Two deduced entangled state representations and Hankel transform} \subsection{Deduced entangled states} Starting from the entangled state $\left \vert \eta=re^{i\theta}\right \rangle $ and introducing an integer $m$, we can deduce new states \cite{fanzou}, \begin{equation} \left \vert m,r\right \rangle =\frac{1}{2\pi}\int_{0}^{2\pi}d\theta \left \vert \eta=re^{i\theta}\right \rangle e^{-im\theta}, \label{4.1} \end{equation} which are worthy of attention because, when we apply the number-difference operator \begin{equation} D\equiv a_{1}^{\dagger}a_{1}-a_{2}^{\dagger}a_{2} \label{4.2} \end{equation} to $\left \vert \eta \right \rangle ,$ using Eq. (\ref{3.11}) we see \begin{equation} D\left \vert \eta \right \rangle =\left( \eta a_{1}^{^{\dagger}}+\eta^{\ast }a_{2}^{^{\dagger}}\right) \left \vert \eta \right \rangle =-i\frac{\partial }{\partial \theta}\left \vert \eta \right \rangle ,\; \eta=|\eta|e^{i\theta}, \label{4.3} \end{equation} so the number-difference operator corresponds to a differential operation 
$i\frac{\partial}{\partial \theta}$ in the $\left \langle \eta \right \vert \;$representation, this is a remarkable property of $\left \langle \eta \right \vert $. It then follows \begin{equation} D\left \vert m,r\right \rangle =\int_{0}^{2\pi}\frac{d\theta}{2\pi}e^{-im\theta }\left( -i\frac{\partial}{\partial \theta}\left \vert \eta=re^{i\theta }\right \rangle \right) =m\left \vert m,r\right \rangle . \label{4.4} \end{equation} On the other hand, by defining \begin{equation} K\equiv(a_{1}-a_{2}^{\dagger})(a_{1}^{\dagger}-a_{2}), \label{4.5} \end{equation} we see $\left[ D,K\right] =0,$ and $\left \vert m,r\right \rangle $ is its eigenstate, \begin{equation} \ K\left \vert m,r\right \rangle =r^{2}\left \vert m,r\right \rangle , \label{4.6} \end{equation} where $K$ is named correlated-amplitude operator since $K\left \vert \eta \right \rangle =|\eta|^{2}\left \vert \eta \right \rangle .$ Thus we name $\left \vert m,r\right \rangle $ correlated-amplitude---number-difference entangled states. It is not difficult to prove completeness and orthonormal property of $\left \vert m,r\right \rangle $, \begin{equation} \sum_{m=-\infty}^{\infty}\int_{0}^{\infty}d\left( r^{2}\right) \left \vert m,r\right \rangle \left \langle m,r\right \vert =1,\; \label{4.7} \end{equation} \begin{equation} \left \langle m,r\right \vert \left. m^{\prime},r^{\prime}\right \rangle =\delta_{m,m^{\prime}}\frac{1}{2r}\delta \left( r-r^{\prime}\right) . \label{4.8} \end{equation} On the other hand, from $\left \vert \xi \right \rangle $ we can derive another state \begin{equation} \left \vert s,r^{\prime}\right \rangle =\frac{1}{2\pi}\int_{0}^{2\pi} d\varphi \left \vert \xi=r^{\prime}e^{i\varphi}\right \rangle e^{-is\varphi}, \label{4.9} \end{equation} which satisfies \begin{equation} D\left \vert \xi \right \rangle =\left( a_{1}^{\dagger}\xi-a_{2}^{\dagger} \xi^{\ast}\right) \left \vert \xi \right \rangle =-i\frac{\partial} {\partial \varphi}\left \vert \xi=r^{\prime}e^{i\varphi}\right \rangle . 
\label{4.10} \end{equation} So $D$ in $\left \langle \xi=r^{\prime}e^{i\varphi}\right \vert $ representation is equal to $i\frac{\partial}{\partial \varphi}$. Consequently, \begin{equation} D\left \vert s,r^{\prime}\right \rangle =\int_{0}^{2\pi}\frac{d\theta}{2\pi }e^{-is\theta}\left( -i\frac{\partial}{\partial \theta}\left \vert \xi=r^{\prime}e^{i\theta}\right \rangle \right) =s\left \vert s,r^{\prime }\right \rangle . \label{4.11} \end{equation} Note $\left[ D,(a_{1}^{\dagger}+a_{2})(a_{1}+a_{2}^{\dagger})\right] =0$ and \begin{equation} (a_{1}^{\dagger}+a_{2})(a_{1}+a_{2}^{\dagger})\left \vert s,r^{\prime }\right \rangle =r^{\prime2}\left \vert s,r^{\prime}\right \rangle . \label{4.12} \end{equation} $\left \vert s,r^{\prime}\right \rangle $\ is qualified to be a new representation since \begin{equation} \sum_{s=-\infty}^{\infty}\int_{0}^{\infty}d\left( r^{\prime2}\right) \left \vert s,r^{\prime}\right \rangle \left \langle s,r^{\prime}\right \vert =1,\text{ }\left \langle s,r^{\prime}\right \vert \left. s^{\prime} ,r^{\prime \prime}\right \rangle =\delta_{s,s^{\prime}}\frac{1}{2r^{\prime} }\delta \left( r^{\prime}-r^{\prime \prime}\right) . \label{4.13} \end{equation} \subsection{Hankel transform between two deduced entangled state representations} Since $\left \vert \xi \right \rangle $ and $\left \vert \eta \right \rangle $ are mutual conjugate, $\left \vert s,r^{\prime}\right \rangle $ is the conjugate state of $\left \vert m,r\right \rangle $. From the definition of $\left \vert m,r\right \rangle $ and $\left \vert s,r^{\prime}\right \rangle $ and (\ref{3.20}) we calculate the overlap \cite{Fanpla1} \begin{align} \left \langle s,r^{\prime}\right \vert \left. 
m,r\right \rangle & =\frac {1}{4\pi^{2}}\int_{0}^{2\pi}d\varphi e^{is\varphi}\left \langle \xi=r^{\prime }e^{i\varphi}\right \vert \int_{0}^{2\pi}d\theta \left \vert \eta=re^{i\theta }\right \rangle e^{-im\theta}\nonumber \\ & =\frac{1}{8\pi^{2}}\int_{0}^{2\pi}\int_{0}^{2\pi}e^{is\varphi-im\theta} \exp \left[ irr^{\prime}\sin \left( \theta-\varphi \right) \right] d\theta d\varphi \nonumber \\ & =\frac{1}{8\pi^{2}}\int_{0}^{2\pi}\int_{0}^{2\pi}e^{is\varphi-im\theta} \sum_{l=-\infty}^{\infty}J_{l}\left( rr^{\prime}\right) e^{il\left( \theta-\varphi \right) }d\theta d\varphi \nonumber \\ & =\frac{1}{2}\sum_{l=-\infty}^{\infty}\delta_{l,m}\delta_{l,s}J_{l}\left( rr^{\prime}\right) =\frac{1}{2}\delta_{s,m}J_{s}\left( rr^{\prime}\right) , \label{4.14} \end{align} where we have used the generating function of the Bessel functions $J_{l},$ \begin{equation} e^{ix\sin t}=\sum_{l=-\infty}^{\infty}J_{l}\left( x\right) e^{ilt},\; \label{4.15} \end{equation} and \begin{equation} J_{l}\left( x\right) =\sum_{k=0}^{\infty}\frac{\left( -1\right) ^{k} }{k!\left( l+k\right) !}\left( \frac{x}{2}\right) ^{l+2k}. \label{4.16} \end{equation} Eq. (\ref{4.14}) is remarkable, because $J_{s}\left( rr^{\prime}\right) $ is just the integral kernel of the Hankel transform. In fact, if we define \begin{equation} \left \langle m,r\right \vert \left. g\right \rangle \equiv g\left( m,r\right) ,\text{ }\left \langle s,r^{\prime}\right \vert \left. g\right \rangle \equiv \mathcal{G}\left( s,r^{\prime}\right) , \label{4.17} \end{equation} and use (\ref{4.7}) as well as (\ref{4.14}), we obtain \begin{align} \mathcal{G}\left( s,r^{\prime}\right) & =\sum_{m=-\infty}^{\infty}\int _{0}^{\infty}d\left( r^{2}\right) \left \langle s,r^{\prime}\right \vert \left. m,r\right \rangle \left \langle m,r\right \vert \left. 
g\right \rangle \nonumber \\ & =\frac{1}{2}\int_{0}^{\infty}d\left( r^{2}\right) J_{s}\left( rr^{\prime}\right) g\left( s,r\right) \equiv \mathcal{H}\left[ g\left( s,r\right) \right] , \label{4.18} \end{align} which is just the Hankel transform of $g\left( m,r\right) $ (or it can be regarded as a simplified form of the Collins formula in cylindrical coordinates, see (\ref{2.10})). The inverse transform of (\ref{4.18}) is \begin{align} g\left( m,r\right) & =\left \langle m,r\right \vert \sum_{s=-\infty}^{\infty}\int_{0}^{\infty}d\left( r^{\prime2}\right) \left \vert s,r^{\prime}\right \rangle \left \langle s,r^{\prime}\right \vert \left. g\right \rangle \nonumber \\ & =\frac{1}{2}\int_{0}^{\infty}d\left( r^{\prime2}\right) J_{m}\left( rr^{\prime}\right) \mathcal{G}\left( m,r^{\prime}\right) \equiv \mathcal{H}^{-1}\left[ \mathcal{G}\left( m,r^{\prime}\right) \right] . \label{4.19} \end{align} Now we know that the quantum optical image of the classical Hankel transform is just the representation transformation between the two mutually conjugate entangled states $\left \langle s,r^{\prime}\right \vert $ and $\left \vert m,r\right \rangle ;$ this is like the case that the Fourier transform kernel is just the matrix element between the coordinate eigenstate and the momentum eigenstate, a result unnoticed before. Therefore the Hankel transform, which was first proposed in classical optics, finds its way back into quantum optics through the transforms between bipartite entangled state representations. 
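As a numerical illustration of the transform pair (\ref{4.18})--(\ref{4.19}) (our sketch, not part of the original text), take $s=0$ and the Gaussian radial profile $g(r)=e^{-r^{2}/2}$, which is self-reciprocal under the zeroth-order Hankel transform. $J_{0}$ is evaluated through its standard integral representation $J_{0}(x)=\frac{1}{\pi}\int_{0}^{\pi}\cos(x\sin t)\,dt$.

```python
import math

def J0(x, n=200):
    # Bessel J_0 via the integral representation (1/pi)∫_0^pi cos(x sin t) dt
    h = math.pi/n
    s = 0.5*(math.cos(0.0) + math.cos(x*math.sin(math.pi)))
    for k in range(1, n):
        s += math.cos(x*math.sin(k*h))
    return s*h/math.pi

def hankel0(g, rp, R=10.0, n=1500):
    # H[g](r') = (1/2)∫_0^∞ d(r^2) J_0(r r') g(r) = ∫_0^∞ r J_0(r r') g(r) dr,
    # truncated at r = R; the integrand vanishes at r = 0 and is negligible at R
    h = R/n
    s = 0.0
    for k in range(1, n):
        r = k*h
        s += r*J0(r*rp)*g(r)
    return s*h

g = lambda r: math.exp(-r*r/2)   # Gaussian: self-reciprocal under H_0
```

One finds numerically that $\mathcal{H}[e^{-r^{2}/2}](r')=e^{-r'^{2}/2}$, so applying the transform twice returns the input, consistent with (\ref{4.19}).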
\subsection{Quantum optical version of classical circular harmonic correlation} From Eq. (\ref{4.1}) we can see that its reciprocal relation is the circular harmonic expansion, \begin{equation} \left \vert \eta=re^{i\theta}\right \rangle =\sum_{m=-\infty}^{\infty}\left \vert m,r\right \rangle e^{im\theta}, \label{4.23} \end{equation} i.e., the correlated-amplitude---number-difference entangled state $\left \vert m,r\right \rangle $ can be considered as the circular harmonic decomposition of $\left \vert \eta=re^{i\theta}\right \rangle .$ Let $g\left( r,\theta \right) $ be a general 2-dimensional function expressed in polar coordinates, periodic in the variable $\theta;$ it can be regarded as the wavefunction of the state vector $\left \vert g\right \rangle $ in the $\left \langle \eta =re^{i\theta}\right \vert $ representation, \begin{equation} g\left( r,\theta \right) =\left \langle \eta=re^{i\theta}\right. \left \vert g\right \rangle . \label{4.24} \end{equation} Using (\ref{4.23}) we have \begin{equation} g\left( r,\theta \right) =\sum_{m=-\infty}^{\infty}g_{m}\left( r\right) e^{-im\theta},\text{ \ }g_{m}\left( r\right) =\left \langle m,r\right. \left \vert g\right \rangle , \label{4.25} \end{equation} where $g_{m}\left( r\right) $ is the wavefunction of $\left \vert g\right \rangle $ in the $\left \langle m,r\right \vert $ representation. 
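The expansion (\ref{4.25}) can be inverted as $g_{m}(r)=\frac{1}{2\pi}\int_{0}^{2\pi}g(r,\theta)e^{im\theta}d\theta$, the ordinary extraction of a circular harmonic. A short numerical sketch (ours; the harmonic amplitudes below are hypothetical test data at one fixed radius):

```python
import cmath
import math

# hypothetical harmonic amplitudes g_m(r) at a fixed radius r
coeffs = {-1: 0.3 + 0.1j, 0: 1.0 + 0j, 2: -0.5j}

def g(theta):
    # g(r, theta) = sum_m g_m(r) e^{-im theta}, cf. (4.25), at the fixed radius
    return sum(c*cmath.exp(-1j*m*theta) for m, c in coeffs.items())

def harmonic(m, n=2000):
    # inverse relation: g_m(r) = (1/2pi) ∫_0^{2pi} g(r,theta) e^{im theta} dtheta
    h = 2*math.pi/n
    return sum(g(k*h)*cmath.exp(1j*m*k*h) for k in range(n))*h/(2*math.pi)
```

The rectangle rule on a full period recovers each stored coefficient exactly (up to rounding), and returns zero for any harmonic not present.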
Noticing that it follows from (\ref{4.3}) that \begin{equation} e^{-i\alpha(a_{1}^{^{\dagger}}a_{1}-a_{2}^{^{\dagger}}a_{2})}\left \vert \eta=re^{i\theta}\right \rangle =e^{-\alpha \frac{\partial}{\partial \theta} }\left \vert \eta=re^{i\theta}\right \rangle =\left \vert \eta=re^{i\left( \theta-\alpha \right) }\right \rangle , \label{4.26} \end{equation} i.e. that $e^{-i\alpha \left( a_{1}^{^{\dagger}}a_{1}-a_{2}^{^{\dagger}}a_{2}\right) }$ behaves as a rotation operator in the $\left \vert \eta \right \rangle $ representation, we see that the expectation value of $e^{-i\alpha \left( a_{1}^{^{\dagger}}a_{1}-a_{2}^{^{\dagger}}a_{2}\right) }$ in $\left \vert g\right \rangle $ is \begin{align} \pi \left \langle g\right \vert e^{-i\alpha(a_{1}^{^{\dagger}}a_{1}-a_{2} ^{^{\dag}}a_{2})}\left \vert g\right \rangle & =\pi \left \langle g\right \vert \int \frac{d^{2}\eta}{\pi}e^{-i\alpha(a_{1}^{^{\dagger}}a_{1}-a_{2}^{^{\dagger }}a_{2})}\left \vert \eta \right \rangle \left \langle \eta \right. \left \vert g\right \rangle \nonumber \\ & =\int_{0}^{\infty}rdr\int_{0}^{2\pi}d\theta \left \langle g\right. \left \vert \eta^{\prime}=re^{i\left( \theta-\alpha \right) }\right \rangle \left \langle \eta=re^{i\theta}\right. \left \vert g\right \rangle \nonumber \\ & =\int_{0}^{\infty}rdr\int_{0}^{2\pi}g^{\ast}\left( r,\theta-\alpha \right) g\left( r,\theta \right) d\theta \equiv R_{\alpha}, \label{4.27} \end{align} which is just the cross-correlation between $g\left( r,\theta \right) $ and an angularly rotated version of the same function, $g^{\ast}\left( r,\theta-\alpha \right) $. On the other hand, using (\ref{4.7}) we have \begin{equation} \left \vert g\right \rangle =\sum_{m=-\infty}^{\infty}\int_{0}^{\infty}d\left( r^{2}\right) \left \vert m,r\right \rangle \left \langle m,r\right \vert \left. g\right \rangle =\sum_{m=-\infty}^{\infty}\int_{0}^{\infty}d\left( r^{2}\right) \left \vert m,r\right \rangle g_{m}\left( r\right) . 
\label{4.28} \end{equation} Substituting (\ref{4.28}) into (\ref{4.27}) and using the eigenvector equation (\ref{4.4}) as well as (\ref{4.8}), we obtain \begin{align} R_{\alpha} & =\pi \sum_{m^{\prime}=-\infty}^{\infty}\int_{0}^{\infty}d\left( r^{\prime2}\right) \left \langle m^{\prime},r^{\prime}\right \vert g_{m^{\prime}}^{\ast}\left( r^{\prime}\right) e^{-i\alpha \left( a_{1}^{^{\dagger}}a_{1}-a_{2}^{^{\dagger}}a_{2}\right) }\sum_{m=-\infty }^{\infty}\int_{0}^{\infty}d\left( r^{2}\right) \left \vert m,r\right \rangle g_{m}\left( r\right) \nonumber \\ & =\pi \sum_{m^{\prime}=-\infty}^{\infty}\sum_{m=-\infty}^{\infty}\int _{0}^{\infty}d\left( r^{\prime2}\right) g_{m^{\prime}}^{\ast}\left( r^{\prime}\right) e^{-im\alpha}\int_{0}^{\infty}d\left( r^{2}\right) g_{m}\left( r\right) \delta_{m,m^{\prime}}\frac{1}{2r}\delta \left( r-r^{\prime}\right) \nonumber \\ & =2\pi \sum_{m=-\infty}^{\infty}e^{-im\alpha}\int_{0}^{\infty}r|g_{m}\left( r\right) |^{2}dr, \label{4.29} \end{align} from which we see that each of the circular harmonic components of the crosscorrelation undergoes a different phase shift $-m\alpha,$ so $R_{\alpha}$ is not rotation invariant. However, when only the single harmonic component \begin{equation} R_{\alpha,M}=2\pi e^{-iM\alpha}\int_{0}^{\infty}r|g_{M}\left( r\right) |^{2}dr \label{4.30} \end{equation} is extracted digitally, then from the phase associated with this component it is possible to determine the angular shift that one version of the object has undergone. When an optical filter that is matched to $R_{\alpha,M}$ of a particular object is constructed, then if that same object is entered as an input to the system with any angular rotation, a correlation peak of strength proportional to $\int_{0}^{\infty}r|g_{M}\left( r\right) |^{2}dr$ will be produced, independent of rotation. Hence an optical correlator can be constructed that will recognize that object independent of rotation \cite{Goodman}. 
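For a single-harmonic object $g(r,\theta)=f(r)e^{-iM\theta}$, Eq. (\ref{4.29}) predicts $R_{\alpha}=2\pi e^{-iM\alpha}\int_{0}^{\infty}r|f(r)|^{2}dr$; with the Gaussian profile $f(r)=e^{-r^{2}/2}$ this equals $\pi e^{-iM\alpha}$. A direct numerical evaluation of the definition (\ref{4.27}) (our sketch, with arbitrary test parameters) confirms the phase shift $-M\alpha$ and the rotation-invariant magnitude.

```python
import cmath
import math

M = 3  # circular-harmonic order of the test object (arbitrary choice)

def g(r, theta):
    # g(r, theta) = f(r) e^{-iM theta} with a Gaussian radial profile f
    return math.exp(-r*r/2)*cmath.exp(-1j*M*theta)

def R(alpha, Rmax=8.0, nr=800, nt=120):
    # direct evaluation of the crosscorrelation (4.27):
    # R_alpha = ∫_0^∞ r dr ∫_0^{2pi} g*(r, theta - alpha) g(r, theta) dtheta
    hr, ht = Rmax/nr, 2*math.pi/nt
    s = 0.0 + 0.0j
    for i in range(1, nr):
        r = i*hr
        row = sum(g(r, k*ht - alpha).conjugate()*g(r, k*ht) for k in range(nt))
        s += r*row*ht*hr
    return s
```

The magnitude $|R_{\alpha}|$ stays at $\pi$ for every $\alpha$, while the phase rotates as $e^{-iM\alpha}$, exactly the behavior exploited by the rotation-invariant correlator described above.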
So far we have studied the circular harmonic correlation in the context of quantum optics: we have endowed the crosscorrelation $R_{\alpha}$ with a definite quantum mechanical meaning, i.e. the overlap between $\left \langle g\right \vert $ and the rotated state $e^{-i\alpha \left( a_{1}^{^{\dagger} }a_{1}-a_{2}^{^{\dagger}}a_{2}\right) }\left \vert g\right \rangle ,$ in the entangled state representation. Note that Fourier-based correlators are also very sensitive to magnification; however, the magnitude of the Mellin transform is independent of scale-size changes in the input \cite{Goodman}. Now we examine the case when $\left \vert g\right \rangle $ is both rotated and squeezed (by the two-mode squeezing operator $S_{2}\left( \lambda \right) =\exp[\lambda(a_{1}^{\dagger} a_{2}^{\dagger}-a_{1}a_{2})]$); then from (\ref{3.15}) and (\ref{4.24}) we have \begin{equation} S_{2}\left( \lambda \right) \left \vert g\right \rangle =\int \frac{d^{2}\eta }{\pi \mu}\left \vert \eta/\mu \right \rangle \left \langle \eta \right. \left \vert g\right \rangle =\int \frac{d^{2}\eta}{\pi \mu}\left \vert \eta/\mu \right \rangle g\left( r,\theta \right) , \label{4.31} \end{equation} and it follows that the overlap between $\left \langle g\right \vert $ and the state $e^{-i\alpha(a_{1}^{^{\dagger}}a_{1}-a_{2}^{^{\dagger}}a_{2})}S_{2}\left( \lambda \right) \left \vert g\right \rangle $ is \begin{align} W_{\alpha,\lambda} & \equiv \pi \left \langle g\right \vert e^{-i\alpha \left( a_{1}^{^{\dagger}}a_{1}-a_{2}^{^{\dagger}}a_{2}\right) }S_{2}\left( \lambda \right) \left \vert g\right \rangle \nonumber \\ & =\left \langle g\right \vert \int \frac{d^{2}\eta}{\mu}e^{-i\alpha \left( a_{1}^{^{\dagger}}a_{1}-a_{2}^{^{\dagger}}a_{2}\right) }\left \vert \eta /\mu \right \rangle g\left( r,\theta \right) \nonumber \\ & =\int_{0}^{\infty}\frac{rdr}{\mu}\int_{0}^{2\pi}\left \langle g\right. 
\left \vert \eta^{\prime}=e^{i\left( \theta-\alpha \right) }r/\mu \right \rangle g\left( r,\theta \right) d\theta \nonumber \\ & =\int_{0}^{\infty}\frac{rdr}{\mu}\int_{0}^{2\pi}g^{\ast}\left( r/\mu,\theta-\alpha \right) g\left( r,\theta \right) d\theta, \label{4.32} \end{align} which corresponds to the crosscorrelation arising from the combination of squeezing and rotation (a joint transform correlator). On the other hand, from (\ref{4.1}) and (\ref{3.15}) we see \begin{equation} S_{2}\left( \lambda \right) \left \vert m,r\right \rangle =\frac{1}{2\pi \mu }\int_{0}^{2\pi}d\theta \left \vert \eta=\frac{r}{\mu}e^{i\theta}\right \rangle e^{-im\theta}=\frac{1}{\mu}\left \vert m,\frac{r}{\mu}\right \rangle , \label{4.33} \end{equation} and therefore \begin{align} W_{\alpha,\lambda} & =\frac{\pi}{\mu}\sum_{m^{\prime}=-\infty}^{\infty} \int_{0}^{\infty}d\left( r^{\prime2}\right) \left \langle m^{\prime },r^{\prime}\right \vert g_{m^{\prime}}^{\ast}\left( r^{\prime}\right) e^{-i\alpha \left( a_{1}^{^{\dagger}}a_{1}-a_{2}^{^{\dagger}}a_{2}\right) }\sum_{m=-\infty}^{\infty}\int_{0}^{\infty}d\left( r^{2}\right) \left \vert m,\frac{r}{\mu}\right \rangle g_{m}\left( r\right) \nonumber \\ & =\frac{\pi}{\mu}\sum_{m^{\prime}=-\infty}^{\infty}\sum_{m=-\infty}^{\infty }\int_{0}^{\infty}d\left( r^{\prime2}\right) g_{m^{\prime}}^{\ast}\left( r^{\prime}\right) e^{-im\alpha}\int_{0}^{\infty}d\left( r^{2}\right) g_{m}\left( r\right) \delta_{m,m^{\prime}}\frac{1}{2r^{\prime}}\delta \left( \frac{r}{\mu}-r^{\prime}\right) \nonumber \\ & =\frac{2\pi}{\mu}\sum_{m=-\infty}^{\infty}e^{-im\alpha}\int_{0}^{\infty }rg_{m}\left( r\right) g_{m}^{\ast}\left( re^{-\lambda}\right) dr,\text{ \ } \label{4.34} \end{align} from which one can see that, to achieve simultaneous scale and rotation invariance, a two-dimensional object $g\left( r,\theta \right) $ should be entered into the optical system in a distorted polar coordinate system, the distortion arising from the fact that the radial coordinate 
is stretched by a logarithmic transformation ($\lambda=\ln \mu$), which coincides with Ref. \cite{Casasent}. The quantum optical version is thus established, which is a new tie connecting Fourier optics and quantum optics \cite{Fan OC}. At the end of this section, using the two-variable Hermite polynomials' definition \cite{Erd} \begin{equation} H_{m,n}\left( \xi,\xi^{\ast}\right) =\sum_{l=0}^{\min \left( m,n\right) }\frac{m!n!\left( -1\right) ^{l}\xi^{m-l}\xi^{\ast n-l}}{l!\left( m-l\right) !\left( n-l\right) !}, \label{4.20} \end{equation} which is quite different from the product of two single-variable Hermite polynomials, and its generating function formula \begin{equation} \sum_{m,n=0}^{\infty}\frac{t^{m}t^{\prime n}}{m!n!}H_{m,n}\left( \xi ,\xi^{\ast}\right) =\exp \left[ -tt^{\prime}+t\xi+t^{\prime}\xi^{\ast }\right] , \label{4.21} \end{equation} and noting $H_{m,n}(\xi,\xi^{\ast})=e^{i(m-n)\varphi}H_{m,n}(r^{\prime },r^{\prime}),$ we can directly perform the integral in (\ref{4.9}) and derive the explicit form of $\left \vert s,r^{\prime}\right \rangle ,$ \begin{align} \left \vert s,r^{\prime}\right \rangle & =\frac{1}{2\pi}\int_{0}^{2\pi }d\varphi \exp \{-r^{\prime2}/2+\xi a_{1}^{\dagger}+\xi^{\ast}a_{2}^{\dagger }-is\varphi-a_{1}^{\dagger}a_{2}^{\dagger}\} \left \vert 00\right \rangle \nonumber \\ & =\frac{1}{2\pi}e^{-r^{\prime2}/2}\int_{0}^{2\pi}d\varphi \sum \limits_{m,n=0} ^{\infty}\frac{a_{1}^{\dagger m}a_{2}^{\dagger n}}{m!n!}H_{m,n}(\xi,\xi^{\ast })e^{-is\varphi}\left \vert 00\right \rangle \nonumber \\ & =\frac{1}{2\pi}e^{-r^{\prime2}/2}\int_{0}^{2\pi}d\varphi \sum \limits_{m,n=0} ^{\infty}\frac{1}{\sqrt{m!n!}}H_{m,n}(r^{\prime},r^{\prime})e^{i\varphi \left( m-n-s\right) }\left \vert m,n\right \rangle \nonumber \\ & =e^{-r^{\prime2}/2}\sum \limits_{n=0}^{\infty}\frac{1}{\sqrt{\left( n+s\right) !n!}}H_{n+s,n}(r^{\prime},r^{\prime})\left \vert n+s,n\right \rangle , \label{4.22} \end{align} which is really an entangled state in 
two-mode Fock space. Eqs. (\ref{4.21}) and (\ref{4.22}) will often be used in the following discussions. In the following we concentrate on finding the generalized Fresnel operators in both the one- and two-mode cases with the use of the IWOP technique. \section{Single-mode Fresnel operator as the image of the classical Optical Fresnel Transform} In this section we shall mainly introduce the so-called generalized Fresnel operators (GFO) (in both the one-mode and two-mode cases) \cite{PLAFAN} and some appropriate quantum optical representations (e.g. the coherent state representation and the entangled state representation) to manifestly link the formalism of quantum optics to that of classical optics. In so doing, we find that the various transforms in classical optics are just the result of generalized Fresnel operators inducing transforms on appropriate quantum state vectors, i.e. classical optical Fresnel transforms have their counterpart in quantum optics. Besides, we can study the important $ABCD$ rule obeyed by Gaussian beam propagation (also the ray propagation in matrix optics) \cite{Gerrard} in the domain of quantum optics. \subsection{Single-mode GFO gained via coherent state method} For the coherent state $\left \vert z\right \rangle $ in quantum optics \cite{Glauber,Klauder} \begin{equation} \left \vert z\right \rangle =\exp \left[ za^{\dagger}-z^{\ast}a\right] |0\rangle \equiv \left \vert \left( \begin{array} [c]{c} z\\ z^{\ast} \end{array} \right) \right \rangle , \label{3.2} \end{equation} which is the eigenstate of the annihilation operator $a,$\ $a\left \vert z\right \rangle =z\left \vert z\right \rangle $, using the IWOP technique and (\ref{3.1}), we can put the over-completeness relation of $\left \vert z\right \rangle $ into normal ordering \begin{equation} \int \frac{d^{2}{z}}{\pi}\left \vert z\right \rangle \left \langle z\right \vert =\int \frac{d^{2}{z}}{\pi}\colon e^{-\left( z^{\ast}-a^{\dagger}\right) \left( z-a\right) }\colon=1. 
\label{3.3} \end{equation} The canonical form of the coherent state $\left \vert z\right \rangle $ is expressed as \begin{equation} \left \vert z\right \rangle =\left \vert p,q\right \rangle =\exp \left[ i\left( pQ-qP\right) \right] |0\rangle \equiv \left \vert \left( \begin{array} [c]{c} q\\ p \end{array} \right) \right \rangle , \label{3.6} \end{equation} where $z=\left( q+\mathtt{i}p\right) /\sqrt{2}$. It follows that $\left \langle p,q\right \vert Q\left \vert p,q\right \rangle =q,$ $\left \langle p,q\right \vert P\left \vert p,q\right \rangle =p;$ this indicates that the states $\left \vert p,q\right \rangle $ generate a canonical phase-space representation for a state $\left \vert \Psi \right \rangle ,$ $\Psi \left( p,q\right) =\left \langle p,q\right \vert \left. \Psi \right \rangle .$ Thus the coherent state is a good candidate for providing a classical phase-space description of quantum systems. Remembering that the Fresnel transform's parameters $\left( A,B,C,D\right) $ are the elements of a ray transfer matrix $M$ describing optical systems, that $M$ belongs to the unimodular symplectic group, and that the coherent state $\left \vert p,q\right \rangle $ provides such a classical phase-space description of quantum systems, we naturally expect that the symplectic transformation $\left( \begin{array} [c]{cc} A & B\\ C & D \end{array} \right) \left( \begin{array} [c]{c} q\\ p \end{array} \right) $ in classical phase space may be mapped onto a generalized Fresnel operator in Hilbert space through the coherent state basis. Thus we construct the following ket-bra projection operator \begin{equation} \iint \limits_{-\infty}^{\infty}dqdp\left \vert \left( \begin{array} [c]{cc} A & B\\ C & D \end{array} \right) \left( \begin{array} [c]{c} q\\ p \end{array} \right) \right \rangle \left \langle \left( \begin{array} [c]{c} q\\ p \end{array} \right) \right \vert \label{5.1} \end{equation} as the GFO. 
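The phase-space property $\left \langle p,q\right \vert Q\left \vert p,q\right \rangle =q,$ $\left \langle p,q\right \vert P\left \vert p,q\right \rangle =p$ stated above can be verified numerically from the standard coordinate-space wavefunction $\left \langle x\right \vert \left. p,q\right \rangle \propto \exp[-(x-q)^{2}/2+ipx]$ (our sketch; the overall phase convention is immaterial for expectation values).

```python
import cmath
import math

def psi(x, q, p):
    # coherent-state wavefunction <x|p,q> up to an overall phase
    return math.pi**-0.25*cmath.exp(-(x - q)**2/2 + 1j*p*x)

def expQ(q, p, L=8.0, n=2000):
    # <Q> = ∫ x |psi(x)|^2 dx, integrated over [q-L, q+L]
    h = 2*L/n
    return sum((q - L + k*h)*abs(psi(q - L + k*h, q, p))**2 for k in range(n + 1))*h

def expP(q, p, L=8.0, n=2000):
    # <P> = ∫ psi*(x) (-i d/dx) psi(x) dx, using the analytic derivative of psi
    h = 2*L/n
    s = 0.0 + 0.0j
    for k in range(n + 1):
        x = q - L + k*h
        dpsi = (-(x - q) + 1j*p)*psi(x, q, p)
        s += psi(x, q, p).conjugate()*(-1j)*dpsi
    return s*h
```

Both expectation values land on the phase-space point $(q,p)$, which is what justifies reading the ket-bra projection (\ref{5.1}) as a classical symplectic map acting on phase-space labels.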
In fact, using the coherent-state notation $\left \vert z\right \rangle $ and introducing the complex numbers $s,r,$ \begin{equation} s=\frac{1}{2}\left[ A+D-i\left( B-C\right) \right] ,\;r=-\frac{1} {2}\left[ A-D+i\left( B+C\right) \right] ,\;|s|^{2}-|r|^{2}=1, \label{5.2} \end{equation} from (\ref{3.6}) we know \begin{align} \left \vert \left( \begin{array} [c]{cc} A & B\\ C & D \end{array} \right) \left( \begin{array} [c]{c} q\\ p \end{array} \right) \right \rangle & =\left \vert \left( \begin{array} [c]{cc} s & -r\\ -r^{\ast} & s^{\ast} \end{array} \right) \left( \begin{array} [c]{c} z\\ z^{\ast} \end{array} \right) \right \rangle \equiv \left \vert sz-rz^{\ast}\right \rangle \nonumber \\ & =\exp \left[ -\frac{1}{2}|sz-rz^{\ast}|^{2}+(sz-rz^{\ast})a^{\dagger }\right] |0\rangle, \label{5.3} \end{align} where $\left( \begin{array} [c]{cc} s & -r\\ -r^{\ast} & s^{\ast} \end{array} \right) $ is still a symplectic group element, so (\ref{5.1}) becomes \cite{fancommun1} \begin{equation} F_{1}\left( s,r\right) =\sqrt{s}\int \frac{d^{2}z}{\pi}\left \vert sz-rz^{\ast}\right \rangle \left \langle z\right \vert ,\; \; \label{5.4} \end{equation} where the factor $\sqrt{s}$ is attached in anticipation of the unitarity of the operator $F_{1}.$ Eq. (\ref{5.1}) tells us that the $c$-number transform $\left( q,p\right) \rightarrow \left( Aq+Bp,Cq+Dp\right) $ in the coherent state basis maps into $F_{1}\left( s,r\right) $. Now we prove that $F_{1}\left( s,r\right) $ is really the GFO we want. Using the IWOP technique and Eq.
(\ref{5.3}) and (\ref{3.1}) we can perform the integral \begin{align} F_{1}\left( s,r\right) & =\sqrt{s}\int \frac{d^{2}z}{\pi}\colon \exp \left[ -\left \vert s\right \vert ^{2}\left \vert z\right \vert ^{2}+sza^{\dagger }+z^{\ast}\left( a-ra^{\dagger}\right) +\frac{r^{\ast}s}{2}z^{2} +\frac{rs^{\ast}}{2}z^{\ast2}-a^{\dagger}a\right] \colon \nonumber \\ & =\frac{1}{\sqrt{s^{\ast}}}\exp \left( -\frac{r}{2s^{\ast}}a^{\dagger 2}\right) \colon \exp \left \{ \left( \frac{1}{s^{\ast}}-1\right) a^{\dagger }a\right \} \colon \exp \left( \frac{r^{\ast}}{2s^{\ast}}a^{2}\right) \nonumber \\ & =\exp \left( -\frac{r}{2s^{\ast}}a^{\dagger2}\right) \exp \left \{ \left( a^{\dagger}a+\frac{1}{2}\right) \ln \frac{1}{s^{\ast}}\right \} \exp \left( \frac{r^{\ast}}{2s^{\ast}}a^{2}\right) , \label{5.5} \end{align} where we have used the mathematical formula \cite{book} \begin{align} & \int \frac{d^{2}z}{\pi}\exp \{ \zeta \left \vert z\right \vert ^{2}+\xi z+\eta z^{\ast}+fz^{2}+gz^{\ast2}\} \nonumber \\ & =\frac{1}{\sqrt{\zeta^{2}-4fg}}\exp \left \{ \frac{-\zeta \xi \eta+\xi ^{2}g+\eta^{2}f}{\zeta^{2}-4fg}\right \} , \label{5.6} \end{align} with the convergence conditions $\operatorname{Re}(\zeta \pm f\pm g)<0,\operatorname{Re}(\frac{\zeta^{2}-4fg}{\zeta \pm f\pm g})<0.$ It then follows that \begin{equation} \left \langle z\right \vert F_{1}\left( s,r\right) \left \vert z^{\prime }\right \rangle =\frac{1}{\sqrt{s^{\ast}}}\exp \left[ -\frac{\left \vert z\right \vert ^{2}+\left \vert z^{\prime}\right \vert ^{2}}{2}-\frac{rz^{\ast2} }{2s^{\ast}}+\frac{r^{\ast}z^{\prime2}}{2s^{\ast}}+\frac{z^{\ast}z^{\prime} }{s^{\ast}}\right] . \label{5.8} \end{equation} Then using \begin{equation} \left \langle x_{i}\right \vert \left. z\right \rangle =\pi^{-1/4}\exp \left( -\frac{x_{i}^{2}}{2}+\sqrt{2}x_{i}z-\frac{z^{2}}{2}-\frac{\left \vert z\right \vert ^{2}}{2}\right) .
\label{5.9} \end{equation} and the completeness relation of the coherent state as well as (\ref{5.2}), we obtain the matrix element of $F_{1}\left( s,r\right) $ ($\equiv F_{1}\left( A,B,C\right) $) in the coordinate representation $\left \langle x_{i}\right \vert $, \begin{align} \left \langle x_{2}\right \vert F_{1}\left( s,r\right) \left \vert x_{1}\right \rangle & =\int \frac{d^{2}z}{\pi}\left \langle x_{2}\right \vert \left. z\right \rangle \left \langle z\right \vert F_{1}\left( s,r\right) \int \frac{d^{2}z^{\prime}}{\pi}\left \vert z^{\prime}\right \rangle \left \langle z^{\prime}\right \vert \left. x_{1}\right \rangle \nonumber \\ & =\frac{1}{\sqrt{2\pi iB}}\exp \left[ \frac{i}{2B}\left( Ax_{1}^{2} -2x_{2}x_{1}+Dx_{2}^{2}\right) \right] \equiv \mathcal{K}\left( x_{2} ,x_{1}\right) , \label{5.10} \end{align} which is just the kernel of the generalized Fresnel transform $\mathcal{K}\left( x_{2},x_{1}\right) $ in (\ref{2.12}). The above discussion demonstrates how the classical Fresnel transform transits to the GFO through the coherent state and the IWOP technique. Now, defining $g\left( x_{2}\right) =\left \langle x_{2}\right \vert \left. g\right \rangle $ and $f\left( x_{1}\right) =\left \langle x_{1} \right \vert \left. f\right \rangle $ and using Eq. (\ref{3.10}), we can rewrite Eq. (\ref{2.11}) as \begin{align} \left \langle x_{2}\right \vert \left. g\right \rangle & =\int_{-\infty }^{\infty}dx_{1}\left \langle x_{2}\right \vert F_{1}\left( A,B,C\right) \left \vert x_{1}\right \rangle \left \langle x_{1}\right \vert \left. f\right \rangle \nonumber \\ & =\left \langle x_{2}\right \vert F_{1}\left( A,B,C\right) \left \vert f\right \rangle , \label{5.11} \end{align} which is just the quantum mechanical version of the GFT. Therefore, the 1-dimensional GFT in classical optics corresponds to the 1-mode GFO $F_{1}\left( A,B,C\right) $ operating on the state vector $\left \vert f\right \rangle $ in Hilbert space, i.e.
$\left \vert g\right \rangle =F_{1}\left( A,B,C\right) \left \vert f\right \rangle $. One merit of the GFO is that, using the coordinate-momentum representation transform, we can immediately obtain the GFT in the \textquotedblleft frequency" domain, i.e. \begin{align} & \left \langle p_{2}\right \vert F\left \vert p_{1}\right \rangle =\int _{-\infty}^{\infty}dx_{1}dx_{2}\left \langle p_{2}\right \vert \left. x_{2}\right \rangle \left \langle x_{2}\right \vert F\left \vert x_{1} \right \rangle \left \langle x_{1}\right. \left \vert p_{1}\right \rangle \nonumber \\ & =\frac{1}{\sqrt{2\pi iB}}\int_{-\infty}^{\infty}\frac{dx_{1}dx_{2}}{2\pi }\exp \left[ \frac{iA}{2B}\left( x_{1}^{2}+\frac{x_{1}}{A}\left( Bp_{1}-2x_{2}\right) \right) +\frac{iD}{2B}x_{2}^{2}-ip_{2}x_{2}\right] \nonumber \\ & =\frac{1}{\sqrt{2\pi i\left( -C\right) }}\exp \left[ \frac{i}{2\left( -C\right) }\left( Dp_{1}^{2}-2p_{2}p_{1}+Ap_{2}^{2}\right) \right] . \label{5.12} \end{align} Obviously, $F_{1}\left( A,B,C\right) $ induces the following transform \begin{equation} F_{1}^{-1}\left( A,B,C\right) \left( \begin{array} [c]{c} Q\\ P \end{array} \right) F_{1}\left( A,B,C\right) =\left( \begin{array} [c]{cc} A & B\\ C & D \end{array} \right) \left( \begin{array} [c]{c} Q\\ P \end{array} \right) . \label{5.13} \end{equation} \subsection{Group Multiplication Rule for Single-mode GFO} Because two successive optical Fresnel transforms compose into another Fresnel transform, we wonder whether the product of two GFOs is still a GFO. On the other hand, since we have seen that the GFO is the image of the symplectic transform $z\rightarrow sz-rz^{\ast},$ we expect that the product of two symplectic transforms maps into the product of the two corresponding GFOs. If this is so, then the correspondence between GFT and GFO is perfect. Using (\ref{5.4}), $\left \langle z\right.
\left \vert z^{\prime}\right \rangle =\exp \left[ -\frac{1}{2}\left( |z|^{2}+|z^{\prime}|^{2}\right) +z^{\ast}z^{\prime }\right] $ and the IWOP technique, we can directly perform the following integrals \begin{align} F_{1}\left( s,r\right) F_{1}\left( s^{\prime},r^{\prime}\right) & =\sqrt{ss^{\prime}}\int \frac{d^{2}zd^{2}z^{\prime}}{\pi^{2}}\left \vert sz-rz^{\ast}\right \rangle \left \langle z\right. \left \vert s^{\prime }z^{\prime}-r^{\prime}z^{\prime \ast}\right \rangle \left \langle z^{\prime }\right \vert \nonumber \\ & =\frac{1}{\sqrt{s^{\prime \prime \ast}}}\exp \left[ -\frac{r^{\prime \prime} }{2s^{\prime \prime \ast}}a^{\dagger2}\right] \colon \exp \left \{ \left( \frac{1}{s^{\prime \prime \ast}}-1\right) a^{\dagger}a\right \} \colon \exp \left[ \frac{r^{\prime \prime \ast}}{2s^{\prime \prime \ast}}a^{2}\right] \nonumber \\ & =\sqrt{s^{\prime \prime}}\int \frac{d^{2}z}{\pi}\left \vert s^{\prime \prime }z-r^{\prime \prime}z^{\ast}\right \rangle \left \langle z\right \vert =F_{1}\left( s^{\prime \prime},r^{\prime \prime}\right) , \label{7.3} \end{align} where we have set \begin{equation} s^{\prime \prime}=ss^{\prime}+rr^{\prime \ast},\;r^{\prime \prime}=r^{\prime }s+rs^{\prime \ast}, \label{7.2} \end{equation} or \begin{equation} M^{\prime \prime}\equiv \left( \begin{array} [c]{cc} s^{\prime \prime} & -r^{\prime \prime}\\ -r^{\ast \prime \prime} & s^{\ast \prime \prime} \end{array} \right) =\left( \begin{array} [c]{cc} s & -r\\ -r^{\ast} & s^{\ast} \end{array} \right) \left( \begin{array} [c]{cc} s^{\prime} & -r^{\prime}\\ -r^{\prime \ast} & s^{\prime \ast} \end{array} \right) =MM^{\prime},\ \left \vert s^{\prime \prime}\right \vert ^{2}-\left \vert r^{\prime \prime}\right \vert ^{2}=1, \label{7.4} \end{equation} from which we see that this is just the image of the multiplication of the above $(A,B,C,D)$ matrices.
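The multiplication rule (\ref{7.2})-(\ref{7.4}) can be checked numerically: form $(s,r)$ and $(s^{\prime},r^{\prime})$ from two unimodular ray matrices via (\ref{5.2}) and compare $s^{\prime \prime}=ss^{\prime}+rr^{\prime \ast}$, $r^{\prime \prime}=r^{\prime}s+rs^{\prime \ast}$ with the parameters read off from the matrix product. A small numpy sketch (our illustration, not part of the derivation):

```python
import numpy as np

def sr_params(M):
    """(s, r) of Eq. (5.2) for a ray matrix [[A, B], [C, D]]."""
    (A, B), (C, D) = M
    return 0.5 * (A + D - 1j * (B - C)), -0.5 * (A - D + 1j * (B + C))

M  = np.array([[2.0, 3.0], [1.0, 2.0]])    # det = 1
Mp = np.array([[1.0, 0.5], [-0.4, 0.8]])   # det = 1

s, r = sr_params(M)
sp, rp = sr_params(Mp)
# group law, Eq. (7.2)
spp = s * sp + r * np.conj(rp)
rpp = rp * s + r * np.conj(sp)
# Eq. (7.4): the same parameters come from the matrix product M M'
s2, r2 = sr_params(M @ Mp)
assert np.isclose(spp, s2) and np.isclose(rpp, r2)
assert np.isclose(abs(spp) ** 2 - abs(rpp) ** 2, 1.0)
```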
Hence $F_{1}\left( s,r\right) F_{1}\left( s^{\prime},r^{\prime}\right) $ is a faithful representation of the product of the two symplectic group elements shown in (\ref{7.4}). The above discussion actually reveals an important property of coherent states: although two coherent state vectors are not orthogonal, in the equation \begin{equation} \sqrt{ss^{\prime}}\int \frac{d^{2}zd^{2}z^{\prime}}{\pi^{2}}\left \vert sz-rz^{\ast}\right \rangle \left \langle z\right. \left \vert s^{\prime }z^{\prime}-r^{\prime}z^{\prime \ast}\right \rangle \left \langle z^{\prime }\right \vert =\sqrt{s^{\prime \prime}}\int \frac{d^{2}z}{\pi}\left \vert s^{\prime \prime }z-r^{\prime \prime}z^{\ast}\right \rangle \left \langle z\right \vert \label{7.5} \end{equation} their overlap $\left \langle z\right. \left \vert s^{\prime }z^{\prime}-r^{\prime}z^{\prime \ast}\right \rangle $ behaves as if it were a $\delta$-function. The coherent state representation for the product of GFOs may be visualized very easily, yet it is of striking importance, because it does not change its form under the symplectic transform $z\rightarrow sz-rz^{\ast}$. As a result of this group multiplication rule of the GFO, we immediately obtain \begin{align} & \mathcal{K}^{M^{\prime \prime}}\left( x_{2},x_{1}\right) =\left \langle x_{2}\right \vert F_{1}\left( s^{\prime \prime},r^{\prime \prime}\right) \left \vert x_{1}\right \rangle \nonumber \\ & =\int_{-\infty}^{\infty}dx_{3}\left \langle x_{2}\right \vert F_{1}\left( s,r\right) \left \vert x_{3}\right \rangle \left \langle x_{3}\right \vert F_{1}\left( s^{\prime},r^{\prime}\right) \left \vert x_{1}\right \rangle \nonumber \\ & =\int_{-\infty}^{\infty}dx_{3}\mathcal{K}^{M}\left( x_{2},x_{3}\right) \mathcal{K}^{M^{\prime}}\left( x_{3},x_{1}\right) , \label{7.6} \end{align} provided that the parameters $\left( s^{\prime \prime},r^{\prime \prime}\right) $ satisfy (\ref{7.2}).
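The correspondence underlying all of this, Eqs. (\ref{5.2})-(\ref{5.3}), namely that $z\rightarrow sz-rz^{\ast}$ with $z=\left( q+ip\right) /\sqrt{2}$ reproduces $\left( q,p\right) \rightarrow \left( Aq+Bp,Cq+Dp\right) $, can also be confirmed numerically; a short numpy sketch (our illustration, with arbitrarily chosen parameters):

```python
import numpy as np

A, B, C = 2.0, 3.0, 1.0
D = (1.0 + B * C) / A              # enforce AD - BC = 1
s = 0.5 * (A + D - 1j * (B - C))   # Eq. (5.2)
r = -0.5 * (A - D + 1j * (B + C))

q, p = 0.7, -1.3
z = (q + 1j * p) / np.sqrt(2)
w = s * z - r * np.conj(z)         # transformed coherent-state label, Eq. (5.3)
# recover the transformed phase-space point from w = (q' + i p')/sqrt(2)
qp, pp = np.sqrt(2) * w.real, np.sqrt(2) * w.imag
assert np.isclose(qp, A * q + B * p)
assert np.isclose(pp, C * q + D * p)
```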
Thus by virtue of the group multiplication property of the GFO we immediately find the successive transform property of GFTs. \section{Quantum Optical ABCD Law for optical propagation ---single-mode case} In classical optics, ray-transfer matrices, $N=\left( \begin{array} [c]{cc} A & B\\ C & D \end{array} \right) ,$ $AD-BC=1$, have been used to describe the geometrical formation of images by a centered lens system. For an optical ray (a centered spherical wavefront) passing through optical instruments there is a famous law, named the ABCD law, governing the relation between the input ray $\left( r_{1},\alpha _{1}\right) $ and the output ray $\left( r_{2},\alpha_{2}\right) ,$ i.e. \begin{equation} \left( \begin{array} [c]{c} r_{2}\\ \alpha_{2} \end{array} \right) =N\left( \begin{array} [c]{c} r_{1}\\ \alpha_{1} \end{array} \right) , \label{8.1} \end{equation} where $r_{1}$ is the ray height from the optical axis, $\alpha_{1}$ is named the optical direction-cosine, and $r_{1}/\alpha_{1}\equiv R_{1}$ specifies the ray's wavefront shape. Eq. (\ref{8.1}) implies \begin{equation} R_{2}\equiv \frac{r_{2}}{\alpha_{2}}=\frac{AR_{1}+B}{CR_{1}+D}. \label{8.2} \end{equation} This law is the core of matrix optics, since it tells us how the curvature of a centered spherical wavefront changes from one reference plane to the next. Besides, the multiplication rule of matrix optics implies that if the ray-transfer matrices of the $n$ optical components are $N_{1},N_{2} ,N_{3},\cdots,N_{n}$, respectively, then the whole system is determined by the matrix $N=N_{1}N_{2}N_{3}\cdots N_{n}.$ One of the remarkable things in modern optics is the ease with which geometrical ray-transfer methods, constituting matrix optics, can be adapted to describe the generation and propagation of laser beams. In 1965 Kogelnik \cite{Kogelnik} pointed out that the propagation of a Gaussian beam also obeys the ABCD law via optical diffraction integration, i.e.
the input light field $f\left( x_{1}\right) $ and the output light field $g\left( x_{2}\right) $ are related to each other by the so-called Fresnel integration \cite{Goodman} $g\left( x_{2}\right) =\int_{-\infty}^{\infty}\mathcal{K}\left( A,B,C;x_{2},x_{1}\right) f\left( x_{1}\right) dx_{1},$ where \[ \mathcal{K}\left( A,B,C;x_{2},x_{1}\right) =\frac{1}{\sqrt{2\pi iB}} \exp \left[ \frac{i}{2B}\left( Ax_{1}^{2}-2x_{2}x_{1}+Dx_{2}^{2}\right) \right] . \] The ABCD law for a Gaussian beam passing through an optical system is \cite{ABCD} \begin{equation} q_{2}=\frac{Aq_{1}+B}{Cq_{1}+D}, \label{8.3} \end{equation} where $q_{1}$ $(q_{2})$ represents the complex curvature of the input (output) Gaussian beam; Eq. (\ref{8.3}) has a form similar to Eq. (\ref{8.2}). An interesting and important question naturally arises \cite{FANHUOC}: does the ABCD law also hold in quantum optics, given that the classical Fresnel transform should have its quantum optical counterpart? To see the ABCD law more explicitly, using Eq.(\ref{5.2}) we can re-express Eq.(\ref{5.5}) as \begin{align} F_{1}\left( A,B,C\right) & =\sqrt{\frac{2}{A+D+i\left( B-C\right) } }\colon \exp \left \{ \frac{A-D+i\left( B+C\right) }{2\left[ A+D+i\left( B-C\right) \right] }a^{\dagger2}\right. \nonumber \\ & \left. +\left[ \frac{2}{A+D+i\left( B-C\right) }-1\right] a^{\dagger }a-\frac{A-D-i\left( B+C\right) }{2\left[ A+D+i\left( B-C\right) \right] }a^{2}\right \} \colon, \label{8.4} \end{align} and the multiplication rule for $F_{1}$ is $F\left( A^{\prime},B^{\prime },C^{\prime},D^{\prime}\right) F\left( A,B,C,D\right) =F\left( A^{\prime \prime},B^{\prime \prime},C^{\prime \prime},D^{\prime \prime}\right) ,$ where \begin{equation} \left( \begin{array} [c]{cc} A^{\prime \prime} & B^{\prime \prime}\\ C^{\prime \prime} & D^{\prime \prime} \end{array} \right) =\left( \begin{array} [c]{cc} A^{\prime} & B^{\prime}\\ C^{\prime} & D^{\prime} \end{array} \right) \left( \begin{array} [c]{cc} A & B\\ C & D \end{array} \right) .
\label{8.5} \end{equation} Next we directly use the GFO to derive the ABCD law in quantum optics. From Eq.(\ref{8.4}) we see that the GFO generates \begin{equation} F_{1}\left( A,B,C\right) \left \vert 0\right \rangle =\sqrt{\frac {2}{A+iB-i\left( C+iD\right) }}\exp \left \{ \frac{A-D+i\left( B+C\right) }{2\left[ A+D+i\left( B-C\right) \right] }a^{\dagger2}\right \} \left \vert 0\right \rangle . \label{8.6} \end{equation} If we identify \begin{equation} \frac{A-D+i\left( B+C\right) }{A+D+i\left( B-C\right) }=\frac{q_{1} -i}{q_{1}+i}, \label{8.7} \end{equation} then \begin{equation} F_{1}\left( A,B,C\right) \left \vert 0\right \rangle =\sqrt{-\frac{2/\left( C+iD\right) }{q_{1}+i}}\exp \left[ \frac{q_{1}-i}{2\left( q_{1}+i\right) }a^{\dagger2}\right] \left \vert 0\right \rangle . \label{8.8} \end{equation} The solution of Eq.(\ref{8.7}) is \begin{equation} q_{1}\equiv-\frac{A+iB}{C+iD}. \label{8.9} \end{equation} Let $F_{1}\left( A,B,C\right) \left \vert 0\right \rangle $ expressed by (\ref{8.8}) be an input state for an optical system characterized by the parameters $A^{\prime},B^{\prime},C^{\prime},D^{\prime};$ then the \textit{quantum optical ABCD law} states that the output state is \begin{equation} F_{1}\left( A^{\prime},B^{\prime},C^{\prime}\right) F_{1}\left( A,B,C\right) \left \vert 0\right \rangle =\sqrt{\frac{-2/\left( C^{\prime \prime}+iD^{\prime \prime}\right) }{q_{2}+i}}\exp \left[ \frac{q_{2} -i}{2\left( q_{2}+i\right) }a^{\dagger2}\right] \left \vert 0\right \rangle , \label{8.10} \end{equation} which has a form similar to Eq.(\ref{8.8})$,$ where $\left( C^{\prime \prime},D^{\prime \prime}\right) $ is determined by Eq.(\ref{8.5})$,$ and \begin{equation} \bar{q}_{2}=\frac{A^{\prime}\bar{q}_{1}+B^{\prime}}{C^{\prime}\bar{q} _{1}+D^{\prime}},\text{ \ }\bar{q}_{j}\equiv-q_{j},\text{ \ }\left( j=1,2\right) \label{8.11} \end{equation} which resembles Eq.(\ref{8.3}).
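Before turning to the proof, one can check numerically that (\ref{8.9}) indeed solves (\ref{8.7}); a quick numpy sketch (our illustration, with arbitrarily chosen parameters):

```python
import numpy as np

A, B, C = 1.5, 0.4, 2.0
D = (1.0 + B * C) / A                # AD - BC = 1
q1 = -(A + 1j * B) / (C + 1j * D)    # Eq. (8.9)
# Eq. (8.7): both sides must agree
lhs = (A - D + 1j * (B + C)) / (A + D + 1j * (B - C))
rhs = (q1 - 1j) / (q1 + 1j)
assert np.isclose(lhs, rhs)
```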
Proof: According to the multiplication rule of two GFOs and Eqs.(\ref{8.4} )-(\ref{8.5}) we have \begin{align} & F_{1}\left( A^{\prime},B^{\prime},C^{\prime}\right) F_{1}\left( A,B,C\right) \left \vert 0\right \rangle \nonumber \\ & =\sqrt{\frac{2}{A^{\prime \prime}+D^{\prime \prime}+i\left( B^{\prime \prime }-C^{\prime \prime}\right) }}\exp \left \{ \frac{A^{\prime \prime} -D^{\prime \prime}+i\left( B^{\prime \prime}+C^{\prime \prime}\right) }{2\left[ A^{\prime \prime}+D^{\prime \prime}+i\left( B^{\prime \prime }-C^{\prime \prime}\right) \right] }a^{\dagger2}\right \} \left \vert 0\right \rangle \nonumber \\ & =\sqrt{\frac{2}{A^{\prime}\left( A+iB\right) +B^{\prime}\left( C+iD\right) -iC^{\prime}\left( A+iB\right) -iD^{\prime}\left( C+iD\right) }}\nonumber \\ & \times \exp \left \{ \frac{A^{\prime}\left( A+iB\right) +B^{\prime}\left( C+iD\right) +iC^{\prime}\left( A+iB\right) +iD^{\prime}\left( C+iD\right) }{2\left[ A^{\prime}\left( A+iB\right) +B^{\prime}\left( C+iD\right) -iC^{\prime}\left( A+iB\right) -iD^{\prime}\left( C+iD\right) \right] }a^{\dagger2}\right \} \left \vert 0\right \rangle \nonumber \\ & =\sqrt{\frac{-2/\left( C+iD\right) }{A^{\prime}q_{1}-B^{\prime}-i\left( C^{\prime}q_{1}-D^{\prime}\right) }}\exp \left \{ \frac{A^{\prime} q_{1}-B^{\prime}+i\left( C^{\prime}q_{1}-D^{\prime}\right) }{2\left[ A^{\prime}q_{1}-B^{\prime}-i\left( C^{\prime}q_{1}-D^{\prime}\right) \right] }a^{\dagger2}\right \} \left \vert 0\right \rangle . \label{8.12} \end{align} Using Eq.(\ref{8.9}) we see $\frac{2/\left( C+iD\right) }{C^{\prime} q_{1}-D^{\prime}}=-2/\left( C^{\prime \prime}+iD^{\prime \prime}\right) ,$ together using Eq.(\ref{8.11}) we can reach Eq.(\ref{8.10}), thus the law is proved. Using Eq. 
(\ref{8.8}) we can re-express Eq.(\ref{8.11}) as \begin{equation} q_{2}=-\frac{A^{\prime}(A+iB)+B^{\prime}(C+iD)}{C^{\prime}(A+iB)+D^{\prime }(C+iD)}=-\frac{A^{\prime \prime}+iB^{\prime \prime}}{C^{\prime \prime }+iD^{\prime \prime}}, \label{8.13} \end{equation} which is consistent with Eq.(\ref{8.9}). Eqs. (\ref{8.8})-(\ref{8.13}) are therefore self-consistent. As an application of the quantum optical ABCD law, we apply it to tackle the time evolution of a time-dependent harmonic oscillator whose Hamiltonian is \begin{equation} H=\frac{1}{2}e^{-2\gamma t}P^{2}+\frac{1}{2}\omega_{0}^{2}e^{2\gamma t} Q^{2},\text{ \ \ }\hbar=1, \label{8.14} \end{equation} where we have set the initial mass $m_{0}=1,$ and $\gamma$ denotes damping. Using $u\left( t\right) =e^{\frac{i\gamma}{2}Q^{2}}e^{-\frac{i\gamma t}{2}\left( QP+PQ\right) }\ $to perform the transformation \begin{align} u\left( t\right) Qu^{-1}\left( t\right) & =e^{-\gamma t}Q,\nonumber \\ u\left( t\right) Pu^{-1}\left( t\right) & =e^{\gamma t}P-\gamma e^{\gamma t}Q, \label{8.15} \end{align} then $i\frac{\partial \left \vert \psi \left( t\right) \right \rangle }{\partial t}=H\left \vert \psi \left( t\right) \right \rangle \ $leads to $i\frac {\partial \left \vert \phi \right \rangle }{\partial t}=\mathcal{H}\left \vert \phi \right \rangle ,$ $\left \vert \phi \right \rangle =u\left( t\right) \left \vert \psi \left( t\right) \right \rangle ,$ \begin{equation} H\rightarrow \mathcal{H}=u\left( t\right) Hu^{-1}\left( t\right) -iu\left( t\right) \frac{\partial u^{-1}\left( t\right) }{\partial t}=\frac{1} {2}P^{2}+\frac{1}{2}\omega^{2}Q^{2}, \label{8.16} \end{equation} where $\omega^{2}=\omega_{0}^{2}-\gamma^{2}.$ $\mathcal{H}$ does not contain $t$ explicitly.
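The self-consistency of Eqs. (\ref{8.9})-(\ref{8.13}) can also be verified numerically: the Möbius action of $(A^{\prime},B^{\prime},C^{\prime},D^{\prime})$ on $\bar{q}_{1}$ must reproduce the beam parameter computed directly from the product matrix. A numpy sketch (our illustration, with arbitrarily chosen det-1 matrices):

```python
import numpy as np

def q_of(M):
    """Complex beam parameter of Eq. (8.9) for a ray matrix [[A, B], [C, D]]."""
    (A, B), (C, D) = M
    return -(A + 1j * B) / (C + 1j * D)

M  = np.array([[2.0, 3.0], [1.0, 2.0]])    # first system, det = 1
Mp = np.array([[1.0, 0.5], [-0.4, 0.8]])   # second system, det = 1
(Ap, Bp), (Cp, Dp) = Mp

q1 = q_of(M)
q2 = q_of(Mp @ M)                  # Eq. (8.13): q2 = -(A''+iB'')/(C''+iD'')
qbar1, qbar2 = -q1, -q2
# ABCD law, Eq. (8.11): qbar2 = (A' qbar1 + B') / (C' qbar1 + D')
assert np.isclose(qbar2, (Ap * qbar1 + Bp) / (Cp * qbar1 + Dp))
```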
The dynamical evolution of a mass-varying harmonic oscillator from the Fock state $\left \vert 0\right \rangle $ at the initial time to a squeezed state at time $t$ is \begin{equation} \left \vert \psi \left( t\right) \right \rangle _{0}=u^{-1}\left( t\right) \left \vert 0\right \rangle =e^{\frac{i\gamma t}{2}\left( QP+PQ\right) }e^{-\frac{i\gamma}{2}Q^{2}}\left \vert 0\right \rangle . \label{8.17} \end{equation} If we let $A=D=1,B=0,C=-\gamma;$ and $A^{\prime}=e^{-\gamma t},D^{\prime }=e^{\gamma t},B^{\prime}=C^{\prime}=0,$ then $q_{1}=\frac{1}{\gamma-i},$ $q_{2}=\frac{e^{-2\gamma t}}{\gamma-i}$, and according to Eq.(\ref{8.10}) we directly obtain \begin{equation} u^{-1}\left( t\right) \left \vert 0\right \rangle =\sqrt{\frac{2e^{-\gamma t} }{e^{-2\gamma t}+i\gamma+1}}\exp \left[ \frac{e^{-2\gamma t}-1-i\gamma }{2\left( e^{-2\gamma t}+1+i\gamma \right) }a^{\dagger2}\right] \left \vert 0\right \rangle , \label{8.18} \end{equation} so the time evolution of the damping oscillator embodies the quantum optical ABCD law. \section{Optical operator method studied via GFO's decomposition} Fresnel diffraction is the core of Fourier optics \cite{Goodman,Collins1,Alieva1,agarwal}; the Fresnel transform is frequently used in optical imaging, optical propagation and optical instrument design. The GFT represents a class of optical transforms which are of great importance for their applications in describing various optical systems. It is easily seen that when we let the transform kernel be $\mathcal{K}\left( x_{2},x_{1}\right) =\exp \left( ix_{2}x_{1}\right) $, the GFT changes into the well-known Fourier transform, which is adapted to express mathematically the Fraunhofer diffraction. And if $\mathcal{K}\left( x_{2},x_{1}\right) =\exp[i\left( x_{2}-x_{1}\right) ^{2}]$, the GFT then describes a Fresnel diffraction.
In studying various optical transformations one also proposed the so-called optical operator method \cite{Nazarathy}, which uses ordered products of quantum mechanical operators to express the mechanism of optical systems, such that the ray transfer through optical instruments and the diffraction can be discussed by virtue of the commutation relations of operators and the matrix algebra. Two important questions thus naturally arise: how to directly map the classical optical transformations to the optical operator method? How to combine the usual optical transformation operators, such as the square phase operators, the scaling operator, the Fourier transform operator and the propagation operator in free space, into a concise and unified form? In this section we shall solve these two problems and develop the optical operator method onto a new stage. \subsection{Four fundamental optical operators derived by decomposing GFO} The GFO $F_{1}\left( A,B,C\right) $ can also be expressed in the form of a quadratic combination of the canonical operators $Q$ and $P$ \cite{fanwuncommun}, i.e., \begin{equation} F_{1}\left( A,B,C\right) =\exp \left( \frac{iC}{2A}Q^{2}\right) \exp \left( -\frac{i}{2}\left( QP+PQ\right) \ln A\right) \exp \left( -\frac{iB} {2A}P^{2}\right) , \label{9.1} \end{equation} where we have set $\hbar=1,$ $A\neq0.$ To confirm this, we first calculate the matrix element \begin{align} \left \langle x\right \vert F_{1}\left( A,B,C\right) \left \vert p\right \rangle & =\exp \left( \frac{iC}{2A}x^{2}\right) \left \langle x\right \vert \exp \left( -\frac{i}{2}\left( QP+PQ\right) \ln A\right) \left \vert p\right \rangle \exp \left( -\frac{iB}{2A}p^{2}\right) \nonumber \\ \ & =\frac{1}{\sqrt{2\pi A}}\exp \left( \frac{iC}{2A}x^{2}-\frac{iB} {2A}p^{2}+\frac{ipx}{A}\right) , \label{9.2} \end{align} where we have used the squeezing property \begin{equation} \exp \left[ -\frac{i}{2}\left( QP+PQ\right) \ln A\right] \left \vert p\right \rangle =\frac{1}{\sqrt{A}}\left \vert p/A\right \rangle .
\label{9.3} \end{equation} From (\ref{9.2}) and $AD-BC=1$ it then follows that \begin{equation} \left \langle x_{2}\right \vert F_{1}\left( A,B,C\right) \left \vert x_{1}\right \rangle =\int_{-\infty}^{\infty}dp\left \langle x_{2}\right \vert F_{1}\left( A,B,C\right) \left \vert p\right \rangle \left \langle p\right \vert \left. x_{1}\right \rangle =\mathcal{K}^{M}\left( x_{2},x_{1}\right) . \label{9.4} \end{equation} Thus $F_{1}\left( A,B,C\right) $ in (\ref{9.1}) is really the expected GFO. Next we directly use (\ref{5.1}) and the canonical operator $\left( Q,P\right) $ representation (\ref{9.1}) to develop the optical operator method. By noticing the matrix decompositions \cite{MatrixOptics} \begin{equation} \left( \begin{array} [c]{cc} A & B\\ C & D \end{array} \right) =\left( \begin{array} [c]{cc} 1 & 0\\ C/A & 1 \end{array} \right) \left( \begin{array} [c]{cc} A & 0\\ 0 & A^{-1} \end{array} \right) \left( \begin{array} [c]{cc} 1 & B/A\\ 0 & 1 \end{array} \right) , \label{9.5} \end{equation} and comparing (\ref{5.1}) and (\ref{9.1}) as well as using (\ref{8.5}) we know \begin{equation} F_{1}\left( A,B,C\right) =F_{1}\left( 1,0,C/A\right) F_{1}\left( A,0,0\right) F_{1}\left( 1,B/A,0\right) , \label{9.6} \end{equation} where \begin{align} F_{1}\left( 1,0,C/A\right) & =\frac{\sqrt{2+iC/A}}{2\sqrt{2}\pi}\int dxdp\left \vert \left( \begin{array} [c]{cc} 1 & 0\\ C/A & 1 \end{array} \right) \left( \begin{array} [c]{c} x\\ p \end{array} \right) \right \rangle \left \langle \left( \begin{array} [c]{c} x\\ p \end{array} \right) \right \vert \nonumber \\ & =\exp \left( \frac{iC}{2A}Q^{2}\right) , \label{9.7} \end{align} which is named the quadrature phase operator; and \begin{align} F_{1}\left( 1,B/A,0\right) & =\frac{\sqrt{2-iB/A}}{2\sqrt{2}\pi}\int dxdp\left \vert \left( \begin{array} [c]{cc} 1 & B/A\\ 0 & 1 \end{array} \right) \left( \begin{array} [c]{c} x\\ p \end{array} \right) \right \rangle \left \langle \left( \begin{array} [c]{c} x\\ p \end{array} \right) \right
\vert \nonumber \\ & =\exp \left( -\frac{iB}{2A}P^{2}\right) , \label{9.8} \end{align} which is named Fresnel propagator in free space; as well as \begin{align} F_{1}\left( A,0,0\right) & =\frac{\sqrt{A+A^{-1}}}{2\sqrt{2}\pi}\int dxdp\left \vert \left( \begin{array} [c]{cc} A & 0\\ 0 & A^{-1} \end{array} \right) \left( \begin{array} [c]{c} x\\ p \end{array} \right) \right \rangle \left \langle \left( \begin{array} [c]{c} x\\ p \end{array} \right) \right \vert \nonumber \\ & =\exp \left[ -\frac{i}{2}\left( QP+PQ\right) \ln A\right] , \label{9.9} \end{align} which is named scaling operator (squeezed operator \cite{squeezed1,squeezed2} ). When $A=D=0,B=1,C=-1,$ from (\ref{8.4}) we see \begin{align} F_{1}\left( 0,1,-1\right) & =\sqrt{-i}\int \frac{dxdp}{2\pi}\left \vert \left( \begin{array} [c]{cc} 0 & 1\\ -1 & 0 \end{array} \right) \left( \begin{array} [c]{c} x\\ p \end{array} \right) \right \rangle \left \langle \left( \begin{array} [c]{c} x\\ p \end{array} \right) \right \vert \nonumber \\ & =\exp \left[ -\left( a^{\dagger}a+\frac{1}{2}\right) \ln i\right] \nonumber \\ & =\exp \left[ -i\frac{\pi}{2}\left( a^{\dagger}a+\frac{1}{2}\right) \right] , \label{9.10} \end{align} which is named the Fourier operator, since it quantum mechanically transforms \cite{Fanshu} \begin{align} \exp \left[ i\frac{\pi}{2}\left( a^{\dagger}a+\frac{1}{2}\right) \right] Q\exp \left[ -i\frac{\pi}{2}\left( a^{\dagger}a+\frac{1}{2}\right) \right] & =P,\nonumber \\ \exp \left[ i\frac{\pi}{2}\left( a^{\dagger}a+\frac{1}{2}\right) \right] P\exp \left[ -i\frac{\pi}{2}\left( a^{\dagger}a+\frac{1}{2}\right) \right] & =-Q. 
\label{9.11} \end{align} \subsection{Alternate decompositions of GFO} Note that when $A=0$, the decomposition (\ref{9.1}) is not available, instead, from \begin{equation} \left( \begin{array} [c]{cc} A & B\\ C & D \end{array} \right) ^{-1}=\left( \begin{array} [c]{cc} D & -B\\ -C & A \end{array} \right) , \label{9.12} \end{equation} and (\ref{8.4}), (\ref{8.5}) and (\ref{9.1}) we have \begin{equation} F_{1}^{-1}\left( A,B,C\right) =\exp \left( -\frac{iC}{2D}Q^{2}\right) \exp \left[ -\frac{i}{2}\left( QP+PQ\right) \ln D\right] \exp \left( \frac{iB}{2D}P^{2}\right) , \label{9.13} \end{equation} it then follows \begin{equation} F_{1}\left( A,B,C\right) =\exp \left( -\frac{iB}{2D}P^{2}\right) \exp \left[ \frac{i}{2}\left( QP+PQ\right) \ln D\right] \exp \left( \frac{iC}{2D}Q^{2}\right) ,\text{ }D\neq0. \label{9.14} \end{equation} Besides, when we notice \begin{equation} \left( \begin{array} [c]{cc} A & B\\ C & D \end{array} \right) =\left( \begin{array} [c]{cc} 1 & 0\\ D/B & 1 \end{array} \right) \left( \begin{array} [c]{cc} B & 0\\ 0 & 1/B \end{array} \right) \left( \begin{array} [c]{cc} 0 & 1\\ -1 & 0 \end{array} \right) \left( \begin{array} [c]{cc} 1 & 0\\ A/B & 1 \end{array} \right) , \label{9.15} \end{equation} and \begin{equation} \left( \begin{array} [c]{cc} A & B\\ C & D \end{array} \right) =\left( \begin{array} [c]{cc} 1 & A/C\\ 0 & 1 \end{array} \right) \left( \begin{array} [c]{cc} -1/C & 0\\ 0 & -C \end{array} \right) \left( \begin{array} [c]{cc} 0 & 1\\ -1 & 0 \end{array} \right) \left( \begin{array} [c]{cc} 1 & D/C\\ 0 & 1 \end{array} \right) , \label{9.16} \end{equation} we have another decomposition for $B\neq0,$ \begin{align} F_{1}\left( A,B,C\right) & =\exp \left( \frac{iD}{2B}Q^{2}\right) \exp \left( -\frac{i}{2}\left( QP+PQ\right) \ln B\right) \nonumber \\ & \times \exp \left[ -\frac{i\pi}{2}\left( a^{\dagger}a+\frac{1}{2}\right) \right] \exp \left( \frac{iA}{2B}Q^{2}\right) ,\text{ } \label{9.17} \end{align} and for $C\neq0$ \begin{align} F_{1}\left( 
A,B,C\right) & =\exp \left( -\frac{iA}{2C}P^{2}\right) \exp \left[ -\frac{i}{2}\left( QP+PQ\right) \ln \left( \frac{-1}{C}\right) \right] \nonumber \\ & \times \exp \left[ -\frac{i\pi}{2}\left( a^{\dagger}a+\frac{1}{2}\right) \right] \exp \left( -\frac{iD}{2C}P^{2}\right) . \label{9.18} \end{align} \subsection{Some optical operator identities} For a special optical system with parameters $A=0,$ $C=-B^{-1},$ \begin{equation} \left( \begin{array} [c]{cc} 0 & B\\ -B^{-1} & D \end{array} \right) =\left( \begin{array} [c]{cc} 1 & 0\\ D/B & 1 \end{array} \right) \left( \begin{array} [c]{cc} B & 0\\ 0 & B^{-1} \end{array} \right) \left( \begin{array} [c]{cc} 0 & 1\\ -1 & 0 \end{array} \right) , \label{9.20} \end{equation} we have \begin{align} & \exp \left( -\frac{iB}{2D}P^{2}\right) \exp \left( \frac{i}{2}\left( QP+PQ\right) \ln D\right) \exp \left( \frac{-i}{2DB}Q^{2}\right) \nonumber \\ & =\exp \left( \frac{-iD}{2B}Q^{2}\right) \exp \left( \frac{i}{2}\left( QP+PQ\right) \ln B\right) \exp \left[ -\frac{i\pi}{2}\left( a^{\dagger }a+\frac{1}{2}\right) \right] . \label{9.21} \end{align} In particular, when $A=D=0,$ $C=-B^{-1},$ from \begin{equation} \left( \begin{array} [c]{cc} 0 & B\\ -\frac{1}{B} & 0 \end{array} \right) =\left( \begin{array} [c]{cc} B & 0\\ 0 & \frac{1}{B} \end{array} \right) \left( \begin{array} [c]{cc} 0 & 1\\ -1 & 0 \end{array} \right) , \label{9.22} \end{equation} we have \begin{align} & \exp \left[ -\frac{B^{2}-1}{2\left( B^{2}+1\right) }a^{\dagger2}\right] \exp \left[ \left( a^{\dagger}a+\frac{1}{2}\right) \ln \left( \frac {-2Bi}{B^{2}+1}\right) \right] \exp \left[ -\frac{B^{2}-1}{2\left( B^{2}+1\right) }a^{2}\right] \nonumber \\ & =\exp \left[ -\frac{i}{2}\left( QP+PQ\right) \ln B\right] \exp \left[ -i\frac{\pi}{2}\left( a^{\dagger}a+\frac{1}{2}\right) \right] .
\label{9.23} \end{align} Using the following relation \begin{equation} \left( \begin{array} [c]{cc} A & B\\ C & D \end{array} \right) =\left( \begin{array} [c]{cc} 1 & \left( A-1\right) /C\\ 0 & 1 \end{array} \right) \left( \begin{array} [c]{cc} 1 & 0\\ C & 1 \end{array} \right) \left( \begin{array} [c]{cc} 1 & \left( D-1\right) /C\\ 0 & 1 \end{array} \right) , \label{9.24} \end{equation} it then follows that \begin{equation} F_{1}\left( A,B,C\right) =\exp \left( -\frac{i\left( A-1\right) }{2C} P^{2}\right) \exp \left( \frac{iC}{2}Q^{2}\right) \exp \left( -\frac {i\left( D-1\right) }{2C}P^{2}\right) ,\text{ } \label{9.25} \end{equation} while from \begin{equation} \left( \begin{array} [c]{cc} A & B\\ C & D \end{array} \right) =\left( \begin{array} [c]{cc} 1 & 0\\ \left( D-1\right) /B & 1 \end{array} \right) \left( \begin{array} [c]{cc} 1 & B\\ 0 & 1 \end{array} \right) \left( \begin{array} [c]{cc} 1 & 0\\ \left( A-1\right) /B & 1 \end{array} \right) \label{9.26} \end{equation} we obtain \begin{equation} F_{1}\left( A,B,C\right) =\exp \left( \frac{i\left( D-1\right) }{2B} Q^{2}\right) \exp \left( -\frac{iB}{2}P^{2}\right) \exp \left( \frac{i\left( A-1\right) }{2B}Q^{2}\right) ,\text{ } \label{9.27} \end{equation} so we have \begin{align} & \exp \left( \frac{i\left( D-1\right) }{2B}Q^{2}\right) \exp \left( -\frac{iB}{2}P^{2}\right) \exp \left( \frac{i\left( A-1\right) }{2B} Q^{2}\right) \nonumber \\ & =\exp \left( -\frac{i\left( A-1\right) }{2C}P^{2}\right) \exp \left( \frac{iC}{2}Q^{2}\right) \exp \left( -\frac{i\left( D-1\right) }{2C} P^{2}\right) . \label{9.28} \end{align} In this section, based on the one-to-one correspondence between classical Fresnel transforms in phase space and quantum unitary transforms in state-vector space, the IWOP technique, and the coherent state representation, we have found a way to directly map the classical optical transformations to the optical operator method.
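The matrix factorizations (\ref{9.5}), (\ref{9.24}) and (\ref{9.26}) underlying these operator identities are easy to confirm numerically (they all reduce, via $AD-BC=1$, to the original ray matrix); a numpy sketch (our illustration, with arbitrarily chosen parameters):

```python
import numpy as np

A, B, C = 1.3, -0.7, 0.9
D = (1.0 + B * C) / A            # AD - BC = 1
M = np.array([[A, B], [C, D]])

# Eq. (9.5): lens, scaling, free propagation (valid for A != 0)
L95 = np.array([[1, 0], [C / A, 1]]) @ np.array([[A, 0], [0, 1 / A]]) \
      @ np.array([[1, B / A], [0, 1]])
# Eq. (9.24): propagation, lens, propagation (valid for C != 0)
L924 = np.array([[1, (A - 1) / C], [0, 1]]) @ np.array([[1, 0], [C, 1]]) \
       @ np.array([[1, (D - 1) / C], [0, 1]])
# Eq. (9.26): lens, propagation, lens (valid for B != 0)
L926 = np.array([[1, 0], [(D - 1) / B, 1]]) @ np.array([[1, B], [0, 1]]) \
       @ np.array([[1, 0], [(A - 1) / B, 1]])

for L in (L95, L924, L926):
    assert np.allclose(L, M)
```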
We have combined the usual optical transformation operators, such as the quadratic phase operators, the scaling operator, the Fourier transform operator and the free-space propagation operator, into a concise and unified form. The various decompositions of the Fresnel operator into exponential canonical operators are also obtained. \section{Quantum tomography and probability distribution for the Fresnel quadrature phase} In quantum optics theory all possible linear combinations of the quadratures $Q$ and $P$ of the oscillator field mode $a$ and $a^{\dagger}$ can be measured by homodyne measurement just by varying the phase of the local oscillator. The average of the random outcomes of the measurement, at a given local oscillator phase, is connected with the marginal distribution of the Wigner function (WF); thus the homodyne measurement of the light field permits the reconstruction of the WF of a quantum system by varying the phase shift between the two oscillators. In Ref. \cite{r1} Vogel and Risken pointed out that the probability distribution for the rotated quadrature phase $Q_{\theta }\equiv \lbrack a^{\dagger}\exp(i\theta)+a\exp(-i\theta)]/\sqrt{2},$ $\left[ a,a^{\dagger}\right] =1,$ which depends on only one angle $\theta$, can be expressed in terms of the WF, and that the reverse is also true (known as the Vogel-Risken relation), i.e., one can obtain the Wigner distribution by tomographic inversion of a set of measured probability distributions, $P_{\theta}\left( q_{\theta}\right) ,$ of the quadrature amplitude. Once the distributions $P_{\theta}\left( q_{\theta}\right) $ are obtained, one can use the inverse Radon transformation familiar in tomographic imaging to obtain the Wigner distribution and the density matrix. The Radon transform of the WF is closely related to the expectation values or densities formed with the eigenstates of the rotated canonical observables. The field of problems concerning the reconstruction of the density operator from such data is called quantum tomography.
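For the vacuum state the Vogel-Risken marginal can be checked directly: the Wigner function is an isotropic Gaussian, so the distribution of $Q_{\theta}$ is the same for every local-oscillator phase $\theta$. A small numerical sketch (assuming the standard convention $W(q,p)=e^{-q^{2}-p^{2}}/\pi$ for the vacuum):

```python
import math

# Vacuum-state Wigner function (hbar = 1 convention).
W = lambda q, p: math.exp(-q * q - p * p) / math.pi

def tomogram(x, theta, lo=-6.0, hi=6.0, n=400):
    """P_theta(x): integral of W over the line q*cos(theta) + p*sin(theta) = x,
    i.e. the Radon transform / homodyne marginal (midpoint rule)."""
    h = (hi - lo) / n
    c, s = math.cos(theta), math.sin(theta)
    # parametrize the line: (q, p) = x*(c, s) + t*(-s, c)
    return sum(W(x * c - t * s, x * s + t * c) * h
               for t in (lo + (k + 0.5) * h for k in range(n)))

# For the vacuum the marginal is phase-independent: exp(-x^2)/sqrt(pi).
for theta in (0.0, 0.7, 1.9):
    assert abs(tomogram(1.0, theta) - math.exp(-1.0) / math.sqrt(math.pi)) < 1e-7
print("vacuum tomogram is independent of the local-oscillator phase")
```

The same numerical marginal, taken over many angles, is exactly the data set that the inverse Radon transformation turns back into the Wigner distribution.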
(Optical tomographic imaging techniques derive two-dimensional data from a three-dimensional object to obtain a slice image of the internal structure and thus have the ability to peer inside the object noninvasively; such a slice image is called a tomogram.) The theoretical development in quantum tomography in the last decade has progressed in the direction of determining more physically relevant parameters of the density matrix from tomographic data \cite{r1,r2,r3,r4,r5}. \subsection{Relation between Fresnel transform and Radon transform of WF} In \cite{r6,r7} the Radon transform of the WF which depends on two continuous parameters is introduced; this has the advantage of conveniently associating quantum tomography theory with squeezed coherent state theory. In this subsection we want to derive relations between the Fresnel transform and the Radon transform of the WF in quantum optics in tomography theory. By extending the rotated quadrature phase $Q_{\theta}$ to the Fresnel quadrature phase \begin{equation} Q_{F}\equiv \left( s^{\ast}a+ra^{\dagger}+sa^{\dagger}+r^{\ast}a\right) /\sqrt{2}=F_{1}QF_{1}^{\dagger}, \label{10.1} \end{equation} where $s$ and $r$ are related to $(A,B,C,D)$ through (\ref{5.2}), \begin{equation} s=\frac{1}{2}\left[ A+D-i\left( B-C\right) \right] ,\;r=-\frac{1} {2}\left[ A-D+i\left( B+C\right) \right] ,\;|s|^{2}-|r|^{2}=1, \end{equation} we shall prove that the $(D,B)$ related Radon transform of the Wigner operator $\Delta \left( q,p\right) $ is just the pure state density operator $\left \vert q\right \rangle _{s,rs,r}\left \langle q\right \vert $ (named the tomographic density operator) formed with the eigenstates belonging to the quadrature $Q_{F}$ (here $\left \vert q\right \rangle _{s,r}=F_{1}\left \vert q\right \rangle $ and $Q$ is the coordinate operator), \begin{equation} F_{1}\left \vert q\right \rangle \left \langle q\right \vert F_{1}^{\dagger }=\left \vert q\right \rangle _{s,rs,r}\left \langle q\right \vert =\int_{-\infty
}^{\infty}dq^{\prime}dp^{\prime}\delta \left[ q-\left( Dq^{\prime} -Bp^{\prime}\right) \right] \Delta \left( q^{\prime},p^{\prime}\right) , \label{10.2} \end{equation} \begin{equation} D=\frac{1}{2}\left( s+s^{\ast}+r+r^{\ast}\right) ,\ B=\frac{1}{2i}\left( s^{\ast}-s+r^{\ast}-r\right) . \label{10.3} \end{equation} Since $F_{1}$ corresponds to the classical Fresnel transform in optical diffraction theory, Eq. (\ref{10.2}) indicates that the probability distribution for the Fresnel quadrature phase is the Radon transform of the WF \cite{fanhuoc2}. Proof: Firstly, from (\ref{5.5}) we see \begin{equation} F_{1}\left( s,r\right) aF_{1}^{\dag}\left( s,r\right) =s^{\ast }a+ra^{\dagger}, \label{10.4} \end{equation} so from $Q=\frac{a+a^{\dagger}}{\sqrt{2}},P=i\frac{a^{\dagger}-a}{\sqrt{2}},$ indeed we have \begin{equation} F_{1}QF_{1}^{\dagger}=F_{1}\frac{a+a^{\dagger}}{\sqrt{2}}F_{1}^{\dagger }=\left( s^{\ast}a+ra^{\dagger}+sa^{\dagger}+r^{\ast}a\right) /\sqrt {2}=Q_{F}. \label{10.5} \end{equation} Secondly, we can derive the explicit form of $\left \vert q\right \rangle _{s,r}.$ Starting from $s^{\ast}+r^{\ast}=D+iB,$ $s^{\ast}-r^{\ast}=A-iC,$ we set up the eigenvector equation \begin{equation} Q_{F}\left \vert q\right \rangle _{s,r}=\left( DQ-BP\right) \left \vert q\right \rangle _{s,r}=q\left \vert q\right \rangle _{s,r}, \label{10.6} \end{equation} and it follows that \begin{equation} \left \vert q\right \rangle _{s,r}=F_{1}\left( s,r\right) \left \vert q\right \rangle . \label{10.7} \end{equation} In the coordinate and momentum representations we have \begin{align} \left \langle q^{\prime}\right \vert Q_{F}\left \vert q\right \rangle _{s,r} & =\left( Dq^{\prime}+iB\frac{d}{dq^{\prime}}\right) \left \langle q^{\prime }\right \vert \left. q\right \rangle _{s,r}=q\left \langle q^{\prime}\right \vert \left.
q\right \rangle _{s,r},\label{10.9}\\ \left \langle p\right \vert Q_{F}\left \vert q\right \rangle _{s,r} & =\left( iD\frac{d}{dp}-Bp\right) \left \langle p\right \vert \left. q\right \rangle _{s,r}=q\left \langle p\right \vert \left. q\right \rangle _{s,r}. \label{10.10} \end{align} The normalizable solutions to (\ref{10.9}) and (\ref{10.10}) are \begin{align} \left \langle q^{\prime}\right \vert \left. q\right \rangle _{s,r} & =c\left( q\right) \exp \left[ \frac{iq^{\prime}\left( Dq^{\prime}-2q\right) } {2B}\right] ,\label{10.11}\\ \left \langle p\right \vert \left. q\right \rangle _{s,r} & =d\left( q\right) \exp \left[ \frac{ip\left( -Bp-2q\right) }{2D}\right] . \label{10.12} \end{align} Using the Fock representation of $\left \vert q\right \rangle $ and $\left \vert p\right \rangle $ in Eqs.(\ref{3.7}) and (\ref{3.8}), we obtain \begin{align} \left \vert q\right \rangle _{s,r} & =\int_{-\infty}^{\infty}dq^{\prime }\left \vert q^{\prime}\right \rangle \left \langle q^{\prime}\right \vert \left. q\right \rangle _{s,r}\nonumber \\ & =\pi^{-1/4}c\left( q\right) \sqrt{\frac{2B\pi}{B-iD}}\exp \left[ -\frac{q^{2}}{2B\left( B-iD\right) }+\frac{\sqrt{2}a^{\dagger}q}{D+iB} -\frac{D-iB}{D+iB}\frac{a^{\dagger2}}{2}\right] \left \vert 0\right \rangle , \label{10.13} \end{align} and \begin{align} \left \vert q\right \rangle _{s,r} & =\int_{-\infty}^{\infty}dp\left \vert p\right \rangle \left \langle p\right \vert \left. q\right \rangle _{s,r} \nonumber \\ & =d\left( q\right) \pi^{-1/4}\sqrt{\frac{2\pi D}{D+iB}}\exp \left[ -\frac{q^{2}}{2D\left( D+iB\right) }+\frac{\sqrt{2}a^{\dagger} q}{D+iB}-\frac{D-iB}{D+iB}\frac{a^{\dagger2}}{2}\right] \left \vert 0\right \rangle . \label{10.14} \end{align} Comparing Eq.(\ref{10.13}) with (\ref{10.14}) we see \begin{equation} \frac{c\left( q\right) }{d\left( q\right) }=\sqrt{\frac{D}{iB} }\exp \left[ \frac{iA}{2B}q^{2}-\frac{iCq^{2}}{2D}\right] .
\label{10.15} \end{equation} On the other hand, according to the orthonormality of $\left \vert q\right \rangle _{s,r}$, $_{s,r}\left \langle q^{\prime}\right. \left \vert q^{\prime \prime}\right \rangle _{s,r}=\delta \left( q^{\prime}-q^{\prime \prime }\right) ,$ we have \begin{equation} \left \vert c\left( q\right) \right \vert ^{2}=\frac{1}{2\pi B},\text{ \ }\left \vert d\left( q\right) \right \vert ^{2}=\frac{1}{2\pi D}. \label{10.16} \end{equation} Thus combining Eq.(\ref{10.15}) and (\ref{10.16}) we deduce \begin{equation} c\left( q\right) =\frac{1}{\sqrt{2\pi iB}}\exp \left[ \frac{iA}{2B}q^{2}\right] ,\text{ }d\left( q\right) =\frac{1}{\sqrt{2\pi D}} \exp \left[ \frac{iCq^{2}}{2D}\right] , \label{10.17} \end{equation} and \begin{equation} \left \vert q\right \rangle _{s,r}=\frac{\pi^{-1/4}}{\sqrt{D+iB}}\exp \left \{ -\frac{A-iC}{D+iB}\frac{q^{2}}{2}+\frac{\sqrt{2}q}{D+iB}a^{\dagger} -\frac{D-iB}{D+iB}\frac{a^{\dagger2}}{2}\right \} \left \vert 0\right \rangle , \label{10.18} \end{equation} or \begin{equation} \left \vert q\right \rangle _{s,r}\equiv \frac{\pi^{-1/4}}{\sqrt{s^{\ast }+r^{\ast}}}\exp \left \{ -\frac{s^{\ast}-r^{\ast}}{s^{\ast }+r^{\ast}}\frac{q^{2}}{2}+\frac{\sqrt{2}q}{s^{\ast}+r^{\ast}}a^{\dagger }-\frac{s+r}{s^{\ast}+r^{\ast}}\frac{a^{\dagger2}}{2}\right \} \left \vert 0\right \rangle . \label{10.19} \end{equation} It is easily seen that the $\left \vert q\right \rangle _{s,r}$ make up a complete set (so $\left \vert q\right \rangle _{s,r}$ can be called the tomography representation), \begin{equation} \int_{-\infty}^{\infty}dq\left \vert q\right \rangle _{s,r}{}_{s,r}\left \langle q\right \vert =1.
\label{10.20} \end{equation} Then according to the Weyl quantization scheme \cite{Weyl} \begin{equation} H\left( Q,P\right) =\int_{-\infty}^{\infty}dpdq\Delta \left( q,p\right) h\left( q,p\right) , \label{10.21} \end{equation} where $h\left( q,p\right) $ is the Weyl correspondence of $H\left( Q,P\right) ,$ \begin{equation} h\left( q,p\right) =2\pi \mathrm{Tr}\left[ H\left( Q,P\right) \Delta \left( q,p\right) \right] , \label{10.22} \end{equation} $\Delta \left( q,p\right) $ is the Wigner operator \cite{r13,r14}, \begin{equation} \Delta \left( q,p\right) =\frac{1}{2\pi}\int_{-\infty}^{\infty} due^{ipu}\left \vert q+\frac{u}{2}\right \rangle \left \langle q-\frac{u} {2}\right \vert , \label{10.23} \end{equation} and using (\ref{10.22}), (\ref{10.23}) and (\ref{10.11}) we know that the classical Weyl correspondence (Weyl image) of the projection operator $\left \vert q\right \rangle _{s,rs,r}\left \langle q\right \vert $ is \begin{align} & 2\pi \mathrm{Tr}\left[ \Delta \left( q^{\prime},p^{\prime}\right) \left \vert q\right \rangle _{s,rs,r}\left \langle q\right \vert \right] \nonumber \\ & =\left. _{s,r}\left \langle q\right \vert \right. \int_{-\infty}^{\infty }due^{ip^{\prime}u}\left \vert q^{\prime}+\frac{u}{2}\right \rangle \left \langle q^{\prime}-\frac{u}{2}\right \vert \left. q\right \rangle _{s,r}\nonumber \\ & =\frac{1}{2\pi B}\int_{-\infty}^{\infty}du\exp \left[ ip^{\prime}u+\frac {i}{B}u\left( q-Dq^{\prime}\right) \right] \nonumber \\ & =\delta \left[ q-\left( Dq^{\prime}-Bp^{\prime}\right) \right] , \label{10.24} \end{align} which means \begin{equation} \left \vert q\right \rangle _{s,rs,r}\left \langle q\right \vert =\int_{-\infty }^{\infty}dq^{\prime}dp^{\prime}\delta \left[ q-\left( Dq^{\prime} -Bp^{\prime}\right) \right] \Delta \left( q^{\prime},p^{\prime}\right) . \label{10.25} \end{equation} Combining Eqs. (\ref{10.4})-(\ref{10.7}) together we complete the proof.
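Equation (\ref{10.25}) can be spot-checked on the vacuum state, whose Wigner function is $W(q',p')=e^{-q'^{2}-p'^{2}}/\pi$: the $(D,B)$ Radon transform should reproduce $|{}_{s,r}\langle q|0\rangle|^{2}=e^{-q^{2}/(D^{2}+B^{2})}/\sqrt{\pi(D^{2}+B^{2})}$, which follows from (\ref{10.18}) together with $AD-BC=1$. A numerical sketch with illustrative $D,B$ values:

```python
import math

def radon_vacuum(q, D, B, lo=-7.0, hi=7.0, n=600):
    """Radon transform of the vacuum Wigner function along q = D q' - B p':
    integrate W over the line (midpoint rule on the line parameter t)."""
    r = math.hypot(D, B)
    c, s = D / r, -B / r               # unit normal: D q' - B p' = r (q' c + p' s)
    h = (hi - lo) / n
    tot = 0.0
    for k in range(n):
        t = lo + (k + 0.5) * h
        qp = (q / r) * c - t * s       # point on the line, tangential parameter t
        pp = (q / r) * s + t * c
        tot += math.exp(-qp * qp - pp * pp) / math.pi * h
    return tot / r                     # delta[q - r(...)] carries a 1/r Jacobian

D, B = 0.8, 1.5
for q in (0.0, 0.5, 1.3):
    exact = math.exp(-q * q / (D * D + B * B)) / math.sqrt(math.pi * (D * D + B * B))
    assert abs(radon_vacuum(q, D, B) - exact) < 1e-7
print("Radon transform of the vacuum WF matches the tomographic density")
```

The agreement for several values of $q$ illustrates (\ref{10.25}) acting on a concrete state rather than as an operator identity.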
Therefore, the probability distribution for the Fresnel quadrature phase is the Radon transform of the WF, \begin{equation} |\left \langle q\right \vert F_{1}^{\dagger}\left \vert \psi \right \rangle |^{2}=|_{s,r}\left \langle q\right \vert \left. \psi \right \rangle |^{2} =\int_{-\infty}^{\infty}dq^{\prime}dp^{\prime}\delta \left[ q-\left( Dq^{\prime}-Bp^{\prime}\right) \right] \left \langle \psi \right \vert \Delta \left( q^{\prime},p^{\prime}\right) \left \vert \psi \right \rangle , \label{10.26} \end{equation} so we name $\left \vert q\right \rangle _{s,rs,r}\left \langle q\right \vert $ \textbf{the tomographic density}. Moreover, the tomogram of a quantum state $\left \vert \psi \right \rangle $ is just the squared modulus of the wave function $_{s,r}\left \langle q\right \vert \left. \psi \right \rangle $; this new relation between quantum tomography and the optical Fresnel transform may help experimentalists figure out new approaches for generating tomograms. The introduction of $\left \vert q\right \rangle _{s,r}$ also brings convenience in obtaining the inverse of the Radon transformation; using (\ref{10.20}) we have \begin{equation} e^{-igQ_{F}}=\int_{-\infty}^{\infty}dq\left \vert q\right \rangle _{s,r}{} _{s,r}\left \langle q\right \vert e^{-igq}=\int_{-\infty}^{\infty} dqdp\Delta \left( q,p\right) e^{-ig\left( Dq-Bp\right) }.
\end{equation} Considering its right-hand side as a Fourier transformation, its reciprocal transform is \begin{align} \Delta \left( q,p\right) & =\frac{1}{4\pi^{2}}\int_{-\infty}^{\infty }dq^{\prime}\int_{-\infty}^{\infty}dg^{\prime}|g^{\prime}|\int_{0}^{\pi }d\varphi \left \vert q^{\prime}\right \rangle _{s,r}{}_{s,r}\left \langle q^{\prime}\right \vert \nonumber \\ & \times \exp \left[ -ig^{\prime}\left( \frac{q^{\prime}}{\sqrt{D^{2}+B^{2}} }-q\cos \varphi-p\sin \varphi \right) \right] , \end{align} where $g^{\prime}=g\sqrt{D^{2}+B^{2}},$ $\cos \varphi=\frac{D}{\sqrt {D^{2}+B^{2}}},$ $\sin \varphi=\frac{-B}{\sqrt{D^{2}+B^{2}}}.$ So once the distribution $|_{s,r}\left \langle q\right \vert \left. \psi \right \rangle |^{2}$ is obtained, one can use the inverse Radon transformation familiar in tomographic imaging to obtain the Wigner distribution. By analogy, we can conclude that the $(A,C)$ related Radon transform of $\Delta \left( q,p\right) $ is just the pure state density operator $\left \vert p\right \rangle _{s,rs,r}\left \langle p\right \vert $ formed with the eigenstates belonging to the conjugate quadrature of $Q_{F},$ \begin{align} F_{1}\left \vert p\right \rangle \left \langle p\right \vert F_{1}^{\dagger} & =\left \vert p\right \rangle _{s,rs,r}\left \langle p\right \vert =\int_{-\infty }^{\infty}dq^{\prime}dp^{\prime}\delta \left[ p-\left( Ap^{\prime} -Cq^{\prime}\right) \right] \Delta \left( q^{\prime},p^{\prime}\right) ,\nonumber \\ A & =\frac{1}{2}\left( s^{\ast}-r^{\ast}+s-r\right) ,\text{ \ }C=\frac {1}{2i}\left( s-r-s^{\ast}+r^{\ast}\right) .
\label{10.27} \end{align} Similarly, we find that for the momentum density, \begin{equation} F_{1}\left \vert p\right \rangle \left \langle p\right \vert F_{1}^{\dagger }=\left \vert p\right \rangle _{s,rs,r}\left \langle p\right \vert =\int_{-\infty }^{\infty}dq^{\prime}dp^{\prime}\delta \left[ p-\left( Ap^{\prime} -Cq^{\prime}\right) \right] \Delta \left( q^{\prime},p^{\prime}\right) , \label{10.28} \end{equation} where \begin{equation} F_{1}\left \vert p\right \rangle =\left \vert p\right \rangle _{s,r}=\frac {\pi^{-1/4}}{\sqrt{A-iC}}\exp \left \{ -\frac{D+iB}{A-iC}\frac{p^{2}}{2} +\frac{\sqrt{2}ip}{A-iC}a^{\dagger}+\frac{A+iC}{A-iC}\frac{a^{\dagger2}} {2}\right \} \left \vert 0\right \rangle . \label{10.29} \end{equation} As an application of the relation (\ref{10.2}), recalling that $F_{1}(r,s)$ makes up a faithful representation of the symplectic group \cite{r10}, it then follows from (\ref{10.2}) that \begin{align} & F_{1}^{\prime}(r^{\prime},s^{\prime})F_{1}(r,s)\left \vert q\right \rangle \left \langle q\right \vert F_{1}^{\dagger}(r,s)F_{1}^{\prime \dagger}(r^{\prime },s^{\prime})=\left \vert q\right \rangle _{s^{\prime \prime},r^{\prime \prime }\text{ }s^{\prime \prime},r^{\prime \prime}}\left \langle q\right \vert \nonumber \\ & =\int \int_{-\infty}^{\infty}dq^{\prime}dp^{\prime}\delta \left[ q-\left( \left( B^{\prime}C+DD^{\prime}\right) q^{\prime}-\left( AB^{\prime }+BD^{\prime}\right) p^{\prime}\right) \right] \Delta \left( q^{\prime },p^{\prime}\right) . \label{10.30} \end{align} In this way a complicated Radon transform of tomography can be viewed as the sequential operation of two Fresnel transforms. This confirms that the continuous Radon transformation corresponds to the symplectic group transformation \cite{r6,r7}; this is an advantage of introducing the Fresnel operator. The group property of Fresnel operators helps us to analyze complicated Radon transforms in terms of sequential Fresnel transformations.
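The composite direction coefficients appearing in (\ref{10.30}) are exactly entries of the product of the two ray-transfer matrices; a quick numerical check with illustrative entries (unit determinants enforced):

```python
def mul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A, B, C, D = 2.0, 3.0, 1.0, 2.0          # AD - BC = 1
Ap, Bp, Cp = 1.0, 2.0, 0.5               # primed system
Dp = (1.0 + Bp * Cp) / Ap                # enforce A'D' - B'C' = 1

Mpp = mul([[A, B], [C, D]], [[Ap, Bp], [Cp, Dp]])

# The delta in (10.30) reads q - (D'' q' - B'' p') with
# D'' = B'C + DD' and B'' = AB' + BD', i.e. the (2,2) and (1,2) entries:
assert abs(Mpp[1][1] - (Bp * C + D * Dp)) < 1e-12
assert abs(Mpp[0][1] - (A * Bp + B * Dp)) < 1e-12
print("composite Radon direction = entries of the matrix product")
```

This is the matrix-optics content of the statement that sequential Fresnel transforms compose a single Radon direction.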
The new relation may help experimentalists devise new approaches to realizing tomography. \subsection{Another new theorem for calculating the tomogram} In this subsection, we introduce a new theorem: the tomogram of a density operator $\rho$ is equal to the marginal integration of the classical Weyl correspondence function of $F^{\dagger}\rho F,$ where $F$ is the Fresnel operator. Multiplying both sides of Eq. (\ref{10.25}) by a density matrix $\rho$ and then performing the trace, noting the Wigner function $W(p,q)=\mathtt{Tr} \left[ \rho \Delta(p,q)\right] ,$ one can see \begin{align} & \mathtt{Tr}\left[ {\displaystyle \iint \nolimits_{-\infty}^{\infty}} dq^{\prime}dp^{\prime}\delta \left[ q-\left( Dq^{\prime}-Bp^{\prime}\right) \right] \Delta \left( q^{\prime},p^{\prime}\right) \rho \right] \nonumber \\ & =\mathtt{Tr}\left( \left \vert q\right \rangle _{s,rs,r}\left \langle q\right \vert \rho \right) =_{s,r}\left \langle q\right \vert \rho \left \vert q\right \rangle _{s,r}=\left \langle q\right \vert F^{\dagger}\rho F\left \vert q\right \rangle \nonumber \\ & = {\displaystyle \iint \nolimits_{-\infty}^{\infty}} dq^{\prime}dp^{\prime}\delta \left[ q-\left( Dq^{\prime}-Bp^{\prime}\right) \right] W(p^{\prime},q^{\prime}). \label{j1} \end{align} The right hand side of Eq. (\ref{j1}) is commonly defined as the tomogram of quantum states in the $(B,D)$ direction, so in our view the calculation of the tomogram in the $(B,D)$ direction is ascribed to calculating \begin{equation} \left \langle q\right \vert F^{\dagger}\rho F\left \vert q\right \rangle \equiv \Xi. \label{j14} \end{equation} This is a concise and neat formula. Similarly, the tomogram in the $(A,C)$ direction is ascribed to $\left \langle p\right \vert F^{\dagger}\rho F\left \vert p\right \rangle $.
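For $F=1$ (i.e. $s=1$, $r=0$, so $D=1$, $B=0$) the theorem reduces to the familiar marginal property of the Weyl function. A numerical sketch for a coherent state $\rho=|\alpha\rangle\langle\alpha|$, assuming the standard conventions that its Weyl function is $h(p,q)=2\pi W(p,q)=2e^{-(q-q_{0})^{2}-(p-p_{0})^{2}}$ with $q_{0}=\sqrt{2}\,\mathrm{Re}\,\alpha$, $p_{0}=\sqrt{2}\,\mathrm{Im}\,\alpha$:

```python
import math

alpha = complex(0.6, -0.9)
q0, p0 = math.sqrt(2) * alpha.real, math.sqrt(2) * alpha.imag

def marginal(q, lo=-8.0, hi=8.0, n=800):
    """(1/2pi) * integral over p of the Weyl function h(p,q) = 2*pi*W(p,q)."""
    step = (hi - lo) / n
    total = sum(2.0 * math.exp(-(q - q0) ** 2 - (p - p0) ** 2) * step
                for p in (lo + (k + 0.5) * step for k in range(n)))
    return total / (2 * math.pi)

# Tomogram in the (B,D) = (0,1) direction is |<q|alpha>|^2 = exp(-(q-q0)^2)/sqrt(pi).
for q in (-0.3, 0.4, 1.1):
    assert abs(marginal(q) - math.exp(-(q - q0) ** 2) / math.sqrt(math.pi)) < 1e-7
print("tomogram equals the marginal of the Weyl function")
```

For general $(B,D)$ the same computation goes through with $h$ replaced by the Weyl function of $F^{\dagger}\rho F$, which is the content of the theorem.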
According to the Weyl correspondence rule \begin{equation} H\left( Q,P\right) =\iint_{-\infty}^{\infty}dpdq\, \mathfrak{h}(p,q)\Delta(p,q), \label{15} \end{equation} and the Weyl ordering form of $\Delta(p,q)$ \begin{equation} \Delta(p,q)= \genfrac{}{}{0pt}{}{:}{:} \delta \left( q-Q\right) \delta \left( p-P\right) \genfrac{}{}{0pt}{}{:}{:} , \label{16} \end{equation} where the symbol $ \genfrac{}{}{0pt}{}{:}{:} \ \genfrac{}{}{0pt}{}{:}{:} $ denotes Weyl ordering, the classical correspondence of a Weyl ordered operator $ \genfrac{}{}{0pt}{}{:}{:} \mathfrak{h}(Q,P) \genfrac{}{}{0pt}{}{:}{:} $ is obtained just by replacing $Q\rightarrow q,P\rightarrow p$ in $\mathfrak{h},$ i.e., \begin{equation} \genfrac{}{}{0pt}{}{:}{:} \mathfrak{h}(Q,P) \genfrac{}{}{0pt}{}{:}{:} =\iint_{-\infty}^{\infty}dpdq\mathfrak{h}(p,q)\Delta(p,q). \label{j17} \end{equation} Let the classical Weyl correspondence of $F^{\dagger}\rho F$ be $h(p,q),$ \[ F^{\dagger}\rho F=\iint_{-\infty}^{\infty}dpdq\,h(p,q)\Delta(p,q); \] then using (\ref{j14}) and (\ref{10.23}) we have \begin{align} \Xi & =\left \langle q\right \vert F^{\dagger}\rho F\left \vert q\right \rangle \nonumber \\ & =\left \langle q\right \vert {\displaystyle \iint} dpdq^{\prime}h(p,q^{\prime})\Delta \left( p,q^{\prime}\right) \left \vert q\right \rangle \nonumber \\ & = {\displaystyle \iint} dpdq^{\prime}h(p,q^{\prime})\int_{-\infty}^{+\infty}\frac{dv}{2\pi}e^{ipv}\left \langle q\right. \left. q^{\prime}+\frac{v}{2}\right \rangle \left \langle q^{\prime}-\frac{v}{2}\right. \left. q\right \rangle \nonumber \\ & = {\displaystyle \iint} dpdq^{\prime}h(p,q^{\prime})\int_{-\infty}^{+\infty}\frac{dv}{2\pi}e^{ipv}\delta \left( q^{\prime}-q+\frac{v}{2}\right) \delta \left( q^{\prime}-q-\frac{v}{2}\right) \nonumber \\ & =\frac{1}{\pi} {\displaystyle \iint} dpdq^{\prime}h(p,q^{\prime})e^{i2p\left( q^{\prime}-q\right) }\delta \left( 2q^{\prime}-2q\right) =\int_{-\infty}^{\infty}\frac{dp}{2\pi}h(p,q).
\label{j18} \end{align} Thus we reach a theorem: the tomogram of a density operator $\rho$ is equal to the marginal integration of the classical Weyl correspondence $h(p,q)$ of $F^{\dagger}\rho F,$ where $F$ is the Fresnel operator, expressed by \begin{equation} \mathtt{Tr}\left[ \rho \left \vert q\right \rangle _{s,rs,r}\left \langle q\right \vert \right] =\int_{-\infty}^{\infty}\frac{dp}{2\pi}h(p,q), \label{19} \end{equation} or \begin{equation} \mathtt{Tr}\left[ \rho \left \vert p\right \rangle _{s,rs,r}\left \langle p\right \vert \right] =\int_{-\infty}^{\infty}\frac{dq}{2\pi}h(p,q). \label{20} \end{equation} In this way the relationship between the tomogram of a density operator $\rho$ and the classical Weyl function of the Fresnel transformed $\rho$ is established. \section{Two-mode GFO and Its Application} For two-dimensional optical Fresnel transforms (see (\ref{2.5})) in the $x-y$ plane one may naturally think that the 2-mode GFO is just the direct product of two independent 1-mode GFOs with the same $(A,B,C,D)$ matrix. However, here we present another 2-mode Fresnel operator which can not only lead to the usual 2-dimensional optical Fresnel transforms in appropriate quantum mechanical representations, but also provide us with some new classical transformations (we name them entangled Fresnel transformations).
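The `direct product' structure can be made concrete at the kernel level: assuming the one-mode kernel $\mathcal{K}_{1}^{M}(x,x')=\frac{1}{\sqrt{2\pi iB}}\exp[\frac{i}{2B}(Ax^{2}-2xx'+Dx'^{2})]$ (cf. (\ref{2.12})), the two-dimensional kernel obtained below in (\ref{11.7}) factorizes into two one-dimensional ones. A numerical spot-check:

```python
import cmath, math

A, B, C = 2.0, 3.0, 1.0
D = (1.0 + B * C) / A        # AD - BC = 1

def K1(x, xp):
    """One-mode Fresnel (Collins) kernel with matrix [A,B;C,D]."""
    return cmath.exp(1j * (A * x * x - 2 * x * xp + D * xp * xp) / (2 * B)) \
        / cmath.sqrt(2j * math.pi * B)

def K2(x1p, x2p, x1, x2):
    """Two-mode kernel of Eq. (11.7), with eta = x1 + i x2, eta' = x1' + i x2'."""
    eta, etap = complex(x1, x2), complex(x1p, x2p)
    phase = (A * abs(eta) ** 2
             - (eta * etap.conjugate() + eta.conjugate() * etap).real
             + D * abs(etap) ** 2)
    return cmath.exp(1j * phase / (2 * B)) / (2j * math.pi * B)

x1, x2, x1p, x2p = 0.3, -1.1, 0.8, 0.25
assert abs(K2(x1p, x2p, x1, x2) - K1(x1, x1p) * K1(x2, x2p)) < 1e-12
print("two-dimensional kernel = product of two one-dimensional kernels")
```

The factorization uses $|\eta|^{2}=x_{1}^{2}+x_{2}^{2}$ and $\eta\eta'^{\ast}+\eta^{\ast}\eta'=2(x_{1}x_{1}'+x_{2}x_{2}')$, i.e. Eq. (\ref{11.8}).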
\subsection{Two-mode GFO gained via coherent state representation} Similar in spirit to the single-mode case, we introduce the two-mode GFO $F_{2}\left( r,s\right) $ through the following 2-mode coherent state representation \cite{fancommun1} \begin{equation} F_{2}\left( r,s\right) =s\int \frac{d^{2}z_{1}d^{2}z_{2}}{\pi^{2}}\left \vert sz_{1}+rz_{2}^{\ast},rz_{1}^{\ast}+sz_{2}\right \rangle \left \langle z_{1},z_{2}\right \vert , \label{11.1} \end{equation} which indicates that $F_{2}\left( r,s\right) $ is a mapping of the classical symplectic transform $\left( z_{1},z_{2}\right) \rightarrow \left( sz_{1}+rz_{2}^{\ast},rz_{1}^{\ast}+sz_{2}\right) $ in phase space. Concretely, the ket in (\ref{11.1}) is \begin{equation} \left \vert sz_{1}+rz_{2}^{\ast},rz_{1}^{\ast}+sz_{2}\right \rangle \equiv \left \vert sz_{1}+rz_{2}^{\ast}\right \rangle _{1}\otimes \left \vert rz_{1}^{\ast}+sz_{2}\right \rangle _{2},\text{ }ss^{\ast}-rr^{\ast}=1, \label{11.2} \end{equation} where $s$ and $r$ are complex and satisfy the unimodularity condition.
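One way to see that the classical map $(z_{1},z_{2})\rightarrow(sz_{1}+rz_{2}^{\ast},rz_{1}^{\ast}+sz_{2})$ underlying (\ref{11.1}) is symplectic is that, under $ss^{\ast}-rr^{\ast}=1$, it preserves the invariant $|z_{1}|^{2}-|z_{2}|^{2}$ (the cross terms cancel in the difference). A minimal numerical check with illustrative values:

```python
import cmath, math

# Choose s, r obeying the unimodularity condition ss* - rr* = 1.
r = 0.7 * cmath.exp(1j * 1.1)
s = cmath.exp(-1j * 0.3) * math.sqrt(1.0 + abs(r) ** 2)

z1, z2 = complex(0.4, -1.2), complex(-0.9, 0.35)
w1 = s * z1 + r * z2.conjugate()
w2 = r * z1.conjugate() + s * z2

# The map preserves |z1|^2 - |z2|^2.
assert abs((abs(w1) ** 2 - abs(w2) ** 2) - (abs(z1) ** 2 - abs(z2) ** 2)) < 1e-12
print("symplectic invariant |z1|^2 - |z2|^2 preserved")
```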
Using the IWOP technique we perform the integral in (\ref{11.1}) and obtain \begin{align} F_{2}\left( r,s\right) & =s\int \frac{1}{\pi^{2}}d^{2}z_{1}d^{2}z_{2} \colon \exp[-|s|^{2}\left( |z_{1}|^{2}+|z_{2}|^{2}\right) -r^{\ast} sz_{1}z_{2}-rs^{\ast}z_{1}^{\ast}z_{2}^{\ast}\nonumber \\ & +\left( sz_{1}+rz_{2}^{\ast}\right) a_{1}^{\dagger}+\left( rz_{1}^{\ast }+sz_{2}\right) a_{2}^{\dagger}+z_{1}^{\ast}a_{1}+z_{2}^{\ast}a_{2} -a_{1}^{\dagger}a_{1}-a_{2}^{\dagger}a_{2}]\colon \nonumber \\ & =\frac{1}{s^{\ast}}\exp \left( \frac{r}{s^{\ast}}a_{1}^{\dagger} a_{2}^{\dagger}\right) \colon \exp \left[ \left( \frac{1}{s^{\ast}}-1\right) \left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}\right) \right] \colon \exp \left( -\frac{r^{\ast}}{s^{\ast}}a_{1}a_{2}\right) \nonumber \\ & =\exp \left( \frac{r}{s^{\ast}}a_{1}^{\dagger}a_{2}^{\dagger}\right) \exp[\left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}+1\right) \ln \left( s^{\ast}\right) ^{-1}]\exp \left( -\frac{r^{\ast}}{s^{\ast}}a_{1} a_{2}\right) . \label{11.3} \end{align} Thus $F_{2}\left( r,s\right) $ induces the transform \begin{equation} F_{2}\left( r,s\right) a_{1}F_{2}^{-1}\left( r,s\right) =s^{\ast} a_{1}-ra_{2}^{\dagger},\quad F_{2}\left( r,s\right) a_{2}F_{2}^{-1}\left( r,s\right) =s^{\ast}a_{2}-ra_{1}^{\dagger}, \label{11.4} \end{equation} so $F_{2}$ is actually a general 2-mode squeezing operator. Recall that (\ref{3.15}) implies the intrinsic relation between the EPR entangled state and the two-mode squeezed state; this has a physical implementation: in the output of a parametric down-conversion the idler mode and the signal mode constitute a two-mode squeezed state and are meanwhile entangled with each other in the frequency domain. We therefore naturally select the entangled state representation to relate $F_{2}\left( r,s\right) $ to the two-dimensional GFT.
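Consistency of (\ref{11.4}) with the canonical commutation relations follows from $|s|^{2}-|r|^{2}=1$: the transformed modes $b_{1}=s^{\ast}a_{1}-ra_{2}^{\dagger}$, $b_{2}=s^{\ast}a_{2}-ra_{1}^{\dagger}$ satisfy $[b_{1},b_{1}^{\dagger}]=|s|^{2}-|r|^{2}$ and $[b_{1},b_{2}]=0$. A small sketch evaluating these coefficient combinations numerically:

```python
import cmath, math

# Pick s, r obeying ss* - rr* = 1.
r = 0.8 * cmath.exp(1j * 0.4)
s = cmath.exp(1j * 0.2) * math.sqrt(1.0 + abs(r) ** 2)

# [b1, b1^dag] with b1 = s* a1 - r a2^dag, b1^dag = s a1^dag - r* a2:
# by bilinearity only [a1, a1^dag] = 1 and [a2^dag, a2] = -1 survive.
c11 = s.conjugate() * s * 1 + (-r) * (-r.conjugate()) * (-1)

# [b1, b2] with b2 = s* a2 - r a1^dag:
# surviving terms are s*(-r)[a1, a1^dag] and (-r)s*[a2^dag, a2].
c12 = s.conjugate() * (-r) * 1 + (-r) * s.conjugate() * (-1)

assert abs(c11 - 1) < 1e-12 and abs(c12) < 1e-12
print("canonical commutators preserved by the Bogoliubov map")
```

This is the two-mode analogue of the single-mode statement that $F_{1}$ implements a Bogoliubov (squeezing) transformation.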
Letting $\left \vert g\right \rangle =F_{2}\left( r,s\right) \left \vert f\right \rangle ,$ and then projecting $\left \vert g\right \rangle $ onto the entangled state $\left \langle \eta^{\prime}\right \vert $ defined by (\ref{3.11}) and using the completeness relation (\ref{3.13}) of $\left \vert \eta \right \rangle $, we obtain \begin{align} g\left( \eta^{\prime}\right) & \equiv \left \langle \eta^{\prime}\right \vert \left. g\right \rangle =\left \langle \eta^{\prime}\right \vert F_{2}\left( r,s\right) \left \vert f\right \rangle \nonumber \\ & =\int \frac{d^{2}\eta}{\pi}\left \langle \eta^{\prime}\right \vert F_{2}\left( r,s\right) \left \vert \eta \right \rangle \left \langle \eta \right. \left \vert f\right \rangle \equiv \int d^{2}\eta \mathcal{K} _{2}^{\left( r,s\right) }\left( \eta^{\prime},\eta \right) f\left( \eta \right) . \label{11.5} \end{align} Then using the overcompleteness relation of the coherent state, and \begin{align*} \left \langle z_{1}^{\prime},z_{2}^{\prime}\right \vert F_{2}\left( r,s\right) \left \vert z_{1},z_{2}\right \rangle & =\frac{1}{s^{\ast}}\exp \left \{ -\frac{1}{2}(\left \vert z_{1}\right \vert ^{2}+\left \vert z_{2}\right \vert ^{2}+\left \vert z_{1}^{\prime}\right \vert ^{2}+\left \vert z_{2}^{\prime }\right \vert ^{2})\right. \\ & \left. +\frac{r}{s^{\ast}}z_{1}^{\prime \ast}z_{2}^{\prime \ast} -\frac{r^{\ast}}{s^{\ast}}z_{1}z_{2}+\frac{1}{s^{\ast}}\left( z_{1} ^{\prime \ast}z_{1}+z_{2}^{\prime \ast}z_{2}\right) \right \} , \end{align*} we can calculate the integral kernel \begin{align} \mathcal{K}_{2}^{\left( r,s\right) }\left( \eta^{\prime},\eta \right) & =\frac{1}{\pi}\left \langle \eta^{\prime}\right \vert F_{2}\left( r,s\right) \left \vert \eta \right \rangle \nonumber \\ & =\int \frac{d^{2}z_{1}d^{2}z_{2}d^{2}z_{1}^{\prime}d^{2}z_{2}^{\prime}} {\pi^{5}}\left \langle \eta^{\prime}\right \vert \left.
z_{1}^{\prime} ,z_{2}^{\prime}\right \rangle \left \langle z_{1}^{\prime},z_{2}^{\prime }\right \vert F_{2}\left( r,s\right) \left \vert z_{1},z_{2}\right \rangle \left \langle z_{1},z_{2}\right. \left \vert \eta \right \rangle \nonumber \\ & =\frac{1}{s^{\ast}}\int \frac{d^{2}z_{1}d^{2}z_{2}d^{2}z_{1}^{\prime} d^{2}z_{2}^{\prime}}{\pi^{5}}\exp \left[ -\left( \left \vert z_{1}\right \vert ^{2}+\left \vert z_{2}\right \vert ^{2}+\left \vert z_{1}^{\prime}\right \vert ^{2}+\left \vert z_{2}^{\prime}\right \vert ^{2}\right) -\frac{1}{2}\left( \left \vert \eta^{\prime}\right \vert ^{2}+\left \vert \eta \right \vert ^{2}\right) \right] \nonumber \\ & \times \exp \left[ -\frac{r^{\ast}}{s^{\ast}}z_{1}z_{2}+z_{1}^{\ast} z_{2}^{\ast}+\eta z_{1}^{\ast}+\frac{1}{s^{\ast}}\left( z_{1}^{\prime \ast }z_{1}+z_{2}^{\prime \ast}z_{2}\right) +\frac{r}{s^{\ast}}z_{1}^{\prime \ast }z_{2}^{\prime \ast}+z_{1}^{\prime}z_{2}^{\prime}+\eta^{\prime \ast} z_{1}^{\prime}-\eta^{\prime}z_{2}^{\prime}-\eta^{\ast}z_{2}^{\ast}\right] \nonumber \\ & =\frac{1}{\left( -r-s+r^{\ast}+s^{\ast}\right) \pi}\exp \left[ \frac{\left( -s+r^{\ast}\right) \left \vert \eta \right \vert ^{2}-\left( r+s\right) \left \vert \eta^{\prime}\right \vert ^{2}+\eta \eta^{\prime \ast }+\eta^{\ast}\eta^{\prime}}{-r-s+r^{\ast}+s^{\ast}}-\frac{1}{2}\left( \left \vert \eta^{\prime}\right \vert ^{2}+\left \vert \eta \right \vert ^{2}\right) \right] . \label{11.6} \end{align} Using the relation between $s,r$ and $\left( A,B,C,D\right) $ in Eq.(\ref{5.2}) Eq. 
(\ref{11.6}) becomes \begin{equation} \mathcal{K}_{2}^{\left( r,s\right) }\left( \eta^{\prime},\eta \right) =\frac{1}{2iB\pi}\exp \left[ \frac{i}{2B}\left( A\left \vert \eta \right \vert ^{2}-\left( \eta \eta^{\prime \ast}+\eta^{\ast}\eta^{\prime}\right) +D\left \vert \eta^{\prime}\right \vert ^{2}\right) \right] \equiv \mathcal{K}_{2}^{M}\left( \eta^{\prime},\eta \right) , \label{11.7} \end{equation} where the superscript $M$ only means that the parameters of $\mathcal{K}_{2}^{M}$ are $\left[ A,B;C,D\right] $, and the subscript $2$ means the two-dimensional kernel. Eq. (\ref{11.7}) has a form similar to (\ref{2.12}) except that it is complex. Taking $\eta_{1}=x_{1},$ $\eta_{2}=x_{2}$ and $\eta_{1}^{\prime }=x_{1}^{\prime}$, $\eta_{2}^{\prime}=x_{2}^{\prime}$, we have \begin{equation} \mathcal{K}_{2}^{M}\left( \eta^{\prime},\eta \right) =\mathcal{K}_{2} ^{M}\left( x_{1}^{\prime},x_{2}^{\prime};x_{1},x_{2}\right) =\mathcal{K} _{1}^{M}\left( x_{1},x_{1}^{\prime}\right) \otimes \mathcal{K}_{1}^{M}\left( x_{2},x_{2}^{\prime}\right) . \label{11.8} \end{equation} This shows that $F_{2}\left( r,s\right) $ is really the counterpart of the 2-dimensional GFT. Taking the matrix element of $F_{2}\left( r,s\right) $ in the $\left \vert \xi \right \rangle $ representation which is conjugate to $\left \vert \eta \right \rangle $, we obtain the 2-dimensional GFT in its `frequency domain', i.e., \begin{align} \left \langle \xi^{\prime}\right \vert F_{2}\left( r,s\right) \left \vert \xi \right \rangle & =\int \frac{d^{2}\eta^{\prime}d^{2}\eta}{\pi^{2} }\left \langle \xi^{\prime}\right \vert \left. \eta^{\prime}\right \rangle \left \langle \eta^{\prime}\right \vert F_{2}\left( r,s\right) \left \vert \eta \right \rangle \left \langle \eta \right \vert \left.
\xi \right \rangle \nonumber \\ & =\frac{1}{8iB\pi}\int \frac{d^{2}\eta^{\prime}d^{2}\eta}{\pi^{2}} \mathcal{K}_{2}^{\left( r,s\right) }\left( \eta^{\prime},\eta \right) \exp \left( \frac{\xi^{\prime \ast}\eta^{\prime}-\xi^{\prime}\eta^{\prime \ast }+\xi \eta^{\ast}-\xi^{\ast}\eta}{2}\right) \nonumber \\ & =\frac{1}{2i\left( -C\right) \pi}\exp \left[ \frac{i}{2\left( -C\right) }\left( D\left \vert \xi \right \vert ^{2}+A\left \vert \xi^{\prime}\right \vert ^{2}-\xi^{\prime \ast}\xi-\xi^{\prime}\xi^{\ast}\right) \right] \equiv \mathcal{K}_{2}^{N}\left( \xi^{\prime},\xi \right) , \label{11.9} \end{align} where the superscript $N$ means that this transform kernel corresponds to the parameter matrix $N=\left[ D,-C,-B,A\right] $. The two-mode GFO also abides by the group multiplication rule. Using the IWOP technique and (\ref{11.1}) we obtain \begin{align} & F_{2}\left( r,s\right) F_{2}\left( r^{\prime},s^{\prime}\right) \nonumber \\ & =ss^{\prime}\int \frac{d^{2}z_{1}d^{2}z_{2}d^{2}z_{1}^{\prime}d^{2} z_{2}^{\prime}}{\pi^{4}}\colon \exp \{-|s|^{2}\left( |z_{1}|^{2}+|z_{2} |^{2}\right) -r^{\ast}sz_{1}z_{2}\nonumber \\ & -rs^{\ast}z_{1}^{\ast}z_{2}^{\ast}-\frac{1}{2}[|z_{1}^{\prime}|^{2} +|z_{2}^{\prime}|^{2}+|s^{\prime}z_{1}^{\prime}+r^{\prime}z_{2}^{\prime \ast }|^{2}+|r^{\prime}z_{1}^{\prime \ast}+s^{\prime}z_{2}^{\prime}|^{2}]\nonumber \\ & +\left( sz_{1}+rz_{2}^{\ast}\right) a_{1}^{\dagger}+\left( rz_{1}^{\ast }+sz_{2}\right) a_{2}^{\dagger}+z_{1}^{\prime \ast}a_{1}+z_{2}^{\prime \ast }a_{2}\nonumber \\ & +z_{1}^{\ast}\left( s^{\prime}z_{1}^{\prime}+r^{\prime}z_{2}^{\prime \ast }\right) +z_{2}^{\ast}\left( r^{\prime}z_{1}^{\prime \ast}+s^{\prime} z_{2}^{\prime}\right) -a_{1}^{\dagger}a_{1}-a_{2}^{\dagger}a_{2}\} \colon \nonumber \\ & =\frac{1}{s^{\prime \prime \ast}}\exp \left( \frac{r^{\prime \prime} }{s^{\prime \prime \ast}}a_{1}^{\dagger}a_{2}^{\dagger}\right) \colon \exp \left \{ \left( \frac{1}{s^{\prime \prime \ast}}-1\right) \left(
a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}\right) \right \} \colon \exp \left( -\frac{r^{\prime \prime \ast}}{s^{\prime \prime \ast}}a_{1}a_{2}\right) \nonumber \\ & =F_{2}\left( r^{\prime \prime},s^{\prime \prime}\right) , \label{11.10} \end{align} where $\left( r^{\prime \prime},s^{\prime \prime}\right) $ are given by Eq.(\ref{7.2}) or (\ref{7.4}). Therefore, (\ref{11.10}) is a faithful representation of the multiplication rule for ray transfer matrices in the sense of \textit{Matrix Optics}. \subsection{Quantum Optical ABCD Law for two-mode GFO} Next we extend the quantum optical ABCD law to the two-mode case. Operating with $F_{2}(r,s)$ on the two-mode number state $\left \vert m,n\right \rangle $ and using the overlap between the coherent state and the number state, i.e. \begin{equation} \left \langle z_{1},z_{2}\right. \left \vert m,n\right \rangle =\frac {z_{1}^{\ast m}z_{2}^{\ast n}}{\sqrt{m!n!}}\exp \left[ -\frac{1}{2}\left( \left \vert z_{1}\right \vert ^{2}+\left \vert z_{2}\right \vert ^{2}\right) \right] , \label{11.11} \end{equation} and the integral formula \cite{Fanshu} \begin{equation} H_{m,n}\left( \xi,\eta \right) =(-1)^{n}e^{\xi \eta}\int \frac{d^{2}z}{\pi }z^{n}z^{\ast m}e^{-\left \vert z\right \vert ^{2}+\xi z-\eta z^{\ast}}, \label{11.12} \end{equation} we can calculate \begin{align} & F_{2}(r,s)\left \vert m,n\right \rangle \nonumber \\ & =s\int \frac{d^{2}z_{1}d^{2}z_{2}}{\pi^{2}}\left \vert sz_{1}+rz_{2}^{\ast },rz_{1}^{\ast}+sz_{2}\right \rangle \left \langle z_{1},z_{2}\right.
\left \vert m,n\right \rangle \nonumber \\ & =\frac{s}{\sqrt{m!n!}}\int \frac{d^{2}z_{1}d^{2}z_{2}}{\pi^{2}}z_{1}^{\ast m}z_{2}^{\ast n}\exp \left[ -\left \vert s\right \vert ^{2}\left( \left \vert z_{1}\right \vert ^{2}+\left \vert z_{2}\right \vert ^{2}\right) \right] \nonumber \\ & \times \exp \left[ -sr^{\ast}z_{1}z_{2}-rs^{\ast}z_{1}^{\ast}z_{2}^{\ast}+\left( sz_{1}+rz_{2}^{\ast}\right) a_{1}^{\dagger}+\left( rz_{1}^{\ast}+sz_{2}\right) a_{2}^{\dagger}\right] \left \vert 00\right \rangle \nonumber \\ & =\frac{s}{\left \vert s\right \vert ^{2m+2}\sqrt{m!n!}}\int \frac{d^{2}z_{2}}{\pi}z_{2}^{\ast n}\left( sa_{1}^{\dagger}-sr^{\ast}z_{2}\right) ^{m}\exp \left( -\left \vert z_{2}\right \vert ^{2}+\frac{1}{s^{\ast}}a_{2}^{\dagger}z_{2}+\frac{ra_{1}^{\dagger}a_{2}^{\dagger}}{s^{\ast}}\right) \left \vert 00\right \rangle \nonumber \\ & =\frac{r^{\ast n}}{s^{\ast n+1}\sqrt{m!n!}}H_{m,n}\left[ \frac{a_{1}^{\dagger}}{s^{\ast}},\frac{a_{2}^{\dagger}}{r^{\ast}}\right] \exp \left( \frac{ra_{1}^{\dagger}a_{2}^{\dagger}}{s^{\ast}}\right) \left \vert 00\right \rangle , \label{11.13} \end{align} where $H_{m,n}\left( \epsilon,\varepsilon \right) $ is the two-variable Hermite polynomial \cite{r6,r7}, shown in (\ref{4.20}) and (\ref{4.21}). Using Eqs.(\ref{5.2}) and (\ref{8.9}), we recast Eq.(\ref{11.13}) into \begin{align} F_{2}(r,s)\left \vert m,n\right \rangle & =\frac{-2/\left( C+iD\right) }{\left( q_{1}+i\right) \sqrt{m!n!}}\left( -\frac{q_{1}^{\ast}+i}{q_{1}+i}\frac{C-iD}{C+iD}\right) ^{n}\nonumber \\ & \times H_{m,n}\left[ -\frac{2a_{1}^{\dagger}/\left( C+iD\right) }{q_{1}+i},\frac{2a_{2}^{\dagger}/\left( C-iD\right) }{q_{1}^{\ast}+i}\right] \exp \left( -\frac{q_{1}-i}{q_{1}+i}a_{1}^{\dagger}a_{2}^{\dagger}\right) \left \vert 00\right \rangle .
\label{11.14} \end{align} Notice that the multiplication rule of $F_{2}\left( r,s\right) $ in Eq.(\ref{11.10}) is equivalent to \begin{equation} F_{2}\left( A^{\prime},B^{\prime},C^{\prime}\right) F_{2}\left( A,B,C\right) =F_{2}\left( A^{\prime \prime},B^{\prime \prime},C^{\prime \prime}\right) , \label{11.15} \end{equation} where $\left( A^{\prime},B^{\prime},C^{\prime}\right) ,\left( A,B,C\right) $ and $\left( A^{\prime \prime},B^{\prime \prime},C^{\prime \prime}\right) $ are related to each other by Eq.(\ref{8.5}). Next we directly use the GFO to derive the ABCD rule in quantum optics for Gaussian beams in the two-mode case. According to Eq.(\ref{11.13}) and Eq. (\ref{11.15}) we obtain \begin{align} & F_{2}\left( A^{\prime},B^{\prime},C^{\prime}\right) F_{2}\left( A,B,C\right) \left \vert m,n\right \rangle \nonumber \\ & =\frac{r^{\prime \prime \ast n}}{s^{\prime \prime \ast n+1}\sqrt{m!n!}}H_{m,n}\left[ \frac{a_{1}^{\dagger}}{s^{\prime \prime \ast}},\frac{a_{2}^{\dagger}}{r^{\prime \prime \ast}}\right] \exp \left[ \frac{r^{\prime \prime}a_{1}^{\dagger}a_{2}^{\dagger}}{s^{\prime \prime \ast}}\right] \left \vert 00\right \rangle . \label{11.16} \end{align} In a similar way to the derivation of Eq. (\ref{11.14}), we can simplify Eq.
(\ref{11.16}) as \begin{align} & F_{2}\left( A^{\prime},B^{\prime},C^{\prime}\right) F_{2}\left( A,B,C\right) \left \vert m,n\right \rangle \nonumber \\ & =\frac{-2/\left( C^{\prime \prime}+iD^{\prime \prime}\right) }{\left( q_{2}+i\right) \sqrt{m!n!}}\left( -\frac{q_{2}^{\ast}+i}{q_{2}+i}\frac{C^{\prime \prime}-iD^{\prime \prime}}{C^{\prime \prime}+iD^{\prime \prime}}\right) ^{n}\nonumber \\ & \times H_{m,n}\left[ -\frac{2a_{1}^{\dagger}/\left( C^{\prime \prime}+iD^{\prime \prime}\right) }{q_{2}+i},\frac{2a_{2}^{\dagger}/\left( C^{\prime \prime}-iD^{\prime \prime}\right) }{q_{2}^{\ast}+i}\right] \exp \left[ -\frac{q_{2}-i}{q_{2}+i}a_{1}^{\dagger}a_{2}^{\dagger}\right] \left \vert 00\right \rangle , \label{11.17} \end{align} where the relation between $q_{2}$ and $q_{1}$ is determined by Eq.(\ref{8.11}), which resembles Eq.(\ref{8.3}); this is just the new ABCD law for the two-mode case in quantum optics. \subsection{Optical operators derived by decomposing GFO} \subsubsection{GFO as quadratic combinations of canonical operators} In order to obtain the quadratic combinations of canonical operators, let us first derive an operator identity. Noting $Q_{i}=(a_{i}+a_{i}^{\dagger})/\sqrt{2},$ $P_{i}=(a_{i}-a_{i}^{\dagger})/(\sqrt{2}\mathtt{i}),$ and Eqs.(\ref{3.12}) and (\ref{3.13}), we can prove the operator identity \begin{align} e^{\frac{\lambda}{2}\left[ \left( Q_{1}-Q_{2}\right) ^{2}+\left( P_{1}+P_{2}\right) ^{2}\right] } & =\int \frac{d^{2}\eta}{\pi} e^{\frac{\lambda}{2}\left[ \left( Q_{1}-Q_{2}\right) ^{2}+\left( P_{1}+P_{2}\right) ^{2}\right] }\left \vert \eta \right \rangle \left \langle \eta \right \vert \nonumber \\ & =\frac{1}{1-\lambda}\colon \exp \left[ \frac{2\lambda}{1-\lambda} K_{+}\right] \colon, \label{11.18} \end{align} where we have set \begin{equation} K_{+}\equiv \frac{1}{4}[\left( Q_{1}-Q_{2}\right) ^{2}+\left( P_{1}+P_{2}\right) ^{2}].
\label{K1} \end{equation} When $B=0$, $A=1,$ $C\rightarrow C/A,$ $D=1,$ using Eq.(\ref{11.3}) we see that \begin{align} F_{2}\left( 1,0,C/A\right) & =\frac{2}{2-iC/A}\colon \exp \left \{ \frac{iC/A}{2-iC/A}\left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}-a_{1}^{\dagger}a_{2}^{\dagger}-a_{1}a_{2}\right) \right \} \colon \nonumber \\ & =\exp \left \{ \frac{iC}{A}K_{+}\right \} , \label{11.19} \end{align} which corresponds to the quadratic phase operator in the single-mode case. In a similar way, using (\ref{3.18}) and (\ref{3.19}) we can derive another operator identity \begin{align} e^{\frac{\lambda}{2}\left[ \left( Q_{1}+Q_{2}\right) ^{2}+\left( P_{1}-P_{2}\right) ^{2}\right] } & =\int \frac{d^{2}\xi}{\pi} e^{\frac{\lambda}{2}\left[ \left( Q_{1}+Q_{2}\right) ^{2}+\left( P_{1}-P_{2}\right) ^{2}\right] }\left \vert \xi \right \rangle \left \langle \xi \right \vert \nonumber \\ & =\frac{1}{1-\lambda}\colon \exp \left[ \frac{2\lambda}{1-\lambda} K_{-}\right] \colon, \label{11.20} \end{align} where \begin{equation} K_{-}=\frac{1}{4}[\left( Q_{1}+Q_{2}\right) ^{2}+\left( P_{1}-P_{2}\right) ^{2}]. \label{K2} \end{equation} It then follows from Eqs.(\ref{11.3}) and (\ref{11.20}) that \begin{align} F_{2}\left( 1,B/A,0\right) & =\frac{2}{2+iB/A}\colon \exp \left \{ -\frac{iB/A}{2+iB/A}\left( a_{1}^{\dagger}a_{2}^{\dagger}+a_{2}^{\dagger}a_{2}+a_{1}^{\dagger}a_{1}+a_{1}a_{2}\right) \right \} \colon \nonumber \\ & =\frac{2}{2+iB/A}\colon \exp \left \{ -\frac{2iB/A}{2+iB/A}K_{-}\right \} \colon \nonumber \\ & =\exp \left \{ -\frac{iB}{A}K_{-}\right \} , \label{11.21} \end{align} which corresponds to the Fresnel propagator in free space (the single-mode case). In particular, when $B=C=0$ and $D=A^{-1},$ Eq.
(\ref{11.3}) becomes \begin{equation} F_{2}\left( A,0,0\right) =\operatorname{sech}\lambda \colon \exp \left[ -a_{1}^{\dagger}a_{2}^{\dagger}\tanh \lambda+\left( \operatorname{sech}\lambda-1\right) \left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}\right) +a_{1}a_{2}\tanh \lambda \right] \colon, \label{11.22} \end{equation} where $\frac{A-A^{-1}}{A+A^{-1}}=\tanh \lambda,$ $A=e^{\lambda}.$ Eq. (\ref{11.22}) is just the two-mode squeezing operator, \begin{align} F_{2}\left( A,0,0\right) & =\exp \left[ i\left( Q_{1}P_{2}+Q_{2}P_{1}\right) \ln A\right] \equiv \exp \left[ -2K_{0}\ln A\right] ,\label{11.22b}\\ K_{0} & \equiv-\frac{\mathtt{i}}{2}\left( Q_{1}P_{2}+Q_{2}P_{1}\right) , \label{K0} \end{align} which actually squeezes the entangled state $\left \vert \xi \right \rangle $ (its conjugate state is $\left \vert \eta \right \rangle $), \begin{equation} F_{2}\left( A,0,0\right) \left \vert \xi \right \rangle =\int \frac{d^{2}\xi^{\prime}}{\pi A}\left \vert \xi^{\prime}/A\right \rangle \left \langle \xi^{\prime}\right \vert \left. \xi \right \rangle =\frac{1}{A}\left \vert \xi/A\right \rangle . \label{11.23} \end{equation} Using the decomposition (\ref{9.5}) of the matrix and combining Eqs.(\ref{11.19}), (\ref{11.21}) and (\ref{11.22b}) together, we see that \begin{align} F_{2}\left( A,B,C\right) & =F_{2}\left( 1,0,C/A\right) F_{2}\left( A,0,0\right) F_{2}\left( 1,B/A,0\right) \nonumber \\ & =\exp \left \{ \frac{iC}{A}K_{+}\right \} \exp \left \{ -2K_{0}\ln A\right \} \exp \left \{ -\frac{iB}{A}K_{-}\right \} . \label{11.24} \end{align} This is the two-mode quadratic canonical operator representation of $F_{2}\left( A,B,C\right) $.
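The factorization in Eq. (\ref{11.24}) mirrors, at the level of ray transfer matrices, the decomposition of $[A,B;C,D]$ into a quadratic-phase, a magnifying and a free-propagation factor. As a quick numerical cross-check (a minimal sketch, not part of the derivation; the parameter values are arbitrary, with $AD-BC=1$ enforced):

```python
import numpy as np

# Sample ray-transfer matrix with unit determinant (AD - BC = 1 is enforced).
A, B, C = 2.0, 0.7, 1.5
D = (1.0 + B * C) / A

lens = np.array([[1.0, 0.0], [C / A, 1.0]])        # quadratic-phase factor, cf. exp(iC/A K+)
magnifier = np.array([[A, 0.0], [0.0, 1.0 / A]])   # squeezing factor, cf. exp(-2 K0 ln A)
free = np.array([[1.0, B / A], [0.0, 1.0]])        # free-propagation factor, cf. exp(-iB/A K-)

M = lens @ magnifier @ free
assert np.allclose(M, [[A, B], [C, D]])
```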
To prove Eq.(\ref{11.24}), using (\ref{11.23}) and (\ref{3.20}) we see \begin{align} \left \langle \eta \right \vert F_{2}\left( A,B,C\right) \left \vert \xi \right \rangle & =\exp \left( \frac{iC}{2A}\left \vert \eta \right \vert ^{2}-\frac{iB}{2A}\left \vert \xi \right \vert ^{2}\right) \left \langle \eta \right \vert \int \frac{d^{2}\xi^{\prime}}{A\pi}\left \vert \xi^{\prime}/A\right \rangle \left \langle \xi^{\prime}\right \vert \left. \xi \right \rangle \nonumber \\ \ & =\frac{1}{A}\exp \left( \frac{iC}{2A}\left \vert \eta \right \vert ^{2}-\frac{iB}{2A}\left \vert \xi \right \vert ^{2}\right) \left \langle \eta \right \vert \left. \xi/A\right \rangle \nonumber \\ \ & =\frac{1}{2A}\exp \left( \frac{iC}{2A}\left \vert \eta \right \vert ^{2}-\frac{iB}{2A}\left \vert \xi \right \vert ^{2}\right) \exp \left[ \frac{1}{2A}\left( \eta^{\ast}\xi-\eta \xi^{\ast}\right) \right] . \label{11.25} \end{align} It then follows \begin{align} \left \langle \eta^{\prime}\right \vert F_{2}\left( A,B,C\right) \left \vert \eta \right \rangle & =\int \frac{d^{2}\xi}{\pi}\left \langle \eta^{\prime}\right \vert F_{2}\left \vert \xi \right \rangle \left \langle \xi \right \vert \left. \eta \right \rangle \nonumber \\ & =\frac{1}{2iB}\exp \left[ \frac{i}{2B}\left( A\left \vert \eta \right \vert ^{2}-\left( \eta \eta^{\prime \ast}+\eta^{\ast}\eta^{\prime}\right) +D\left \vert \eta^{\prime}\right \vert ^{2}\right) \right] \nonumber \\ & \equiv \mathcal{K}_{2}^{M}\left( \eta^{\prime},\eta \right) , \label{11.26} \end{align} which is just the transform kernel of a 2-dimensional GFT; hence the definition given in (\ref{11.24}) is true.
Note that the quadratic combinations in Eqs.(\ref{K1}), (\ref{K2}) and (\ref{K0}) of the four canonical operators $\left( Q_{1},Q_{2};P_{1},P_{2}\right) $ obey the commutation relations $\left[ K_{+},K_{-}\right] =2K_{0},$ $\left[ K_{0},K_{\pm}\right] =\pm K_{\pm},$\ so $F_{2}\left( A,B,C\right) $ involves an SU(2) Lie algebra structure (this structure is also realized by $Q^{2}/2$, $P^{2}/2$ and $-i\left( QP+PQ\right) /2$, which have been used in constructing $F_{1}\left( A,B,C\right) $). \subsubsection{Alternate decompositions of GFO and new optical operator identities} When $A=D=0,B=1,C=-1,$ from Eq.(\ref{11.3}) we see \begin{align} F_{2}\left( 0,1,-1\right) & =\exp \left[ -\left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}+1\right) \ln i\right] \nonumber \\ & =\exp \left[ -\mathtt{i}\frac{\pi}{2}\left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}+1\right) \right] \equiv \mathcal{F}, \label{11.27} \end{align} which may also be called the Fourier operator, since it induces the quantum mechanical transforms \cite{Fanshu} \begin{equation} \mathcal{F}^{\dagger}Q_{i}\mathcal{F}=P_{i},\text{ }\mathcal{F}^{\dagger}P_{i}\mathcal{F}=-Q_{i}. \label{11.28} \end{equation} It then follows that \begin{equation} \mathcal{F}^{\dagger}K_{+}\mathcal{F}=K_{-}. \label{11.29} \end{equation} On the other hand, in order to obtain the decomposition of $F_{2}\left( A,B,C\right) $ for $A=0,$ similar to deriving Eq.(\ref{9.14}), we have \begin{equation} F_{2}\left( A,B,C\right) =\exp \left[ -\frac{iB}{D}K_{-}\right] \exp \left[ 2K_{0}\ln D\right] \exp \left[ \frac{iC}{D}K_{+}\right] ,\text{ for }D\neq0.
\label{11.30} \end{equation} While for $B\neq0$ or $C\neq0$, using Eqs.(\ref{9.15}) and (\ref{9.16}) we have another decomposition of $F_{2}\left( A,B,C\right) $, i.e., \begin{equation} F_{2}\left( A,B,C\right) =\exp \left[ \frac{iD}{B}K_{+}\right] \exp \left[ -2K_{0}\ln B\right] \mathcal{F}\exp \left[ \frac{iA}{B}K_{+}\right] ,\text{ }B\neq0, \label{11.31} \end{equation} and \begin{equation} F_{2}\left( A,B,C\right) =\exp \left[ -\frac{iA}{C}K_{-}\right] \exp \left[ -2K_{0}\ln \frac{-1}{C}\right] \mathcal{F}\exp \left[ -\frac{iD}{C}K_{-}\right] ,C\neq0. \label{11.32} \end{equation} In addition, noticing Eqs.(\ref{9.26}) and (\ref{9.24}), we can rewrite Eqs.(\ref{11.31}) and (\ref{11.32}) as follows \begin{equation} F_{2}\left( A,B,C\right) =\exp \left[ \frac{i}{B}\left( D-1\right) K_{+}\right] \exp \left[ -iBK_{-}\right] \exp \left[ \frac{i}{B}\left( A-1\right) K_{+}\right] ,B\neq0, \label{11.33} \end{equation} and \begin{equation} F_{2}\left( A,B,C\right) =\exp \left[ \frac{-i}{C}\left( A-1\right) K_{-}\right] \exp \left[ iCK_{+}\right] \exp \left[ \frac{-i}{C}\left( D-1\right) K_{-}\right] ,C\neq0, \label{11.34} \end{equation} respectively. Next, for some optical systems frequently used in physical optics, we derive some new entangled optical operator identities. For a special optical system with the parameters $A=0,$ $C=-B^{-1}$ (see (\ref{9.20})), which corresponds to the Fourier transform system, we have \begin{equation} \exp \left[ -\frac{iB}{D}K_{-}\right] \exp \left[ 2K_{0}\ln D\right] \exp \left[ -\frac{i}{BD}K_{+}\right] =\exp \left[ \frac{iD}{B}K_{+}\right] \exp \left[ -2K_{0}\ln B\right] \mathcal{F}. \label{11.35} \end{equation} In particular, when $A=D=0,$ $C=-B^{-1}$, i.e., Eq.(\ref{9.22}), corresponding to the ideal spectrum analyzer, we have \begin{equation} \exp \left[ -iBK_{-}\right] \exp \left[ -\frac{i}{B}K_{+}\right] \exp \left[ -iBK_{-}\right] =\exp \left[ -2K_{0}\ln B\right] \mathcal{F}.
\label{11.36} \end{equation} When $B=0,$ $D=A^{-1},$ \[ \left( \begin{array} [c]{cc} A & 0\\ C & A^{-1} \end{array} \right) =\left( \begin{array} [c]{cc} 1 & 0\\ C/A & 1 \end{array} \right) \left( \begin{array} [c]{cc} A & 0\\ 0 & A^{-1} \end{array} \right) , \] which corresponds to an image system, another operator identity is given by \begin{equation} \exp \left[ -2K_{0}\ln A\right] \exp \left[ iACK_{+}\right] =\exp \left[ \frac{iC}{A}K_{+}\right] \exp \left[ -2K_{0}\ln A\right] . \label{11.37} \end{equation} When $C=0,$ $A=D^{-1},$ \[ \left( \begin{array} [c]{cc} D^{-1} & B\\ 0 & D \end{array} \right) =\left( \begin{array} [c]{cc} 1 & B/D\\ 0 & 1 \end{array} \right) \left( \begin{array} [c]{cc} D^{-1} & 0\\ 0 & D \end{array} \right) , \] which corresponds to the far-foci system, \begin{equation} \exp \left[ \frac{iD}{B}K_{+}\right] \exp \left[ -2K_{0}\ln B\right] \mathcal{F}\exp \left[ \frac{i}{BD}K_{+}\right] =\exp \left[ -\frac{iB}{D}K_{-}\right] \exp \left[ 2K_{0}\ln D\right] . \label{11.38} \end{equation} When $D=0,$ $C=-B^{-1},$ corresponding to the Fresnel transform system, \[ \left( \begin{array} [c]{cc} A & B\\ -B^{-1} & 0 \end{array} \right) =\left( \begin{array} [c]{cc} B & 0\\ 0 & B^{-1} \end{array} \right) \left( \begin{array} [c]{cc} 0 & 1\\ -1 & 0 \end{array} \right) \left( \begin{array} [c]{cc} 1 & 0\\ A/B & 1 \end{array} \right) , \] we have \begin{equation} \exp \left[ -\frac{i}{AB}K_{+}\right] \exp \left[ -2K_{0}\ln A\right] \exp \left[ -\frac{iB}{A}K_{-}\right] =\exp \left[ -2K_{0}\ln B\right] \mathcal{F}\exp \left[ \frac{iA}{B}K_{+}\right] . \label{11.39} \end{equation} The GFO can unify those optical operators in the two-mode case. Various decompositions of the GFO into exponential canonical operators, corresponding to decompositions of the ray transfer matrix $\left[ A,B,C,D\right] ,$ are also derived. In our derivation, the entangled state representation proves very useful.
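The matrix decompositions displayed above for the image, far-foci and Fresnel-transform systems can be verified directly; a minimal numerical sketch (the sample parameter values are arbitrary):

```python
import numpy as np

def mat(a, b, c, d):
    return np.array([[a, b], [c, d]], dtype=float)

A, B, C, D = 1.7, 0.4, -0.9, 2.3   # independent sample values for each identity below

# Image system: B = 0, D = 1/A.
assert np.allclose(mat(A, 0, C, 1 / A), mat(1, 0, C / A, 1) @ mat(A, 0, 0, 1 / A))

# Far-foci system: C = 0, A = 1/D.
assert np.allclose(mat(1 / D, B, 0, D), mat(1, B / D, 0, 1) @ mat(1 / D, 0, 0, D))

# Fresnel transform system: D = 0, C = -1/B.
assert np.allclose(mat(A, B, -1 / B, 0),
                   mat(B, 0, 0, 1 / B) @ mat(0, 1, -1, 0) @ mat(1, 0, A / B, 1))
```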
\subsection{Quantum tomography and probability distribution for the Fresnel quadrature phase---two-mode entangled case} In section 8 we have found that under the Fresnel transformation the pure position density $\left \vert q\right \rangle \left \langle q\right \vert $\ becomes the tomographic density $\left \vert q\right \rangle _{s,rs,r}\left \langle q\right \vert $, which is just the Radon transform of the Wigner operator $\Delta \left( q,p\right) .$ In this section we want to generalize the above conclusion to the two-mode entangled case. Here we shall prove \begin{equation} F_{2}\left \vert \eta \right \rangle \left \langle \eta \right \vert F_{2}^{\dagger}=\left \vert \eta \right \rangle _{s,rs,r}\left \langle \eta \right \vert =\pi \int d^{2}\gamma d^{2}\sigma \delta \left( \eta_{2}-D\sigma_{2}+B\gamma_{1}\right) \delta \left( \eta_{1}-D\sigma_{1}-B\gamma_{2}\right) \Delta \left( \sigma,\gamma \right) , \label{9} \end{equation} i.e., we show that $\left \vert \eta \right \rangle _{s,rs,r}\left \langle \eta \right \vert $ is just the Radon transform of the entangled Wigner operator $\Delta \left( \sigma,\gamma \right) .$ Similar in spirit to the single-mode case, operating with $F_{2}\left( r,s\right) $ on the entangled state representation $\left \vert \eta \right \rangle $ we see \begin{align} F_{2}\left( r,s\right) \left \vert \eta \right \rangle & =\frac{1}{s^{\ast}}\int \frac{d^{2}z_{1}d^{2}z_{2}}{\pi^{2}}\exp \left[ \frac{r}{s^{\ast}}a_{1}^{\dagger}a_{2}^{\dagger}+\left( \frac{1}{s^{\ast}}-1\right) \left( a_{1}^{\dagger}z_{1}+a_{2}^{\dagger}z_{2}\right) -\frac{r^{\ast}}{s^{\ast}}z_{1}z_{2}\right] \left \vert z_{1},z_{2}\right \rangle \left \langle z_{1},z_{2}\right \vert \left.
\eta \right \rangle \nonumber \\ & =\frac{1}{s^{\ast}}\int \frac{d^{2}z_{1}d^{2}z_{2}}{\pi^{2}}\exp \left[ -\left \vert z_{1}\right \vert ^{2}+\frac{1}{s^{\ast}}\left( a_{1}^{\dagger }-r^{\ast}z_{2}\right) z_{1}+\left( \eta+z_{2}^{\ast}\right) z_{1}^{\ast }\right] \nonumber \\ & \times \exp \left[ -\frac{1}{2}\left \vert \eta \right \vert ^{2}-\left \vert z_{2}\right \vert ^{2}+\frac{1}{s^{\ast}}z_{2}a_{2}^{\dagger}-\eta^{\ast} z_{2}^{\ast}+\frac{r}{s^{\ast}}a_{1}^{\dagger}a_{2}^{\dagger}\right] \left \vert 00\right \rangle \nonumber \\ & =\frac{1}{s^{\ast}}\int \frac{d^{2}z_{2}}{\pi}\exp \left[ -\frac{s^{\ast }+r^{\ast}}{s^{\ast}}\left \vert z_{2}\right \vert ^{2}+\frac{1}{s^{\ast} }\left( a_{2}^{\dagger}-\eta r^{\ast}\right) z_{2}+\frac{1}{s^{\ast}}\left( a_{1}^{\dagger}-s^{\ast}\eta^{\ast}\right) z_{2}^{\ast}\right] \nonumber \\ & \times \exp \left[ +\frac{\eta}{s^{\ast}}a_{1}^{\dagger}+\frac{r}{s^{\ast} }a_{1}^{\dagger}a_{2}^{\dagger}-\frac{1}{2}\left \vert \eta \right \vert ^{2}\right] \left \vert 00\right \rangle \nonumber \\ & =\frac{1}{s^{\ast}+r^{\ast}}\exp \left \{ -\allowbreak \frac{s^{\ast} -r^{\ast}}{2\left( s^{\ast}+r^{\ast}\right) }\left \vert \eta \right \vert ^{2}+\allowbreak \frac{\eta a_{1}^{\dagger}}{s^{\ast}+r^{\ast}}\allowbreak -\allowbreak \frac{\eta^{\ast}a_{2}^{\dagger}}{s^{\ast}+r^{\ast}}+\frac {s+r}{s^{\ast}+r^{\ast}}\allowbreak a_{1}^{\dagger}a_{2}^{\dagger}\right \} \left \vert 00\right \rangle \equiv \left \vert \eta \right \rangle _{s,r}, \label{22} \end{align} or \begin{equation} \left \vert \eta \right \rangle _{s,r}=\frac{1}{\allowbreak D+iB}\exp \left \{ -\frac{\allowbreak A-iC}{2\left( \allowbreak D+iB\right) }\left \vert \eta \right \vert ^{2}+\frac{\eta a_{1}^{\dagger}}{\allowbreak D+iB}-\frac {\eta^{\ast}a_{2}^{\dagger}}{\allowbreak D+iB}+\frac{\allowbreak D-iB}{\allowbreak D+iB}a_{1}^{\dagger}a_{2}^{\dagger}\right \} \left \vert 00\right \rangle , \label{23} \end{equation} where we have used the integration formula 
\begin{equation} \int \frac{d^{2}z}{\pi}\exp \left( \zeta \left \vert z\right \vert ^{2}+\xi z+\eta z^{\ast}\right) =-\frac{1}{\zeta}e^{-\frac{\xi \eta}{\zeta}},\text{Re}\left( \zeta \right) <0. \label{24} \end{equation} Noticing the completeness relation and the orthogonality of $\left \vert \eta \right \rangle $ we immediately derive \begin{equation} \int \frac{d^{2}\eta}{\pi}\left \vert \eta \right \rangle _{s,rs,r}\left \langle \eta \right \vert =1,\text{ }_{s,r}\left \langle \eta \right \vert \left. \eta^{\prime}\right \rangle _{s,r}=\pi \delta \left( \eta-\eta^{\prime}\right) \delta \left( \eta^{\ast}-\eta^{\prime \ast}\right) , \label{25} \end{equation} so $\left \vert \eta \right \rangle _{s,r}$ is a generalized entangled state representation with the completeness relation (\ref{25}). From (\ref{23}) we can see that \begin{align} a_{1}\left \vert \eta \right \rangle _{s,r} & =\left( \frac{\eta}{\allowbreak D+iB}+\frac{\allowbreak D-iB}{\allowbreak D+iB}a_{2}^{\dagger}\right) \left \vert \eta \right \rangle _{s,r},\label{26}\\ a_{2}\left \vert \eta \right \rangle _{s,r} & =\left( -\frac{\eta^{\ast}}{\allowbreak D+iB}+\frac{\allowbreak D-iB}{\allowbreak D+iB}a_{1}^{\dagger}\right) \left \vert \eta \right \rangle _{s,r}, \label{27} \end{align} so we have the eigen-equations for $\left \vert \eta \right \rangle _{s,r}$ as follows \begin{align} \left[ D\left( Q_{1}-Q_{2}\right) -B\left( P_{1}-P_{2}\right) \right] \left \vert \eta \right \rangle _{s,r} & =\sqrt{2}\eta_{1}\left \vert \eta \right \rangle _{s,r},\text{ }\label{28}\\ \left[ B\left( Q_{1}+Q_{2}\right) +D\left( P_{1}+P_{2}\right) \right] \left \vert \eta \right \rangle _{s,r} & =\sqrt{2}\eta_{2}\left \vert \eta \right \rangle _{s,r}. \label{29} \end{align} We can also check Eqs.(\ref{26})-(\ref{29}) in another way.
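The integration formula (\ref{24}) can be spot-checked numerically by a Riemann sum over the complex plane $z=x+iy$; a minimal sketch (the sample values of $\zeta$, $\xi$, $\eta$ are arbitrary, subject to $\operatorname{Re}\zeta<0$):

```python
import numpy as np

zeta = -1.3                          # the formula requires Re(zeta) < 0
xi, eta = 0.3 + 0.2j, 0.1 - 0.4j     # arbitrary sample values

# Riemann sum for (1/pi) * int d^2z exp(zeta |z|^2 + xi z + eta z*), with z = x + iy
h = 0.02
x = np.arange(-8.0, 8.0, h)
X, Y = np.meshgrid(x, x)
Z = X + 1j * Y
numeric = np.sum(np.exp(zeta * np.abs(Z) ** 2 + xi * Z + eta * np.conj(Z))) * h * h / np.pi

exact = -np.exp(-xi * eta / zeta) / zeta
assert abs(numeric - exact) < 1e-6
```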
\subsubsection{$\left \vert \eta \right \rangle _{s,r\text{ }s,r}\left \langle \eta \right \vert $ as Radon transform of the entangled Wigner operator} For a two-mode correlated system, we have introduced the Wigner operator in (\ref{3.27}). According to the Weyl correspondence rule \cite{Weyl} \begin{equation} H\left( a_{1}^{\dagger},a_{2}^{\dagger};a_{1},a_{2}\right) =\int d^{2}\gamma d^{2}\sigma h\left( \sigma,\gamma \right) \Delta \left( \sigma,\gamma \right) , \label{31} \end{equation} where $h\left( \sigma,\gamma \right) $ is the Weyl correspondence of $H\left( a_{1}^{\dagger},a_{2}^{\dagger};a_{1},a_{2}\right) ,$ and \begin{equation} h\left( \sigma,\gamma \right) =4\pi^{2}\mathtt{Tr}\left[ H\left( a_{1}^{\dagger},a_{2}^{\dagger};a_{1},a_{2}\right) \Delta \left( \sigma,\gamma \right) \right] , \label{32} \end{equation} the classical Weyl correspondence of the projection operator $\left \vert \eta \right \rangle _{s,rs,r}\left \langle \eta \right \vert $ can be calculated as \begin{align} & 4\pi^{2}\mathtt{Tr}\left[ \left \vert \eta \right \rangle _{s,rs,r}\left \langle \eta \right \vert \Delta \left( \sigma,\gamma \right) \right] \nonumber \\ & =4\pi^{2}\int \frac{d^{2}\eta^{\prime}}{\pi^{3}}\left. _{s,r}\left \langle \eta \right \vert \left. \sigma-\eta^{\prime}\right \rangle \left \langle \sigma+\eta^{\prime}\right \vert \left. \eta \right \rangle _{s,r}\right. \exp(\eta^{\prime}\gamma^{\ast}-\eta^{\prime \ast}\gamma)\nonumber \\ & =4\pi^{2}\int \frac{d^{2}\eta^{\prime}}{\pi^{3}}\left \langle \eta \right \vert F_{2}^{\dagger}\left \vert \sigma-\eta^{\prime}\right \rangle \left \langle \sigma+\eta^{\prime}\right \vert F_{2}\left \vert \eta \right \rangle \exp(\eta^{\prime}\gamma^{\ast}-\eta^{\prime \ast}\gamma).
\label{33} \end{align} Then using Eq.(\ref{11.7}) we have \begin{equation} 4\pi^{2}\mathtt{Tr}\left[ \left \vert \eta \right \rangle _{s,rs,r}\left \langle \eta \right \vert \Delta \left( \sigma,\gamma \right) \right] =\pi \delta \left( \eta_{2}-D\sigma_{2}+B\gamma_{1}\right) \delta \left( \eta_{1}-D\sigma_{1}-B\gamma_{2}\right) , \label{34} \end{equation} which means the following Weyl correspondence \begin{equation} \left \vert \eta \right \rangle _{s,rs,r}\left \langle \eta \right \vert =\pi \int d^{2}\gamma d^{2}\sigma \delta \left( \eta_{2}-D\sigma_{2}+B\gamma_{1}\right) \delta \left( \eta_{1}-D\sigma_{1}-B\gamma_{2}\right) \Delta \left( \sigma,\gamma \right) , \label{35} \end{equation} so the projection operator $\left \vert \eta \right \rangle _{s,rs,r}\left \langle \eta \right \vert $ is just the Radon transformation of $\Delta \left( \sigma,\gamma \right) $, with $D$ and $B$ being the Radon transformation parameters. Combining Eqs. (\ref{22})-(\ref{35}) together we complete the proof of (\ref{9}). Therefore, the quantum tomography in the two-mode entangled case is expressed as \begin{equation} |_{s,r}\left \langle \eta \right \vert \left. \psi \right \rangle |^{2}=|\left \langle \eta \right \vert F^{\dagger}\left \vert \psi \right \rangle |^{2}=\pi \int d^{2}\gamma d^{2}\sigma \delta \left( \eta_{2}-D\sigma_{2}+B\gamma_{1}\right) \delta \left( \eta_{1}-D\sigma_{1}-B\gamma_{2}\right) \left \langle \psi \right \vert \Delta \left( \sigma,\gamma \right) \left \vert \psi \right \rangle , \label{36} \end{equation} where $\left \langle \psi \right \vert \Delta \left( \sigma,\gamma \right) \left \vert \psi \right \rangle $ is the Wigner function. So the probability distribution for the Fresnel quadrature phase is the tomogram (the Radon transform of the two-mode Wigner function). This new relation between quantum tomography and the optical Fresnel transform may help experimentalists to devise new approaches to generating tomograms.
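Equation (\ref{36}) says that a tomogram is a marginal of the Wigner function along a line fixed by the Radon parameters. Its single-mode analogue can be checked numerically for the vacuum state, whose Wigner function is the Gaussian $W(q,p)=e^{-q^{2}-p^{2}}/\pi$; a minimal sketch (the quadrature $x=Dq+Bp$ and all sample values are illustrative assumptions):

```python
import numpy as np

# Single-mode analogue: tomogram of the vacuum Wigner function
# W(q,p) = exp(-q^2 - p^2)/pi along the line x = D*q + B*p.
D, B = 0.8, 1.1                     # sample Radon parameters
x0 = 0.45                           # quadrature value at which to test

h = 0.01
q = np.arange(-8.0, 8.0, h)
p = (x0 - D * q) / B                # the delta function fixes p, with a 1/|B| Jacobian
tomogram = np.sum(np.exp(-q**2 - p**2) / np.pi) * h / abs(B)

s2 = D**2 + B**2                    # closed form: Gaussian of variance (D^2 + B^2)/2
exact = np.exp(-x0**2 / s2) / np.sqrt(np.pi * s2)
assert abs(tomogram - exact) < 1e-8
```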
Next we turn to the \textquotedblleft frequency\textquotedblright \ domain, that is to say, we shall prove that the $(A,C)$-related Radon transform of the entangled Wigner operator $\Delta \left( \sigma,\gamma \right) $ is just the pure state density operator $\left \vert \xi \right \rangle _{s,rs,r}\left \langle \xi \right \vert ,$ i.e., \begin{equation} F_{2}\left \vert \xi \right \rangle \left \langle \xi \right \vert F_{2}^{\dagger}=\left \vert \xi \right \rangle _{s,rs,r}\left \langle \xi \right \vert =\pi \int \delta \left( \xi_{1}-A\sigma_{1}-C\gamma_{2}\right) \delta \left( \xi_{2}-A\sigma_{2}+C\gamma_{1}\right) \Delta \left( \sigma,\gamma \right) d^{2}\sigma d^{2}\gamma, \label{37} \end{equation} where $\left \vert \xi \right \rangle $ is the entangled state conjugate to $\left \vert \eta \right \rangle $. By analogy with the above procedure, we obtain the 2-dimensional Fresnel transformation in its `frequency domain', i.e., \begin{align} \mathcal{K}_{2}^{N}\left( \xi^{\prime},\xi \right) & \equiv \frac{1}{\pi}\left \langle \xi^{\prime}\right \vert F_{2}\left( r,s\right) \left \vert \xi \right \rangle \nonumber \\ & =\int \frac{d^{2}\eta^{\prime}d^{2}\eta}{\pi^{2}}\left \langle \xi^{\prime}\right \vert \left. \eta^{\prime}\right \rangle \left \langle \eta^{\prime}\right \vert F_{2}\left( r,s\right) \left \vert \eta \right \rangle \left \langle \eta \right \vert \left.
\xi \right \rangle \nonumber \\ & =\frac{1}{8iB\pi}\int \frac{d^{2}\eta^{\prime}d^{2}\eta}{\pi^{2}}\exp \left( \frac{\xi^{\prime \ast}\eta^{\prime}-\xi^{\prime}\eta^{\prime \ast}+\xi \eta^{\ast}-\xi^{\ast}\eta}{2}\right) \mathcal{K}_{2}^{\left( r,s\right) }\left( \eta^{\prime},\eta \right) \nonumber \\ & =\frac{1}{2i\left( -C\right) \pi}\exp \left[ \frac{i}{2\left( -C\right) }\left( D\left \vert \xi \right \vert ^{2}+A\left \vert \xi^{\prime}\right \vert ^{2}-\xi^{\prime \ast}\xi-\xi^{\prime}\xi^{\ast}\right) \right] , \label{39} \end{align} where the superscript $N$ means that this transform kernel corresponds to the parameter matrix $N=\left[ D,-C,-B,A\right] $. Thus the 2D Fresnel transformation in its `frequency domain' is given by \begin{equation} \Psi \left( \xi^{\prime}\right) =\int \mathcal{K}_{2}^{N}\left( \xi^{\prime},\xi \right) \Phi \left( \xi \right) d^{2}\xi. \label{40} \end{equation} Operating with $F_{2}\left( r,s\right) $ on $\left \vert \xi \right \rangle $ we also have \begin{equation} \left \vert \xi \right \rangle _{s,r}=\frac{1}{\allowbreak \allowbreak A-iC}\exp \left \{ -\frac{D+iB}{2\left( \allowbreak A-iC\right) }\left \vert \xi \right \vert ^{2}+\frac{\xi a_{1}^{\dagger}}{A-iC}+\frac{\xi^{\ast}a_{2}^{\dagger}}{\allowbreak A-iC}-\frac{\allowbreak A+iC}{\allowbreak A-iC}a_{1}^{\dagger}a_{2}^{\dagger}\right \} \left \vert 00\right \rangle , \label{41} \end{equation} or \begin{equation} \left \vert \xi \right \rangle _{s,r}=\frac{1}{s^{\ast}-r^{\ast}}\exp \left \{ -\allowbreak \frac{s^{\ast}+r^{\ast}}{2\left( s^{\ast}-r^{\ast}\right) }\left \vert \xi \right \vert ^{2}+\allowbreak \frac{\xi a_{1}^{\dagger}}{s^{\ast}-r^{\ast}}\allowbreak+\allowbreak \frac{\xi^{\ast}a_{2}^{\dagger}}{s^{\ast}-r^{\ast}}-\frac{s-r}{s^{\ast}-r^{\ast}}\allowbreak a_{1}^{\dagger}a_{2}^{\dagger}\right \} \left \vert 00\right \rangle .
\label{42} \end{equation} Noticing that the entangled Wigner operator in the $\left \langle \xi \right \vert $ representation is expressed as \begin{equation} \Delta \left( \sigma,\gamma \right) =\int \frac{d^{2}\xi}{\pi^{3}}\left \vert \gamma+\xi \right \rangle \left \langle \gamma-\xi \right \vert \exp(\xi^{\ast}\sigma-\sigma^{\ast}\xi), \label{43} \end{equation} and using the classical correspondence of $\left \vert \xi \right \rangle _{s,rs,r}\left \langle \xi \right \vert $, which is calculated as \begin{align} h(\sigma,\gamma) & =4\pi^{2}\mathtt{Tr}\left[ \left \vert \xi \right \rangle _{s,r\text{ }s,r}\left \langle \xi \right \vert \Delta \left( \sigma,\gamma \right) \right] \nonumber \\ \ & =4\int \frac{d^{2}\xi}{\pi}\left \langle \gamma-\xi \right \vert F_{2}\left \vert \xi \right \rangle \left \langle \xi \right \vert F_{2}^{\dag}|\gamma+\xi \rangle \exp(\xi^{\ast}\sigma-\sigma^{\ast}\xi)\nonumber \\ & =\pi \delta \left( \xi_{1}-A\sigma_{1}-C\gamma_{2}\right) \delta \left( \xi_{2}-A\sigma_{2}+C\gamma_{1}\right) , \label{44} \end{align} we obtain \begin{equation} \left \vert \xi \right \rangle _{s,r\text{ }s,r}\left \langle \xi \right \vert =\pi \int \delta \left( \xi_{1}-A\sigma_{1}-C\gamma_{2}\right) \delta \left( \xi_{2}-A\sigma_{2}+C\gamma_{1}\right) \Delta \left( \sigma,\gamma \right) d^{2}\sigma d^{2}\gamma, \label{45} \end{equation} so the projection operator $\left \vert \xi \right \rangle _{s,r\text{ }s,r}\left \langle \xi \right \vert $ is another Radon transformation of the two-mode Wigner operator, with $A$ and $C$ being the Radon transformation parameters (`frequency' domain). Therefore, the quantum tomography in the $_{s,r}\left \langle \xi \right \vert $ representation is expressed as the Radon transformation of the Wigner function \begin{equation} |\left \langle \xi \right \vert F^{\dagger}\left \vert \psi \right \rangle |^{2}=|_{s,r}\left \langle \xi \right \vert \left.
\psi \right \rangle |^{2} =\pi \int d^{2}\gamma d^{2}\sigma \delta \left( \xi_{1}-A\sigma_{1}-C\gamma _{2}\right) \delta \left( \xi_{2}-A\sigma_{2}+C\gamma_{1}\right) \left \langle \psi \right \vert \Delta \left( \sigma,\gamma \right) \left \vert \psi \right \rangle , \label{46} \end{equation} and $_{s,r}\left \langle \xi \right \vert =\left \langle \xi \right \vert F^{\dagger}.$ \subsubsection{Inverse Radon transformation} Now we consider the inverse Radon transformation. For instance, using Eq.(\ref{35}) we see the Fourier transformation of $\left \vert \eta \right \rangle _{s,rs,r}\left \langle \eta \right \vert $ is \begin{align} & \int d^{2}\eta \left \vert \eta \right \rangle _{s,rs,r}\left \langle \eta \right \vert \exp(-i\zeta_{1}\eta_{1}-i\zeta_{2}\eta_{2})\nonumber \\ & =\pi \int d^{2}\gamma d^{2}\sigma \Delta \left( \sigma,\gamma \right) \exp \left[ -i\zeta_{1}\left( D\sigma_{1}+B\gamma_{2}\right) -i\zeta _{2}\left( D\sigma_{2}-B\gamma_{1}\right) \right] , \label{47} \end{align} the right-hand side of (\ref{47}) can be regarded as a special Fourier transformation of $\Delta \left( \sigma,\gamma \right) $, so by making its inverse Fourier transformation, we get \begin{align} \Delta \left( \sigma,\gamma \right) & =\frac{1}{(2\pi)^{4}}\int_{-\infty }^{\infty}dr_{1}\left \vert r_{1}\right \vert \int_{-\infty}^{\infty} dr_{2}\left \vert r_{2}\right \vert \int_{0}^{\pi}d\theta_{1}d\theta _{2}\nonumber \\ & \times \int_{-\infty}^{\infty}\frac{d^{2}\eta}{\pi}\left \vert \eta \right \rangle _{s,rs,r}\left \langle \eta \right \vert K\left( r_{1} ,r_{2},\theta_{1},\theta_{2}\right) , \label{48} \end{align} where $\cos \theta_{1}=\cos \theta_{2}=\frac{D}{\sqrt{B^{2}+D^{2}}},r_{1} =\zeta_{1}\sqrt{B^{2}+D^{2}},r_{2}=\zeta_{2}\sqrt{B^{2}+D^{2}}$ and \begin{align} K\left( r_{1},r_{2},\theta_{1},\theta_{2}\right) & \equiv \exp \left[ -ir_{1}\left( \frac{\eta_{1}}{\sqrt{B^{2}+D^{2}}}-\sigma_{1}\cos \theta _{1}-\gamma_{2}\sin \theta_{1}\right) \right] \nonumber \\ & \times 
\exp \left[ -ir_{2}\left( \frac{\eta_{2}}{\sqrt{B^{2}+D^{2}}}-\sigma_{2}\cos \theta_{2}+\gamma_{1}\sin \theta_{2}\right) \right] . \label{49} \end{align} Eq.(\ref{48}) is just the inverse Radon transformation of the entangled Wigner operator in the entangled state representation. This differs from the direct product of two independent Radon transformations of single-mode Wigner operators, because the $\left \vert \eta \right \rangle _{s,r}$ in (\ref{23}) is an entangled state. Therefore the Wigner function of a quantum state $\left \vert \psi \right \rangle $ can be reconstructed from the tomographic inversion of a set of measured probability distributions $\left \vert _{s,r}\left \langle \eta \right. \left \vert \psi \right \rangle \right \vert ^{2}$, i.e., \begin{align} W_{\psi} & =\frac{1}{(2\pi)^{4}}\int_{-\infty}^{\infty}dr_{1}\left \vert r_{1}\right \vert \int_{-\infty}^{\infty}dr_{2}\left \vert r_{2}\right \vert \int_{0}^{\pi}d\theta_{1}d\theta_{2}\nonumber \\ & \times \int_{-\infty}^{\infty}\frac{d^{2}\eta}{\pi}\left \vert _{s,r}\left \langle \eta \right. \left \vert \psi \right \rangle \right \vert ^{2}K\left( r_{1},r_{2},\theta_{1},\theta_{2}\right) . \label{50} \end{align} Thus, based on the previous section, we have further extended the relation connecting the optical Fresnel transformation with quantum tomography to the entangled case. The tomography representation $_{s,r}\left \langle \eta \right \vert =\left \langle \eta \right \vert F_{2}^{\dagger}$ is set up, based on which the tomogram of a quantum state $\left \vert \psi \right \rangle $ is just the squared modulus of the wave function $_{s,r}\left \langle \eta \right \vert \left. \psi \right \rangle ,$ i.e., the probability distribution for the Fresnel quadrature phase is the tomogram (the Radon transform of the Wigner function).
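A single-mode analogue of the inversion formula (\ref{50}) can be carried out explicitly for the vacuum state: every tomogram is the same Gaussian $P(x)=e^{-x^{2}}/\sqrt{\pi}$, and filtered back-projection must return $W(0,0)=1/\pi$. A minimal numerical sketch of this consistency check (grid sizes are arbitrary choices):

```python
import numpy as np

# All tomograms of the vacuum state coincide: P(x) = exp(-x^2)/sqrt(pi).
hx, hr = 0.02, 0.02
x = np.arange(-8.0, 8.0, hx)
r = np.arange(-30.0, 30.0, hr)
P = np.exp(-x**2) / np.sqrt(np.pi)

# Filtered back-projection at the origin q = p = 0:
# W(0,0) = (1/4pi^2) * int_0^pi dtheta int dr |r| int dx P(x) exp(-i r x)
char = (np.exp(-1j * np.outer(r, x)) @ P) * hx      # int dx P(x) e^{-i r x}, for each r
W00 = np.pi * np.sum(np.abs(r) * char).real * hr / (4 * np.pi**2)

assert abs(W00 - 1 / np.pi) < 1e-4
```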
\section{Fractional Fourier Transformation (FrFT) for 1-D case} The fractional Fourier transform (FrFT) has been shown to be a very useful tool in Fourier optics and information optics. The concept of the FrFT was first introduced mathematically in 1980 by Namias \cite{Namias} as a tool for solving theoretical physical problems \cite{f0}, but it did not attract much attention until Mendlovic and Ozaktas \cite{Mendlovic,Ozaktas} defined the $\alpha$-th FrFT physically, based on propagation in quadratic graded-index media (GRIN media with medium parameter $n(r)=n_{1}-n_{2} r^{2}/2$). Since then much work has been done on its properties, optical implementations and applications \cite{f1,f2,f3,f4}. \subsection{Quantum version of FrFT} The FrFT of order $\theta$ is defined as \begin{equation} \mathcal{F}_{\theta}\left[ f\left( x\right) \right] =\sqrt{\frac {e^{i\left( \frac{\pi}{2}-\theta \right) }}{2\pi \sin \theta}}\int_{-\infty }^{\infty}\exp \left \{ -i\frac{x^{2}+y^{2}}{2\tan \theta}+\frac{ixy}{\sin \theta}\right \} f\left( x\right) dx, \label{12.1} \end{equation} where the exponential factor serves as the integral kernel.
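As a sanity check of this definition, Eq.~(\ref{12.1}) can be applied by direct numerical quadrature. The following minimal Python sketch (not part of the original derivation; the grid sizes and the value of $\theta$ are arbitrary choices) confirms that the Gaussian $e^{-x^{2}/2}$ is reproduced unchanged, while $2xe^{-x^{2}/2}$ acquires the phase $e^{i\theta}$, anticipating the eigenvalue relation derived below in Eq.~(\ref{12.11}):

```python
import numpy as np

def frft(f_vals, x, y, theta):
    """Apply the theta-order FrFT kernel of Eq. (12.1) by Riemann-sum
    quadrature on the uniform grid x, returning values on the grid y."""
    dx = x[1] - x[0]
    pref = np.sqrt(np.exp(1j * (np.pi / 2 - theta)) / (2 * np.pi * np.sin(theta)))
    kern = np.exp(-1j * (x**2 + y[:, None]**2) / (2 * np.tan(theta))
                  + 1j * x * y[:, None] / np.sin(theta))
    return pref * (kern * f_vals).sum(axis=1) * dx

x = np.linspace(-12, 12, 4001)
y = np.linspace(-3, 3, 13)
theta = 0.8

g0 = frft(np.exp(-x**2 / 2), x, y, theta)          # n = 0 Hermite-Gaussian
g1 = frft(2 * x * np.exp(-x**2 / 2), x, y, theta)  # n = 1, since H_1(x) = 2x
err0 = np.max(np.abs(g0 - np.exp(-y**2 / 2)))
err1 = np.max(np.abs(g1 - np.exp(1j * theta) * 2 * y * np.exp(-y**2 / 2)))
print(err0, err1)  # both at quadrature-error level (far below 1e-6)
```

The trapezoid-level Riemann sum is extremely accurate here because the integrand is analytic and decays like a Gaussian at the endpoints.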
In order to find the quantum correspondence of the FrFT, we multiply the function $\exp \left \{ -i\frac{x^{2}+y^{2}}{2\tan \theta}+\frac{ixy}{\sin \theta}\right \} $ by the ket $\int dy\left \vert y\right \rangle $ and the bra $\int dx\left \langle x\right \vert $ from the left and right, respectively, where $\left \vert y\right \rangle $ and $\left \vert x\right \rangle $ are coordinate eigenvectors, $X\left \vert x\right \rangle =x\left \vert x\right \rangle $; then, using (\ref{3.7}) and the IWOP technique to perform the integration, we obtain \begin{align} & \int_{-\infty}^{\infty}dxdy\left \vert y\right \rangle \exp \left \{ -i\frac{x^{2}+y^{2}}{2\tan \theta}+\frac{ixy}{\sin \theta}\right \} \left \langle x\right \vert \nonumber \\ & =\sqrt{-2\pi i\sin \theta e^{i\theta}}\colon \exp \left \{ \left( e^{i\theta }-1\right) a^{\dagger}a\right \} \colon \nonumber \\ & =\sqrt{-2\pi i\sin \theta e^{i\theta}}\exp \left \{ i\theta a^{\dagger }a\right \} , \label{12.2} \end{align} where in the last step of Eq.(\ref{12.2}) we have used the operator identity \begin{equation} \exp \left \{ fa^{\dagger}a\right \} =\colon \exp \left \{ \left( e^{f} -1\right) a^{\dagger}a\right \} \colon. \label{12.3} \end{equation} From the orthogonal relation $\left \langle x^{\prime}\right. \left \vert x\right \rangle =\delta \left( x-x^{\prime}\right) ,$ we know that Eq.(\ref{12.2}) indicates \begin{equation} \sqrt{\frac{e^{i\left( \frac{\pi}{2}-\theta \right) }}{2\pi \sin \theta}} \exp \left \{ -i\frac{x^{2}+y^{2}}{2\tan \theta}+\frac{ixy}{\sin \theta}\right \} =\left \langle y\right \vert e^{i\theta a^{\dagger}a}\left \vert x\right \rangle , \label{12.4} \end{equation} which implies that the integral kernel in Eq.(\ref{12.1}) is just the matrix element of the operator $\exp \left \{ i\theta a^{\dagger}a\right \} $ in the coordinate representation (the operator $\exp \left \{ i\theta a^{\dagger}a\right \} $ is called the fractional Fourier operator \cite{r15}).
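The normal-ordering identity (\ref{12.3}) can be checked term by term on Fock states: since $a^{\dagger k}a^{k}\left \vert n\right \rangle =\frac{n!}{(n-k)!}\left \vert n\right \rangle $, the diagonal matrix element of the right-hand side is $\sum_{k}\binom{n}{k}\left( e^{f}-1\right) ^{k}=e^{fn}$, which matches $\left \langle n\right \vert e^{fa^{\dagger}a}\left \vert n\right \rangle $ by the binomial theorem. A minimal numerical sketch (illustrative only, with an arbitrary complex $f$):

```python
import math
import cmath

def normal_ordered_diag(n, f):
    """<n| :exp[(e^f - 1) a^dag a]: |n>, using a^dag^k a^k |n> = n!/(n-k)! |n>."""
    w = cmath.exp(f) - 1.0
    return sum(math.comb(n, k) * w**k for k in range(n + 1))

f = 0.3 + 0.7j
errs = [abs(normal_ordered_diag(n, f) - cmath.exp(f * n)) for n in range(10)]
print(max(errs))  # floating-point rounding only
```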
Therefore, if we consider $f\left( x\right) $ as $\left \langle x\right. \left \vert f\right \rangle $, the wave function of the quantum state $\left \vert f\right \rangle $ in the coordinate representation, from Eqs. (\ref{12.1}) and (\ref{12.4}) it then follows that \begin{equation} \mathcal{F}_{\theta}\left[ f\left( x\right) \right] =\int_{-\infty }^{\infty}dx\left \langle y\right \vert e^{i\theta a^{\dagger}a}\left \vert x\right \rangle f\left( x\right) =\left \langle y\right \vert e^{i\theta a^{\dagger}a}\left \vert f\right \rangle \equiv g\left( y\right) , \label{12.5} \end{equation} which suggests \begin{equation} \left \vert g\right \rangle =e^{i\theta a^{\dagger}a}\left \vert f\right \rangle . \label{12.6} \end{equation} From Eqs.(\ref{12.5})\ and (\ref{12.1}) one can see that the FrFT in Eq.(\ref{12.1}) actually corresponds to the rotation operator $e^{i\theta a^{\dagger}a}$ acting between two quantum states in Eq.(\ref{12.5}), which is just the quantum version of the FrFT. In fact, using the quantum version of the FrFT, one can directly derive various properties of FrFTs. An important feature of the FrFT is that it composes according to $\mathcal{F}_{\theta^{\prime}}\mathcal{F}_{\theta }=\mathcal{F}_{\theta^{\prime}+\theta}$ (the additivity property). Without losing generality, we examine \begin{equation} \mathcal{F}_{\theta+\theta^{\prime}}\left[ f\left( x\right) \right] \equiv \int_{-\infty}^{\infty}dx\left \langle y\right \vert e^{i\left( \theta+\theta^{\prime}\right) a^{\dagger}a}\left \vert x\right \rangle f\left( x\right) .
\label{12.7} \end{equation} According to the completeness relation of the coordinate eigenvectors, $\int_{-\infty}^{\infty}dx^{\prime}\left \vert x^{\prime}\right \rangle \left \langle x^{\prime}\right \vert =1,$ Eq.(\ref{12.7}) yields \begin{align} \mathcal{F}_{\theta+\theta^{\prime}}\left[ f\left( x\right) \right] & =\int_{-\infty}^{\infty}dx\left \langle y\right \vert e^{i\theta a^{\dagger} a}e^{i\theta^{\prime}a^{\dagger}a}\left \vert x\right \rangle f\left( x\right) \nonumber \\ & =\int_{-\infty}^{\infty}dx^{\prime}\left \langle y\right \vert e^{i\theta a^{\dagger}a}\left \vert x^{\prime}\right \rangle \int_{-\infty}^{\infty }dx\left \langle x^{\prime}\right \vert e^{i\theta^{\prime}a^{\dagger} a}\left \vert x\right \rangle f\left( x\right) \nonumber \\ & =\int_{-\infty}^{\infty}dx^{\prime}\left \langle y\right \vert e^{i\theta a^{\dagger}a}\left \vert x^{\prime}\right \rangle \mathcal{F}_{\theta^{\prime}}\left[ f\right] \left( x^{\prime}\right) =\mathcal{F}_{\theta}\mathcal{F}_{\theta^{\prime} }\left[ f\left( x\right) \right] , \label{12.8} \end{align} which is just the additivity of the FrFT. In particular, when $\left \vert f\right \rangle $ is the number state, $\left \vert f\right \rangle =\left \vert n\right \rangle =\frac{a^{\dagger n} }{\sqrt{n!}}\left \vert 0\right \rangle $, its wave function in the coordinate representation is \begin{equation} f\left( x\right) =\left \langle x\right. \left \vert n\right \rangle =\frac {1}{\sqrt{2^{n}n!\sqrt{\pi}}}e^{-x^{2}/2}H_{n}(x), \label{12.9} \end{equation} and the FrFT of $\left \langle x\right. \left \vert n\right \rangle $ is \begin{equation} \mathcal{F}_{\theta}\left[ \left \langle x\right. \left \vert n\right \rangle \right] =\left \langle y\right \vert e^{i\theta a^{\dagger}a}\left \vert n\right \rangle =e^{in\theta}\left \langle y\right.
\left \vert n\right \rangle , \label{12.10} \end{equation} or \begin{equation} \mathcal{F}_{\theta}\left[ e^{-x^{2}/2}H_{n}(x)\right] =e^{in\theta }e^{-y^{2}/2}H_{n}(y), \label{12.11} \end{equation} which indicates that the Hermite--Gaussian functions are eigenfunctions of the FrFT, with corresponding eigenvalues $e^{in\theta}$. \subsection{On the Scaled FrFT Operator} In the study of various optical transformations, the optical operator method has been proposed \cite{r16} as a mapping of the ray-transfer ABCD matrix, such that the ray transfer through optical instruments and the diffraction can be discussed by virtue of the commutation relations of operators and the matrix algebra. The quadratic phase operators, the scaling operator, the Fourier transform operator and the free-space propagation operator have been proposed in the literature; two important questions thus naturally arise: 1. What is the scaled FrFT (SFrFT) operator which corresponds to the SFrFT's integration kernel \cite{Namias} \begin{equation} \frac{1}{\sqrt{2\pi if_{e}\sin \phi}}\exp \left \{ \frac{i\left( x^{2} +x^{\prime2}\right) }{2f_{e}\tan \phi}-\frac{ix^{\prime}x}{f_{e}\sin \phi }\right \} , \label{12.12} \end{equation} where $f_{e}$ is a standard focal length (or a scaling parameter)? 2. If this operator is found, can it be further decomposed into simpler operators, and what are their physical meanings? Since the SFrFT has wide applications in optical information detection and can be implemented even with a thick lens \cite{r17}, these questions are worth attention \cite{FHCPL}. \begin{figure} \caption{{\protect \small A thick lens as a kind of fractional Fourier transform device.
}} \label{Fig2} \end{figure} Let us start with a thick lens (shown in Fig.~\ref{Fig2}), which is represented by the transfer matrix \cite{r17} \begin{equation} \left( \begin{array} [c]{cc} \mathcal{A} & \mathcal{B}\\ \mathcal{C} & \mathcal{D} \end{array} \right) =\left( \begin{array} [c]{cc} 1-\frac{\left( 1-1/n\right) l}{R_{1}} & \frac{l}{n}\\ -[\left( n-1\right) \frac{R_{1}+R_{2}}{R_{1}R_{2}}-\frac{l\left( n-1\right) ^{2}}{nR_{1}R_{2}}] & 1-\frac{\left( 1-1/n\right) l}{R_{2}} \end{array} \right) , \label{12.13} \end{equation} where $n$ is the refractive index, $l$ is the thickness of the lens, and $R_{1}$ and $R_{2}$ denote the curvature radii of the two surfaces of the lens, respectively. When we choose $R_{1}=R_{2}=R,$ Eq.(\ref{12.13}) reduces to \begin{equation} \left( \begin{array} [c]{cc} \mathcal{A} & \mathcal{B}\\ \mathcal{C} & \mathcal{D} \end{array} \right) =\left( \begin{array} [c]{cc} 1-\frac{\left( 1-1/n\right) l}{R} & \frac{l}{n}\\ -[\left( n-1\right) \frac{2}{R}-\frac{l\left( n-1\right) ^{2}}{nR^{2}}] & 1-\frac{\left( 1-1/n\right) l}{R} \end{array} \right) . \label{12.14} \end{equation} By defining $1-\frac{\left( 1-1/n\right) l}{R}=\cos \phi$ and $\frac{l}{n} =f_{e}\sin \phi,$ so that $\frac{l}{R}=\frac{n\left( 1-\cos \phi \right) }{n-1}$ and $l=nf_{e}\sin \phi,$ we can recast (\ref{12.14}) into the simple form \begin{equation} \left( \begin{array} [c]{cc} \mathcal{A} & \mathcal{B}\\ \mathcal{C} & \mathcal{D} \end{array} \right) =\left( \begin{array} [c]{cc} \cos \phi & f_{e}\sin \phi \\ -\sin \phi/f_{e} & \cos \phi \end{array} \right) ,\text{ }\det \left( \begin{array} [c]{cc} \mathcal{A} & \mathcal{B}\\ \mathcal{C} & \mathcal{D} \end{array} \right) =1.
\label{12.15} \end{equation} According to (\ref{8.4}) we immediately know that the operator of the SFrFT is \begin{align} F_{1}\left( \mathcal{A},\mathcal{B},\mathcal{C}\right) & =\exp \left \{ \frac{i\left( f_{e}-1/f_{e}\right) \tan \phi}{2V}a^{\dagger2}\right \} \nonumber \\ & \times \exp \left \{ \left( a^{\dagger}a+\frac{1}{2}\right) \ln \left( \frac{2\sec \phi}{V}\right) \right \} \nonumber \\ & \times \exp \left \{ \frac{i\left( f_{e}-1/f_{e}\right) \tan \phi}{2V} a^{2}\right \} ,\text{ \ }\label{12.16}\\ \text{ }V & =\left[ 2+i\left( f_{e}+1/f_{e}\right) \tan \phi \right] .\nonumber \end{align} Noting that the matrix $\left( \begin{array} [c]{cc} \mathcal{A} & \mathcal{B}\\ \mathcal{C} & \mathcal{D} \end{array} \right) $ can be decomposed into \begin{equation} \left( \begin{array} [c]{cc} \mathcal{A} & \mathcal{B}\\ \mathcal{C} & \mathcal{D} \end{array} \right) =\left( \begin{array} [c]{cc} 1 & 0\\ -\frac{1}{f_{e}}\tan \phi & 1 \end{array} \right) \left( \begin{array} [c]{cc} \cos \phi & 0\\ 0 & \sec \phi \end{array} \right) \left( \begin{array} [c]{cc} 1 & f_{e}\tan \phi \\ 0 & 1 \end{array} \right) , \label{12.17} \end{equation} according to the previous section we have \begin{align} F_{1}\left( \mathcal{A},\mathcal{B},\mathcal{C}\right) & =F\left( 1,0,-\frac{1}{f_{e}}\tan \phi \right) F\left( \cos \phi,0,0\right) F\left( 1,f_{e}\tan \phi,0\right) \nonumber \\ & =\exp \left( \frac{\tan \phi}{2if_{e}}Q^{2}\right) \exp \left \{ -\frac {i}{2}\left( QP+PQ\right) \ln \cos \phi \right \} \exp \left( \frac{f_{e} \tan \phi}{2i}P^{2}\right) , \label{12.18} \end{align} where $Q=\left( a+a^{\dagger}\right) /\sqrt{2},$ $P=\left( a-a^{\dagger }\right) /\left( \sqrt{2}i\right) $ and $\exp \left( -\frac{i\tan \phi }{2f_{e}}Q^{2}\right) ,$ $\exp \left \{ -\frac{i}{2}\left( QP+PQ\right) \ln \cos \phi \right \} $ and $\exp \left( -\frac{if_{e}\tan \phi}{2}P^{2}\right) $ are the quadratic phase operator, the squeezing operator and the free propagation operator, respectively.
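The shear--scale--shear factorization (\ref{12.17}) can be verified by direct matrix multiplication; the unit-determinant scaling $\mathrm{diag}\left( \cos \phi ,\sec \phi \right) $ in the middle makes the product unimodular, as it must be. A minimal numerical sketch (the sample values of $\phi$ and $f_{e}$ are arbitrary):

```python
import numpy as np

phi, fe = 0.6, 1.7  # arbitrary sample parameters
shear_low = np.array([[1.0, 0.0], [-np.tan(phi) / fe, 1.0]])
scale = np.array([[np.cos(phi), 0.0], [0.0, 1.0 / np.cos(phi)]])
shear_up = np.array([[1.0, fe * np.tan(phi)], [0.0, 1.0]])

sfrft_matrix = np.array([[np.cos(phi), fe * np.sin(phi)],
                         [-np.sin(phi) / fe, np.cos(phi)]])

prod = shear_low @ scale @ shear_up
print(np.allclose(prod, sfrft_matrix))       # True
print(np.isclose(np.linalg.det(prod), 1.0))  # True: the product is unimodular
```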
On the other hand, from $\left( \begin{array} [c]{cc} \mathcal{A} & \mathcal{B}\\ \mathcal{C} & \mathcal{D} \end{array} \right) ^{-1}=\left( \begin{array} [c]{cc} \mathcal{D} & \mathcal{-B}\\ \mathcal{-C} & \mathcal{A} \end{array} \right) $ and (\ref{12.18}) we see \begin{equation} F_{1}^{-1}\left( \mathcal{A},\mathcal{B},\mathcal{C}\right) =\exp \left( -\frac{i\mathcal{C}}{2\mathcal{D}}Q^{2}\right) \exp \left( -\frac{i} {2}\left( QP+PQ\right) \ln \mathcal{D}\right) \exp \left( \frac {i\mathcal{B}}{2\mathcal{D}}P^{2}\right) , \label{12.19} \end{equation} and it then follows \begin{equation} F_{1}\left( \mathcal{A},\mathcal{B},\mathcal{C}\right) =\exp \left( \frac{f_{e}\tan \phi}{2i}P^{2}\right) \exp \left( \frac{i}{2}\left( QP+PQ\right) \ln \cos \phi \right) \exp \left( \frac{\tan \phi}{2if_{e}} Q^{2}\right) . \label{12.20} \end{equation} Using the canonical operator form (\ref{12.18}) or (\ref{12.20}) of $F_{1}\left( \mathcal{A},\mathcal{B},\mathcal{C}\right) $ we can deduce its matrix element in the coordinate states $\left \vert x\right \rangle $ (whose conjugate state is $\left \vert p\right \rangle $) \begin{equation} \left \langle x^{\prime}\right \vert F_{1}\left( \mathcal{A},\mathcal{B} ,\mathcal{C}\right) \left \vert x\right \rangle =\frac{1}{\sqrt{2\pi if_{e} \sin \phi}}\exp \left \{ \left( \frac{i\left( x^{2}+x^{\prime2}\right) }{2f_{e}\tan \phi}-\frac{ix^{\prime}x}{f_{e}\sin \phi}\right) \right \} , \label{12.21} \end{equation} which is just the kernel of the SFrFT; thus we name $F_{1}\left( \mathcal{A} ,\mathcal{B},\mathcal{C}\right) $ the SFrFT operator. Noticing that $Q^{2}/2,$ $P^{2}/2$ and $\frac{i}{4}\left( QP+PQ\right) $ form a closed Lie algebra, we can put Eq.(\ref{12.20}) into the more compact form \begin{equation} F_{1}\left( \mathcal{A},\mathcal{B},\mathcal{C}\right) =\exp \left \{ -i\frac{\phi f_{e}}{2}\left( P^{2}+\frac{Q^{2}}{f_{e}^{2}}\right) \right \} . \label{12.22} \end{equation} Eqs.
(\ref{12.18}), (\ref{12.20}) and (\ref{12.22}) are different forms of the same SFrFT operator. In particular, when $f_{e}=1,$ $F_{1}\left( \mathcal{A},\mathcal{B},\mathcal{C}\right) \rightarrow \exp \left \{ -i\phi a^{\dagger}a\right \} ,$ which is the usual FrFT operator. Using (\ref{12.21}), the SFrFT of $f\left( x\right) =\left \langle x\right \vert \left. f\right \rangle ,$ denoted as $\mathcal{F}_{f_{e}}^{\phi }\left[ f\left( x\right) \right] ,$ can be expressed as a matrix element in the quantum optics context, \begin{equation} \mathcal{F}_{f_{e}}^{\phi}\left[ f\left( x\right) \right] =\int dx\left \langle x^{\prime}\right \vert F_{1}\left( \mathcal{A},\mathcal{B} ,\mathcal{C}\right) \left \vert x\right \rangle \left \langle x\right \vert \left. f\right \rangle =\left \langle x^{\prime}\right \vert F_{1}\left( \mathcal{A},\mathcal{B},\mathcal{C}\right) \left \vert f\right \rangle . \label{12.23} \end{equation} The above discussions are useful since any unimodular matrix $\left( \begin{array} [c]{cc} A & B\\ C & D \end{array} \right) $ can be decomposed into \cite{r18} \begin{equation} \left( \begin{array} [c]{cc} A & B\\ C & D \end{array} \right) =\left( \begin{array} [c]{cc} 1 & 0\\ -\mathcal{P} & 1 \end{array} \right) \left( \begin{array} [c]{cc} m & 0\\ 0 & m^{-1} \end{array} \right) \left( \begin{array} [c]{cc} \cos \phi & f_{e}\sin \phi \\ -\sin \phi/f_{e} & \cos \phi \end{array} \right) , \label{12.24} \end{equation} where the parameters $m,\mathcal{P},\phi$ are all real, \begin{equation} m^{2}=A^{2}+\frac{B^{2}}{f_{e}^{2}},\text{ }\tan \phi=\frac{B}{Af_{e}},\text{ }\mathcal{P}=-\frac{AC+DB/f_{e}^{2}}{A^{2}+\frac{B^{2}}{f_{e}^{2}}}.
\label{12.25} \end{equation} Correspondingly, the general Fresnel operator is given by \begin{align} F_{1}\left( A,B,C\right) & =F_{1}\left( 1,0,-\mathcal{P}\right) F_{1}\left( m,0,0\right) F_{1}\left( \mathcal{A},\mathcal{B},\mathcal{C} \right) \nonumber \\ & =\exp \left( -\frac{i}{2}\mathcal{P}Q^{2}\right) \exp \left( -\frac{i} {2}\left( QP+PQ\right) \ln m\right) \exp \left \{ \frac{\phi f_{e}} {2i}\left( P^{2}+\frac{Q^{2}}{f_{e}^{2}}\right) \right \} , \label{12.26} \end{align} where $F_{1}\left( 1,0,-\mathcal{P}\right) =\exp \left[ -\frac{i} {2}\mathcal{P}Q^{2}\right] $ is the quadratic phase operator. Thus the general Fresnel transform can always be expressed in terms of the SFrFT as follows: \begin{align} g\left( x^{\prime}\right) & =\int dy\left \langle x^{\prime}\right \vert F_{1}\left( 1,0,-\mathcal{P}\right) F_{1}\left( m,0,0\right) \left \vert y\right \rangle \int dx\left \langle y\right \vert F_{1}\left( \mathcal{A} ,\mathcal{B},\mathcal{C}\right) \left \vert x\right \rangle f\left( x\right) \nonumber \\ & =\sqrt{m}\int dx^{\prime \prime}dy\left \langle x^{\prime}\right \vert \exp \left( -\frac{i}{2}\mathcal{P}X^{2}\right) \left \vert mx^{\prime \prime }\right \rangle \left \langle x^{\prime \prime}\right.
\left \vert y\right \rangle \int dx\left \langle y\right \vert F_{1}\left( \mathcal{A},\mathcal{B} ,\mathcal{C}\right) \left \vert x\right \rangle f\left( x\right) \nonumber \\ & =\exp \left( -\frac{i}{2}\mathcal{P}x^{\prime2}\right) \int \frac{dy} {\sqrt{m}}\delta \left( \frac{x^{\prime}}{m}-y\right) \int dx\left \langle y\right \vert F_{1}\left( \mathcal{A},\mathcal{B},\mathcal{C}\right) \left \vert x\right \rangle f\left( x\right) \nonumber \\ & =\frac{1}{\sqrt{m}}\exp \left( -\frac{i}{2}\mathcal{P}x^{\prime2}\right) \int dx\left \langle \frac{x^{\prime}}{m}\right \vert F_{1}\left( \mathcal{A},\mathcal{B},\mathcal{C}\right) \left \vert x\right \rangle f\left( x\right) \nonumber \\ & =\frac{1}{\sqrt{m}}\exp \left( -\frac{i}{2}\mathcal{P}x^{\prime2}\right) \mathcal{F}_{f_{e}}^{\phi}\left[ f\right] \left( \frac{x^{\prime}} {m}\right) , \label{12.27} \end{align} i.e., up to the scale factor $1/\sqrt{m}$, the output $g\left( x^{\prime}\right) $ is the SFrFT of the input $f\left( x\right) $, evaluated at $x^{\prime}/m$ and multiplied by the quadratic phase factor $\exp \left( -\frac{i} {2}\mathcal{P}x^{\prime2}\right) .$ \subsection{An integration transformation from Chirplet to FrFT kernel} In the development of optics, each optical setup corresponds to an optical transformation; for example, a thick lens acts as a fractional Fourier transformer. In turn, once a new integration transform is found, its experimental implementation is expected. In this subsection we report a new integration transformation which can convert a chirplet function into the FrFT kernel \cite{FHJMO}; as this new transformation is invertible and obeys a Parseval-like theorem, we expect it to be realized by experimentalists.
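Incidentally, the parameter identification (\ref{12.25}) behind the decomposition (\ref{12.24}) used just above is easy to confirm numerically. A minimal Python sketch (the unimodular sample matrix and the value of $f_{e}$ are arbitrary choices):

```python
import numpy as np

fe = 1.3
A, B, D = 1.2, 0.7, 0.9
C = (A * D - 1.0) / B        # enforce the unimodular condition AD - BC = 1

m = np.hypot(A, B / fe)      # m^2 = A^2 + B^2/fe^2
phi = np.arctan2(B / fe, A)  # tan(phi) = B/(A fe), with cos(phi) = A/m
P = -(A * C + D * B / fe**2) / m**2

lens = np.array([[1.0, 0.0], [-P, 1.0]])
scale = np.array([[m, 0.0], [0.0, 1.0 / m]])
frft_part = np.array([[np.cos(phi), fe * np.sin(phi)],
                      [-np.sin(phi) / fe, np.cos(phi)]])

print(np.allclose(lens @ scale @ frft_part, [[A, B], [C, D]]))  # True
```

Choosing $\phi$ via $\mathrm{atan2}$ keeps $m>0$ and fixes the quadrant so that $\cos \phi =A/m$ and $\sin \phi =B/(mf_{e})$ hold simultaneously.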
The new transform we propose here is \begin{equation} \iint_{-\infty}^{\infty}\frac{dpdq}{\pi}e^{2i\left( p-x\right) \left( q-y\right) }h(p,q)\equiv f\left( x,y\right) , \label{12.28} \end{equation} which differs from the usual two-fold Fourier transformation $\iint_{-\infty }^{\infty}\frac{dxdy}{4\pi^{2}}e^{ipx+iqy}f(x,y).$ In particular, when $h(p,q)=1,$ Eq. (\ref{12.28}) reduces to \begin{equation} \iint_{-\infty}^{\infty}\frac{dpdq}{\pi}e^{2i\left( p-x\right) \left( q-y\right) }=\int_{-\infty}^{\infty}dq\delta \left( q-y\right) e^{-2xi\left( q-y\right) }=1, \label{12.29} \end{equation} so $e^{2i\left( p-x\right) \left( q-y\right) }$ can be considered a basis function in $p$--$q$ phase space, and Eq. (\ref{12.28}) can be viewed as an expansion of $f\left( x,y\right) $ with the expansion coefficient being $h(p,q).$ We can prove that the reciprocal transformation of (\ref{12.28}) is \begin{equation} \iint_{-\infty}^{\infty}\frac{dxdy}{\pi}e^{-2i(p-x)(q-y)}f(x,y)=h(p,q). \label{12.30} \end{equation} In fact, substituting (\ref{12.28}) into the left-hand side of (\ref{12.30}) yields \begin{align} & \iint_{-\infty}^{\infty}\frac{dp^{\prime}dq^{\prime}}{\pi}h(p^{\prime },q^{\prime})\iint \frac{dxdy}{\pi}e^{2i\left[ \left( p^{\prime}-x\right) \left( q^{\prime}-y\right) -\left( p-x\right) \left( q-y\right) \right] }\nonumber \\ & =\iint_{-\infty}^{\infty}dp^{\prime}dq^{\prime}h(p^{\prime},q^{\prime })e^{2i\left( p^{\prime}q^{\prime}-pq\right) }\nonumber \\ & \times \delta \left( p-p^{\prime}\right) \delta \left( q-q^{\prime}\right) =h(p,q).
\label{12.31} \end{align} This transformation's Parseval-like theorem is \begin{align} & \iint_{-\infty}^{\infty}\frac{dpdq}{\pi}|h(p,q)|^{2}\nonumber \\ & =\iint \frac{dxdy}{\pi}|f\left( x,y\right) |^{2}\iint \frac{dx^{\prime }dy^{\prime}}{\pi}e^{2i\left( x^{\prime}y^{\prime}-xy\right) }\nonumber \\ & \times \iint_{-\infty}^{\infty}\frac{dpdq}{\pi}e^{2i\left[ \left( -y^{\prime}p-x^{\prime}q\right) +\left( py+xq\right) \right] }\nonumber \\ & =\iint \frac{dxdy}{\pi}|f\left( x,y\right) |^{2}\iint dx^{\prime }dy^{\prime}e^{2i\left( x^{\prime}y^{\prime}-xy\right) }\nonumber \\ & \times \delta \left( x-x^{\prime}\right) \delta \left( y-y^{\prime}\right) \nonumber \\ & =\iint \frac{dxdy}{\pi}|f\left( x,y\right) |^{2}. \label{12.32} \end{align} Now we apply Eq. (\ref{12.28}) to phase-space transformations in quantum optics. Recall that a signal $\psi \left( q\right) $'s Wigner transform \cite{r5,r13,r14,r19} is \begin{equation} \psi \left( q\right) \rightarrow \int \frac{du}{2\pi}e^{ipu}\psi^{\ast}\left( q+\frac{u}{2}\right) \psi \left( q-\frac{u}{2}\right) . \label{12.33} \end{equation} Writing $\psi \left( q\right) =\left \langle q\right \vert \left. \psi \right \rangle $ in Dirac notation \cite{r20}, where $\left \vert q\right \rangle $ is the eigenvector of the coordinate $Q$, the Wigner operator emerges from (\ref{12.33}), \begin{equation} \frac{1}{2\pi}\int_{-\infty}^{\infty}due^{-ipu}\left \vert q-\frac{u} {2}\right \rangle \left \langle q+\frac{u}{2}\right \vert =\Delta \left( p,q\right) ,\text{ }\hbar=1.
\label{12.34} \end{equation} If $h\left( q,p\right) $ is quantized as the operator $\hat{H}\left( Q,P\right) $ through the Weyl-Wigner correspondence \cite{Weyl} \begin{equation} \hat{H}\left( Q,P\right) =\iint_{-\infty}^{\infty}dpdq\Delta \left( p,q\right) h\left( q,p\right) , \label{12.35} \end{equation} then \begin{equation} h\left( q,p\right) =\int_{-\infty}^{\infty}due^{-ipu}\left \langle q+\frac {u}{2}\right \vert \hat{H}\left( Q,P\right) \left \vert q-\frac{u} {2}\right \rangle , \label{12.36} \end{equation} which in the literature is named the Weyl transform; $h\left( q,p\right) $ is the classical Weyl correspondence of the operator $\hat{H}\left( Q,P\right) $. Substituting (\ref{12.36}) into (\ref{12.28}) we have \begin{align} & \iint_{-\infty}^{\infty}\frac{dpdq}{\pi}e^{2i\left( p-x\right) \left( q-y\right) }h(p,q)\nonumber \\ & =\iint_{-\infty}^{\infty}\frac{dpdq}{\pi}e^{2i\left( p-x\right) \left( q-y\right) }\int_{-\infty}^{\infty}due^{-ipu}\nonumber \\ & \times \left \langle q+\frac{u}{2}\right \vert \hat{H}\left( Q,P\right) \left \vert q-\frac{u}{2}\right \rangle \nonumber \\ & =\int_{-\infty}^{\infty}dq\int_{-\infty}^{\infty}du\left \langle q+\frac {u}{2}\right \vert \hat{H}\left( Q,P\right) \left \vert q-\frac{u} {2}\right \rangle \nonumber \\ & \times \delta \left( q-y-\frac{u}{2}\right) e^{-2ix\left( q-y\right) }\nonumber \\ & =\int_{-\infty}^{\infty}due^{-ixu}\left \langle y+u\right \vert \hat {H}\left( Q,P\right) \left \vert y\right \rangle . \label{12.37} \end{align} Using $\left \langle y+u\right \vert =\left \langle u\right \vert e^{iPy}$ and $(\sqrt{2\pi})^{-1}e^{-ixu}=\left \langle p_{=x}\right \vert \left. u\right \rangle ,$ where $\left \langle p\right \vert $ is the momentum eigenvector, and \begin{align} \int_{-\infty}^{\infty}due^{-ixu}\left \langle y+u\right \vert & =\int_{-\infty}^{\infty}due^{-ixu}\left \langle u\right \vert e^{iPy}\nonumber \\ & =\sqrt{2\pi}\int_{-\infty}^{\infty}du\left \langle p_{=x}\right \vert \left.
u\right \rangle \left \langle u\right \vert e^{iPy}\nonumber \\ & =\sqrt{2\pi}\left \langle p_{=x}\right \vert e^{ixy}, \label{12.38} \end{align} then Eq. (\ref{12.37}) becomes \begin{equation} \iint_{-\infty}^{\infty}\frac{dpdq}{\pi}e^{2i\left( p-x\right) \left( q-y\right) }h(p,q)=\sqrt{2\pi}\left \langle p_{=x}\right \vert \hat{H}\left( Q,P\right) \left \vert y\right \rangle e^{ixy}, \label{12.39} \end{equation} thus through the new integration transformation a new relationship between a phase space function $h(p,q)$ and its Weyl-Wigner correspondence operator $\hat{H}\left( Q,P\right) $ is revealed. The inverse of (\ref{12.39}), according to (\ref{12.30}), is \begin{equation} \iint_{-\infty}^{\infty}\frac{dxdy}{\sqrt{\pi/2}}e^{-2i\left( p-x\right) \left( q-y\right) }\left \langle p_{=x}\right \vert \hat{H}\left( Q,P\right) \left \vert y\right \rangle e^{ixy}=h(p,q). \label{12.40} \end{equation} For example, when $\hat{H}\left( Q,P\right) =e^{f(P^{2}+Q^{2}-1)/2},$ its classical correspondence is \begin{equation} e^{f\left( P^{2}+Q^{2}-1\right) /2}\rightarrow h(p,q)=\frac{2}{e^{f}+1} \exp \left \{ 2\frac{e^{f}-1}{e^{f}+1}\left( p^{2}+q^{2}\right) \right \} . \label{12.41} \end{equation} Substituting (\ref{12.41}) into (\ref{12.39}) we have \begin{align} & \frac{2}{e^{f}+1}\iint_{-\infty}^{\infty}\frac{dpdq}{\pi}e^{2i\left( p-x\right) \left( q-y\right) }\exp \left \{ 2\frac{e^{f}-1}{e^{f}+1}\left( p^{2}+q^{2}\right) \right \} \nonumber \\ & =\sqrt{2\pi}\left \langle p_{=x}\right \vert e^{f\left( P^{2}+Q^{2} -1\right) /2}\left \vert y\right \rangle e^{ixy}. 
\label{12.42} \end{align} Using the Gaussian integration formula \begin{align} & \iint_{-\infty}^{\infty}\frac{dpdq}{\pi}e^{2i\left( p-x\right) \left( q-y\right) }e^{-\lambda \left( p^{2}+q^{2}\right) }\nonumber \\ & =\frac{1}{\sqrt{\lambda^{2}+1}}\exp \left \{ \frac{-\lambda \left( x^{2}+y^{2}\right) }{\lambda^{2}+1}+\frac{2i\lambda^{2}}{\lambda^{2} +1}xy\right \} , \label{12.43} \end{align} in particular, when $\lambda=-i\tan \left( \frac{\pi}{4}-\frac{\alpha} {2}\right) ,$ with $\frac{-\lambda}{\lambda^{2}+1}=\frac{i}{2\tan \alpha},$ $\frac{2\lambda^{2}}{\lambda^{2}+1}=1-\frac{1}{\sin \alpha},$ Eq. (\ref{12.43}) becomes \begin{align} & \frac{2}{ie^{-i\alpha}+1}\iint_{-\infty}^{\infty}\frac{dpdq}{\pi }e^{2i\left( p-x\right) \left( q-y\right) }\nonumber \\ & \times \exp \left \{ i\left( p^{2}+q^{2}\right) \tan(\frac{\pi}{4} -\frac{\alpha}{2})\right \} \nonumber \\ & =\frac{1}{\sqrt{ie^{-i\alpha}\sin \alpha}}\exp \left \{ \frac{i\left( x^{2}+y^{2}\right) }{2\tan \alpha}-\frac{ixy}{\sin \alpha}\right \} e^{ixy}, \label{12.44} \end{align} where $\exp \{i\tan \left( \frac{\pi}{4}-\frac{\alpha}{2}\right) \left( p^{2}+q^{2}\right) \}$ represents an infinitely long chirplet function. Comparing (\ref{12.44}) with (\ref{12.42}) we see $ie^{-i\alpha}=e^{f},$ i.e. $f=i\left( \frac{\pi}{2}-\alpha \right) ;$ it then follows that \begin{align} & \left \langle p_{=x}\right \vert e^{i(\frac{\pi}{2}-\alpha)\left( P^{2}+Q^{2}-1\right) /2}\left \vert y\right \rangle \nonumber \\ & =\frac{1}{\sqrt{2\pi ie^{-i\alpha}\sin \alpha}}\exp \left \{ \frac{i\left( x^{2}+y^{2}\right) }{2\tan \alpha}-\frac{ixy}{\sin \alpha}\right \} , \label{12.45} \end{align} where the right-hand side of (\ref{12.45}) is just the FrFT kernel. Therefore the new integration transformation (\ref{12.28}) can convert a quadratic-phase (chirplet) function into the FrFT kernel. We expect this transformation could be implemented by experimentalists. Moreover, this transformation can also serve to solve some operator-ordering problems.
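The Gaussian integration formula (\ref{12.43}) can itself be confirmed by brute-force quadrature for real $\lambda >0$, where the double integral converges absolutely; a minimal sketch (all numerical parameters are arbitrary choices):

```python
import numpy as np

lam, x, y = 0.8, 0.5, -1.2   # real lam > 0; evaluation point (x, y) arbitrary
L, N = 8.0, 1201
p = np.linspace(-L, L, N)
h = p[1] - p[0]
P, Q = np.meshgrid(p, p, indexing="ij")

# left-hand side of Eq. (12.43): (1/pi) * double integral, by Riemann sum
lhs = np.exp(2j * (P - x) * (Q - y) - lam * (P**2 + Q**2)).sum() * h * h / np.pi

# closed-form right-hand side of Eq. (12.43)
rhs = np.exp(-lam * (x**2 + y**2) / (lam**2 + 1)
             + 2j * lam**2 * x * y / (lam**2 + 1)) / np.sqrt(lam**2 + 1)
print(abs(lhs - rhs))  # quadrature-error level
```

The chirplet case of Eq.~(\ref{12.44}) follows from this formula by analytic continuation of $\lambda$ to the imaginary value $-i\tan \left( \frac{\pi}{4}-\frac{\alpha}{2}\right) $, where the integral exists only as an improper (Fresnel-type) integral and is not amenable to the same brute-force check.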
We notice \begin{align} & \frac{1}{\pi}\exp[2i\left( p-x\right) \left( q-y\right) ]\nonumber \\ & =\int_{-\infty}^{\infty}\frac{dv}{2\pi}\delta \left( q-y-\frac{v} {2}\right) \exp \left \{ i\left( p-x\right) v\right \} , \label{12.46} \end{align} so the transformation (\ref{12.28}) is equivalent to \begin{align} h(p,q) & \rightarrow \iint_{-\infty}^{\infty}\frac{dpdq}{\pi}e^{2i\left( p-x\right) \left( q-y\right) }h(p,q)\nonumber \\ & =\iint_{-\infty}^{\infty}dpdq\int_{-\infty}^{\infty}\frac{dv}{2\pi} \delta \left( q-y-\frac{v}{2}\right) e^{i\left( p-x\right) v} h(p,q)\nonumber \\ & =\iint_{-\infty}^{\infty}\frac{dpdq}{2\pi}h(p+x,y+\frac{q}{2})e^{ipq}. \label{12.47} \end{align} For example, using (\ref{12.34}) and (\ref{12.46}) we have \begin{align} \Delta(p,q) & \rightarrow \iint_{-\infty}^{\infty}\frac{dpdq}{2\pi} \Delta(p+x,y+\frac{q}{2})e^{ipq}\nonumber \\ & =\iint_{-\infty}^{\infty}\frac{dpdq}{4\pi^{2}}\int_{-\infty}^{\infty }due^{-i\left( p+x\right) u}\nonumber \\ & \times \left \vert y+\frac{q}{2}-\frac{u}{2}\right \rangle \left \langle y+\frac{q}{2}+\frac{u}{2}\right \vert e^{ipq}\nonumber \\ & =\int_{-\infty}^{\infty}\frac{dq}{2\pi}\int_{-\infty}^{\infty} due^{-ixu}\delta \left( q-u\right) \nonumber \\ & \times \left \vert y+\frac{q}{2}-\frac{u}{2}\right \rangle \left \langle y+\frac{q}{2}+\frac{u}{2}\right \vert \nonumber \\ & =\int_{-\infty}^{\infty}\frac{du}{2\pi}e^{-ixu}\left \vert y\right \rangle \left \langle y+u\right \vert =\left \vert y\right \rangle \left \langle y\right \vert \int_{-\infty}^{\infty}\frac{du}{2\pi}e^{iu\left( P-x\right) }\nonumber \\ & =\delta \left( y-Q\right) \delta \left( x-P\right) , \label{12.48} \end{align} so \begin{equation} \frac{1}{\pi}\iint \mathtt{d}p^{\prime}\mathtt{d}q^{\prime}\Delta \left( q^{\prime},p^{\prime}\right) e^{2\mathtt{i}\left( p-p^{\prime}\right) \left( q-q^{\prime}\right) }=\delta \left( q-Q\right) \delta \left( p-P\right) , \label{12.49} \end{equation} thus this new transformation can convert the
Wigner operator to $\delta \left( q-Q\right) \delta \left( p-P\right) .$ Similarly, we have \[ \frac{1}{\pi}\iint \mathtt{d}p^{\prime}\mathtt{d}q^{\prime}\Delta \left( q^{\prime},p^{\prime}\right) e^{-2\mathtt{i}\left( p-p^{\prime}\right) \left( q-q^{\prime}\right) }=\delta \left( p-P\right) \delta \left( q-Q\right) . \] Then for the Wigner function of a density operator $\rho$, $W_{\rho }(p,q)\equiv \mathtt{Tr}\left[ \rho \Delta(p,q)\right] ,$ we have \begin{align} & \iint_{-\infty}^{\infty}\frac{dp^{\prime}dq^{\prime}}{\pi}\mathtt{Tr} \left[ \rho \Delta(p^{\prime},q^{\prime})\right] e^{2\mathtt{i}\left( p-p^{\prime}\right) \left( q-q^{\prime}\right) }\nonumber \\ & =\mathtt{Tr}\left[ \rho \delta \left( q-Q\right) \delta \left( p-P\right) \right] \nonumber \\ & =\int \frac{dudv}{4\pi^{2}}\mathtt{Tr}\left[ \rho e^{i\left( q-Q\right) u}e^{i\left( p-P\right) v}\right] , \label{12.50} \end{align} and we may define $\mathtt{Tr}\left[ \rho e^{i\left( q-Q\right) u}e^{i\left( p-P\right) v}\right] $ as the $Q$--$P$ characteristic function. Similarly, \begin{align} & \iint_{-\infty}^{\infty}\frac{dp^{\prime}dq^{\prime}}{\pi}\mathtt{Tr} \left[ \rho \Delta(p^{\prime},q^{\prime})\right] e^{-2\mathtt{i}\left( p-p^{\prime}\right) \left( q-q^{\prime}\right) }\nonumber \\ & =\mathtt{Tr}\left[ \rho \delta \left( p-P\right) \delta \left( q-Q\right) \right] \nonumber \\ & =\int \frac{dudv}{4\pi^{2}}\mathtt{Tr}\left[ \rho e^{i\left( p-P\right) v}e^{i\left( q-Q\right) u}\right] , \label{12.51} \end{align} and we name $\mathtt{Tr}\left[ \rho e^{i\left( p-P\right) v}e^{i\left( q-Q\right) u}\right] $ the $P$--$Q$ characteristic function. \section{Complex Fractional Fourier Transformation} In this section, we extend the 1-D FrFT to the complex fractional Fourier transformation (CFrFT). \subsection{Quantum version of CFrFT} According to Ref.
\cite{r21}, based on the entangled state $\left \vert \eta \right \rangle $ in the two-mode Fock space and its orthonormality, we can take the matrix element of $\exp \left[ -i\alpha \left( a_{1}^{\dagger }a_{1}+a_{2}^{\dagger}a_{2}\right) \right] $ in the entangled state $\left \vert \eta \right \rangle ,$ \begin{equation} \mathcal{K}^{F}\left( \eta^{\prime},\eta \right) =\left \langle \eta^{\prime }\right \vert \exp \left[ -i\alpha \left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger }a_{2}\right) \right] \left \vert \eta \right \rangle , \label{13.1} \end{equation} as the integral transform kernel of the CFrFT, \begin{equation} \mathcal{F}_{\alpha}\left[ f\right] \left( \eta^{\prime}\right) =\int \frac {d^{2}\eta}{\pi}\mathcal{K}^{F}\left( \eta^{\prime},\eta \right) f\left( \eta \right) . \label{13.2} \end{equation} Using the normally ordered expansion $e^{\lambda a_{1}^{\dagger}a_{1} }=\colon \exp \left[ \left( e^{\lambda}-1\right) a_{1}^{\dagger}a_{1}\right] \colon$ and the completeness relation of the coherent state representation, $\left \vert z_{i}\right \rangle =\exp \left \{ -\frac{1}{2}\left \vert z_{i}\right \vert ^{2}+z_{i}a_{i}^{\dagger}\right \} \left \vert 0\right \rangle _{i},$ we calculate that $\mathcal{K}^{F}\left( \eta^{\prime},\eta \right) $ is \begin{align} \mathcal{K}^{F}\left( \eta^{\prime},\eta \right) & =\left \langle \eta^{\prime}\right \vert \int \frac{d^{2}z_{1}^{\prime}d^{2}z_{2}^{\prime}}{\pi ^{2}}\left \vert z_{1}^{\prime},z_{2}^{\prime}\right \rangle \left \langle z_{1}^{\prime},z_{2}^{\prime}\right \vert \colon \exp \left[ \left( e^{-i\alpha}-1\right) \left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger} a_{2}\right) \right] \colon \nonumber \\ & \times \int \frac{d^{2}z_{1}d^{2}z_{2}}{\pi^{2}}\left \vert z_{1} ,z_{2}\right \rangle \left \langle z_{1},z_{2}\right \vert \left.
\eta \right \rangle \nonumber \\ & =\frac{e^{i(\alpha-\frac{\pi}{2})}}{2\sin \alpha}\exp \left[ \frac {i(\left \vert \eta^{\prime}\right \vert ^{2}+\left \vert \eta \right \vert ^{2} )}{2\tan \alpha}-\frac{i\left( \eta^{\prime \ast}\eta+\eta^{\ast}\eta^{\prime }\right) }{2\sin \alpha}\right] , \label{13.3} \end{align} which is just the integral kernel of the CFrFT in \cite{r22}. Thus we see that the matrix element of $\exp \left[ -i\alpha \left( a_{1}^{\dagger}a_{1} +a_{2}^{\dagger}a_{2}\right) \right] $ between the two entangled state representations $\left \vert \eta \right \rangle $ and $\left \vert \eta^{\prime }\right \rangle $ corresponds to the CFrFT. This is a new route from a quantum optical transform to the classical CFrFT. Letting $\eta=x_{1}+iy_{1}$ and $\eta^{\prime}=x_{2}+iy_{2}$, Eq.~(\ref{13.2}) becomes \begin{align} \mathcal{F}_{\alpha}\left[ f\right] \left( x_{2},y_{2}\right) & =\frac{e^{i(\alpha-\frac{\pi}{2})}}{2\sin \alpha}\exp \left[ \frac{i\left( x_{2}^{2}+y_{2}^{2}\right) }{2\tan \alpha}\right] \nonumber \\ & \times \int \frac{dx_{1}dy_{1}}{\pi}\exp \left[ \frac{i\left( x_{1} ^{2}+y_{1}^{2}\right) }{2\tan \alpha}-i\frac{\left( x_{1}x_{2}+y_{1} y_{2}\right) }{\sin \alpha}\right] f\left( x_{1},y_{1}\right) . \label{13.4} \end{align} In fact, letting $f\left( \eta \right) =\left \langle \eta \right \vert \left. f\right \rangle $ and using Eqs.(\ref{3.13}) and (\ref{13.3}), we have \begin{align} & \left \langle \eta^{\prime}\right \vert \exp \left[ -i\alpha \left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}\right) \right] \left \vert f\right \rangle \nonumber \\ & =\int \frac{d^{2}\eta}{\pi}\left \langle \eta^{\prime}\right \vert \exp \left[ -i\alpha \left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}\right) \right] \left \vert \eta \right \rangle \left \langle \eta \right \vert \left.
f\right \rangle =\int \frac{d^{2}\eta}{\pi}\mathcal{K}^{F}\left( \eta^{\prime },\eta \right) f\left( \eta \right) \nonumber \\ & =\frac{e^{i(\alpha-\frac{\pi}{2})}}{2\sin \alpha}\int \frac{d^{2}\eta}{\pi }\exp \left[ \frac{i(\left \vert \eta^{\prime}\right \vert ^{2}+\left \vert \eta \right \vert ^{2})}{2\tan \alpha}-\frac{i\left( \eta^{\prime \ast}\eta +\eta^{\ast}\eta^{\prime}\right) }{2\sin \alpha}\right] f\left( \eta \right) . \label{13.5} \end{align} Thus the quantum mechanical version of the CFrFT is given by \begin{equation} \mathcal{F}_{\alpha}\left[ f\right] \left( \eta^{\prime}\right) \equiv \left \langle \eta^{\prime}\right \vert \exp \left[ -i\alpha \left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}\right) \right] \left \vert f\right \rangle . \label{13.6} \end{equation} The standard complex Fourier transform is $\mathcal{F}_{\pi/2},$ and $\mathcal{F}_{0}$ is the identity operator. \subsection{Additivity property and eigenmodes of CFrFT} We will show later that this CFrFT can help us to reveal a new property which has been overlooked in the formulation of the direct product of two real FrFTs \cite{r23}. The definition (\ref{13.6}) is of course required to satisfy the basic postulate that $\mathcal{F}_{\alpha}\mathcal{F}_{\beta }\left[ f\right] \left( \eta^{\prime}\right) =\mathcal{F}_{\alpha+\beta }\left[ f\left( \eta \right) \right] $ (the additivity property).
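As a quick numerical sanity check (not part of the original derivation), the statement that $\mathcal{F}_{\pi/2}$ reduces to the ordinary complex Fourier transform can be tested by discretizing the kernel (\ref{13.4}) on a grid: the chirps and the cross term factorize into two 1-D transforms, one per real axis of $\eta = x + iy$. The function \texttt{cfrft} below is an illustrative discretization under the grid choices shown, not code from any reference.

```python
import numpy as np

def cfrft(f, xs, alpha):
    # Discretized CFrFT kernel of Eq. (13.4) acting on f sampled at eta = x + i*y;
    # the cross term -i(x'x + y'y)/sin(alpha) and the chirps separate per axis.
    dx = xs[1] - xs[0]
    cot, sin_a = np.cos(alpha) / np.sin(alpha), np.sin(alpha)
    chirp = np.exp(0.5j * cot * xs**2)
    K = np.exp(-1j * np.outer(xs, xs) / sin_a) * chirp[None, :] * dx
    pref = np.exp(1j * (alpha - np.pi / 2)) / (2 * np.pi * sin_a)
    return pref * chirp[:, None] * chirp[None, :] * (K @ f @ K.T)

xs = np.linspace(-7.0, 7.0, 201)
X, Y = np.meshgrid(xs, xs, indexing="ij")
f = np.exp(-((X - 1.0)**2 + Y**2) / 2)      # displaced Gaussian test input
g = cfrft(f, xs, np.pi / 2)
# At alpha = pi/2 the chirps vanish and the kernel reduces to (1/(2*pi)) exp(-i(x'x + y'y)),
# whose action on f is the symmetrically normalized 2-D Fourier transform:
expected = np.exp(-1j * X - (X**2 + Y**2) / 2)
```

Here the displaced Gaussian is mapped to the same Gaussian times the phase $e^{-ix'}$, exactly as the continuum Fourier transform dictates.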
For this purpose, using Eq.(\ref{13.6}) and Eq.(\ref{3.13}) we see \begin{align} \mathcal{F}_{\alpha+\beta}\left[ f\left( \eta \right) \right] & \equiv \left \langle \eta^{\prime}\right \vert e^{-i(\alpha+\beta)\left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}\right) }\left \vert f\right \rangle \nonumber \\ & =\int_{-\infty}^{\infty}\frac{d^{2}\eta^{\prime \prime}}{\pi}\left \langle \eta^{\prime}\right \vert e^{-i\alpha \left( a_{1}^{\dagger}a_{1} +a_{2}^{\dagger}a_{2}\right) }\left \vert \eta^{\prime \prime}\right \rangle \nonumber \\ & \times \int_{-\infty}^{\infty}\frac{d^{2}\eta}{\pi}\left \langle \eta ^{\prime \prime}\right \vert e^{-i\beta \left( a_{1}^{\dagger}a_{1} +a_{2}^{\dagger}a_{2}\right) }\left \vert \eta \right \rangle f\left( \eta \right) \nonumber \\ & =\int_{-\infty}^{\infty}\frac{d^{2}\eta^{\prime \prime}}{\pi}\left \langle \eta^{\prime}\right \vert e^{-i\alpha \left( a_{1}^{\dagger}a_{1} +a_{2}^{\dagger}a_{2}\right) }\left \vert \eta^{\prime \prime}\right \rangle \mathcal{F}_{\beta}\left[ f\left( \eta \right) \right] \nonumber \\ & =\mathcal{F}_{\alpha}\mathcal{F}_{\beta}\left[ f\left( \eta \right) \right] . \label{13.7} \end{align} This derivation is clear and concise, employing the $\left \vert \eta \right \rangle $ representation and the quantum mechanical version of the CFrFT. On the other hand, the formula (\ref{13.6}) can help us to derive the CFrFT of some wave functions easily. For example, when $\left \vert f\right \rangle $ is a two-mode number state $\left \vert m,n\right \rangle =a_{1}^{\dag m} a_{2}^{\dag n}/\sqrt{m!n!}\left \vert 00\right \rangle $, the CFrFT of the wave function $\left \langle \eta \right \vert \left. m,n\right \rangle $ is \begin{align} \mathcal{F}_{\alpha}\left[ \left \langle \eta \right \vert \left.
m,n\right \rangle \right] & =\left \langle \eta^{\prime}\right \vert e^{-i\alpha \left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}\right) }\left \vert m,n\right \rangle \nonumber \\ & =e^{-i\alpha \left( m+n\right) }\left \langle \eta^{\prime }\right \vert \left. m,n\right \rangle . \label{13.8} \end{align} To calculate $\left \langle \eta^{\prime}\right \vert \left. m,n\right \rangle $, let us recall the definition (\ref{4.20,4.21}) of the two-variable Hermite polynomial $H_{m,n}\left( \xi,\xi^{\ast}\right) $; we can expand $\left \langle \eta^{\prime}\right \vert $ as \begin{equation} \left \langle \eta^{\prime}\right \vert =\left \langle 00\right \vert \sum _{m,n=0}^{\infty}i^{m+n}\frac{a_{1}^{m}a_{2}^{n}}{m!n!}H_{m,n}\left( -i\eta^{\prime \ast},i\eta^{\prime}\right) e^{-\left \vert \eta^{\prime }\right \vert ^{2}/2}, \label{13.9} \end{equation} thus \begin{equation} \left \langle \eta^{\prime}\right \vert \left. m,n\right \rangle =\frac{i^{m+n} }{\sqrt{m!n!}}H_{m,n}\left( -i\eta^{\prime \ast},i\eta^{\prime}\right) e^{-\left \vert \eta^{\prime}\right \vert ^{2}/2}. \label{13.10} \end{equation} As a result of (\ref{13.10}) we see that equation (\ref{13.8}) becomes \begin{equation} \mathcal{F}_{\alpha}\left[ H_{m,n}\left( -i\eta^{\ast},i\eta \right) e^{-\left \vert \eta \right \vert ^{2}/2}\right] =e^{-i\alpha \left( m+n\right) }H_{m,n}\left( -i\eta^{\prime \ast},i\eta^{\prime}\right) e^{-\left \vert \eta^{\prime}\right \vert ^{2}/2}. \label{13.11} \end{equation} If we regard the operation $\mathcal{F}_{\alpha}$ as an operator, the eigenfunctions of $\mathcal{F}_{\alpha}$ (the eigenmodes of the CFrFT) are the Gaussian-weighted two-variable Hermite polynomials $H_{m,n}$, with eigenvalue $e^{-i\alpha \left( m+n\right) }$. This is a new property of the CFrFT.
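Both the eigenmode relation (\ref{13.11}) and the additivity property (\ref{13.7}) can be checked numerically with a discretized CFrFT. The sketch below (an independent illustration under the stated grid and parameter choices, not code from any reference) applies the kernel of (\ref{13.4}) axis-by-axis, verifies that $\langle\eta|1,0\rangle=\eta^{\ast}e^{-|\eta|^{2}/2}$ is reproduced up to the factor $e^{-i\alpha}$, and confirms that two successive transforms agree with a single transform of the summed order.

```python
import numpy as np

def cfrft(f, xs, alpha):
    # discretized kernel of Eq. (13.4); it separates into two 1-D chirp transforms
    dx = xs[1] - xs[0]
    cot, sin_a = np.cos(alpha) / np.sin(alpha), np.sin(alpha)
    chirp = np.exp(0.5j * cot * xs**2)
    K = np.exp(-1j * np.outer(xs, xs) / sin_a) * chirp[None, :] * dx
    pref = np.exp(1j * (alpha - np.pi / 2)) / (2 * np.pi * sin_a)
    return pref * chirp[:, None] * chirp[None, :] * (K @ f @ K.T)

xs = np.linspace(-7.0, 7.0, 241)
X, Y = np.meshgrid(xs, xs, indexing="ij")
eta = X + 1j * Y

# eigenmode <eta|1,0> = eta* exp(-|eta|^2/2): eigenvalue e^{-i alpha (m+n)} with m+n = 1
alpha = 0.7
f10 = np.conj(eta) * np.exp(-np.abs(eta)**2 / 2)
g10 = cfrft(f10, xs, alpha)      # should equal e^{-i alpha} * f10 up to grid error

# additivity on a generic (non-eigenmode) input
f = np.exp(-np.abs(eta - 1.0)**2 / 2)
g_two = cfrft(cfrft(f, xs, 0.5), xs, 0.5)
g_one = cfrft(f, xs, 1.0)
```

The agreement of `g_two` and `g_one` mirrors the operator identity $e^{-i\alpha N}e^{-i\beta N}=e^{-i(\alpha+\beta)N}$ underlying (\ref{13.7}).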
Since the function space spanned by $H_{m,n}\left( \eta,\eta^{\ast}\right) $ is complete, \begin{equation} \int \frac{d^{2}\eta}{\pi}e^{-|\eta|^{2}}H_{m,n}\left( \eta,\eta^{\ast }\right) \left[ H_{m^{\prime},n^{\prime}}\left( \eta,\eta^{\ast}\right) \right] ^{\ast }=\sqrt{m!n!m^{\prime}!n^{\prime}!}\delta_{m,m^{\prime}}\delta_{n,n^{\prime}}, \label{13.12} \end{equation} and \begin{equation} \sum_{m,n=0}^{\infty}\frac{1}{m!n!}H_{m,n}\left( \eta,\eta^{\ast}\right) \left[ H_{m,n}\left( \eta^{\prime},\eta^{\prime \ast}\right) \right] ^{\ast}e^{-\left \vert \eta \right \vert ^{2}}=\pi \delta \left( \eta-\eta ^{\prime}\right) \delta \left( \eta^{\ast}-\eta^{\prime \ast}\right) , \label{13.13} \end{equation} one can confirm that the eigenmodes of the CFrFT form an orthogonal and complete basis set \cite{r24a}. Note that the two-variable Hermite polynomial $H_{m,n}\left( \eta,\eta^{\ast}\right) $ is not the direct product of two independent ordinary Hermite polynomials, so the CFrFT differs from the direct product of two FrFTs. \subsection{From Chirplet to CFrFT kernel} In this subsection, by developing Eq. (\ref{12.28}) to a more general case, which can be further related to the transformation between the two mutually conjugate entangled state representations $\left \vert \xi \right \rangle $ and $\left \vert \eta \right \rangle $, we shall propose a new integration transformation in $\xi-\eta$ phase space (see Eq. (\ref{13.14}) below) and its inverse transformation. We find that Eq. (\ref{13.14}) also possesses some well-behaved transformation properties and can be used to obtain the CFrFT kernel from a chirplet \cite{r24}. \subsubsection{New complex integration transformation} Corresponding to the structure of phase space spanned by $\left \vert \xi \right \rangle $ and $\left \vert \eta \right \rangle $ and enlightened by Eq.
(\ref{12.28}), we propose a new complex integration transformation in $\xi-\eta$ phase space \begin{equation} \int \frac{d^{2}\xi d^{2}\eta}{\pi^{2}}e^{\left( \xi-\mu \right) \left( \eta^{\ast}-\nu^{\ast}\right) -\left( \eta-\nu \right) \left( \xi^{\ast }-\mu^{\ast}\right) }\mathcal{F}(\eta,\xi)\equiv D\left( \nu,\mu \right) . \label{13.14} \end{equation} When $\mathcal{F}(\eta,\xi)=1,$ (\ref{13.14}) becomes \begin{align} & \int \frac{d^{2}\xi d^{2}\eta}{\pi^{2}}e^{\left( \xi-\mu \right) \left( \eta^{\ast}-\nu^{\ast}\right) -\left( \eta-\nu \right) \left( \xi^{\ast }-\mu^{\ast}\right) }\nonumber \\ & =\int d^{2}\xi \delta \left( \xi-\mu \right) \delta \left( \xi^{\ast} -\mu^{\ast}\right) e^{\nu \left( \xi^{\ast}-\mu^{\ast}\right) -\nu^{\ast }\left( \xi-\mu \right) }=1, \label{13.15} \end{align} so $e^{\left( \xi-\mu \right) \left( \eta^{\ast}-\nu^{\ast}\right) -\left( \eta-\nu \right) \left( \xi^{\ast}-\mu^{\ast}\right) }$ can be considered a basis function in $\xi-\eta$ phase space, or Eq. (\ref{13.14}) can be viewed as an expansion of $D\left( \nu,\mu \right) $ in terms of $e^{\left( \xi -\mu \right) \left( \eta^{\ast}-\nu^{\ast}\right) -\left( \eta-\nu \right) \left( \xi^{\ast}-\mu^{\ast}\right) },$ with the expansion coefficient being $\mathcal{F}(\eta,\xi).$ We can prove that the inverse transform of (\ref{13.14}) is \begin{equation} \int \frac{d^{2}\mu d^{2}\nu}{\pi^{2}}e^{\left( \xi^{\ast}-\mu^{\ast}\right) \left( \eta-\nu \right) -\left( \eta^{\ast}-\nu^{\ast}\right) \left( \xi-\mu \right) }D\left( \nu,\mu \right) \equiv \mathcal{F}(\eta,\xi).
\label{13.16} \end{equation} In fact, substituting (\ref{13.14}) into the left-hand side of (\ref{13.16}) yields \begin{align} & \int \frac{d^{2}\xi^{\prime}d^{2}\eta^{\prime}}{\pi^{2}}\mathcal{F} (\eta^{\prime},\xi^{\prime})\int \frac{d^{2}\mu d^{2}\nu}{\pi^{2}}\nonumber \\ & \times e^{\left( \xi^{\prime}-\mu \right) \left( \eta^{\prime \ast} -\nu^{\ast}\right) -\left( \eta^{\prime}-\nu \right) \left( \xi^{\prime \ast}-\mu^{\ast}\right) +\left( \xi^{\ast}-\mu^{\ast}\right) \left( \eta-\nu \right) -\left( \eta^{\ast}-\nu^{\ast}\right) \left( \xi -\mu \right) }\nonumber \\ & =\int \frac{d^{2}\xi^{\prime}d^{2}\eta^{\prime}}{\pi^{2}}\mathcal{F} (\eta^{\prime},\xi^{\prime})e^{\left( \xi^{\prime}\eta^{\prime \ast} -\eta^{\prime}\xi^{\prime \ast}+\xi^{\ast}\eta-\eta^{\ast}\xi \right) }\nonumber \\ & \times \iint \frac{d^{2}\mu d^{2}\nu}{\pi^{2}}e^{\left( \eta^{\ast} -\eta^{\prime \ast}\right) \mu+\left( \eta^{\prime}-\eta \right) \mu^{\ast} }e^{\left( \xi^{\prime \ast}-\xi^{\ast}\right) \nu+\left( \xi-\xi^{\prime }\right) \nu^{\ast}}\nonumber \\ & =\int d^{2}\xi^{\prime}d^{2}\eta^{\prime}\mathcal{F}(\eta^{\prime} ,\xi^{\prime})e^{\left( \xi^{\prime}\eta^{\prime \ast}-\eta^{\prime} \xi^{\prime \ast}+\xi^{\ast}\eta-\eta^{\ast}\xi \right) }\nonumber \\ & \times \delta^{\left( 2\right) }\left( \eta^{\prime}-\eta \right) \delta^{\left( 2\right) }\left( \xi-\xi^{\prime}\right) =\mathcal{F}(\eta,\xi).
\label{13.17} \end{align} A Parseval-like theorem for this transformation can also be demonstrated: \begin{align} & \int \frac{d^{2}\xi d^{2}\eta}{\pi^{2}}|\mathcal{F}(\eta,\xi)|^{2} \nonumber \\ & =\int \frac{d^{2}\mu d^{2}\nu}{\pi^{2}}D\left( \nu,\mu \right) \iint \frac{d^{2}\mu^{\prime}d^{2}\nu^{\prime}}{\pi^{2}}D^{\ast}\left( \nu^{\prime},\mu^{\prime}\right) \nonumber \\ & \times \exp \left[ \left( \mu^{\ast}\nu-\nu^{\ast}\mu \right) +\left( \mu^{\prime}\nu^{\prime \ast}-\nu^{\prime}\mu^{\prime \ast}\right) \right] \nonumber \\ & \times \int \frac{d^{2}\xi d^{2}\eta}{\pi^{2}}\exp \left[ \left( \mu ^{\prime \ast}-\mu^{\ast}\right) \eta+\left( \mu-\mu^{\prime}\right) \eta^{\ast}\right] \nonumber \\ & \times \exp \left[ (\nu^{\ast}-\nu^{\prime \ast})\xi+(\nu^{\prime}-\nu )\xi^{\ast}\right] \nonumber \\ & =\int \frac{d^{2}\mu d^{2}\nu}{\pi^{2}}D\left( \nu,\mu \right) \iint d^{2}\mu^{\prime}d^{2}\nu^{\prime}D^{\ast}\left( \nu^{\prime},\mu^{\prime}\right) \nonumber \\ & \times \exp \left[ \left( \mu^{\ast}\nu-\nu^{\ast}\mu \right) +\left( \mu^{\prime}\nu^{\prime \ast}-\nu^{\prime}\mu^{\prime \ast}\right) \right] \nonumber \\ & \times \delta^{\left( 2\right) }\left( \mu-\mu^{\prime}\right) \delta^{\left( 2\right) }(\nu^{\prime}-\nu)\nonumber \\ & =\int \frac{d^{2}\mu d^{2}\nu}{\pi^{2}}|D\left( \nu,\mu \right) |^{2}. \label{13.18} \end{align} \subsubsection{Complex integration transformation and complex Weyl transformation} In Ref. \cite{r25}, for correlated two-body systems, we successfully established the so-called entangled Wigner operator, expressed in the entangled state $\left \langle \eta \right \vert $ representation as (\ref{3.27}), \begin{equation} \Delta \left( \sigma,\gamma \right) \rightarrow \Delta(\eta,\xi)=\int \frac{d^{2}\sigma}{\pi^{3}}\left \vert \eta-\sigma \right \rangle \left \langle \eta+\sigma \right \vert e^{\sigma \xi^{\ast}-\sigma^{\ast}\xi}. \label{13.19} \end{equation} The advantage of introducing $\Delta(\eta,\xi)$ can be seen in Ref. \cite{r26}.
The corresponding Wigner function for a density matrix $\rho$ is \begin{equation} W_{\rho}(\eta,\xi)=\int \frac{d^{2}\sigma}{\pi^{3}}\left \langle \eta +\sigma \right \vert \rho \left \vert \eta-\sigma \right \rangle e^{\sigma \xi^{\ast }-\sigma^{\ast}\xi}. \label{13.20} \end{equation} If $\mathcal{F}(\eta,\xi)$ is quantized as the operator $F\left( Q_{1},Q_{2} ,P_{1},P_{2}\right) $ through the Weyl-Wigner correspondence \begin{equation} F\left( Q_{1},Q_{2},P_{1},P_{2}\right) =\int d^{2}\eta d^{2}\xi \mathcal{F}(\eta,\xi)\Delta(\eta,\xi), \label{13.21} \end{equation} then, using (\ref{13.19}), we see \begin{align} \mathcal{F}(\eta,\xi) & =4\pi^{2}\mathtt{Tr}\left[ F\left( Q_{1} ,Q_{2},P_{1},P_{2}\right) \Delta(\eta,\xi)\right] \nonumber \\ & =4\int \frac{d^{2}\sigma}{\pi}e^{\sigma \xi^{\ast}-\sigma^{\ast}\xi }\left \langle \eta+\sigma \right \vert F\left( Q_{1},Q_{2},P_{1},P_{2}\right) \left \vert \eta-\sigma \right \rangle , \label{13.22} \end{align} which is named the complex Weyl transform; $\mathcal{F}(\eta,\xi)$ is the Weyl classical correspondence of $F\left( Q_{1},Q_{2},P_{1},P_{2}\right) $.
Substituting (\ref{13.22}) into (\ref{13.14}) we get \begin{align} & \iint \frac{d^{2}\xi d^{2}\eta}{\pi^{2}}e^{\left( \xi-\mu \right) \left( \eta^{\ast}-\nu^{\ast}\right) -\left( \eta-\nu \right) \left( \xi^{\ast }-\mu^{\ast}\right) }\mathcal{F}(\eta,\xi)\nonumber \\ & =\iint \frac{d^{2}\xi d^{2}\eta}{\pi^{2}}e^{\left( \xi-\mu \right) \left( \eta^{\ast}-\nu^{\ast}\right) -\left( \eta-\nu \right) \left( \xi^{\ast }-\mu^{\ast}\right) }\nonumber \\ & \times4\int \frac{d^{2}\sigma}{\pi}e^{\sigma \xi^{\ast}-\sigma^{\ast}\xi }\left \langle \eta+\sigma \right \vert F\left( Q_{1},Q_{2},P_{1},P_{2}\right) \left \vert \eta-\sigma \right \rangle \nonumber \\ & =4\int \frac{d^{2}\sigma d^{2}\eta}{\pi}e^{-\mu \left( \eta^{\ast}-\nu ^{\ast}\right) +\mu^{\ast}\left( \eta-\nu \right) }\delta \left( \eta^{\ast }-\nu^{\ast}-\sigma^{\ast}\right) \delta \left( \eta-\nu-\sigma \right) \nonumber \\ & \times \left \langle \eta+\sigma \right \vert F\left( Q_{1},Q_{2},P_{1} ,P_{2}\right) \left \vert \eta-\sigma \right \rangle \nonumber \\ & =4\int \frac{d^{2}\sigma}{\pi}e^{\mu^{\ast}\sigma-\mu \sigma^{\ast} }\left \langle \nu+2\sigma \right \vert F\left( Q_{1},Q_{2},P_{1},P_{2}\right) \left \vert \nu \right \rangle . \label{13.23} \end{align} Using (\ref{3.12}), we have \ \begin{align} \left \langle \nu+2\sigma \right \vert & =\left \langle 2\sigma \right \vert \exp \left \{ \frac{i}{\sqrt{2}}\left[ \nu_{1}\left( P_{1}-P_{2}\right) -\nu_{2}\left( Q_{1}+Q_{2}\right) \right] \right \} ,\label{13.24}\\ \nu & =\nu_{1}+i\nu_{2}.\nonumber \end{align} As a result of (\ref{13.24}) and $\frac{1}{2}e^{\mu^{\ast}\sigma-\mu \sigma^{\ast}}=\left \langle \xi_{=\mu}\right \vert \left. 2\sigma \right \rangle ,$ we see \begin{align} & 4\int d^{2}\sigma e^{\mu^{\ast}\sigma-\mu \sigma^{\ast}}\left \langle \nu+2\sigma \right \vert \nonumber \\ & =8\int d^{2}\sigma \left \langle \xi_{=\mu}\right \vert \left. 
2\sigma \right \rangle \left \langle 2\sigma \right \vert \exp \{ \frac{i}{\sqrt{2}}\left[ \nu_{1}\left( P_{1}-P_{2}\right) -\nu_{2}\left( Q_{1}+Q_{2}\right) \right] \} \nonumber \\ & =2\pi \left \langle \xi_{=\mu}\right \vert \exp \{ \frac{i}{\sqrt{2}}\left[ \nu_{1}\left( P_{1}-P_{2}\right) -\nu_{2}\left( Q_{1}+Q_{2}\right) \right] \} \nonumber \\ & =2\pi \left \langle \xi_{=\mu}\right \vert e^{i\left( \mu_{2}\nu_{1}-\mu _{1}\nu_{2}\right) }. \label{13.25} \end{align} Using (\ref{13.25}), we rewrite Eq. (\ref{13.23}) as \begin{align} & \iint \frac{d^{2}\xi d^{2}\eta}{\pi^{2}}e^{\left( \xi-\mu \right) \left( \eta^{\ast}-\nu^{\ast}\right) -\left( \eta-\nu \right) \left( \xi^{\ast }-\mu^{\ast}\right) }\mathcal{F}(\eta,\xi)\nonumber \\ & =2\pi \left \langle \xi_{=\mu}\right \vert F\left( Q_{1},Q_{2},P_{1} ,P_{2}\right) \left \vert \nu \right \rangle e^{i\left( \nu_{1}\mu_{2}-\nu _{2}\mu_{1}\right) }. \label{13.26} \end{align} The inverse of (\ref{13.26}), according to (\ref{13.16}), is \begin{align} \mathcal{F}(\eta,\xi) & =\iint \frac{2d^{2}\mu d^{2}\nu}{\pi}e^{\left( \xi^{\ast}-\mu^{\ast}\right) \left( \eta-\nu \right) -\left( \eta^{\ast }-\nu^{\ast}\right) \left( \xi-\mu \right) }\nonumber \\ & \times \left \langle \xi_{=\mu}\right \vert F\left( Q_{1},Q_{2},P_{1} ,P_{2}\right) \left \vert \nu \right \rangle e^{i\left( \nu_{1}\mu_{2}-\nu _{2}\mu_{1}\right) }. \label{13.27} \end{align} Thus, through the new integration transformation, a new relationship between a phase space function $\mathcal{F}(\eta,\xi)$ and its Weyl-Wigner correspondence operator $F\left( Q_{1},Q_{2},P_{1},P_{2}\right) $ is revealed.
For example, from the following Weyl-Wigner correspondence \begin{equation} \frac{4}{\left( e^{f}+1\right) ^{2}}\exp \left[ \frac{e^{f}-1}{e^{f} +1}(\left \vert \eta \right \vert ^{2}+\left \vert \xi \right \vert ^{2})\right] \rightarrow \exp \{f[K_{+}+K_{-}-1]\}, \label{13.28} \end{equation} ($K_{+}$ and $K_{-}$ are defined in Eqs.(\ref{K1}) and (\ref{K2})) and (\ref{13.26}) we have \begin{align} & \frac{4}{\left( e^{f}+1\right) ^{2}}\iint \frac{d^{2}\xi d^{2}\eta} {\pi^{2}}e^{\left( \xi-\mu \right) \left( \eta^{\ast}-\nu^{\ast}\right) -\left( \eta-\nu \right) \left( \xi^{\ast}-\mu^{\ast}\right) }\nonumber \\ & \times \exp \left[ \frac{e^{f}-1}{e^{f}+1}(\left \vert \eta \right \vert ^{2}+\left \vert \xi \right \vert ^{2})\right] \nonumber \\ & =2\pi \left \langle \xi_{=\mu}\right \vert F\left( Q_{1},Q_{2},P_{1} ,P_{2}\right) \left \vert \nu \right \rangle e^{i\left( \nu_{1}\mu_{2}-\nu _{2}\mu_{1}\right) }. \label{13.29} \end{align} Using the Gaussian integration formula \begin{align} & \iint \frac{d^{2}\xi d^{2}\eta}{\pi^{2}}e^{\left( \xi-\mu \right) \left( \eta^{\ast}-\nu^{\ast}\right) -\left( \eta-\nu \right) \left( \xi^{\ast }-\mu^{\ast}\right) }e^{-\lambda(\left \vert \eta \right \vert ^{2}+\left \vert \xi \right \vert ^{2})}\nonumber \\ & =\frac{1}{1+\lambda^{2}}\exp \left[ -\frac{\lambda(\left \vert \mu \right \vert ^{2}+\left \vert \nu \right \vert ^{2})}{1+\lambda^{2}} +\frac{\lambda^{2}\left( \mu \nu^{\ast}-\mu^{\ast}\nu \right) }{1+\lambda^{2} }\right] , \label{13.30} \end{align} in particular, when $\lambda=-i\tan \left( \frac{\pi}{4}-\frac{\alpha} {2}\right) ,$ for which $\frac{-\lambda}{\lambda^{2}+1}=\frac{i}{2\tan \alpha}$ and $\frac{\lambda^{2}}{\lambda^{2}+1}=\frac{1}{2}-\frac{1}{2\sin \alpha},$ Eq.
(\ref{13.30}) becomes \begin{align} & \iint \frac{d^{2}\xi d^{2}\eta}{\pi^{2}}e^{\left( \xi-\mu \right) \left( \eta^{\ast}-\nu^{\ast}\right) -\left( \eta-\nu \right) \left( \xi^{\ast }-\mu^{\ast}\right) }\exp \left[ i\tan(\frac{\pi}{4}-\frac{\alpha} {2})(\left \vert \eta \right \vert ^{2}+\left \vert \xi \right \vert ^{2})\right] \nonumber \\ & =\frac{1+\sin \alpha}{2\sin \alpha}\exp \left[ \frac{i(\left \vert \mu \right \vert ^{2}+\left \vert \nu \right \vert ^{2})}{2\tan \alpha}-\frac{\mu \nu^{\ast}-\mu^{\ast}\nu}{2\sin \alpha}+i\mu_{2}\nu_{1}-i\mu_{1}\nu_{2}\right] , \label{13.31} \end{align} where $\exp[i\tan \left( \frac{\pi}{4}-\frac{\alpha}{2}\right) (\left \vert \eta \right \vert ^{2}+\left \vert \xi \right \vert ^{2})]$ represents an infinitely long chirplet function. By taking $f=i(\frac{\pi}{2}-\alpha)$ in (\ref{13.29}), such that $ie^{-i\alpha}=e^{f},$ and comparing with (\ref{13.31}) we obtain \begin{align} & \left \langle \xi_{=\mu}\right \vert F\left( Q_{1},Q_{2},P_{1},P_{2}\right) \left \vert \nu \right \rangle \nonumber \\ & =\frac{-ie^{i\alpha}}{2\pi \sin \alpha}\exp \left[ \frac{i(\left \vert \mu \right \vert ^{2}+\left \vert \nu \right \vert ^{2})}{2\tan \alpha}-\frac{\mu \nu^{\ast}-\mu^{\ast}\nu}{2\sin \alpha}\right] , \label{13.32} \end{align} where the right-hand side of (\ref{13.32}) is just the CFrFT kernel, whose properties can be seen in Ref. \cite{r26}. (One may compare the forms (\ref{13.3}) and (\ref{13.32}) to see their slight difference; for the relation between them we refer to Refs. \cite{r24,r26}.) Dragoman has shown that the kernel of the CFrFT can be classically produced with rotated astigmatic optical systems that mimic the quantum entanglement. Therefore the new integration transformation (\ref{13.14}) can convert a spherical wave into the CFrFT kernel. We expect that this transformation could be implemented by experimentalists.
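Since the exponent in (\ref{13.30}) couples the real and imaginary parts of $\xi-\mu$ and $\eta-\nu$ only pairwise, the four-fold integral factorizes into 1-D Gaussian--Fourier integrals, and the Gaussian formula can be verified numerically for real $\lambda>0$. The following is an independent numerical sketch (the parameter values are arbitrary choices, not taken from the text):

```python
import numpy as np

lam = 0.8                                   # real lambda > 0 ensures convergence
mu, nu = 0.3 + 0.5j, -0.4 + 0.2j            # arbitrary test points
t = np.linspace(-9.0, 9.0, 361)             # shared grid for every real axis
g = np.exp(-lam * t**2) * (t[1] - t[0])     # Gaussian weight times grid step

p = t - mu.real                             # Re(xi - mu), varies along Re(xi)
q = t - mu.imag                             # Im(xi - mu), varies along Im(xi)

# eta-integral: with u = xi - mu and eta = c + i*d the exponent
# u*eta^* - eta*u^* = 2i(q c - p d) splits into two 1-D sums
A = np.exp(2j * np.outer(q, t)) @ g         # integral over c, indexed by q
B = np.exp(-2j * np.outer(p, t)) @ g        # integral over d, indexed by p

# xi-integral: nu*u^* - u*nu^* = 2i(Im(nu) p - Re(nu) q), also separable
Sx = np.sum(g * np.exp(2j * nu.imag * p) * B)
Sy = np.sum(g * np.exp(-2j * nu.real * q) * A)
lhs = Sx * Sy / np.pi**2

# right-hand side of Eq. (13.30)
rhs = np.exp((-lam * (abs(mu)**2 + abs(nu)**2)
              + lam**2 * (mu * np.conj(nu) - np.conj(mu) * nu)) / (1 + lam**2)
             ) / (1 + lam**2)
```

The chirplet case $\lambda=-i\tan(\frac{\pi}{4}-\frac{\alpha}{2})$ then follows from this formula by analytic continuation in $\lambda$.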
\subsection{Squeezing for the generalized scaled FrFT} In some practical applications it is necessary to introduce input and output scale parameters \cite{17,18} into the FrFT, i.e., the scaled FrFT. The reason lies in two facts: (1) the scaled FrFT may be more useful and convenient for optical information processing, owing to the scale parameters (free parameters) introduced into the FrFT; (2) it reduces to the conventional FrFT under a given condition. In this subsection, by establishing the relation between the optical scaled FrFT and the quantum mechanical squeezing-rotating operator transform in the one-mode case, we employ the IWOP technique and the bipartite entangled state representation of the two-mode squeezing operator to extend the scaled FrFT to more general cases, such as the scaled complex FrFT and the entangled scaled FrFT. The properties of scaled FrFTs can be seen more clearly from the viewpoint of representation transform in quantum mechanics. \subsubsection{Quantum correspondence of the scaled FrFT} The scaled FrFT \cite{r17} of $\alpha$-order is defined in a manner such that the usual FrFT is its special case, i.e., \begin{equation} \mathcal{F}_{\alpha}\left[ f\left( x\right) \right] =\sqrt{\frac {e^{i\left( \frac{\pi}{2}-\alpha \right) }}{2\pi \mu \nu \sin \alpha}} \int_{-\infty}^{\infty}\exp \left \{ -i\frac{x^{2}/\mu^{2}+y^{2}/\nu^{2}} {2\tan \alpha}+\frac{ixy}{\mu \nu \sin \alpha}\right \} f\left( x\right) dx, \label{13.33} \end{equation} where the exponential factor is the integral kernel and $y$ is the output variable.
In a similar way to deriving the quantum correspondence of the FrFT in (\ref{12.4}), and using the natural representation of the single-mode squeezing operator $S_{1}$ in the coordinate representation \cite{r27}, \begin{equation} S_{1}\left( \mu \right) =\frac{1}{\sqrt{\mu}}\int_{-\infty}^{\infty }dx\left \vert \frac{x}{\mu}\right \rangle \left \langle x\right \vert , \label{13.34} \end{equation} we have \begin{align} & \exp \left \{ -i\frac{x^{2}/\mu^{2}+y^{2}/\nu^{2}}{2\tan \alpha} +\frac{ixy}{\mu \nu \sin \alpha}\right \} \nonumber \\ & =\sqrt{-2\pi i\mu \nu e^{i\alpha}\sin \alpha}\left \langle y\right \vert S_{1}^{\dagger}\left( \nu \right) \exp \left \{ i\alpha a^{\dagger}a\right \} S_{1}\left( \mu \right) \left \vert x\right \rangle , \label{13.35} \end{align} which implies that the integral kernel in Eq.(\ref{13.33}) is just the matrix element of the operator $S_{1}^{\dagger}\left( \nu \right) \exp \left \{ i\alpha a^{\dagger}a\right \} S_{1}\left( \mu \right) $ between coordinate eigenstates. From Eq.(\ref{13.35}) it then follows that \begin{align} \mathcal{F}_{\alpha}\left[ f\left( x\right) \right] & =\int_{-\infty }^{\infty}dx\left \langle y\right \vert S_{1}^{\dagger}\left( \nu \right) e^{i\alpha a^{\dagger}a}S_{1}\left( \mu \right) \left \vert x\right \rangle f\left( x\right) \nonumber \\ & =\left \langle y\right \vert S_{1}^{\dagger}\left( \nu \right) e^{i\alpha a^{\dagger}a}S_{1}\left( \mu \right) \left \vert f\right \rangle \equiv g\left( y\right) , \label{13.36} \end{align} which suggests \begin{equation} \left \vert g\right \rangle =S_{1}^{\dagger}\left( \nu \right) e^{i\alpha a^{\dagger}a}S_{1}\left( \mu \right) \left \vert f\right \rangle .
\label{13.37} \end{equation} From Eqs.(\ref{13.36})\ and (\ref{13.33}) one can see that the scaled FrFT corresponds to the squeezing-rotating operator $S_{1}^{\dagger}\left( \nu \right) e^{i\alpha a^{\dagger}a} S_{1}\left( \mu \right) $ transform between two quantum states. \subsubsection{The Scaled CFrFT} On the basis of the quantum mechanical version of the one-mode scaled FrFT, we generalize it to the two-mode case, i.e., we can introduce the integral \begin{align} \mathcal{F}_{\alpha}^{C}\left[ f\left( \eta \right) \right] & \equiv \left \langle \eta^{\prime}\right \vert S_{2}^{\dagger}\left( \nu \right) e^{i\alpha \left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}\right) } S_{2}\left( \mu \right) \left \vert f\right \rangle \nonumber \\ & =\int_{-\infty}^{\infty}\frac{d^{2}\eta}{\pi}\left \langle \eta^{\prime }\right \vert S_{2}^{\dagger}\left( \nu \right) e^{i\alpha \left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}\right) }S_{2}\left( \mu \right) \left \vert \eta \right \rangle f\left( \eta \right) , \label{13.38} \end{align} where $f\left( \eta \right) =$ $\left \langle \eta \right. \left \vert f\right \rangle $.
Using the natural expression of the two-mode squeezing operator $S_{2}$ (\ref{3.16}), and noticing that $\left \langle \eta^{\prime }\right \vert e^{i\alpha \left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger} a_{2}\right) }\left \vert \eta \right \rangle $ is just the integral kernel of the CFrFT ((\ref{13.3}) with $\alpha \rightarrow -\alpha$), we can recast (\ref{13.38}) as \begin{align} \mathcal{F}_{\alpha}^{C}\left[ f\left( \eta \right) \right] & =\frac{e^{i\left( \frac{\pi}{2}-\alpha \right) }}{2\mu \nu \sin \alpha}\int \frac{d^{2}\eta}{\pi}f\left( \eta \right) \nonumber \\ & \times \exp \left \{ -\frac{i(\left \vert \eta^{\prime}\right \vert ^{2} /\nu^{2}+\allowbreak \left \vert \eta \right \vert ^{2}/\mu^{2})}{2\tan \alpha }+\frac{i\left( \eta^{\prime}{}^{\ast}\allowbreak \eta+\eta^{\ast} \allowbreak \eta^{\prime}\right) }{2\mu \nu \sin \alpha}\right \} . \label{13.39} \end{align} It is obvious that Eq.(\ref{13.39}) is just a generalized CFrFT with squeezing parameters; we name it the scaled CFrFT. Thus we link a two-mode squeezing-rotating operator transform to the scaled CFrFT of complex functions. \subsubsection{Entangled scaled FrFT} On the other hand, recall that the entangled state $\left \vert \eta \right \rangle $ can be Schmidt-decomposed as \cite{r28} \begin{equation} \left \vert \eta \right \rangle =e^{-i\eta_{1}\eta_{2}}\int_{-\infty}^{\infty }dx\left \vert x\right \rangle _{1}\otimes \left \vert x-\sqrt{2}\eta _{1}\right \rangle _{2}e^{i\sqrt{2}x\eta_{2}}, \label{13.40} \end{equation} from which we see that \begin{align} \left \langle x_{1}^{\prime},x_{2}^{\prime}\right \vert \left. \eta^{\prime }\right \rangle & =e^{-i\eta_{1}^{\prime}\eta_{2}^{\prime}}\delta \left( \sqrt{2}\eta_{1}^{\prime}+x_{2}^{\prime}-x_{1}^{\prime}\right) e^{i\sqrt {2}x_{1}^{\prime}\eta_{2}^{\prime}},\nonumber \\ \left \langle \eta \right \vert \left. x_{1},x_{2}\right \rangle & =e^{i\eta_{1}\eta_{2}}\delta \left( \sqrt{2}\eta_{1}+x_{2}-x_{1}\right) e^{-i\sqrt{2}x_{1}\eta_{2}}.
\label{13.41} \end{align} Using Eq.(\ref{3.13}) we have \begin{align} K\left( x_{1}^{\prime},x_{2}^{\prime},x_{1},x_{2}\right) & \equiv \left \langle x_{1}^{\prime},x_{2}^{\prime}\right \vert S_{2}^{\dagger}\left( \nu \right) e^{i\alpha \left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger} a_{2}\right) }S_{2}\left( \mu \right) \left \vert x_{1},x_{2}\right \rangle \nonumber \\ & =\int \frac{d^{2}\eta d^{2}\eta^{\prime}}{\pi^{2}}\left \langle x_{1} ^{\prime},x_{2}^{\prime}\right \vert \left. \eta^{\prime}\right \rangle \left \langle \eta^{\prime}\right \vert S_{2}^{\dagger}\left( \nu \right) e^{i\alpha \left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}\right) } S_{2}\left( \mu \right) \left \vert \eta \right \rangle \left \langle \eta \right. \left \vert x_{1},x_{2}\right \rangle , \label{13.42} \end{align} where $\left \vert x_{1},x_{2}\right \rangle =\left \vert x_{1}\right \rangle \otimes \left \vert x_{2}\right \rangle .$ On substituting Eqs. (\ref{13.40}) and (\ref{13.41}) into Eq.(\ref{13.42}), we can derive \begin{align} K\left( x_{1}^{\prime},x_{2}^{\prime},x_{1},x_{2}\right) & =\left \{ \sqrt{\frac{e^{i\left( \frac{\pi}{2}-\alpha \right) }}{2\pi \sin \alpha}} \exp \left[ -i\frac{\lambda_{\nu}^{\prime2}+\lambda_{\mu}^{2}}{2\tan \alpha }+\frac{i\lambda_{\mu}\lambda_{\nu}^{\prime}}{\sin \alpha}\right] \right \} \nonumber \\ & \times \left \{ \sqrt{\frac{e^{i\left( \frac{\pi}{2}-\alpha \right) }} {2\pi \sin \alpha}}\exp \left[ -i\frac{\kappa_{\nu}^{\prime2}+\kappa_{\mu}^{2} }{2\tan \alpha}+\frac{i\kappa_{\mu}\kappa_{\nu}^{\prime}}{\sin \alpha}\right] \right \} , \label{13.43} \end{align} where $\lambda_{\mu}=\frac{x_{1}-x_{2}}{\sqrt{2}\mu},$ $\lambda_{\nu}^{\prime }=\frac{x_{1}^{\prime}-x_{2}^{\prime}}{\sqrt{2}\nu};\kappa_{\mu}=\frac {\mu \left( x_{2}+x_{1}\right) }{\sqrt{2}},\kappa_{\nu}^{\prime}=\frac {\nu \left( x_{1}^{\prime}+x_{2}^{\prime}\right) }{\sqrt{2}}.$ From Eq.(\ref{13.43}) one can see that a new 2-dimensional (2D) scaled FrFT can be composed of one 1D scaled
FrFT in the space domain and another in the ``frequency'' domain, with the transform variables being combinations of the two coordinates, as shown below Eq.(\ref{13.43}); hence Eq.(\ref{13.43}) is quite different from the direct product of two 1D scaled FrFTs that are both in the space domain, as indicated in Eq.(\ref{13.36}). Note that the new 2D scaled FrFT is still characterized by only three parameters. Therefore, for any function $f\left( x_{1},x_{2}\right) =\left \langle x_{1},x_{2}\right. \left \vert f\right \rangle $ we can define an entangled scaled FrFT, i.e., \begin{align} \mathcal{F}_{\alpha}^{E}\left[ f\left( x_{1},x_{2}\right) \right] & =\int_{-\infty}^{\infty}K\left( x_{1}^{\prime},x_{2}^{\prime},x_{1} ,x_{2}\right) f\left( x_{1},x_{2}\right) dx_{1}dx_{2}\nonumber \\ & =\left \langle x_{1}^{\prime},x_{2}^{\prime}\right \vert S_{2}^{\dagger }\left( \nu \right) e^{i\alpha \left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger }a_{2}\right) }S_{2}\left( \mu \right) \left \vert f\right \rangle . \label{13.45} \end{align} Next we examine the properties of these scaled FrFTs in the quantum optics context. Without loss of generality, for the additivity property, we consider the scaled CFrFT, \begin{equation} \mathcal{F}_{\alpha+\beta}^{C}\left[ f\left( \eta \right) \right] \equiv \int \frac{d^{2}\eta}{\pi}\left \langle \eta^{\prime}\right \vert S_{2}^{\dagger}\left( \nu \right) e^{i(\alpha+\beta)\left( a_{1}^{\dagger }a_{1}+a_{2}^{\dagger}a_{2}\right) }S_{2}\left( \mu \right) \left \vert \eta \right \rangle f\left( \eta \right) .
\label{13.46} \end{equation} Inserting the completeness relation of $\left \vert \eta \right \rangle $ into Eq.(\ref{13.46}) yields \begin{align} \mathcal{F}_{\alpha+\beta}^{C}\left[ f\left( \eta \right) \right] & =\int \frac{d^{2}\eta}{\pi}\left \langle \eta^{\prime}\right \vert S_{2} ^{\dagger}\left( \nu \right) e^{i\alpha \left( a_{1}^{\dagger}a_{1} +a_{2}^{\dagger}a_{2}\right) }S_{2}\left( \tau \right) S_{2}^{\dagger }\left( \tau \right) e^{i\beta \left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger }a_{2}\right) }S_{2}\left( \mu \right) \left \vert \eta \right \rangle f\left( \eta \right) \nonumber \\ & =\int \frac{d^{2}\eta^{\prime \prime}}{\pi}\left \langle \eta^{\prime }\right \vert S_{2}^{\dagger}\left( \nu \right) e^{i\alpha \left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}\right) }S_{2}\left( \mu_{=\tau }^{\prime}\right) \left \vert \eta^{\prime \prime}\right \rangle \nonumber \\ & \times \int \frac{d^{2}\eta}{\pi}\left \langle \eta^{\prime \prime}\right \vert S_{2}^{\dagger}\left( \nu_{=\tau}^{\prime}\right) e^{i\beta \left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}\right) }S_{2}\left( \mu \right) \left \vert \eta \right \rangle f\left( \eta \right) \nonumber \\ & =\int \frac{d^{2}\eta^{\prime \prime}}{\pi}\left \langle \eta^{\prime }\right \vert S_{2}^{\dagger}\left( \nu \right) e^{i\alpha \left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}\right) }S_{2}\left( \tau \right) \left \vert \eta_{=\tau}^{\prime \prime}\right \rangle \mathcal{F}_{\beta}^{C}\left[ f\left( \eta \right) \right] =\mathcal{F}_{\alpha}^{C}\mathcal{F}_{\beta}^{C}\left[ f\left( \eta \right) \right] , \label{13.47} \end{align} which is just the additivity property.
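The additivity of the scaled transforms, including the requirement that the intermediate scale parameters match, can be illustrated numerically already in the one-mode case, using the kernel of (\ref{13.33}). The sketch below (grid sizes and parameter values are arbitrary illustrative choices, not from the text) cascades a scaled FrFT of order $\beta$ with scales $(\mu,\tau)$ and one of order $\alpha$ with scales $(\tau,\nu)$, and compares the result with a single scaled FrFT of order $\alpha+\beta$ and scales $(\mu,\nu)$:

```python
import numpy as np

def scaled_frft(alpha, mu, nu, t):
    # discretized kernel of Eq. (13.33): input scale mu, output scale nu
    dt = t[1] - t[0]
    pref = np.sqrt(np.exp(1j * (np.pi / 2 - alpha))
                   / (2 * np.pi * mu * nu * np.sin(alpha)))
    chirp_in = np.exp(-1j * (t / mu)**2 / (2 * np.tan(alpha)))
    chirp_out = np.exp(-1j * (t / nu)**2 / (2 * np.tan(alpha)))
    cross = np.exp(1j * np.outer(t, t) / (mu * nu * np.sin(alpha)))
    return pref * chirp_out[:, None] * cross * chirp_in[None, :] * dt

t = np.linspace(-10.0, 10.0, 401)
f = np.exp(-t**2 / 2)
alpha, beta = 0.6, 0.5
mu, tau, nu = 1.1, 0.9, 1.2   # intermediate scale tau must match inside the cascade
two_step = scaled_frft(alpha, tau, nu, t) @ (scaled_frft(beta, mu, tau, t) @ f)
one_step = scaled_frft(alpha + beta, mu, nu, t) @ f
```

The agreement of `two_step` and `one_step` reflects the operator identity $S_{1}^{\dagger}(\nu)e^{i\alpha a^{\dagger}a}S_{1}(\tau)\,S_{1}^{\dagger}(\tau)e^{i\beta a^{\dagger}a}S_{1}(\mu)=S_{1}^{\dagger}(\nu)e^{i(\alpha+\beta)a^{\dagger}a}S_{1}(\mu)$; with mismatched intermediate scales the cascade no longer reduces to a single scaled FrFT.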
It should be pointed out that the condition for additivity of the scaled FrFTs is that the parameter $\nu^{\prime}$ of the prior cascade be equal to the parameter $\mu^{\prime}$ of the next one, i.e., $\mu^{\prime}=\nu^{\prime}.$ For other scaled FrFTs, the properties can also be discussed in a similar way (according to their quantum versions). To this end, we should emphasize that different scaled FrFTs correspond to different quantum mechanical squeezing operators or representations. That is to say, it is possible that some other scaled FrFT can be presented by using different quantum mechanical squeezing operators or representations. \section{Adaption of Collins diffraction formula and CFrFT} The connection between the Fresnel diffraction in free space and the FrFT was bridged by Pellat-Finet \cite{r29}, who found that FrFTs are adapted to the mathematical expression of Fresnel diffraction, just as the standard Fourier transform is adapted to Fraunhofer diffraction. In previous sections, a new formulation of the CFrFT and the Collins diffraction formula are respectively derived in the context of representation transform of quantum optics. In this section we inquire whether the adaption problem of the Collins diffraction formula to the CFrFT can also be tackled in the context of quantum optics. We shall treat this topic with the use of the two-mode (3-parameter) squeezing operator and the entangled state representation of continuous variables; in so doing, the quantum mechanical version of the associated theory of classical diffraction and the classical CFrFT is obtained, which connects classical optics and quantum optics in this respect. \begin{figure} \caption{{\protect \small The Fresnel diffraction through ABCD optical system.}} \label{Fig1} \end{figure} For a Gaussian beam, the $ABCD$ rule can equally be derived via optical diffraction integral theory---the Collins integral formula. 
As shown in Fig.2, if $f\left( \eta \right) $ represents the input field amplitude at point $\eta$ on $S_{1}$, and $g\left( \eta^{\prime}\right) $ denotes the diffraction field amplitude at point $\eta^{\prime}$ on $S_{2},$ then the Collins formula in complex form takes the form (\ref{11.5,11.7}). Next we shall examine the adaption of the Collins formula to the CFrFT by virtue of the entangled state representation in quantum optics \cite{r23}. \subsection{Adaption of the Collins formula to CFrFT} Using the completeness relation of $\left \vert \eta \right \rangle ,$ we can further put Eq.(\ref{11.5}) into \begin{equation} g\left( \eta^{\prime}\right) =\left \langle \eta^{\prime}\right \vert U_{2}\left( r,s\right) \left \vert f\right \rangle =\left \langle \eta^{\prime }\right \vert U_{2}\left( r,s\right) \mu_{1}^{2}\int \frac{d^{2}\eta}{\pi }\left \vert \mu_{1}\eta \right \rangle \left \langle \mu_{1}\eta \right. \left \vert f\right \rangle , \label{14.1} \end{equation} and taking $\eta^{\prime}=\sqrt{\frac{B}{D}}\frac{\sigma}{K},$ $\mu_{1} =\sqrt{\frac{B}{A}}/L$ as well as writing \begin{equation} g\left( \eta^{\prime}\right) \rightarrow \left \langle \sqrt{\frac{B}{D}} \frac{\sigma}{K}\right \vert \left. g\right \rangle \equiv G\left( \sigma \right) ,\text{ }f\left( \mu_{1}\eta \right) \equiv F\left( \eta \right) , \label{14.2} \end{equation} where $K$ and $L$ are two constants to be determined later, then according to Eqs. (\ref{11.5}) and (\ref{14.1}) we have \begin{align} G\left( \sigma \right) & =\mu_{1}^{2}\int \frac{d^{2}\eta}{\pi}\left \langle \eta^{\prime}\right \vert U_{2}\left( r,s\right) \left \vert \mu_{1} \eta \right \rangle F\left( \eta \right) \nonumber \\ & =\frac{1}{2iAL^{2}}\exp \left[ \frac{i\left \vert \sigma \right \vert ^{2} }{2K^{2}}\right] \int \frac{d^{2}\eta}{\pi}\exp \left \{ \frac{i\left \vert \eta \right \vert ^{2}}{2L^{2}}-\frac{i\left( \sigma^{\ast}\eta+\sigma \eta^{\ast}\right) }{2LK\sqrt{AD}}\right \} F\left( \eta \right) . 
\label{14.3} \end{align} Comparing Eq.(\ref{14.3}) with Eq.(\ref{11.7}) leads us to choose \begin{equation} L^{2}=\tan \alpha,\ K=\sqrt{\sin2\alpha/\left( 2AD\right) }. \label{14.4} \end{equation} Then Eq.(\ref{14.3}) becomes \begin{align} G\left( \sigma \right) & =\frac{\cos \alpha}{i2A\sin \alpha}\exp \left[ i\frac{AD-\cos^{2}\alpha}{\sin2\alpha}\left \vert \sigma \right \vert ^{2}\right] \nonumber \\ & \times \int \frac{d^{2}\eta}{\pi}\exp \left \{ \frac{i\left( \left \vert \eta \right \vert ^{2}+\left \vert \sigma \right \vert ^{2}\right) }{2\tan \alpha }-\frac{i\left( \sigma^{\ast}\eta+\sigma \eta^{\ast}\right) }{2\sin \alpha }\right \} F\left( \eta \right) \nonumber \\ & =\frac{\cos \alpha}{A}e^{-i\alpha}\exp \left[ i\frac{AD-\cos^{2}\alpha} {\sin2\alpha}\left \vert \sigma \right \vert ^{2}\right] \mathcal{F}_{\alpha }\left[ F\right] \left( \sigma \right) , \label{14.5} \end{align} so Eq. (\ref{14.5}) is a standard CFrFT up to a quadratic phase term $\exp \left[ i\frac{AD-\cos^{2}\alpha}{\sin2\alpha}\left \vert \sigma \right \vert ^{2}\right] $. According to Eq.(\ref{14.4}) and $\sqrt{\frac {B}{D}}\frac{\sigma}{K}=\eta^{\prime}$, this phase term can also be written as \begin{equation} \exp \left[ i\frac{AD-\cos^{2}\alpha}{\sin2\alpha}\left \vert \sigma \right \vert ^{2}\right] =\exp \left[ \frac{i}{R}\left \vert \eta^{\prime}\right \vert ^{2}\right] , \label{14.6} \end{equation} which represents a quadratic approximation to a spherical wave diverging from a luminous point at distance \begin{equation} R=\frac{2AB}{AD-\cos^{2}\alpha} \label{14.7} \end{equation} from $S_{2}.$ Let $S$ be the sphere tangent to $S_{2}$ with radius $R$ (see Fig.2). A point on $S$ is located by its projection on $S_{2}$; this means that coordinates on $S_{2}$ can also be used as coordinates on $S$. Therefore, the quadratic phase term can be compensated if the output field is observed on $S$ rather than on $S_{2}$. 
Then, after considering the phase compensation, the field transform from $S_{1}$ to $S$ is \begin{equation} G_{S}\left( \sigma \right) =\frac{\cos \alpha}{A}e^{-i\alpha}\mathcal{F} _{\alpha}\left[ F\right] \left( \sigma \right) . \label{14.8} \end{equation} In this way, the field amplitude on $S$ is the perfect $\alpha$-th CFrFT\ of the field amplitude on $S_{1}$. \subsection{Adaption of the additivity property of CFrFT to the Collins formula for two successive Fresnel diffractions} The most important property of the FrFT is that $\mathcal{F}_{\alpha}$ obeys the additivity rule, i.e., two successive FrFTs of orders $\alpha$\ and $\beta$ make up the FrFT of order $\alpha+\beta$. For the CFrFT, its additivity property is proven in Eq.(\ref{13.7}). For Collins diffraction from $S_{1}$ to $S^{\prime}$ (see Fig.1), the additivity means that the diffraction pattern observed on $S^{\prime}$($\bar{\eta}$) \ (the sphere tangent to $S_{3}$ with radius $R^{\prime}$) and associated with $\mathcal{F}_{\alpha+\beta}$ should be the result of a first diffraction phenomenon (associated with $\mathcal{F}_{\alpha})$ on $S$ (with $\eta^{\prime}$), followed by a second diffraction phenomenon (associated with $\mathcal{F}_{\beta})$ from $S$ to $S^{\prime}$. This is a necessary consequence of the Huygens principle. Next we prove that such is indeed the case. Firstly, let us consider the field transform from $S_{1}$ (with $\eta$) to $S^{\prime}$ (see Fig.2) described by the ray transfer matrix [$A^{\prime },B^{\prime},C^{\prime},D^{\prime}$]. 
Similar to deriving Eq.(\ref{14.7}), after the squeezing transform and the phase compensation, \begin{equation} R^{\prime}=\frac{2A^{\prime}B^{\prime}}{A^{\prime}D^{\prime}-\cos^{2} \alpha^{\prime}}\rightarrow \exp \left( \frac{i}{R^{\prime}}\left \vert \bar{\eta}\right \vert ^{2}\right) , \label{14.9} \end{equation} thus we can obtain the expression of CFrFT for Collins diffraction from $S_{1}$ to $S^{\prime}$ (not $S_{3}$)$,$ \begin{equation} G_{S^{\prime}}\left( \sigma^{\prime}\right) =\frac{\cos \alpha^{\prime} }{A^{\prime}}e^{-i\alpha^{\prime}}\mathcal{F}_{\alpha^{\prime}}\left[ f\left( \mu_{1}^{\prime}\eta \right) \right] \left( \sigma^{\prime}\right) , \label{14.10} \end{equation} where $\bar{\eta}=\sqrt{\frac{B^{\prime}}{D^{\prime}}}\frac{\sigma^{\prime} }{K^{\prime}}\ $and $\mu_{1}^{\prime}=\sqrt{\frac{B^{\prime}}{A^{\prime}} }/L^{\prime},$ \begin{equation} L^{\prime2}=\tan \alpha^{\prime},\ K^{\prime}=\sqrt{\sin2\alpha^{\prime }/\left( 2A^{\prime}D^{\prime}\right) }. \label{14.11} \end{equation} Eq.(\ref{14.10}) is the same in form as Eq.(\ref{14.8}) but with primed variables. Using Eqs.(\ref{14.9}) and (\ref{11.5}) one can prove that the transform from $S_{1}$ to $S^{\prime}$ is (see Eqs. (\ref{11.5}), (\ref{14.5})-(\ref{14.7}))$\ $ \begin{equation} g_{S^{\prime}}\left( \bar{\eta}\right) =\exp \left( -\frac{i}{R^{\prime} }\left \vert \bar{\eta}\right \vert ^{2}\right) \left \langle \bar{\eta }\right \vert U_{2}\left( r^{\prime},s^{\prime}\right) \left \vert f\right \rangle \equiv G_{S^{\prime}}\left( \sigma^{\prime}\right) . \label{14.12} \end{equation} In Eq.(\ref{14.12}) we have taken the phase compensation term (\ref{14.9}) into account. Secondly, let us consider the second diffraction from $S$ to $S_{3}$ determined by the ray transfer matrix [$A^{\prime \prime},B^{\prime \prime },C^{\prime \prime},D^{\prime \prime}$]. 
For this purpose, using the group multiplication rule of $F_{2}\left( r,s\right) $, we can decompose the diffraction from $S_{1}$\ to $S^{\prime}$ into two parts: one is described as the matrix $[A,B,C,D]$ from plane $S_{1}$ (with $\eta$) to $S_{2}\left( S\right) $ (with $\eta^{\prime}$), the other is $[A^{\prime \prime} ,B^{\prime \prime},C^{\prime \prime},D^{\prime \prime}]$ from plane $S_{2}$ to $S_{3}\left( S^{\prime}\right) $ (with $\bar{\eta}$), then the total matrix from $S_{1}$ to $S_{3}$ is \begin{equation} \left( \begin{array} [c]{cc} A^{\prime} & B^{\prime}\\ C^{\prime} & D^{\prime} \end{array} \right) =\left( \begin{array} [c]{cc} A^{\prime \prime} & B^{\prime \prime}\\ C^{\prime \prime} & D^{\prime \prime} \end{array} \right) \left( \begin{array} [c]{cc} A & B\\ C & D \end{array} \right) . \label{14.13} \end{equation} Using Eq.(\ref{3.13}) and the group multiplication rule of $F_{2}\left( r^{\prime},s^{\prime}\right) $, we can further put Eq.(\ref{14.12}) into another form \begin{align} G_{S^{\prime}}\left( \sigma^{\prime}\right) & =\exp \left[ -\frac {i}{R^{\prime}}\left \vert \bar{\eta}\right \vert ^{2}\right] \left \langle \bar{\eta}\right \vert U_{2}\left( r^{\prime \prime},s^{\prime \prime}\right) U_{2}\left( r,s\right) \left \vert f\right \rangle \nonumber \\ & =\exp \left[ -\frac{i}{R^{\prime}}\left \vert \bar{\eta}\right \vert ^{2}\right] \left \langle \bar{\eta}\right \vert U_{2}\left( r^{\prime \prime },s^{\prime \prime}\right) \mu_{2}^{\prime2}\int \frac{d^{2}\sigma}{\pi }\left \vert \mu_{2}^{\prime}\sigma \right \rangle \left \langle \mu_{2}^{\prime }\sigma \right \vert U_{2}\left( r,s\right) \mu_{1}^{\prime2}\int \frac {d^{2}\eta}{\pi}\left \vert \mu_{1}^{\prime}\eta \right \rangle \left \langle \mu_{1}^{\prime}\eta \right. 
\left \vert f\right \rangle \nonumber \\ & =\mu_{2}^{\prime2}\int \frac{d^{2}\sigma}{\pi}\exp \left( -\frac{i\left \vert \bar{\eta}\right \vert ^{2}}{R^{\prime}}\right) \left \langle \bar{\eta }\right \vert U_{2}\left( r^{\prime \prime},s^{\prime \prime}\right) \left \vert \mu_{2}^{\prime}\sigma \right \rangle \left[ \mu_{1}^{\prime2}\int \frac {d^{2}\eta}{\pi}\left \langle \mu_{2}^{\prime}\sigma \right \vert U_{2}\left( r,s\right) \left \vert \mu_{1}^{\prime}\eta \right \rangle F\left( \eta \right) \right] \nonumber \\ & =\frac{B}{DK^{2}}\int \frac{d^{2}\sigma}{\pi}\exp \left( -\frac{i\left \vert \bar{\eta}\right \vert ^{2}}{R^{\prime}}\right) \left \langle \bar{\eta }\right \vert U_{2}\left( r^{\prime \prime},s^{\prime \prime}\right) \left \vert \mu_{2}^{\prime}\sigma \right \rangle G\left( \sigma \right) , \label{14.14} \end{align} where $\mu_{2}^{\prime}=\sqrt{\frac{B}{D}}\frac{1}{K}$ and we have made a reasonable assumption that $\mu_{1}^{\prime}=\mu_{1}$ (so $f\left( \mu _{1}^{\prime}\eta \right) =F\left( \eta \right) $), which means that the input field amplitudes on $S_{1}$ are scaled in the same way for the diffractions from $S_{1}$ to $S$ and from $S_{1}$ to $S^{\prime}$. In order to examine the second diffraction from $S$ to $S^{\prime}$ (not $S_{3}$), we need to translate the output field amplitude $G\left( \sigma \right) $ observed on the plane $S_{2}$ to the field amplitude observed on the spherical surface $S,$ i.e., to convert $G\left( \sigma \right) $ into $G_{S}\left( \sigma \right) $ (see Eq.(\ref{14.8})) by taking the phase compensation (see Eq.(\ref{14.3})) into account. 
Thus the field transform from $S$ to $S^{\prime}$ is ($G_{S}\left( \sigma \right) \rightarrow G_{S^{\prime} }\left( \sigma^{\prime}\right) $) \begin{align} G_{S^{\prime}}\left( \sigma^{\prime}\right) & =\frac{B}{DK^{2}}\int \frac{d^{2}\sigma}{\pi}\exp \left( \frac{i}{R}\frac{B\left \vert \sigma \right \vert ^{2}}{DK^{2}}-\frac{i\left \vert \bar{\eta}\right \vert ^{2} }{R^{\prime}}\right) \left \langle \bar{\eta}\right \vert U_{2}\left( r^{\prime \prime},s^{\prime \prime}\right) \left \vert \mu_{2}^{\prime} \sigma \right \rangle G_{S}\left( \sigma \right) \nonumber \\ & =\frac{B}{DK^{2}}\frac{1}{2iB^{\prime \prime}}\int \frac{d^{2}\sigma}{\pi }\exp \left \{ \frac{iB^{\prime}}{K^{\prime2}D^{\prime}}\left( \frac {D^{\prime \prime}}{2B^{\prime \prime}}-\frac{1}{R^{\prime}}\right) \left \vert \sigma^{\prime}\right \vert ^{2}\right. \nonumber \\ & \left. +\frac{iB\left \vert \sigma \right \vert ^{2}}{DK^{2}}\left( \frac{A^{\prime \prime}}{2B^{\prime \prime}}+\frac{1}{R}\right) -\frac{i\left( \sigma \sigma^{\prime \ast}+\sigma^{\prime}\sigma^{\ast}\right) } {2B^{\prime \prime}K^{\prime}K\sqrt{\frac{DD^{\prime}}{BB^{\prime}}}}\right \} G_{S}\left( \sigma \right) . \label{14.15} \end{align} Comparing Eq.(\ref{14.15}) with Eq.(\ref{11.7}) leads us to choose \begin{equation} \sin \beta=B^{\prime \prime}K^{\prime}K\sqrt{\frac{DD^{\prime}}{BB^{\prime}} }=\frac{B^{\prime \prime}}{2}\sqrt{\frac{\sin2\alpha^{\prime}\sin2\alpha }{A^{\prime}ABB^{\prime}}}, \label{14.16} \end{equation} and noticing that $\mu_{1}^{\prime}=\mu_{1}$ yields $\frac{B^{\prime} }{A^{\prime}\tan \alpha^{\prime}}=\frac{B}{A\tan \alpha},$ thus we have \begin{equation} \text{ }A^{\prime}=\frac{B^{\prime \prime}}{B}\frac{\cos \alpha^{\prime} \sin \alpha}{\sin \beta},\text{ }A=\frac{B^{\prime \prime}}{B^{\prime}}\frac {\sin \alpha^{\prime}\cos \alpha}{\sin \beta}. 
\label{14.17} \end{equation} Combining Eqs.(\ref{14.1}) and (\ref{14.17}) it then follows (letting $\alpha^{\prime}=\alpha+\beta$) \begin{equation} \frac{B}{DK^{2}}\frac{1}{2iB^{\prime \prime}}=\frac{AB}{\sin2\alpha}\frac {1}{iB^{\prime \prime}}=\frac{A}{iA^{\prime}\cos \alpha}\frac{\cos \alpha ^{\prime}}{2\sin \beta}, \label{14.18} \end{equation} and \begin{align} \frac{B^{\prime}}{K^{\prime2}D^{\prime}}\left( \frac{D^{\prime \prime} }{2B^{\prime \prime}}-\frac{1}{R^{\prime}}\right) & =\frac{1}{2}\cot \beta,\nonumber \\ \frac{B}{DK^{2}}\left( \frac{A^{\prime \prime}}{2B^{\prime \prime}}+\frac{1} {R}\right) & =\frac{1}{2}\cot \beta, \label{14.19} \end{align} where we have used Eqs.(\ref{14.7}), (\ref{14.9}) and (\ref{14.13}). Substitution of Eqs.(\ref{14.8}), (\ref{14.16}), (\ref{14.18}) and (\ref{14.19}) into Eq.(\ref{14.15}) yields \begin{align} G_{S^{\prime}}\left( \sigma^{\prime}\right) & =\frac{A}{iA^{\prime} \cos \alpha}\frac{\cos \alpha^{\prime}}{2\sin \beta}\int \frac{d^{2}\sigma}{\pi }\exp \left \{ \frac{i\left( \left \vert \sigma^{\prime}\right \vert ^{2}+\left \vert \sigma \right \vert ^{2}\right) }{2\tan \beta}-\frac{i\left( \sigma \sigma^{\prime \ast}+\sigma^{\prime}\sigma^{\ast}\right) }{2\sin \beta }\right \} G_{S}\left( \sigma \right) \nonumber \\ & =\frac{\cos \alpha^{\prime}e^{-i\alpha}}{iA^{\prime}}\frac{1}{2\sin \beta }\int \frac{d^{2}\sigma}{\pi}\exp \left \{ \frac{i\left( \left \vert \sigma^{\prime}\right \vert ^{2}+\left \vert \sigma \right \vert ^{2}\right) }{2\tan \beta}-\frac{i\left( \sigma \sigma^{\prime \ast}+\sigma^{\prime} \sigma^{\ast}\right) }{2\sin \beta}\right \} \mathcal{F}_{\alpha}\left[ F\right] \left( \sigma \right) \nonumber \\ & =\frac{\cos \alpha^{\prime}}{A^{\prime}}e^{-i\left( \alpha+\beta \right) }\mathcal{F}_{\beta}\mathcal{F}_{\alpha}\left[ F\right] \left( \sigma^{\prime}\right) . 
\label{14.20} \end{align} The first equality in Eq.(\ref{14.20}) indicates that it is just a CFrFT of $G_{S}\left( \sigma \right) $ from $S$ to $S^{\prime}.$ Comparing Eq.(\ref{14.20}) with Eq.(\ref{14.10}), we see \begin{equation} \mathcal{F}_{\beta}\mathcal{F}_{\alpha}\left[ F\right] \left( \sigma^{\prime}\right) =\mathcal{F}_{\alpha+\beta}\left[ F\right] \left( \sigma^{\prime}\right) . \label{14.21} \end{equation} Thus we complete the study of the adaption of the CFrFT to the mathematical representation of the Collins diffraction formula in the quantum optics context. \section{The Fractional Radon transform} Optical tomographic imaging techniques derive two-dimensional data from a three-dimensional object to obtain a slice image of the internal structure and thus have the ability to peer inside the object noninvasively. The mathematical method which completes this task is the Radon transformation. Similarly, one can use the inverse Radon transformation to obtain the Wigner distribution by tomographic inversion of a set of measured probability distributions of the quadrature amplitude \cite{vogel,Smithey}. Based on the Radon transform \cite{Radon} and the FrFT we can introduce the concept of the fractional Radon transformation (FRT), which combines both of them in a natural way. We notice the well-known fact that the usual Radon transform of a function $f\left( \vec{r}\right) $ can be carried out in two successive steps: the first step is an $n$-dimensional ordinary Fourier transform, i.e., performing a usual FT of $f\left( \vec{r}\right) $ in $n$-dimensional $\vec{k}$ space, \begin{equation} F\left( \vec{k}\right) =F\left( t\hat{e}\right) =\int f\left( \vec {r}\right) e^{-2\pi i\vec{k}\cdot \vec{r}}d\vec{r}, \label{18.1} \end{equation} where $\vec{k}=t\hat{e},$ $\hat{e}$ is a unit vector, $t$ is a real number. Its inverse is \begin{equation} f\left( \vec{r}\right) =\int F\left( \vec{k}\right) e^{2\pi i\vec{k} \cdot \vec{r}}d\vec{k}. 
\label{18.2} \end{equation} Letting $s=t\lambda$ and rewriting (\ref{18.1}) as \begin{equation} F\left( t\hat{e}\right) =\int_{-\infty}^{\infty}ds\int d\vec{r}f\left( \vec{r}\right) e^{-2\pi is}\delta \left( s-\vec{k}\cdot \vec{r}\right) =\int_{-\infty}^{\infty}d\lambda e^{-2\pi it\lambda}\int f\left( \vec {r}\right) \delta \left( \lambda-\hat{e}\cdot \vec{r}\right) d\vec{r}, \label{18.3} \end{equation} one can see that the integration over $d\vec{r}$ defines a Radon transform of $f\left( \vec{r}\right) $, denoted as \begin{equation} \int f\left( \vec{r}\right) \delta \left( \lambda-\hat{e}\cdot \vec {r}\right) d\vec{r}=f_{R}\left( \lambda,\hat{e}\right) . \label{18.4} \end{equation} So $F\left( t\hat{e}\right) $ can be considered as a $1-$dimensional Fourier transform of $f_{R}\left( \lambda,\hat{e}\right) ,$ \begin{equation} F\left( t\hat{e}\right) =\int_{-\infty}^{\infty}d\lambda e^{-2\pi it\lambda }f_{R}\left( \lambda,\hat{e}\right) . \label{18.5} \end{equation} Its inverse transform is \begin{equation} f_{R}\left( \lambda,\hat{e}\right) =\int_{-\infty}^{\infty}F\left( t\hat {e}\right) e^{2\pi it\lambda}dt, \label{18.6} \end{equation} and this ordinary $1-$dimensional Fourier transform is considered as the second step. Combining the results of (\ref{18.1}) and (\ref{18.6}) we have \begin{equation} f_{R}\left( \lambda,\hat{e}\right) =\int_{-\infty}^{\infty}\int f\left( \vec{r}\right) e^{-2\pi it\hat{e}\cdot \vec{r}}e^{2\pi it\lambda}d\vec{r}dt. \label{18.7} \end{equation} i.e., two usual FTs make up a Radon transform. The inverse of (\ref{18.7}) is \begin{equation} \int_{-\infty}^{\infty}\int f_{R}\left( \lambda,\hat{e}\right) e^{2\pi i\vec{k}\cdot \vec{r}}e^{-2\pi it\lambda}d\vec{k}d\lambda=f\left( \vec {r}\right) . \label{18.8} \end{equation} By analogy with these procedures we can perform two successive FrFTs to realize the new fractional Radon transformation \cite{FRT}. 
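Definition (\ref{18.4}) is easy to illustrate numerically: for the isotropic 2D Gaussian $f\left( \vec{r}\right) =e^{-\left \vert \vec{r}\right \vert ^{2}/2}$ the line integral gives $f_{R}\left( \lambda,\hat{e}\right) =\sqrt{2\pi}\,e^{-\lambda^{2}/2}$ for every direction $\hat{e}$. A minimal sketch (the 2D case; grid parameters are arbitrary choices):

```python
import numpy as np

def radon_gaussian(lam, theta, t_max=10.0, n=4001):
    """Radon transform (Eq. 18.4) of f(r) = exp(-|r|^2/2) in 2D:
    integrate f along the line r = lam*e + t*e_perp, e = (cos th, sin th)."""
    e = np.array([np.cos(theta), np.sin(theta)])
    e_perp = np.array([-np.sin(theta), np.cos(theta)])
    t = np.linspace(-t_max, t_max, n)
    r = lam * e[:, None] + np.outer(e_perp, t)
    f = np.exp(-np.sum(r**2, axis=0) / 2)
    return np.sum(f) * (t[1] - t[0])

exact = np.sqrt(2 * np.pi) * np.exp(-1.3**2 / 2)
for theta in (0.0, 0.7, 2.1):                     # direction of the unit vector e
    print(radon_gaussian(1.3, theta) - exact)      # ~0 for every direction
```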
The $n$-dimensional FrFT of $f\left( \vec{r}\right) $ is defined as \begin{equation} \mathfrak{F}_{\alpha,\vec{k}}\left[ f\right] =\left( C_{\alpha}\right) ^{n}\int \exp \left( \frac{i\left( \vec{r}^{2}+\vec{k}^{2}\right) } {2\tan \alpha}-\frac{i\vec{k}\cdot \vec{r}}{\sin \alpha}\right) f\left( \vec {r}\right) d\vec{r}\equiv F_{\alpha}\left( t\hat{e}\right) ,\text{ \ } \vec{k}=t\hat{e}, \label{18.9} \end{equation} where $\alpha$ is the order of the FrFT and $C_{\alpha}=\left[ \frac{e^{i\alpha}}{2\pi i\sin \alpha}\right] ^{1/2}.$ First, we perform a $1-$dimensional inverse fractional Fourier transform of $F_{\alpha}\left( t\hat{e}\right) $ in $t$-space, \begin{align} f_{R,\alpha}\left( \lambda,\hat{e}\right) & =\left[ C_{\alpha}\right] ^{1-n}\mathfrak{F}_{-\alpha,t}\left[ F_{\alpha}\left( t\hat{e}\right) \right] \nonumber \\ & =\left[ C_{\alpha}\right] ^{1-n}C_{-\alpha}\int_{-\infty}^{\infty} \exp \left( -\frac{i\left( \lambda^{2}+t^{2}\right) }{2\tan \alpha} +\frac{i\lambda t}{\sin \alpha}\right) F_{\alpha}\left( t\hat{e}\right) dt, \label{18.11} \end{align} where the factor $\left[ C_{\alpha}\right] ^{1-n}$ is introduced for later convenience. Then substituting (\ref{18.9}) into (\ref{18.11}) we have \begin{align} f_{R,\alpha}\left( \lambda,\hat{e}\right) & =\left[ C_{\alpha}\right] ^{1-n}C_{-\alpha}\left( C_{\alpha}\right) ^{n}\int \exp \left( -\frac {i\left( t^{2}+\lambda^{2}\right) }{2\tan \alpha}+\frac{i\lambda t} {\sin \alpha}+\frac{i\left( \vec{r}^{2}+t^{2}\right) }{2\tan \alpha} -\frac{it\hat{e}\cdot \vec{r}}{\sin \alpha}\right) f\left( \vec{r}\right) d\vec{r}dt\nonumber \\ & =\int \exp \left( \frac{i\left( \vec{r}^{2}-\lambda^{2}\right) } {2\tan \alpha}\right) \delta \left( \lambda-\hat{e}\cdot \vec{r}\right) f\left( \vec{r}\right) d\vec{r}, \label{18.12} \end{align} which completes the $n$-dimensional fractional Radon transformation. In particular, when $\alpha=\pi/2,$ (\ref{18.12}) reduces to the usual Radon transform (\ref{18.4}). 
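Definition (\ref{18.12}) can be made concrete on the 2D Gaussian $f\left( \vec{r}\right) =e^{-\left \vert \vec{r}\right \vert ^{2}/2}$: on the line $\hat{e}\cdot \vec{r}=\lambda$ one has $\left \vert \vec{r}\right \vert ^{2}=\lambda^{2}+t^{2}$, so the Gaussian integral gives $f_{R,\alpha}\left( \lambda,\hat{e}\right) =\sqrt{2\pi/\left( 1-i\cot \alpha \right) }\,e^{-\lambda^{2}/2}$, which reduces to the ordinary Radon result $\sqrt{2\pi}e^{-\lambda^{2}/2}$ at $\alpha=\pi/2$. A numerical sketch (the grid sizes are arbitrary choices):

```python
import numpy as np

def frac_radon_gaussian(lam, theta, alpha, t_max=12.0, n=8001):
    """Fractional Radon transform (Eq. 18.12) of f(r) = exp(-|r|^2/2) in 2D.
    The delta function restricts the integral to the line e.r = lam, which we
    parametrize as r = lam*e + t*e_perp and integrate over t."""
    e = np.array([np.cos(theta), np.sin(theta)])
    e_perp = np.array([-np.sin(theta), np.cos(theta)])
    t = np.linspace(-t_max, t_max, n)
    r = lam * e[:, None] + np.outer(e_perp, t)
    r2 = np.sum(r**2, axis=0)
    integrand = np.exp(1j * (r2 - lam**2) / (2 * np.tan(alpha))) * np.exp(-r2 / 2)
    return np.sum(integrand) * (t[1] - t[0])

lam, alpha = 0.8, 0.9
num = frac_radon_gaussian(lam, 0.3, alpha)
exact = np.sqrt(2 * np.pi / (1 - 1j / np.tan(alpha))) * np.exp(-lam**2 / 2)
print(abs(num - exact))                     # ~0
# alpha = pi/2 recovers the ordinary Radon transform of the Gaussian:
print(abs(frac_radon_gaussian(lam, 0.3, np.pi / 2)
          - np.sqrt(2 * np.pi) * np.exp(-lam**2 / 2)))
```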
Now we examine whether the additivity property of the FrFT is consistent with (\ref{18.12}). According to the additivity property of the FrFT, $\mathfrak{F}_{\beta}\mathfrak{F}_{\alpha}=\mathfrak{F}_{\alpha+\beta},$ and (\ref{18.9}) we see \begin{align} F_{\alpha+\beta}\left( \vec{k}=t\hat{e}\right) & =\mathfrak{F}_{\beta ,\vec{k}}\mathfrak{F}_{\alpha,\vec{\xi}}\left[ f\right] \nonumber \\ & =\left( C_{\beta}\right) ^{n}\left( C_{\alpha}\right) ^{n}\int \int \exp \left( \frac{i\left( \vec{r}^{2}+\vec{\xi}^{2}\right) }{2\tan \alpha }+\frac{i\left( \vec{\xi}^{2}+\vec{k}^{2}\right) }{2\tan \beta}\right) \nonumber \\ & \times \exp \left( -\frac{i\vec{\xi}\cdot \vec{r}}{\sin \alpha}-\frac{i\vec {k}\cdot \vec{\xi}}{\sin \beta}\right) f\left( \vec{r}\right) d\vec{r} d\vec{\xi}\nonumber \\ & =\mathfrak{F}_{\beta+\alpha,\vec{k}}\left[ f\right] . \label{18.13} \end{align} The corresponding one-dimensional inverse FrFT should be \begin{align} & \left[ \frac{e^{i\left( \alpha+\beta \right) }}{2\pi i\sin \left( \alpha+\beta \right) }\right] ^{\left( 1-n\right) /2}C_{-\beta}C_{-\alpha }\int_{-\infty}^{\infty}\exp \left( -\frac{i\left( \lambda^{2}+\mu ^{2}\right) }{2\tan \beta}-\frac{i\left( t^{2}+\mu^{2}\right) }{2\tan \alpha }\right) \nonumber \\ & \times \exp \left( +\frac{i\lambda \mu}{\sin \beta}+\frac{i\mu t}{\sin \alpha }\right) F_{\alpha+\beta}\left( t\hat{e}\right) dtd\mu \nonumber \\ & =\left[ \frac{e^{i\left( \alpha+\beta \right) }}{2\pi i\sin \left( \alpha+\beta \right) }\right] ^{\left( 1-n\right) /2}C_{-\left( \alpha+\beta \right) }\int_{-\infty}^{\infty}F_{\alpha+\beta}\left( t\hat {e}\right) \exp \left( -\frac{i\left( t^{2}+\lambda^{2}\right) } {2\tan \left( \alpha+\beta \right) }+\frac{i\lambda t}{\sin \left( \alpha+\beta \right) }\right) dt\nonumber \\ & =\int \exp \left( \frac{i\left( \vec{r}^{2}-\lambda^{2}\right) } {2\tan \left( \alpha+\beta \right) }\right) \delta \left( \lambda-\hat {e}\cdot \vec{r}\right) f\left( \vec{r}\right) d\vec{r}=f_{R,\alpha+\beta }\left( 
\lambda,\hat{e}\right) , \label{18.14} \end{align} which coincides with (\ref{18.12}). From (\ref{18.12}) and (\ref{18.14}), we can confirm that the transform kernel of the $\alpha$th fractional Radon transform is \begin{equation} \exp \left( \frac{i\left( \vec{r}^{2}-\lambda^{2}\right) }{2\tan \alpha }\right) \delta \left( \lambda-\hat{e}\cdot \vec{r}\right) . \label{18.15} \end{equation} For example, one can calculate the fractional Radon transform of the $n-$mode Wigner operator to obtain some new quantum mechanical representations. Finally we give the inversion of the fractional Radon transformation. From (\ref{18.12}) we have \begin{equation} \frac{1}{\left( 2\pi \sin^{2}\alpha \right) ^{n/2}}\int \int f_{R,\alpha }\left( \lambda,\hat{e}\right) \exp \left( \frac{i\left( \lambda^{2} -\vec{r}^{2}\right) }{2\tan \alpha}-\frac{i\lambda t}{\sin \alpha}+\frac {it\hat{e}\cdot \vec{r}}{\sin \alpha}\right) d\vec{k}d\lambda=f\left( \vec {r}\right) , \label{18.16} \end{equation} which is an extension of (\ref{18.8}). In summary, based on the Radon transform and the fractional Fourier transform we have naturally introduced the $n$-dimensional fractional Radon transformation; Zalevsky and Mendlovic \cite{zeev} also defined a 2-dimensional fractional Radon transform, but with a different approach. We have identified the transform kernel of the fractional Radon transform. The generalization to a complex fractional Radon transformation is also possible \cite{jiang}. \section{Wavelet transformation and the IWOP\ technique} In recent years wavelet transforms \cite{wavelet,wavelet5} have been developed which overcome some shortcomings of classical Fourier analysis and have therefore been widely used in Fourier optics and information science since the 1980s. Here we present a quantum optical version of the classical wavelet transform (WT) by virtue of the IWOP\ technique. \subsection{Quantum optical version of classical WTs} A wavelet has its energy concentrated in time to give a tool for the analysis of transient, nonstationary, or time-varying phenomena. 
(It is a wavelet because it is localized and it resembles a wave because it oscillates.) Mathematically, wavelets are defined by starting with a function $\psi$ of the real variable $x$, named a mother wavelet, which is required to decrease rapidly to zero as $|x|$ tends to infinity and to satisfy \begin{equation} \int_{-\infty}^{\infty}\psi \left( x\right) dx=0. \label{15.1} \end{equation} A more general requirement for a mother wavelet is to demand that $\psi \left( x\right) $ have vanishing moments $\int_{-\infty}^{\infty}x^{l}\psi \left( x\right) dx=0,$ $l=0,1,2,...,L.$ (A greater degree of smoothness than continuity also leads to vanishing moments for the mother wavelet). The theory of wavelets is concerned with the representation of a function in terms of a two-parameter family of dilates and translates of a fixed function. The mother wavelet $\psi$ generates the other wavelets of the family $\psi_{\left( \mu,s\right) }$ ($\mu$ is a scaling parameter, $s$ is a translation parameter, $s\in \mathrm{R}$); the dilated-translated function is defined as \begin{equation} \psi_{\left( \mu,s\right) }\left( x\right) =\frac{1}{\sqrt{\left \vert \mu \right \vert }}\psi \left( \frac{x-s}{\mu}\right) , \label{15.2} \end{equation} while the wavelet integral transform of a signal function $f\left( x\right) \in L^{2}\left( \mathrm{R}\right) $ by $\psi$ is defined by \begin{equation} W_{\psi}f\left( \mu,s\right) =\frac{1}{\sqrt{\left \vert \mu \right \vert } }\int_{-\infty}^{\infty}f\left( x\right) \psi^{\ast}\left( \frac{x-s}{\mu }\right) dx. \label{15.3} \end{equation} We can express (\ref{15.3}) as \begin{equation} W_{\psi}f\left( \mu,s\right) =\left \langle \psi \right \vert U\left( \mu,s\right) \left \vert f\right \rangle . 
\label{15.4} \end{equation} where $\left \langle \psi \right \vert $ is the state vector corresponding to the given mother wavelet, $\left \vert f\right \rangle $ is the state to be transformed, and \begin{equation} U\left( \mu,s\right) \equiv \frac{1}{\sqrt{\left \vert \mu \right \vert }} \int_{-\infty}^{\infty}\left \vert \frac{x-s}{\mu}\right \rangle \left \langle x\right \vert dx \label{15.5} \end{equation} is the squeezing-translating operator \cite{wavelet,wavelet1,wavelet2}, and $\left \langle x\right \vert $ is the eigenvector of the coordinate operator. In order to combine the wavelet transform with quantum state transforms more tightly, using the IWOP technique we can directly perform the integral in (\ref{15.5}) ($Q=(a+a^{\dag})/\sqrt{2},\mu>0$) \begin{align} U\left( \mu,s\right) & =\frac{1}{\sqrt{\pi \mu}}\int_{-\infty}^{\infty }dx\colon \exp \left[ -\frac{\mu^{2}+1}{2\mu^{2}}x^{2}+\frac{xs}{\mu^{2}} +\sqrt{2}\frac{x-s}{\mu}a^{\dagger}+\sqrt{2}xa-\frac{s^{2}}{2\mu^{2}} -Q^{2}\right] \colon \nonumber \\ & =\sqrt{\frac{2\mu}{1+\mu^{2}}}\colon \exp \left[ \frac{1}{2\left( 1+\mu ^{2}\right) }\left( \frac{s}{\mu}+\sqrt{2}a^{\dagger}+\sqrt{2}\mu a\right) ^{2}-\sqrt{2}\frac{s}{\mu}a^{\dagger}-\frac{s^{2}}{2\mu^{2}}-Q^{2}\right] \colon. \label{15.6} \end{align} This is the explicit normally ordered form. Let $\mu=e^{\lambda}$, $\operatorname{sech}\lambda=\frac{2\mu}{1+\mu^{2}},$ $\tanh \lambda=\frac {\mu^{2}-1}{\mu^{2}+1},$ using the operator identity $e^{ga^{\dagger}a} =\colon \exp \left[ \left( e^{g}-1\right) a^{\dagger}a\right] \colon,$ Eq. 
(\ref{15.6}) becomes \begin{align} U\left( \mu,s\right) & =\exp \left[ \frac{-s^{2}}{2\left( 1+\mu ^{2}\right) }-\frac{a^{\dagger2}}{2}\tanh \lambda-\frac{a^{\dagger}s}{\sqrt {2}}\operatorname{sech}\lambda \right] \nonumber \\ & \times \exp \left[ \left( a^{\dagger}a+\frac{1}{2}\right) \ln \operatorname{sech}\lambda \right] \nonumber \\ & \times \exp \left[ \frac{a^{2}}{2}\tanh \lambda+\frac{sa}{\sqrt{2} }\operatorname{sech}\lambda \right] . \label{15.7} \end{align} In particular, when $s=0$, it reduces to the well-known squeezing operator, \begin{equation} U\left( \mu,0\right) =\frac{1}{\sqrt{\mu}}\int_{-\infty}^{\infty}\left \vert \frac{x}{\mu}\right \rangle \left \langle x\right \vert dx=\exp \left[ \frac{\lambda} {2}\left( a^{2}-a^{\dagger2}\right) \right] . \label{15.8} \end{equation} For a review of the squeezed state theory we refer to \cite{squeezed1}. \subsection{The condition of the mother wavelet in the context of quantum optics} Now we analyze the condition (\ref{15.1}) for the mother wavelet from the point of view of quantum optics. Due to \begin{equation} \int_{-\infty}^{\infty}\left \vert x\right \rangle dx=\left \vert p=0\right \rangle , \label{15.9} \end{equation} where $\left \vert p\right \rangle $ is the momentum eigenstate, we can recast the condition into quantum mechanics as \begin{equation} \int_{-\infty}^{\infty}\psi \left( x\right) dx=0\rightarrow \left \langle p=0\right \vert \left. \psi \right \rangle =0, \label{15.10} \end{equation} which indicates that the probability of a measurement of $\left \vert \psi \right \rangle $ by the projection operator $\left \vert p\right \rangle \left \langle p\right \vert $ with value $p=0$ is zero. 
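Eq.(\ref{15.8}) can be verified directly in a truncated Fock basis: the matrix elements of $U\left( \mu,0\right) $ computed from the defining integral (\ref{15.5}), $\left \langle m\right \vert U\left( \mu,0\right) \left \vert n\right \rangle =\mu^{-1/2}\int \varphi_{m}\left( x/\mu \right) \varphi_{n}\left( x\right) dx$ with harmonic-oscillator eigenfunctions $\varphi_{n}$, should agree with the exponential $\exp \left[ \frac{\lambda}{2}\left( a^{2}-a^{\dagger2}\right) \right] $, $\mu=e^{\lambda}$. A numerical sketch (truncation size and grid are arbitrary choices):

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

N, mu = 80, 1.2
lam = math.log(mu)

def phi(n, x):
    """Harmonic-oscillator eigenfunction <x|n> = H_n(x) e^{-x^2/2} / sqrt(2^n n! sqrt(pi))."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermval(x, c) * np.exp(-x**2 / 2) / math.sqrt(
        2.0**n * math.factorial(n) * math.sqrt(math.pi))

# <m|U(mu,0)|n> from the defining integral (15.5)
x = np.linspace(-12, 12, 4001)
dx = x[1] - x[0]
U_int = np.array([[np.sum(phi(m, x / mu) * phi(n, x)) * dx / math.sqrt(mu)
                   for n in range(5)] for m in range(5)])

# exp[(lam/2)(a^2 - a†^2)] via eigendecomposition of the Hermitian matrix i*G
a = np.diag(np.sqrt(np.arange(1.0, N)), 1)      # truncated annihilation operator
G = (lam / 2) * (a @ a - (a @ a).T)             # real antisymmetric generator
w, V = np.linalg.eigh(1j * G)                   # G = -i H with H Hermitian
U_exp = ((V * np.exp(-1j * w)) @ V.conj().T)[:5, :5].real

print(np.max(np.abs(U_int - U_exp)))            # ~0: Eq.(15.5) at s=0 equals Eq.(15.8)
```

The truncation barely affects the low-lying matrix elements because the squeeze generator only couples $\left \vert n\right \rangle \leftrightarrow \left \vert n\pm2\right \rangle $.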
Without loss of generality, we suppose \begin{equation} \left \vert \psi \right \rangle _{M}=G\left( a^{\dagger}\right) \left \vert 0\right \rangle =\sum_{n=0}^{\infty}g_{n}a^{\dagger n}\left \vert 0\right \rangle , \label{15.11} \end{equation} where the $g_{n}$ are chosen such that $\left \vert \psi \right \rangle $ obeys the condition (\ref{15.1}). Using the coherent states' overcompleteness relation we have \begin{align} \left \langle p=0\right \vert \left. \psi \right \rangle & =\left \langle p=0\right \vert \int \frac{d^{2}z}{\pi}\left \vert z\right \rangle \left \langle z\right \vert \sum \limits_{n}g_{n}a^{\dagger n}\left \vert 0\right \rangle \nonumber \\ & =\sum \limits_{n}g_{n}\int \frac{d^{2}z}{\pi}e^{-|z|^{2}}z^{\ast n} \sum \limits_{m}\frac{\left( \frac{z^{2}}{2}\right) ^{m}}{m!}\nonumber \\ & =\sum \limits_{m}\sum \limits_{n}\frac{1}{m!2^{m}}g_{n}\delta_{n,2m} n!=\sum \limits_{n}\frac{\left( 2n\right) !}{2^{n}n!}g_{2n}=0. \label{15.12} \end{align} Eq.(\ref{15.12}) provides a general formalism to find the qualified wavelets. For example, assuming $g_{2n}=0$ for $n>3$, the coefficients of the surviving terms should satisfy \begin{equation} g_{0}+g_{2}+3g_{4}+15g_{6}=0, \label{15.64} \end{equation} and $\left \vert \psi \right \rangle $ becomes \begin{equation} \left \vert \psi \right \rangle =\left( g_{0}+g_{2}a^{\dagger2}+g_{4} a^{\dagger4}+g_{6}a^{\dagger6}\right) \left \vert 0\right \rangle . \label{15.65} \end{equation} Projecting it onto the coordinate representation, we get the qualified wavelets \begin{align} \psi \left( x\right) & =\pi^{-1/4}e^{-x^{2}/2}\left[ g_{0}+g_{2}\left( 2x^{2}-1\right) +g_{4}\left( 4x^{4}-12x^{2}+3\right) \right. \nonumber \\ & \left. +g_{6}\left( 8x^{6}-60x^{4}+90x^{2}-15\right) \right] , \label{15.66} \end{align} where we have used $\left \langle x\right. \left \vert n\right \rangle =\left( 2^{n}n!\sqrt{\pi}\right) ^{-1/2}H_{n}\left( x\right) e^{-x^{2}/2}$, and $H_{n}\left( x\right) $ is the $n$th Hermite polynomial. Now we take some examples. 
Case 1: taking $g_{0}=\frac{1}{2},$ $g_{2}=-\frac{1}{2},$ and $g_{2n}=0$ otherwise in (\ref{15.12}), we have \begin{equation} \left \vert \psi \right \rangle _{M}=\frac{1}{2}\left( 1-a^{\dagger2}\right) \left \vert 0\right \rangle , \label{15.13} \end{equation} and it then follows that \begin{equation} \psi_{M}\left( x\right) \equiv \frac{1}{2}\left \langle x\right \vert \left( 1-a^{\dagger2}\right) \left \vert 0\right \rangle =\frac{1}{2}\left \langle x\right \vert \left( \left \vert 0\right \rangle -\sqrt{2}\left \vert 2\right \rangle \right) =\pi^{-1/4}e^{-x^{2}/2}\left( 1-x^{2}\right) , \label{15.14} \end{equation} which is just the Mexican hat wavelet, satisfying the condition $\int _{-\infty}^{\infty}e^{-x^{2}/2}\left( 1-x^{2}\right) dx=0.$ Hence $\frac {1}{2}\left( 1-a^{\dagger2}\right) \left \vert 0\right \rangle $ is the state vector corresponding to the Mexican hat mother wavelet (see Fig. 3). Once the state vector $\left \langle \psi \right \vert $ corresponding to the mother wavelet is known, for any state $\left \vert f\right \rangle $ the matrix element $_{M}\left \langle \psi \right \vert U\left( \mu,s\right) \left \vert f\right \rangle $ is just the wavelet transform of $f(x)$ with respect to $\left \langle \psi \right \vert .$ \begin{figure} \caption{{\protect \small Traditional Mexican hat wavelet.}} \label{Fig3} \end{figure} \begin{figure} \caption{{\protect \small Generalized Mexican hat wavelet }$\psi_{2}\left( x\right) ${\protect \small \ when }$g_{0}=-2${\protect \small , }$g_{2}=-1${\protect \small , }$g_{4}=1${\protect \small \ and }$g_{6}=0${\protect \small .}} \label{Fig4} \end{figure} \begin{figure} \caption{{\protect \small Generalized Mexican hat wavelet when }$g_{0}=-1${\protect \small , }$g_{2}=-2${\protect \small , }$g_{4}=1${\protect \small \ and }$g_{6}=0${\protect \small .}} \label{Fig5} \end{figure} Case 2: when $g_{0}=-2$, $g_{2}=-1$, $g_{4}=1$ and $g_{6}=0$, from (\ref{15.66}) we obtain (see Fig.
4) \begin{equation} \psi_{2}\left( x\right) =2\pi^{-1/4}e^{-x^{2}/2}\left( 2x^{4} -7x^{2}+1\right) , \label{15.67} \end{equation} which also satisfies $\int_{-\infty}^{\infty}\psi_{2}\left( x\right) dx=0$. Note that when $g_{0}=-1$, $g_{2}=-2$, $g_{4}=1$ and $g_{6}=0$, we obtain a slightly different wavelet (see Fig. 5). Therefore, as long as the parameters $g_{2n}$ conform to condition (\ref{15.64}), we can adjust their values to control the shape of the wavelets. Case 3: when $g_{0}=1$, $g_{2}=2$, $g_{4}=4$ and $g_{6}=-1$, we get (see Fig. 6) \begin{equation} \psi_{3}\left( x\right) =\pi^{-1/4}e^{-x^{2}/2}\left( -8x^{6} +76x^{4}-134x^{2}+26\right) , \label{15.68} \end{equation} and $\int_{-\infty}^{\infty}\psi_{3}\left( x\right) dx=0$. From these figures we observe that the number of nodes of each curve on the $x$-axis equals the degree of the polynomial factor of the wavelet function.\begin{figure} \caption{{\protect \small Generalized Mexican hat wavelet }$\psi_{3}\left( x\right) ${\protect \small when } $g_{0}=1${\protect \small , }$g_{2}=2${\protect \small , }$g_{4}=4$ {\protect \small and }$g_{6}=-1${\protect \small .}} \label{Fig6} \end{figure} To further reveal the properties of the newly found wavelets, we compare the wavelet transform computed with the well-known Mexican hat wavelet $\psi _{1}\left( x\right) $ and that computed with our new wavelet $\psi_{2}\left( x\right) $. Concretely, we map a simple cosine signal $\cos \pi x$, by performing the wavelet transforms with $\psi_{i}\left[ T\left( x-X\right) \right] $, $i=1,2$, into a two-dimensional space $\left( X,T\right) $, where $X$ denotes the location of the wavelet and $T$ its scale.
The resulting wavelet transforms by $\psi_{1}\left( x\right) $ (=$\psi_{M}\left( x\right) $) and $\psi_{2}\left( x\right) $ are \begin{align} \Omega_{1}\left( X,T\right) & =\frac{2}{\sqrt{3}}\int_{-\infty}^{\infty }dx\psi_{1}\left[ T\left( x-X\right) \right] \cos \pi x,\label{15.69}\\ \Omega_{2}\left( X,T\right) & =\frac{1}{\sqrt{30}}\int_{-\infty}^{\infty }dx\psi_{2}\left[ T\left( x-X\right) \right] \cos \pi x, \label{15.70} \end{align} where $2/\sqrt{3}$ and $1/\sqrt{30}$ are the normalization factors for $\psi_{1}$ and $\psi_{2}$ respectively; the wavelet integrals $\Omega _{i}\left( X,T\right) $, also called wavelet coefficients, measure the variation of $\cos \pi x$ in a neighborhood of $X,$ whose size is proportional to $1/T$. The contour line representations of $\Omega_{1}\left( X,T\right) $ and $\Omega_{2}\left( X,T\right) $ are depicted in Fig. 7 and Fig. 8, respectively, where the transverse axis is the $X$-axis (time axis), while the longitudinal axis ($T$-axis) is the frequency axis. \begin{figure} \caption{{\protect \small Contour line representation of }$\Omega_{1}\left( X,T\right) .$} \label{Fig7} \end{figure} \begin{figure} \caption{{\protect \small Contour line representation of }$\Omega_{2}\left( X,T\right) .$} \label{Fig8} \end{figure} It is remarkable that although the overall shapes of the two contour plots look similar, there exist two notable differences between these two figures: 1) Along the $T$-axis $\Omega_{1}\left( X,T\right) $ has one maximum, while $\Omega_{2}\left( X,T\right) $ has one main maximum and one subsidiary maximum (\textquotedblleft two islands"), so as $\psi_{2}$ scales its size one has one more chance to identify the frequency information of the cosine wave than when using $\psi_{M}$. Interestingly enough, the \textquotedblleft two islands" of $\Omega_{2}\left( X,T\right) $ in Fig. 8 can be imagined as if they were produced while the figure of $\Omega_{1}\left( X,T\right) $ deforms into two sub-structures along the $T$-axis.
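The normalization factors quoted in (\ref{15.69})--(\ref{15.70}) can be checked directly, since $\int\psi_{1}^{2}\,dx=3/4$ and $\int\psi_{2}^{2}\,dx=30$. A small symbolic sketch with sympy:

```python
import sympy as sp

x = sp.symbols('x', real=True)

# Mexican hat psi_1, Eq. (15.14), and the generalized wavelet psi_2, Eq. (15.67)
psi1 = sp.pi**sp.Rational(-1, 4)*sp.exp(-x**2/2)*(1 - x**2)
psi2 = 2*sp.pi**sp.Rational(-1, 4)*sp.exp(-x**2/2)*(2*x**4 - 7*x**2 + 1)

n1 = sp.integrate(psi1**2, (x, -sp.oo, sp.oo))  # squared L2 norm of psi_1
n2 = sp.integrate(psi2**2, (x, -sp.oo, sp.oo))  # squared L2 norm of psi_2
print(n1, n2)  # 3/4 30
```

The normalization factors are then $1/\sqrt{n_{1}}=2/\sqrt{3}$ and $1/\sqrt{n_{2}}=1/\sqrt{30}$, as stated above.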
2) Near the maximum of $\Omega_{2}\left( X,T\right) $ the density of the contour lines along the $T$-axis is higher than that of $\Omega_{1}\left( X,T\right) $, which indicates that the new wavelet $\psi_{2}$ is more sensitive in detecting the frequency information of the signal at this point. Therefore, $\psi_{2}\left( x\right) $ may be superior to $\psi_{M}\left( x\right) $ in analyzing some signals. Finally, we mention that there exist some remarkable qualitative similarities between the mother wavelets presented in Figs. 3 through 6 and some of the amplitude envelopes of higher order laser spatial modes and spatial supermodes of phase locked diode laser arrays \cite{las1,las2,las3}, which are due to spatial coherence. \subsection{Quantum mechanical version of Parseval theorem for WT} In this subsection, we shall prove the Parseval theorem of the 1D WT \cite{wavelet1,wavelet2,r30}: \begin{equation} \int_{-\infty}^{\infty}\frac{d\mu}{\mu^{2}}\int_{-\infty}^{\infty}dsW_{\psi }f_{1}\left( \mu,s\right) W_{\psi}^{\ast}f_{2}\left( \mu,s\right) =2C_{\psi}\int f_{1}\left( x\right) f_{2}^{\ast}\left( x\right) dx, \label{15.16} \end{equation} where $\psi \left( x\right) $ is a mother wavelet whose Fourier transform is $\psi \left( p\right) ,$ and $C_{\psi}=2\pi \int_{0}^{\infty}\frac{\left \vert \psi \left( p\right) \right \vert ^{2}}{p}dp<\infty$. In the context of quantum mechanics, according to Eq.(\ref{15.4}) we see that the quantum mechanical version of the Parseval theorem should be \begin{equation} \int_{-\infty}^{\infty}\frac{d\mu}{\mu^{2}}\int_{-\infty}^{\infty }ds\left \langle \psi \right \vert U\left( \mu,s\right) \left \vert f_{1}\right \rangle \left \langle f_{2}\right \vert U^{\dagger}\left( \mu,s\right) \left \vert \psi \right \rangle =2C_{\psi}\left \langle f_{2}\right. \left \vert f_{1}\right \rangle . \label{15.17} \end{equation} Since $\psi \left( x\right) =\left \langle x\right \vert \left.
\psi \right \rangle $, the $\psi \left( p\right) $ involved in $C_{\psi}$ is $\left \langle p\right \vert \left. \psi \right \rangle ,$ where $\left \langle p\right \vert $ is the momentum eigenvector, \begin{equation} \psi \left( p\right) =\left \langle p\right \vert \left. \psi \right \rangle =\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}dx\psi \left( x\right) e^{-ipx}. \label{15.18} \end{equation} Eq.(\ref{15.17}) indicates that once the state vector $\left \langle \psi \right \vert $ corresponding to the mother wavelet is known, for any two states $\left \vert f_{1}\right \rangle $ and $\left \vert f_{2}\right \rangle $, their overlap, up to the factor $C_{\psi}$ (determined by Eq.(\ref{15.31})), is just the overlap of their corresponding WTs in the ($\mu,s)$ parametric space. \textit{Proof of Equation}\textbf{ (\ref{15.17})}: In order to show Eq.(\ref{15.17}), we calculate \begin{align} U^{\dagger}\left( \mu,s\right) \left \vert p\right \rangle & =\frac{1} {\sqrt{\left \vert \mu \right \vert }}\int_{-\infty}^{\infty}dx\left \vert x\right \rangle \left \langle \frac{x-s}{\mu}\right \vert \left. p\right \rangle \nonumber \\ & =\frac{e^{-i\frac{ps}{\mu}}}{\sqrt{2\pi \left \vert \mu \right \vert }} \int_{-\infty}^{\infty}dx\left \vert x\right \rangle e^{i\frac{p}{\mu} x}\nonumber \\ & =\frac{1}{\sqrt{\left \vert \mu \right \vert }}e^{-i\frac{ps}{\mu}}\left \vert \frac{p}{\mu}\right \rangle , \label{15.19} \end{align} which leads to \begin{equation} \int_{-\infty}^{\infty}dsU^{\dagger}\left( \mu,s\right) \left \vert p^{\prime}\right \rangle \left \langle p\right \vert U\left( \mu,s\right) =2\pi \delta \left( p-p^{\prime}\right) \left \vert \frac{p^{\prime}}{\mu }\right \rangle \left \langle \frac{p}{\mu}\right \vert , \label{15.20} \end{equation} where we have used the formula \begin{equation} \int_{-\infty}^{\infty}\frac{dx}{2\pi}e^{ix\left( p-p^{\prime}\right) }=\delta \left( p-p^{\prime}\right) .
\label{15.21} \end{equation} Inserting the completeness relation $\int_{-\infty}^{\infty}dp\left \vert p\right \rangle \left \langle p\right \vert =1$ into the left side of Eq.(\ref{15.16}) and then using Eq.(\ref{15.20}) we have \begin{align} \text{L.H.S of Eq.(\ref{15.16})} & =\int_{-\infty}^{\infty}\frac{d\mu} {\mu^{2}}\int_{-\infty}^{\infty}dsdpdp^{\prime}\psi^{\ast}\left( p\right) \psi \left( p^{\prime}\right) \left \langle f_{2}\right \vert U^{\dagger }\left( \mu,s\right) \left \vert p^{\prime}\right \rangle \left \langle p\right \vert U\left( \mu,s\right) \left \vert f_{1}\right \rangle \nonumber \\ & =2\pi \int_{-\infty}^{\infty}\frac{d\mu}{\mu^{2}}\int_{-\infty}^{\infty }dp\psi^{\ast}\left( p\right) \psi \left( p\right) \left \langle f_{2}\right. \left \vert \frac{p}{\mu}\right \rangle \left \langle \frac{p}{\mu }\right \vert \left. f_{1}\right \rangle \nonumber \\ & \equiv I_{1}+I_{2}, \label{15.22} \end{align} where \begin{align} I_{1} & =2\pi \int_{0}^{\infty}\frac{d\mu}{\mu^{2}}\int_{-\infty}^{\infty }dp\psi^{\ast}\left( p\right) \psi \left( p\right) \left \langle f_{2}\right. \left \vert \frac{p}{\mu}\right \rangle \left \langle \frac{p}{\mu }\right \vert \left. f_{1}\right \rangle \nonumber \\ & =2\pi \int_{-\infty}^{\infty}dp\left[ \int_{0}^{\infty}\left \vert \psi \left( \mu p\right) \right \vert ^{2}\frac{d\mu}{\mu}\right] \left \langle f_{2}\right. \left \vert p\right \rangle \left \langle p\right \vert \left. f_{1}\right \rangle , \label{15.23} \end{align} and \begin{align} I_{2} & =2\pi \int_{-\infty}^{0}\frac{d\mu}{\mu^{2}}\int_{-\infty}^{\infty }dp\left \vert \psi \left( p\right) \right \vert ^{2}\left \langle f_{2}\right. \left \vert \frac{p}{\mu}\right \rangle \left \langle \frac{p}{\mu}\right \vert \left. f_{1}\right \rangle \nonumber \\ & =2\pi \int_{-\infty}^{\infty}dp\left[ \int_{0}^{\infty}\left \vert \psi \left( -\mu p\right) \right \vert ^{2}\frac{d\mu}{\mu}\right] \left \langle f_{2}\right. 
\left \vert p\right \rangle \left \langle p\right \vert \left. f_{1}\right \rangle . \label{15.24} \end{align} Further, we can put Eqs.(\ref{15.23}) and (\ref{15.24}) into the following forms, \begin{align} I_{1} & =2\pi \int_{0}^{\infty}\frac{d\mu}{\mu^{2}}\int_{-\infty}^{\infty }dp\psi^{\ast}\left( p\right) \psi \left( p\right) \left \langle f_{2}\right. \left \vert \frac{p}{\mu}\right \rangle \left \langle \frac{p}{\mu }\right \vert \left. f_{1}\right \rangle \nonumber \\ & =C_{\psi}\int_{0}^{\infty}dp\left \langle f_{2}\right. \left \vert p\right \rangle \left \langle p\right \vert \left. f_{1}\right \rangle +2\pi \int_{-\infty}^{0}dp\left[ \int_{0}^{\infty}\left \vert \psi \left( \mu p\right) \right \vert ^{2}\frac{d\mu p}{\mu p}\right] \left \langle f_{2}\right. \left \vert p\right \rangle \left \langle p\right \vert \left. f_{1}\right \rangle \nonumber \\ & =C_{\psi}\int_{0}^{\infty}dp\left \langle f_{2}\right. \left \vert p\right \rangle \left \langle p\right \vert \left. f_{1}\right \rangle +2\pi \int_{-\infty}^{0}dp\left[ \int_{0}^{-\infty}\left \vert \psi \left( p^{\prime}\right) \right \vert ^{2}\frac{dp^{\prime}}{p^{\prime}}\right] \left \langle f_{2}\right. \left \vert p\right \rangle \left \langle p\right \vert \left. f_{1}\right \rangle \nonumber \\ & =C_{\psi}\int_{0}^{\infty}dp\left \langle f_{2}\right. \left \vert p\right \rangle \left \langle p\right \vert \left. f_{1}\right \rangle +C_{\psi }^{\prime}\int_{-\infty}^{0}dp\left \langle f_{2}\right. \left \vert p\right \rangle \left \langle p\right \vert \left. f_{1}\right \rangle \label{15.25} \end{align} and \begin{equation} I_{2}=C_{\psi}^{\prime}\int_{0}^{\infty}dp\left \langle f_{2}\right. \left \vert p\right \rangle \left \langle p\right \vert \left. f_{1}\right \rangle +C_{\psi}\int_{-\infty}^{0}dp\left \langle f_{2}\right. \left \vert p\right \rangle \left \langle p\right \vert \left. 
f_{1}\right \rangle , \label{15.26} \end{equation} where \begin{align} C_{\psi} & =2\pi \int_{0}^{\infty}\left \vert \psi \left( \mu p\right) \right \vert ^{2}\frac{d\mu}{\mu}=2\pi \int_{0}^{\infty}\left \vert \psi \left( p\right) \right \vert ^{2}\frac{dp}{p},\nonumber \\ C_{\psi}^{\prime} & =2\pi \int_{0}^{-\infty}\left \vert \psi \left( p^{\prime }\right) \right \vert ^{2}\frac{dp^{\prime}}{p^{\prime}}=2\pi \int_{0}^{\infty }\left \vert \psi \left( -p\right) \right \vert ^{2}\frac{dp}{p}. \label{15.27} \end{align} Thus, when the admissibility condition \begin{equation} \int_{0}^{\infty}\left \vert \psi \left( p\right) \right \vert ^{2}\frac{dp} {p}=\int_{0}^{\infty}\left \vert \psi \left( -p\right) \right \vert ^{2} \frac{dp}{p} \label{15.28} \end{equation} is satisfied, i.e. $C_{\psi}=C_{\psi}^{\prime}$, we have \begin{equation} 2\pi \int_{-\infty}^{\infty}\left \vert \psi \left( p\right) \right \vert ^{2}\frac{dp}{\left \vert p\right \vert }=2C_{\psi}, \label{15.29} \end{equation} and Eq. (\ref{15.22}) can be transformed to \begin{equation} \text{L.H.S of Eq.(\ref{15.16})}=2C_{\psi}\int_{-\infty}^{\infty }dp\left \langle f_{2}\right. \left \vert p\right \rangle \left \langle p\right \vert \left. f_{1}\right \rangle =\text{R.H.S of Eq.(\ref{15.16}),} \label{15.30} \end{equation} where \begin{equation} C_{\psi}\equiv2\pi \int_{0}^{\infty}\frac{\left \vert \psi \left( p\right) \right \vert ^{2}}{p}dp<\infty,\text{ } \label{15.31} \end{equation} and thus the theorem is proved. In particular, when $\left \vert f_{1}\right \rangle =$ $\left \vert f_{2}\right \rangle ,$ Eq.(\ref{15.17}) becomes \begin{equation} \int_{-\infty}^{\infty}\frac{d\mu}{\mu^{2}}\int_{-\infty}^{\infty }ds|\left \langle \psi \right \vert U\left( \mu,s\right) \left \vert f_{1}\right \rangle |^{2}=2C_{\psi}\left \langle f_{1}\right. \left \vert f_{1}\right \rangle , \label{15.32} \end{equation} which is known as the isometry of energy.
\subsection{Inversion formula of WT} Now we can directly derive the inversion formula of the WT: we take $\left \langle f_{2}\right \vert =\left \langle x\right \vert $ in Eq.(\ref{15.17}), then using Eq.(\ref{15.4}) we see that Eq.(\ref{15.17}) reduces to \begin{equation} \int_{-\infty}^{\infty}\frac{d\mu}{\mu^{2}}\int_{-\infty}^{\infty}dsW_{\psi }f_{1}\left( \mu,s\right) \left \langle x\right \vert U^{\dagger}\left( \mu,s\right) \left \vert \psi \right \rangle =2C_{\psi}\left \langle x\right. \left \vert f_{1}\right \rangle . \label{15.33} \end{equation} Due to Eq.(\ref{15.5}) we have \begin{equation} \left \langle x\right \vert U^{\dagger}\left( \mu,s\right) =\frac{1} {\sqrt{\left \vert \mu \right \vert }}\left \langle x\right \vert \int_{-\infty }^{\infty}dx^{\prime}\left \vert x^{\prime}\right \rangle \left \langle \frac{x^{\prime}-s}{\mu}\right \vert =\frac{1}{\sqrt{\left \vert \mu \right \vert }}\left \langle \frac{x-s}{\mu}\right \vert . \label{15.34} \end{equation} It then follows that \begin{equation} \int_{-\infty}^{\infty}\frac{d\mu}{\mu^{2}}\int_{-\infty}^{\infty}dsW_{\psi }f_{1}\left( \mu,s\right) \frac{1}{\sqrt{\left \vert \mu \right \vert } }\left \langle \frac{x-s}{\mu}\right \vert \left. \psi \right \rangle =2C_{\psi }\left \langle x\right. \left \vert f_{1}\right \rangle , \label{15.35} \end{equation} which means \begin{equation} f_{1}\left( x\right) =\frac{1}{2C_{\psi}}\int_{-\infty}^{\infty}\frac{d\mu }{\mu^{2}\sqrt{\left \vert \mu \right \vert }}\int_{-\infty}^{\infty} ds\psi \left( \frac{x-s}{\mu}\right) W_{\psi}f_{1}\left( \mu,s\right) , \label{15.36} \end{equation} which is the inversion formula of the WT. \subsection{New orthogonal property of mother wavelet in parameter space} From the Parseval theorem (\ref{15.16}) of the WT in quantum mechanics we can derive a new property of the mother wavelet \cite{r31}.
Taking $\left \vert f_{1}\right \rangle =\left \vert x\right \rangle $ and $\left \vert f_{2} \right \rangle =\left \vert x^{\prime}\right \rangle $ in (\ref{15.16}), one can see that \begin{equation} \int_{-\infty}^{\infty}\frac{d\mu}{\mu^{2}\left \vert \mu \right \vert } \int_{-\infty}^{\infty}ds\psi \left( \frac{x-s}{\mu}\right) \psi^{\ast }\left( \frac{x^{\prime}-s}{\mu}\right) =2C_{\psi}\delta \left( x-x^{\prime }\right) , \label{15.37} \end{equation} which is a new orthogonal property of the mother wavelet in the parameter space spanned by $\left( \mu,s\right) $. In a similar way, taking $\left \vert f_{1}\right \rangle =\left \vert f_{2}\right \rangle =\left \vert n\right \rangle ,$ a number state with $\left \langle n\right. \left \vert n\right \rangle =1,$ we have \begin{equation} \int_{-\infty}^{\infty}\frac{d\mu}{\mu^{2}}\int_{-\infty}^{\infty }ds|\left \langle \psi \right \vert U\left( \mu,s\right) \left \vert n\right \rangle |^{2}=2C_{\psi}, \label{15.38} \end{equation} or taking $\left \vert f_{1}\right \rangle =\left \vert f_{2}\right \rangle =\left \vert z\right \rangle ,$ where $\left \vert z\right \rangle =\exp \left( -\left \vert z\right \vert ^{2}/2+za^{\dagger}\right) \left \vert 0\right \rangle $ is the coherent state, \begin{equation} \int_{-\infty}^{\infty}\frac{d\mu}{\mu^{2}}\int_{-\infty}^{\infty }ds|\left \langle \psi \right \vert U\left( \mu,s\right) \left \vert z\right \rangle |^{2}=2C_{\psi}. \label{15.39} \end{equation} This indicates that $C_{\psi}$ is $\left \vert f_{1}\right \rangle $-independent, which coincides with the expression in (\ref{15.31}). Next, we consider a special example. When the mother wavelet is the Mexican hat (\ref{15.14}), we have \begin{equation} \psi_{M}\left( p\right) \equiv \left \langle p\right. \left \vert \psi _{M}\right \rangle =\frac{1}{2}\left( \left \langle p\right. \left \vert 0\right \rangle -\sqrt{2}\left \langle p\right.
\left \vert 2\right \rangle \right) =\pi^{-1/4}\allowbreak p^{2}e^{-\frac{1}{2}p^{2}}, \label{15.40} \end{equation} where \begin{equation} \left \langle p\right. \left \vert n\right \rangle =\frac{\left( -i\right) ^{n}}{\sqrt{2^{n}n!\sqrt{\pi}}}e^{-p^{2}/2}H_{n}\left( p\right) . \label{15.41} \end{equation} Here $H_{n}\left( p\right) $ is the single-variable Hermite polynomial \cite{r32}. Substituting Eq.(\ref{15.40}) into Eq.(\ref{15.31}) we have \begin{equation} C_{\psi}\equiv2\pi \int_{0}^{\infty}\frac{\left \vert \psi_{M}\left( p\right) \right \vert ^{2}}{p}dp=\sqrt{\pi}. \label{15.42} \end{equation} Thus, for the Mexican hat wavelet (\ref{15.14}), we see \begin{equation} \int_{-\infty}^{\infty}\frac{d\mu}{\mu^{2}\left \vert \mu \right \vert } \int_{-\infty}^{\infty}ds\psi_{M}\left( \frac{x-s}{\mu}\right) \psi _{M}^{\ast}\left( \frac{x^{\prime}-s}{\mu}\right) =2\sqrt{\pi}\delta \left( x-x^{\prime}\right) . \label{15.43} \end{equation} Eq.(\ref{15.43}) can be checked as follows. Using Eq.(\ref{15.14}) and noticing that $\psi_{M}\left( x\right) =\psi_{M}\left( -x\right) $, we can put the left hand side of Eq.(\ref{15.43}) into \begin{align} & \mathtt{L.H.S.of}\text{ }(\text{\ref{15.43}})\nonumber \\ & =2\int_{0}^{\infty}du\int_{-\infty}^{\infty}ds\psi_{M}\left( ux-s\right) \psi_{M}^{\ast}\left( ux^{\prime}-s\right) \nonumber \\ & =\left \{ \begin{array} [c]{cc} 0, & x\neq x^{\prime}\\ \frac{3}{2}\int_{0}^{\infty}du\rightarrow \infty, & x=x^{\prime} \end{array} \right. =\mathtt{R.H.S.of}\text{ }(\ref{15.43}).
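The value $C_{\psi}=\sqrt{\pi}$ in (\ref{15.42}) follows from a single Gaussian moment, $2\pi\cdot\pi^{-1/2}\int_{0}^{\infty}p^{3}e^{-p^{2}}dp=\sqrt{\pi}$; a short sympy sketch confirms it:

```python
import sympy as sp

p = sp.symbols('p', positive=True)

# Fourier transform of the Mexican hat wavelet, Eq. (15.40)
psi_p = sp.pi**sp.Rational(-1, 4)*p**2*sp.exp(-p**2/2)

# Admissibility constant of Eq. (15.31)
C_psi = 2*sp.pi*sp.integrate(psi_p**2/p, (p, 0, sp.oo))
print(C_psi)  # sqrt(pi)
```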
\label{15.44} \end{align} where we have used the integration formulas \begin{equation} \int_{-\infty}^{\infty}\left( 1-s^{2}\right) ^{2}\exp \left( -s^{2}\right) ds=\allowbreak \frac{3}{4}\sqrt{\pi}, \label{15.45} \end{equation} and \begin{align} & \int_{-\infty}^{\infty}\left( 1-s^{2}\right) \left[ 1-\left( s-b\right) ^{2}\right] e^{-s^{2}/2-\left( b-s\right) ^{2}/2}ds\nonumber \\ & =\frac{\sqrt{\pi}}{16}e^{-\frac{b^{2}}{4}}\left[ 12+b^{2}\left( b^{2}-12\right) \right] . \label{15.46} \end{align} Next, we examine whether the Morlet wavelet obeys the formalism (\ref{15.37}). The Morlet wavelet is defined as \cite{r33,r34,r35} \begin{equation} \psi_{mor}\left( x\right) =\pi^{-1/4}\left( e^{ifx}-e^{-f^{2}/2}\right) e^{-x^{2}/2}. \label{15.47} \end{equation} Substituting (\ref{15.47}) into the left hand side of (\ref{15.37}) yields \begin{align} I & \equiv \int_{-\infty}^{\infty}\frac{d\mu}{\mu^{2}\left \vert \mu \right \vert }\int_{-\infty}^{\infty}ds\psi_{mor}\left( \frac{x-s}{\mu }\right) \psi_{mor}^{\ast}\left( \frac{x^{\prime}-s}{\mu}\right) \nonumber \\ & =\left \{ \begin{array} [c]{cc} 0, & x\neq x^{\prime}\\ 2\left( 1+e^{-f^{2}}-2e^{-3f^{2}/4}\right) \int_{0}^{\infty}\frac{d\mu} {\mu^{2}}\rightarrow \infty, & x=x^{\prime} \end{array} \right. . \label{15.48} \end{align} Thus the Morlet wavelet also satisfies Eq.(\ref{15.37}). \subsection{WT and Wigner-Husimi Distribution Function} Phase space techniques have proved very useful in various branches of physics. Distribution functions in phase space have been a major topic in studying quantum mechanics and quantum statistics. Among the various phase space distributions, the Wigner function $F_{w}\left( q,p\right) $ \cite{r13,r14} is the most popularly used, since its two marginal distributions lead to the measured probability densities in coordinate space and momentum space, respectively. But the Wigner distribution function itself is not a probability distribution, since it can take both positive and negative values.
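Both integration formulas can be confirmed symbolically; the sketch below checks (\ref{15.45}) exactly and (\ref{15.46}) at the sample value $b=1$ (chosen only for illustration):

```python
import sympy as sp

s, b = sp.symbols('s b', real=True)

# Eq. (15.45)
I1 = sp.integrate((1 - s**2)**2*sp.exp(-s**2), (s, -sp.oo, sp.oo))

# Eq. (15.46), left-hand side evaluated at the sample value b = 1
lhs = sp.integrate(((1 - s**2)*(1 - (s - b)**2)
                    * sp.exp(-s**2/2 - (b - s)**2/2)).subs(b, 1),
                   (s, -sp.oo, sp.oo))
# Right-hand side of Eq. (15.46) at b = 1
rhs = (sp.sqrt(sp.pi)/16*sp.exp(-b**2/4)*(12 + b**2*(b**2 - 12))).subs(b, 1)
print(I1)  # 3*sqrt(pi)/4
print(sp.simplify(lhs - rhs))  # 0
```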
In spite of some attractive formal properties, it needs to be improved. To overcome this inconvenience, the Husimi distribution function $F_{h}\left( q^{\prime },p^{\prime}\right) $ was introduced \cite{r36}, defined in a manner that guarantees its nonnegativity: one smooths out the Wigner function by averaging over a \textquotedblleft coarse graining" function, \begin{equation} F_{h}\left( q,p,\kappa \right) =\int \int_{-\infty}^{\infty}dq^{\prime }dp^{\prime}F_{w}\left( q^{\prime},p^{\prime}\right) \exp \left[ -\kappa \left( q^{\prime}-q\right) ^{2}-\frac{\left( p^{\prime}-p\right) ^{2}}{\kappa}\right] , \label{15.49} \end{equation} where $\kappa>0$ is the Gaussian spatial width parameter, which is free to be chosen and which determines the relative resolution in $p$-space versus $q$-space. In the following, we shall employ the optical wavelet transformation to study the Husimi distribution function; that is to say, we shall show that the Husimi distribution function of a quantum state $\left \vert \psi \right \rangle $ can be obtained by making a WT of the Gaussian function $e^{-x^{2}/2},$ i.e., \begin{equation} \left \langle \psi \right \vert \Delta_{h}\left( q,p,\kappa \right) \left \vert \psi \right \rangle =\frac{e^{-\frac{p^{2}}{\kappa}}}{\sqrt{\pi \kappa} }\left \vert \int_{-\infty}^{\infty}dx\psi^{\ast}\left( \frac{x-s}{\mu }\right) e^{-x^{2}/2}\right \vert ^{2}, \label{15.50} \end{equation} where \begin{equation} s=\frac{-1}{\sqrt{\kappa}}\left( \kappa q+ip\right) ,\text{ }\mu =\sqrt{\kappa}, \label{15.51} \end{equation} and $\left \langle \psi \right \vert \Delta_{h}\left( q,p\right) \left \vert \psi \right \rangle $ is the Husimi distribution function, while $\Delta _{h}\left( q,p,\kappa \right) $ is the Husimi operator, \begin{equation} \Delta_{h}\left( q,p,\kappa \right) =\frac{2\sqrt{\kappa}}{1+\kappa} \colon \exp \left \{ \frac{-\kappa \left( q-Q\right) ^{2}}{1+\kappa} -\frac{\left( p-P\right)
^{2}}{1+\kappa}\right \} \colon, \label{15.52} \end{equation} here $\colon \colon$ denotes normal ordering; $Q$ and $P$ are the coordinate and momentum operators. \textbf{Proof of Eq.(\ref{15.50}). }According to Eqs.(\ref{15.3}) and (\ref{15.4}), when $\left \vert f\right \rangle $ is the vacuum state $\left \vert 0\right \rangle $, with $e^{-x^{2}/2}=\pi^{1/4}\left \langle x\right. \left \vert 0\right \rangle $, we see that \begin{equation} \pi^{-1/4}\int_{-\infty}^{\infty}\frac{dx}{\sqrt{\mu}}\psi^{\ast}\left( \frac{x-s}{\mu}\right) e^{-x^{2}/2}=\left \langle \psi \right \vert U\left( \mu,s\right) \left \vert 0\right \rangle . \label{15.53} \end{equation} From Eq.(\ref{15.7}) it then follows that \begin{equation} U\left( \mu,s\right) \left \vert 0\right \rangle =\operatorname{sech} ^{1/2}\lambda \exp \left[ \frac{-s^{2}}{2\left( 1+\mu^{2}\right) } -\frac{a^{\dagger}s}{\sqrt{2}}\operatorname{sech}\lambda-\frac{a^{\dagger2} }{2}\tanh \lambda \right] \left \vert 0\right \rangle . \label{15.54} \end{equation} Substituting Eq.(\ref{15.51}) and $\tanh \lambda=\frac{\kappa-1}{\kappa+1},$ $\cosh \lambda=\frac{1+\kappa}{2\sqrt{\kappa}}$ into Eq.(\ref{15.54}) yields \begin{align} & e^{-\frac{p^{2}}{2\kappa}+\frac{ipq}{\kappa+1}}U\left( \mu=\sqrt{\kappa },s=-\sqrt{\kappa}q-ip/\sqrt{\kappa}\right) \left \vert 0\right \rangle \nonumber \\ & =\left( \frac{2\sqrt{\kappa}}{1+\kappa}\right) ^{1/2}\exp \left \{ \frac{-\kappa q^{2}}{2\left( 1+\kappa \right) }-\frac{p^{2}}{2\left( 1+\kappa \right) }\right. \nonumber \\ & \left. +\frac{\sqrt{2}a^{\dagger}}{1+\kappa}\left( \kappa q+ip\right) +\frac{1-\kappa}{2\left( 1+\kappa \right) }a^{\dagger2}\right \} \left \vert 0\right \rangle \left. \equiv \left \vert p,q\right \rangle _{\kappa}\right.
, \label{15.55} \end{align} then the WT of Eq.(\ref{15.53}) can be further expressed as \begin{equation} e^{-\frac{p^{2}}{2\kappa}+\frac{ipq}{\kappa+1}}\int_{-\infty}^{\infty} \frac{dx}{\left( \kappa \pi \right) ^{1/4}}\psi^{\ast}\left( \frac{x-s}{\mu }\right) e^{-x^{2}/2}=\left \langle \psi \right. \left \vert p,q\right \rangle _{\kappa}. \label{15.56} \end{equation} Using the normally ordered form of the vacuum state projector, $\left \vert 0\right \rangle \left \langle 0\right \vert =\colon e^{-a^{\dagger}a}\colon,$ and the IWOP method as well as Eq.(\ref{15.55}), we have \begin{equation} \left \vert p,q\right \rangle _{\kappa \kappa}\left \langle p,q\right \vert =\frac{2\sqrt{\kappa}}{1+\kappa}\colon \exp \left[ \frac{-\kappa \left( q-Q\right) ^{2}}{1+\kappa}-\frac{\left( p-P\right) ^{2}}{1+\kappa}\right] \colon=\Delta_{h}\left( q,p,\kappa \right) . \label{15.57} \end{equation} Now we explain why $\Delta_{h}\left( q,p,\kappa \right) $ is the Husimi operator. Using the formula for converting an operator $A$ into its Weyl ordering form \cite{r37} \begin{align} A & =2\int \frac{d^{2}\beta}{\pi}\left \langle -\beta \right \vert A\left \vert \beta \right \rangle \genfrac{}{}{0pt}{}{:}{:} \exp \{2\left( \beta^{\ast}a-a^{\dagger}\beta+a^{\dagger}a\right) \} \genfrac{}{}{0pt}{}{:}{:} ,\label{15.58}\\ d^{2}\beta & =d\beta_{1}d\beta_{2},\text{ }\beta=\beta_{1}+i\beta _{2},\nonumber \end{align} where the symbol $ \genfrac{}{}{0pt}{}{:}{:} \genfrac{}{}{0pt}{}{:}{:} $ denotes the Weyl ordering and $\left \vert \beta \right \rangle $ is the usual coherent state, substituting Eq.(\ref{15.57}) into Eq.(\ref{15.58}) and performing the integration by virtue of the technique of integration within a Weyl ordered product of operators, we obtain \begin{equation} \left \vert p,q\right \rangle _{\kappa \kappa}\left \langle p,q\right \vert =2 \genfrac{}{}{0pt}{}{:}{:} \exp \left[ -\kappa \left( q-Q\right) ^{2}-\frac{\left( p-P\right) ^{2} }{\kappa}\right] \genfrac{}{}{0pt}{}{:}{:} .
\label{15.59} \end{equation} This is the Weyl ordering form of $\left \vert p,q\right \rangle _{\kappa \kappa }\left \langle p,q\right \vert .$ Then, according to the Weyl quantization scheme \cite{Weyl}, the classical correspondence of a Weyl ordered operator is obtained by simply replacing $Q\rightarrow q^{\prime},P\rightarrow p^{\prime},$ \begin{equation} \genfrac{}{}{0pt}{}{:}{:} \exp \left[ -\kappa \left( q-Q\right) ^{2}-\frac{\left( p-P\right) ^{2} }{\kappa}\right] \genfrac{}{}{0pt}{}{:}{:} \rightarrow \exp \left[ -\kappa \left( q-q^{\prime}\right) ^{2}-\frac{\left( p-p^{\prime}\right) ^{2}}{\kappa}\right] , \label{15.60} \end{equation} and in this case the Weyl rule is expressed as \begin{align} \left \vert p,q\right \rangle _{\kappa \kappa}\left \langle p,q\right \vert & =2\int dq^{\prime}dp^{\prime} \genfrac{}{}{0pt}{}{:}{:} \delta \left( q^{\prime}-Q\right) \delta \left( p^{\prime}-P\right) \genfrac{}{}{0pt}{}{:}{:} \exp \left[ -\kappa \left( q-q^{\prime}\right) ^{2}-\frac{\left( p-p^{\prime}\right) ^{2}}{\kappa}\right] \nonumber \\ & =2\int dq^{\prime}dp^{\prime}\Delta_{w}\left( q^{\prime},p^{\prime }\right) \exp \left[ -\kappa \left( q^{\prime}-q\right) ^{2}-\frac{\left( p^{\prime}-p\right) ^{2}}{\kappa}\right] , \label{15.61} \end{align} where at the last step we used the Weyl ordering form of the Wigner operator $\Delta_{w}\left( q,p\right) $ \cite{r38} \begin{equation} \Delta_{w}\left( q,p\right) = \genfrac{}{}{0pt}{}{:}{:} \delta \left( q-Q\right) \delta \left( p-P\right) \genfrac{}{}{0pt}{}{:}{:} . \label{15.62} \end{equation} In reference to Eq.(\ref{15.49}), in which the relation between the Husimi function and the Wigner function is shown, we know that the right-hand side of Eq. (\ref{15.61}) should be just the Husimi operator, i.e.
\begin{align} \left \vert p,q\right \rangle _{\kappa \kappa}\left \langle p,q\right \vert & =2\int dq^{\prime}dp^{\prime}\Delta_{w}\left( q^{\prime},p^{\prime}\right) \exp \left[ -\kappa \left( q^{\prime}-q\right) ^{2}-\frac{\left( p^{\prime }-p\right) ^{2}}{\kappa}\right] \nonumber \\ & =\Delta_{h}\left( q,p,\kappa \right) , \label{15.63} \end{align} thus Eq. (\ref{15.50}) is proved by combining Eqs.(\ref{15.63}) and (\ref{15.56}). Thus the optical WT can be used to study the Husimi distribution function in quantum optics phase space theory \cite{r39a}. \section{Complex Wavelet transformation in entangled state representations} We now turn to the 2-dimensional complex wavelet transform (CWT) \cite{r39}. \subsection{CWT and the condition of the mother wavelet} Since the wavelet family involves a squeezing transform, we recall that the two-mode squeezing operator has a natural representation in the entangled state representation (ESR), $\exp[\lambda \left( a_{1}^{\dagger}a_{2}^{\dagger }-a_{1}a_{2}\right) ]=\frac{1}{\mu}\int \frac{d^{2}\eta}{\pi}\left \vert \frac{\eta}{\mu}\right \rangle \left \langle \eta \right \vert ,$ $\mu =e^{\lambda}$; thus we are naturally led to studying the 2-dimensional CWT in the ESR. Using the ESR we can derive some new results more conveniently than by using the direct product of two single-particle coordinate eigenstates. To be concrete, we impose the condition on qualified mother wavelets also in the $\left \vert \eta \right \rangle $ representation, \begin{equation} \int_{-\infty}^{\infty}\frac{d^{2}\eta}{2\pi}\psi \left( \eta \right) =0, \label{16.1} \end{equation} where $\psi \left( \eta \right) =$ $\left \langle \eta \right \vert \left.
\psi \right \rangle .$ Thus we see \begin{equation} \int_{-\infty}^{\infty}\frac{d^{2}\eta}{2\pi}\left \vert \eta \right \rangle =\exp \{-a_{1}^{\dagger}a_{2}^{\dagger}\} \left \vert 00\right \rangle =\left \vert \xi=0\right \rangle , \label{16.2} \end{equation} and the condition (\ref{16.1}) becomes \begin{equation} \left \langle \xi=0\right \vert \left. \psi \right \rangle =0. \label{16.3} \end{equation} Without loss of generality, assuming \begin{equation} \left \vert \psi \right \rangle =\sum_{n,m=0}^{\infty}K_{n,m}a_{1}^{\dagger n}a_{2}^{\dagger m}\left \vert 00\right \rangle , \label{16.4} \end{equation} then using the two-mode coherent state $\left \vert z_{1}z_{2}\right \rangle $ we can write (\ref{16.3}) as \begin{align} \left \langle \xi=0\right \vert \left. \psi \right \rangle & =\left \langle \xi=0\right \vert \int \frac{d^{2}z_{1}d^{2}z_{2}}{\pi^{2}}\left \vert z_{1} z_{2}\right \rangle \left \langle z_{1}z_{2}\right \vert \sum_{n,m=0}^{\infty }K_{n,m}a_{1}^{\dagger n}a_{2}^{\dagger m}\left \vert 00\right \rangle \nonumber \\ \ & =\sum_{n,m=0}^{\infty}K_{n,m}\int \frac{d^{2}z_{1}d^{2}z_{2}}{\pi^{2} }z_{1}^{\ast n}z_{2}^{\ast m}\exp \left[ -|z_{1}|^{2}-|z_{2}|^{2}-z_{1} z_{2}\right] \nonumber \\ \ & =\sum_{n,m=0}^{\infty}K_{n,m}\int \frac{d^{2}z_{2}}{\pi}\exp \left[ -|z_{2}|^{2}\right] z_{2}^{n}z_{2}^{\ast m}\left( -1\right) ^{n}\nonumber \\ & =\sum_{n=0}^{\infty}n!K_{n,n}\left( -1\right) ^{n}=0; \label{16.5} \end{align} this is the constraint on the coefficients $K_{n,m}$ in (\ref{16.4}), i.e., the admissibility condition for $\left \vert \psi \right \rangle $; note that only the diagonal coefficients $K_{n,n}$ enter it. Retaining only the diagonal terms, Eq. (\ref{16.4}) takes the form \begin{equation} \left \vert \psi \right \rangle =\sum_{n=0}^{\infty}n!K_{n,n}\left \vert n,n\right \rangle . \label{16.6} \end{equation} To derive the qualified mother wavelet $\psi \left( \eta \right) =$ $\left \langle \eta \right \vert \left.
\psi \right \rangle $ from $\left \vert \psi \right \rangle $, noticing Eqs. (\ref{13.9}) and (\ref{16.6}) we have \begin{align} \psi \left( \eta \right) & =e^{-\left \vert \eta \right \vert ^{2}/2}\sum _{n=0}^{\infty}K_{n,n}H_{n,n}\left( \eta^{\ast},\eta \right) \left( -1\right) ^{n}\nonumber \\ & =e^{-\left \vert \eta \right \vert ^{2}/2}\sum_{n=0}^{\infty}n!K_{n,n} L_{n}\left( \left \vert \eta \right \vert ^{2}\right) , \label{16.7} \end{align} where $L_{n}\left( x\right) $ is the Laguerre polynomial. In this case, we may name the wavelets in Eq. (\ref{16.7}) the Laguerre--Gaussian mother wavelets, in analogy with the Laguerre--Gaussian modes in optical propagation. For example: (1) Taking $K_{0,0}=\frac{1}{2},$ $K_{1,1}=\frac{1}{2},$ $K_{n,n}=0$ for $n\geqslant2,$ we obtain \begin{equation} \left \vert \psi \right \rangle _{1}=\frac{1}{2}\left( 1+a_{1}^{\dagger} a_{2}^{\dagger}\right) \left \vert 00\right \rangle , \label{16.8} \end{equation} which differs from the direct-product state $\left( 1-a_{1}^{\dagger 2}\right) \left \vert 0\right \rangle _{1}\otimes \left( 1-a_{2}^{\dagger 2}\right) \left \vert 0\right \rangle _{2}$. It then follows from Eq. (\ref{16.7}) that \begin{equation} \psi_{1}\left( \eta \right) \equiv \frac{1}{2}\left \langle \eta \right \vert \left( \left \vert 00\right \rangle +\left \vert 11\right \rangle \right) =e^{-\frac{1}{2}\left \vert \eta \right \vert ^{2}}\{1-\frac{1}{2}\left \vert \eta \right \vert ^{2}\}, \label{16.9} \end{equation} which differs from $e^{-\left( x^{2}+y^{2}\right) /2}(1-x^{2})\left( 1-y^{2}\right) $, the direct product of two 1D Mexican hat wavelets (see also the difference between Figs. 9 and 10).
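The admissibility constraint (\ref{16.5}) and the Laguerre expansion (\ref{16.7}) are easy to check by computer algebra. A minimal sympy sketch (the helper names \texttt{mother\_polynomial} and \texttt{admissible} are ours), verifying that the coefficient set of Eq. (\ref{16.8}) is admissible and reproduces the polynomial part of Eq. (\ref{16.9}):

```python
import sympy as sp
from math import factorial

x = sp.symbols('x', nonnegative=True)        # x stands for |eta|^2

def mother_polynomial(K):
    """Polynomial part of psi(eta) from Eq. (16.7): sum_n n! K_{n,n} L_n(|eta|^2)."""
    return sp.expand(sum(factorial(n) * K[n] * sp.laguerre(n, x)
                         for n in range(len(K))))

def admissible(K):
    """Admissibility condition (16.5): sum_n n! K_{n,n} (-1)^n = 0."""
    return sum(factorial(n) * K[n] * (-1)**n for n in range(len(K))) == 0

K1 = [sp.Rational(1, 2), sp.Rational(1, 2)]  # psi_1 of Eq. (16.8)
assert admissible(K1)
assert sp.expand(mother_polynomial(K1) - (1 - x/2)) == 0   # polynomial of Eq. (16.9)
```
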
\begin{figure} \caption{{The Laguerre-Gaussian mother wavelet }$\psi _{1}\left( \eta \right) ${.}} \label{Fig9} \end{figure} \begin{figure} \caption{{2D Mexican hat mother wavelet (Hermite Gaussian mother wavelet).}} \label{Fig10} \end{figure} (2) When $K_{0,0}=1$, $K_{1,1}=3$, $K_{2,2}=1$, $K_{n,n}=0$ for $n\geqslant3,$ we have (see Fig. 11) \begin{equation} \psi_{2}\left( \eta \right) \equiv \left( 6-7\left \vert \eta \right \vert ^{2}+\left \vert \eta \right \vert ^{4}\right) e^{-\frac{1}{2}\left \vert \eta \right \vert ^{2}}. \label{16.10} \end{equation} \begin{figure} \caption{Laguerre-Gaussian {mother wavelet }$\psi _{2}\left( \eta \right) .$} \label{Fig11} \end{figure} (3) When $K_{0,0}=1$, $K_{1,1}=1$, $K_{2,2}=3$, $K_{3,3}=1$, $K_{n,n}=0$ for $n\geqslant4,$ the mother wavelet $\psi_{3}\left( \eta \right) $ (see Fig. 12) reads \begin{equation} \psi_{3}\left( \eta \right) =\left( 14-31\left \vert \eta \right \vert ^{2}+12\left \vert \eta \right \vert ^{4}-\left \vert \eta \right \vert ^{6}\right) e^{-\frac{1}{2}\left \vert \eta \right \vert ^{2}}. \label{16.11} \end{equation} \begin{figure} \caption{Laguerre-Gaussian {mother wavelet }$\psi _{3}\left( \eta \right) .$} \label{Fig12} \end{figure} From the figures we can see that as long as the coefficients $K_{n,n}$ satisfy condition (\ref{16.5}), we can construct arbitrary complex mother wavelets by adding or removing coefficients, or by adjusting their values. Since only the diagonal coefficients $K_{n,m}$ with $m=n$ survive, the mother wavelets obtained are all circularly symmetric on the complex plane. Moreover, the CWT of a signal function $F\left( \eta \right) $ by $\psi$ is defined by \begin{equation} W_{\psi}F\left( \mu,\kappa \right) =\frac{1}{\mu}\int \frac{d^{2}\eta}{\pi }F\left( \eta \right) \psi^{\ast}\left( \frac{\eta-\kappa}{\mu}\right) .
\label{16.12} \end{equation} Using the $\left \langle \eta \right \vert $ representation we can treat it quantum mechanically, \begin{equation} W_{\psi}F\left( \mu,\kappa \right) =\frac{1}{\mu}\int \frac{d^{2}\eta}{\pi }\left \langle \psi \right \vert \left. \frac{\eta-\kappa}{\mu}\right \rangle \left \langle \eta \right \vert \left. F\right \rangle =\left \langle \psi \right \vert U_{2}\left( \mu,\kappa \right) \left \vert F\right \rangle , \label{16.13} \end{equation} where \begin{equation} U_{2}\left( \mu,\kappa \right) \equiv \frac{1}{\mu}\int \frac{d^{2}\eta}{\pi }\left \vert \frac{\eta-\kappa}{\mu}\right \rangle \left \langle \eta \right \vert ,\; \mu=e^{\lambda}, \label{16.14} \end{equation} is the two-mode squeezing-displacing operator. Using the IWOP technique we can calculate its normally ordered form, \begin{align} U_{2}\left( \mu,\kappa \right) & =\operatorname{sech}\lambda \colon \exp \{ \left( a_{1}^{\dagger}a_{2}^{\dagger}-a_{1}a_{2}\right) \tanh \lambda+\left( \operatorname{sech}\lambda-1\right) \left( a_{1}^{\dagger}a_{1} +a_{2}^{\dagger}a_{2}\right) \nonumber \\ & +\frac{1}{2}\left( \kappa^{\ast}a_{2}^{\dagger}-\kappa a_{1}^{\dagger }\right) \operatorname{sech}\lambda+\frac{1}{1+\mu^{2}}\left( \kappa^{\ast }a_{1}-\kappa a_{2}-\frac{1}{2}\left \vert \kappa \right \vert ^{2}\right) \} \colon. \label{16.15} \end{align} When $\kappa=0,$ it reduces to the usual normally ordered two-mode squeezing operator. Once the state vector $\left \langle \psi \right \vert $ corresponding to the mother wavelet is known, for any state $\left \vert F\right \rangle $ the matrix element $\left \langle \psi \right \vert U_{2}\left( \mu,\kappa \right) \left \vert F\right \rangle $ is just the wavelet transform of $F(\eta)$ with respect to $\left \langle \psi \right \vert .$ Therefore, various quantum optical field states can then be analyzed by their wavelet transforms.
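The remaining Laguerre--Gaussian examples can be checked the same way. A sympy sketch (helper names are ours) confirming that the diagonal coefficient sets reproduce the polynomials of Eqs. (\ref{16.10}) and (\ref{16.11}) (via $n!K_{n,n}$, the expansion (\ref{16.11}) corresponds to the diagonal set $(1,1,3,1)$), and that all three example wavelets satisfy the admissibility condition (\ref{16.1}) when integrated directly in polar coordinates:

```python
import sympy as sp
from math import factorial

x, r = sp.symbols('x r', nonnegative=True)   # x = |eta|^2, r = |eta|

def mother_polynomial(K):                    # polynomial part of Eq. (16.7)
    return sp.expand(sum(factorial(n) * K[n] * sp.laguerre(n, x)
                         for n in range(len(K))))

# Laguerre expansions reproduce Eqs. (16.10) and (16.11)
assert sp.expand(mother_polynomial([1, 3, 1]) - (6 - 7*x + x**2)) == 0
assert sp.expand(mother_polynomial([1, 1, 3, 1]) - (14 - 31*x + 12*x**2 - x**3)) == 0

# Direct check of the admissibility condition (16.1): the angular integral is
# trivial for circularly symmetric wavelets, so it suffices that
# int_0^oo r psi(r) dr = 0.
for K in ([sp.Rational(1, 2), sp.Rational(1, 2)], [1, 3, 1], [1, 1, 3, 1]):
    psi = mother_polynomial(K).subs(x, r**2) * sp.exp(-r**2/2)
    assert sp.integrate(r * psi, (r, 0, sp.oo)) == 0
```
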
\subsection{Parseval Theorem in CWT} In order to complete the CWT theory, we must ask if the corresponding Parseval theorem exists \cite{r40}. This is important since the inversion formula of the CWT follows as a corollary of this theorem. Since the CWT involves a two-mode squeezing transform, the corresponding Parseval theorem also differs from that of the direct product of two 1D wavelet transforms. Next let us prove the Parseval theorem for the CWT, \begin{equation} \int_{0}^{\infty}\frac{d\mu}{\mu^{3}}\int \frac{d^{2}\kappa}{\pi}W_{\psi} g_{1}\left( \mu,\kappa \right) W_{\psi}^{\ast}g_{2}\left( \mu,\kappa \right) =C_{\psi}^{\prime}\int \frac{d^{2}\eta}{\pi}g_{2}^{\ast}\left( \eta \right) g_{1}\left( \eta \right) , \label{16.16} \end{equation} where $\kappa=\kappa_{1}+i\kappa_{2},$ and \begin{equation} C_{\psi}^{\prime}=4\int_{0}^{\infty}\frac{d\left \vert \xi \right \vert }{\left \vert \xi \right \vert }\left \vert \psi \left( \xi \right) \right \vert ^{2}. \label{16.17} \end{equation} Here $\psi \left( \xi \right) $ is the complex Fourier transform of $\psi \left( \eta \right) ,\psi \left( \xi \right) =\left \langle \xi \right \vert \left. \psi \right \rangle =\int_{-\infty}^{\infty}\frac{d^{2}\eta}{\pi}\left \langle \xi \right \vert \left. \eta \right \rangle \left \langle \eta \right \vert \left. \psi \right \rangle $. According to (\ref{16.13}) and (\ref{16.14}) the quantum mechanical version of the Parseval theorem should be \begin{equation} \int_{0}^{\infty}\frac{d\mu}{\mu^{3}}\int \frac{d^{2}\kappa}{\pi}\left \langle \psi \right \vert U_{2}\left( \mu,\kappa \right) \left \vert g_{1}\right \rangle \left \langle g_{2}\right \vert U_{2}^{\dagger}\left( \mu,\kappa \right) \left \vert \psi \right \rangle =C_{\psi}^{\prime}\left \langle g_{2}\right. \left \vert g_{1}\right \rangle .
\label{16.18} \end{equation} Eq.(\ref{16.18}) indicates that once the state vector $\left \langle \psi \right \vert $ corresponding to the mother wavelet is known, for any two states $\left \vert g_{1}\right \rangle $ and $\left \vert g_{2}\right \rangle $, their overlap up to the factor $C_{\psi}^{\prime}$ (determined by (\ref{16.17})) is just the corresponding overlap of their CWTs in the ($\mu,\kappa$) parametric space. Next we prove Eq. (\ref{16.16}), or equivalently (\ref{16.18}), following the same procedure as the proof of Eq. (\ref{15.17}). We start with calculating $U_{2}^{\dagger}\left( \mu,\kappa \right) \left \vert \xi \right \rangle .$ Using (\ref{3.20}) and (\ref{16.14}), we have \begin{equation} U_{2}^{\dagger}\left( \mu,\kappa \right) \left \vert \xi \right \rangle =\frac{1}{\mu}\left \vert \frac{\xi}{\mu}\right \rangle e^{\frac{i}{\mu}\left( \xi_{1}\kappa_{2}-\xi_{2}\kappa_{1}\right) }, \label{16.19} \end{equation} it then follows \begin{align} & \int \frac{d^{2}\kappa}{\pi}U_{2}^{\dagger}\left( \mu,\kappa \right) \left \vert \xi^{\prime}\right \rangle \left \langle \xi \right \vert U_{2}\left( \mu,\kappa \right) \nonumber \\ & =\frac{1}{\mu^{2}}\int \frac{d^{2}\kappa}{\pi}e^{\frac{i}{\mu}\left[ \left( \xi_{1}^{\prime}-\xi_{1}\right) \kappa_{2}+\left( \xi_{2}-\xi _{2}^{\prime}\right) \kappa_{1}\right] }\left \vert \frac{\xi^{\prime}}{\mu }\right \rangle \left \langle \frac{\xi}{\mu}\right \vert \nonumber \\ & =4\pi \left \vert \frac{\xi}{\mu}\right \rangle \left \langle \frac{\xi}{\mu }\right \vert \delta \left( \xi_{1}^{\prime}-\xi_{1}\right) \delta \left( \xi_{2}-\xi_{2}^{\prime}\right) . \label{16.20} \end{align} Using the completeness of $\left \vert \xi \right \rangle $ and (\ref{16.20}) the left-hand side (LHS) of (\ref{16.18}) can be rewritten as \begin{align} & \text{LHS of Eq.(\ref{16.18})}\nonumber \\ & =\int_{0}^{\infty}\frac{d\mu}{\mu^{3}}\int \frac{d^{2}\kappa d^{2}\xi d^{2}\xi^{\prime}}{\pi^{3}}\left \langle \psi \right \vert \left.
\xi \right \rangle \nonumber \\ & \times \left \langle \xi \right \vert U_{2}\left( \mu,\kappa \right) \left \vert g_{1}\right \rangle \left \langle g_{2}\right \vert U_{2}^{\dagger }\left( \mu,\kappa \right) \left \vert \xi^{\prime}\right \rangle \left \langle \xi^{\prime}\right \vert \left. \psi \right \rangle \nonumber \\ & =4\int_{0}^{\infty}\frac{d\mu}{\mu^{3}}\int \frac{d^{2}\xi}{\pi}\left \vert \psi \left( \xi \right) \right \vert ^{2}\left \langle g_{2}\right. \left \vert \frac{\xi}{\mu}\right \rangle \left \langle \frac{\xi}{\mu}\right. \left \vert g_{1}\right \rangle \nonumber \\ & =\int \frac{d^{2}\xi}{\pi}\left \{ 4\int_{0}^{\infty}\frac{d\mu}{\mu }\left \vert \psi \left( \mu \xi \right) \right \vert ^{2}\right \} \left \langle g_{2}\right. \left \vert \xi \right \rangle \left \langle \xi \right. \left \vert g_{1}\right \rangle , \label{16.21} \end{align} where the value of the integral in $\{ \cdots \}$ is actually $\xi$-independent. Since the mother wavelet $\psi \left( \eta \right) $ in Eq.(\ref{16.7}) is a function of $\left \vert \eta \right \vert $ only, $\psi \left( \xi \right) $ is also a function of $\left \vert \xi \right \vert $ only. In fact, using Eqs.(\ref{16.7}) and (\ref{3.20}), we have \begin{equation} \psi \left( \xi \right) =e^{-\left \vert \xi \right \vert ^{2}/2}\sum _{n=0}^{\infty}K_{n,n}H_{n,n}\left( \left \vert \xi \right \vert ,\left \vert \xi \right \vert \right) , \label{16.22} \end{equation} where we have used the integral formula \begin{equation} \int \frac{d^{2}z}{\pi}e^{\zeta \left \vert z\right \vert ^{2}+\xi z+\eta z^{\ast }}=-\frac{1}{\zeta}e^{-\frac{\xi \eta}{\zeta}},\quad \text{Re}\left( \zeta \right) <0. \label{16.23} \end{equation} So we can rewrite (\ref{16.21}) as \begin{equation} \text{LHS of (\ref{16.18})}=C_{\psi}^{\prime}\int \frac{d^{2}\xi}{\pi }\left \langle g_{2}\right. \left \vert \xi \right \rangle \left \langle \xi \right. \left \vert g_{1}\right \rangle =C_{\psi}^{\prime}\left \langle g_{2}\right.
\left \vert g_{1}\right \rangle , \label{16.24} \end{equation} where \begin{equation} C_{\psi}^{\prime}=4\int_{0}^{\infty}\frac{d\mu}{\mu}\left \vert \psi \left( \mu \xi \right) \right \vert ^{2}=4\int_{0}^{\infty}\frac{d\left \vert \xi \right \vert }{\left \vert \xi \right \vert }\left \vert \psi \left( \xi \right) \right \vert ^{2}. \label{16.25} \end{equation} This completes the proof of the Parseval theorem (\ref{16.18}) for the CWT. Here, we should emphasize that (\ref{16.18}) is not only different from the product of two 1D WTs, but also different from the usual WT in 2D. When $\left \vert g_{2}\right \rangle =\left \vert \eta \right \rangle ,$ by using (\ref{16.14}) we see $\left \langle \eta \right \vert U_{2}^{\dagger}\left( \mu,\kappa \right) \left \vert \psi \right \rangle =\frac{1}{\mu}\psi \left( \frac{\eta-\kappa}{\mu}\right) ,$ then substituting it into (\ref{16.18}) yields \begin{equation} g_{1}\left( \eta \right) =\frac{1}{C_{\psi}^{\prime}}\int_{0}^{\infty} \frac{d\mu}{\mu^{3}}\int \frac{d^{2}\kappa}{\pi \mu}W_{\psi}g_{1}\left( \mu,\kappa \right) \psi \left( \frac{\eta-\kappa}{\mu}\right) , \label{16.26} \end{equation} which is just the inverse transform of the CWT. In particular, when $\left \vert g_{1}\right \rangle =$ $\left \vert g_{2}\right \rangle ,$ Eq. (\ref{16.18}) reduces to \begin{align} \int_{0}^{\infty}\frac{d\mu}{\mu^{3}}\int \frac{d^{2}\kappa}{\pi}\left \vert W_{\psi}g_{1}\left( \mu,\kappa \right) \right \vert ^{2} & =C_{\psi} ^{\prime}\int \frac{d^{2}\eta}{\pi}\left \vert g_{1}\left( \eta \right) \right \vert ^{2},\nonumber \\ \text{or }\int_{0}^{\infty}\frac{d\mu}{\mu^{3}}\int \frac{d^{2}\kappa}{\pi }\left \vert \left \langle \psi \right \vert U_{2}\left( \mu,\kappa \right) \left \vert g_{1}\right \rangle \right \vert ^{2} & =C_{\psi}^{\prime }\left \langle g_{1}\right. \left \vert g_{1}\right \rangle , \label{16.27} \end{align} which is known as the isometry of energy.
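Two ingredients of the proof above can be spot-checked. First, for the wavelet $\psi_{1}(\eta)$ of Eq. (\ref{16.9}), whose complex Fourier transform is $\psi(\xi)=\frac{1}{2}\left\vert \xi \right\vert^{2}e^{-\left\vert \xi \right\vert^{2}/2}$ (as used in the special example below), the constant (\ref{16.17}) evaluates to $1/2$; second, the Gaussian integral formula (\ref{16.23}) can be verified numerically on a grid. A sketch (the sample parameter values are our own choices):

```python
import numpy as np
import sympy as sp

# --- Constant C'_psi of Eq. (16.17) for psi_1 ---
t = sp.symbols('t', positive=True)            # t stands for |xi|
psi_xi = t**2/2 * sp.exp(-t**2/2)             # Fourier transform of psi_1
C = 4 * sp.integrate(psi_xi**2 / t, (t, 0, sp.oo))
assert C == sp.Rational(1, 2)

# --- Gaussian integral formula (16.23) with zeta = -1 ---
# int d^2z/pi exp(zeta|z|^2 + xi z + eta z*) = -(1/zeta) exp(-xi*eta/zeta)
xi_, eta_ = 0.3 + 0.2j, 0.1 - 0.4j            # sample parameters (our choice)
u = np.linspace(-6.0, 6.0, 1201)
X, Y = np.meshgrid(u, u)
Z = X + 1j*Y
lhs = np.sum(np.exp(-np.abs(Z)**2 + xi_*Z + eta_*Z.conj())) * (u[1]-u[0])**2 / np.pi
assert abs(lhs - np.exp(xi_*eta_)) < 1e-6
```
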
\subsection{Orthogonal property of mother wavelet in parameter space} On the other hand, when $\left \vert g_{1}\right \rangle =\left \vert \eta \right \rangle ,$ $\left \vert g_{2}\right \rangle =\left \vert \eta^{\prime }\right \rangle $, Eq.(\ref{16.18}) becomes \begin{equation} \frac{1}{C_{\psi}^{\prime}}\int_{0}^{\infty}\frac{d\mu}{\mu^{5}}\int \frac{d^{2}\kappa}{\pi}\psi \left( \frac{\eta^{\prime}-\kappa}{\mu}\right) \psi^{\ast}\left( \frac{\eta-\kappa}{\mu}\right) =\pi \delta^{(2)}\left( \eta-\eta^{\prime}\right) , \label{16.28} \end{equation} which is a new orthogonality property of the mother wavelet in the parameter space spanned by $\left( \mu,\kappa \right) $. In a similar way, taking $\left \vert g_{1}\right \rangle =\left \vert g_{2}\right \rangle =\left \vert m,n\right \rangle ,$ a two-mode number state, with $\left \langle m,n\right. \left \vert m,n\right \rangle =1,$ we have \begin{equation} \int_{0}^{\infty}\frac{d\mu}{\mu^{3}}\int \frac{d^{2}\kappa}{\pi}\left \vert \left \langle \psi \right \vert U_{2}\left( \mu,\kappa \right) \left \vert m,n\right \rangle \right \vert ^{2}=C_{\psi}^{\prime}, \label{16.29} \end{equation} or taking $\left \vert g_{1}\right \rangle =\left \vert g_{2}\right \rangle =\left \vert z_{1},z_{2}\right \rangle ,$ where $\left \vert z\right \rangle =\exp \left( -\left \vert z\right \vert ^{2}/2+za^{\dagger}\right) \left \vert 0\right \rangle $ is a coherent state, we have \begin{equation} \int_{0}^{\infty}\frac{d\mu}{\mu^{3}}\int \frac{d^{2}\kappa}{\pi}\left \vert \left \langle \psi \right \vert U_{2}\left( \mu,\kappa \right) \left \vert z_{1},z_{2}\right \rangle \right \vert ^{2}=C_{\psi}^{\prime}. \label{16.30} \end{equation} Next we examine a special example.
When the mother wavelet is $\psi_{1}\left( \eta \right) $ in (\ref{16.9}), using (\ref{3.20}) we have $\psi \left( \xi \right) =\frac{1}{2}\left \vert \xi \right \vert ^{2}e^{-\frac{1} {2}\left \vert \xi \right \vert ^{2}},$ which leads to $C_{\psi}^{\prime} =\int_{0}^{\infty}\left \vert \xi \right \vert ^{3}e^{-\left \vert \xi \right \vert ^{2}}d\left \vert \xi \right \vert =\frac{1}{2}.$ Thus for $\psi_{1}\left( \eta \right) $, we see \begin{equation} 2\int_{0}^{\infty}\frac{d\mu}{\mu^{5}}\int \frac{d^{2}\kappa}{\pi}\psi _{1}\left( \frac{\eta^{\prime}-\kappa}{\mu}\right) \psi_{1}^{\ast}\left( \frac{\eta-\kappa}{\mu}\right) =\pi \delta^{(2)}\left( \eta-\eta^{\prime }\right) . \label{16.31} \end{equation} Eq. (\ref{16.31}) can be checked as follows. Using (\ref{16.9}) and the integral formula \begin{align} & \int_{0}^{\infty}u\left( 1-\frac{ux^{2}}{2}\right) \left( 1-\frac {uy^{2}}{2}\right) e^{-u\frac{x^{2}+y^{2}}{2}}du\nonumber \\ & =-\frac{4(x^{4}-4x^{2}y^{2}+y^{4})}{(x^{2}+y^{2})^{4}},\text{ }\operatorname{Re}\left( x^{2}+y^{2}\right) >0, \label{16.32} \end{align} we can put the left-hand side (LHS) of (\ref{16.31}) into \begin{equation} \text{LHS of (\ref{16.31})}=-\int \frac{d^{2}\kappa}{\pi}\frac{4(x^{4} -4x^{2}y^{2}+y^{4})}{(x^{2}+y^{2})^{4}}, \label{16.33} \end{equation} where $x^{2}=\left \vert \eta^{\prime}-\kappa \right \vert ^{2},$ $y^{2} =\left \vert \eta-\kappa \right \vert ^{2}.$ When $\eta^{\prime}=\eta,$ $x^{2}=y^{2},$ \begin{equation} \text{LHS of (\ref{16.31})}=\int \allowbreak \frac{d^{2}\kappa}{2\pi \left \vert \kappa-\eta \right \vert ^{4}}=\int_{0}^{\infty}\allowbreak \int_{0}^{2\pi} \frac{drd\theta}{2\pi r^{3}}\rightarrow \infty. 
\label{16.34} \end{equation} On the other hand, when $\eta \neq \eta^{\prime}$, noticing that \begin{align} x^{2} & =\left( \eta_{1}^{\prime}-\kappa_{1}\right) ^{2}+\left( \eta _{2}^{\prime}-\kappa_{2}\right) ^{2},\nonumber \\ y^{2} & =\left( \eta_{1}-\kappa_{1}\right) ^{2}+\left( \eta_{2} -\kappa_{2}\right) ^{2}, \label{16.35} \end{align} which leads to $\mathtt{d}x^{2}\mathtt{d}y^{2}=4\left \vert J\right \vert \mathtt{d}\kappa_{1}\mathtt{d}\kappa_{2}$, where $J\left( x,y\right) =\left \vert \begin{array} [c]{cc} \kappa_{1}-\eta_{1}^{\prime} & \kappa_{2}-\eta_{2}^{\prime}\\ \kappa_{1}-\eta_{1} & \kappa_{2}-\eta_{2} \end{array} \right \vert $. As a result of (\ref{16.35}), (\ref{16.33}) reduces to \begin{equation} \text{LHS of (\ref{16.31})}=-4\int_{-\infty}^{\infty}\frac{dxdy}{\pi} \frac{xy(x^{4}-4x^{2}y^{2}+y^{4})}{\left \vert J\right \vert (x^{2}+y^{2})^{4} }=0\text{,} \label{16.36} \end{equation} where we have noticed that $J\left( x,y\right) $ is a function of $\left( x^{2},y^{2}\right) .$ Thus we have \begin{equation} \text{LHS of (\ref{16.31})}=\left \{ \begin{array} [c]{cc} \infty, & \eta=\eta^{\prime},\\ 0, & \eta \neq \eta^{\prime}. \end{array} \right. =\text{RHS of (\ref{16.31}).} \label{16.37} \end{equation} \subsection{CWT and Entangled Husimi distribution} Recall that in Ref.\cite{r41} the so-called entangled Husimi operator $\Delta_{h}\left( \sigma,\gamma,\kappa \right) $, which is endowed with a definite physical meaning, has been introduced, and it is found that $\Delta_{h}\left( \sigma,\gamma,\kappa \right) $ is the projector onto the two-mode squeezed coherent state $\left \vert \sigma,\gamma \right \rangle _{\kappa}$, i.e., $\Delta_{h}\left( \sigma,\gamma,\kappa \right) =\left \vert \sigma,\gamma \right \rangle _{\kappa \kappa}\left \langle \sigma,\gamma \right \vert $.
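Before turning to the Husimi distribution, the integral formula (\ref{16.32}) used in the check of Eq. (\ref{16.31}) can be reproduced by computer algebra. A sympy sketch:

```python
import sympy as sp

u = sp.symbols('u', positive=True)
x, y = sp.symbols('x y', positive=True)

# Integral formula (16.32):
# int_0^oo u (1 - u x^2/2)(1 - u y^2/2) exp(-u(x^2+y^2)/2) du
lhs = sp.integrate(u*(1 - u*x**2/2)*(1 - u*y**2/2)*sp.exp(-u*(x**2 + y**2)/2),
                   (u, 0, sp.oo))
rhs = -4*(x**4 - 4*x**2*y**2 + y**4)/(x**2 + y**2)**4
assert sp.simplify(lhs - rhs) == 0
```
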
The entangled Husimi operator $\Delta_{h}\left( \sigma,\gamma,\kappa \right) $ and the entangled Husimi distribution $F_{h}\left( \sigma,\gamma ,\kappa \right) $ of a quantum state $\left \vert \psi \right \rangle $ are given by \begin{equation} \Delta_{h}\left( \sigma,\gamma,\kappa \right) =4\int d^{2}\sigma^{\prime }d^{2}\gamma^{\prime}\Delta_{w}\left( \sigma^{\prime},\gamma^{\prime}\right) \exp \left \{ -\kappa \left \vert \sigma^{\prime}-\sigma \right \vert ^{2}-\frac {1}{\kappa}\left \vert \gamma^{\prime}-\gamma \right \vert ^{2}\right \} , \label{e4} \end{equation} and \begin{equation} F_{h}\left( \sigma,\gamma,\kappa \right) =4\int d^{2}\sigma^{\prime} d^{2}\gamma^{\prime}F_{w}\left( \sigma^{\prime},\gamma^{\prime}\right) \exp \left \{ -\kappa \left \vert \sigma^{\prime}-\sigma \right \vert ^{2}-\frac {1}{\kappa}\left \vert \gamma^{\prime}-\gamma \right \vert ^{2}\right \} , \label{e5} \end{equation} respectively, where $F_{w}\left( \sigma^{\prime},\gamma^{\prime}\right) =\left \langle \psi \right \vert \Delta_{w}\left( \sigma^{\prime},\gamma ^{\prime}\right) \left \vert \psi \right \rangle $ is the two-mode Wigner function, with $\Delta_{w}\left( \sigma^{\prime},\gamma^{\prime}\right) $ being the two-mode Wigner operator. Thus we are naturally led to studying the entangled Husimi distribution function from the viewpoint of wavelet transformation. In this subsection, we shall extend the relation between wavelet transformation and the Wigner-Husimi distribution function to the entangled case, that is to say, we employ the CWT to investigate the entangled Husimi distribution function (EHDF) by establishing the relation between the CWT and the EHDF. This is a convenient approach for calculating various entangled Husimi distribution functions of miscellaneous two-mode quantum states.
\subsubsection{CWT and its quantum mechanical version} In Ref.\cite{r39}, the CWT has been proposed, i.e., the CWT of a complex signal function $g\left( \eta \right) $ by $\psi$ is defined by \begin{equation} W_{\psi}g\left( \mu,z\right) =\frac{1}{\mu}\int \frac{d^{2}\eta}{\pi}g\left( \eta \right) \psi^{\ast}\left( \frac{\eta-z}{\mu}\right) , \label{e6} \end{equation} whose admissibility condition for mother wavelets, $\int \frac{d^{2}\eta}{2\pi }\psi \left( \eta \right) =0,$ is examined in the entangled state representation $\left \langle \eta \right \vert $, and a family of new mother wavelets (named the Laguerre--Gaussian wavelets) is found to match the CWT \cite{r39}. In fact, by introducing the bipartite entangled state representation $\left \langle \eta=\eta_{1}+\mathtt{i}\eta_{2}\right \vert ,$ we can treat (\ref{e6}) quantum mechanically, \begin{equation} W_{\psi}g\left( \mu,z\right) =\frac{1}{\mu}\int \frac{d^{2}\eta}{\pi }\left \langle \psi \right \vert \left. \frac{\eta-z}{\mu}\right \rangle \left \langle \eta \right \vert \left. g\right \rangle =\left \langle \psi \right \vert U_{2}\left( \mu,z\right) \left \vert g\right \rangle , \label{e8} \end{equation} where $z=z_{1}+iz_{2}\in C,$ $0<\mu \in R,$ $g\left( \eta \right) \equiv \left \langle \eta \right \vert \left. g\right \rangle $ and $\psi \left( \eta \right) =\left \langle \eta \right \vert \left. \psi \right \rangle $ are the wave functions of the state vector $\left \vert g\right \rangle $ and of the mother wavelet state vector $\left \vert \psi \right \rangle $ in the $\left \langle \eta \right \vert $ representation, respectively, and \begin{equation} U_{2}\left( \mu,z\right) \equiv \frac{1}{\mu}\int \frac{d^{2}\eta}{\pi }\left \vert \frac{\eta-z}{\mu}\right \rangle \left \langle \eta \right \vert ,\; \mu=e^{\lambda}, \label{e9} \end{equation} is the two-mode squeezing-displacing operator.
Noticing that the two-mode squeezing operator has its natural expression in the $\left \langle \eta \right \vert $ representation (\ref{3.24}), which is different from the direct product of two single-mode squeezing (dilation) operators, and that the two-mode squeezed state is simultaneously an entangled state, we can put Eq.(\ref{e9}) into the following form, \begin{equation} U_{2}\left( \mu,z\right) =S_{2}\left( \mu \right) \mathfrak{D}\left( z\right) , \label{e11} \end{equation} where $\mathfrak{D}\left( z\right) $ is a two-mode displacement operator, $\mathfrak{D}\left( z\right) \left \vert \eta \right \rangle =\left \vert \eta-z\right \rangle $ and \begin{align} \mathfrak{D}\left( z\right) & =\int \frac{d^{2}\eta}{\pi}\left \vert \eta-z\right \rangle \left \langle \eta \right \vert \nonumber \\ & =\exp \left[ iz_{1}\frac{P_{1}-P_{2}}{\sqrt{2}}-iz_{2}\frac{Q_{1}+Q_{2} }{\sqrt{2}}\right] \nonumber \\ & =D_{1}\left( -z/2\right) D_{2}\left( z^{\ast}/2\right) . \label{e12} \end{align} It then follows that the quantum mechanical version of the CWT is \begin{equation} W_{\psi}g\left( \mu,z\right) =\left \langle \psi \right \vert S_{2}\left( \mu \right) \mathfrak{D}\left( z\right) \left \vert g\right \rangle =\left \langle \psi \right \vert S_{2}\left( \mu \right) D_{1}\left( -z/2\right) D_{2}\left( z^{\ast}/2\right) \left \vert g\right \rangle . \label{e13} \end{equation} Eq.(\ref{e13}) indicates that the CWT can be put into a matrix element in the $\left \langle \eta \right \vert $ representation of the two-mode displacing and the two-mode squeezing operators in Eq.(\ref{e11}) between the mother wavelet state vector $\left \vert \psi \right \rangle $ and the state vector $\left \vert g\right \rangle $ to be transformed.
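The coefficient matching behind Eq. (\ref{e12}) can be verified numerically: the quadrature-form exponent of $\mathfrak{D}\left( z\right) $ equals the exponent of $D_{1}\left( -z/2\right) D_{2}\left( z^{\ast}/2\right) $, with $D\left( \alpha \right) =\exp \left( \alpha a^{\dagger}-\alpha^{\ast}a\right) $, as a matrix identity in a truncated Fock space. A numpy sketch (the truncation size and sample $z$ are our choices):

```python
import numpy as np

N = 12                                        # Fock-space truncation (our choice)
a = np.diag(np.sqrt(np.arange(1, N)), 1)      # single-mode annihilation operator
ad = a.conj().T
I = np.eye(N)
Q = (a + ad)/np.sqrt(2)                       # Q = (a + a^dag)/sqrt(2)
P = (a - ad)/(1j*np.sqrt(2))                  # P = (a - a^dag)/(i sqrt(2))

kron = np.kron
a1, a2 = kron(a, I), kron(I, a)
Q1, Q2, P1, P2 = kron(Q, I), kron(I, Q), kron(P, I), kron(I, P)

z1, z2 = 0.7, -0.3                            # sample displacement (our choice)
z = z1 + 1j*z2
# Exponent of D(z) in the quadrature form of Eq. (e12) ...
X = 1j*z1*(P1 - P2)/np.sqrt(2) - 1j*z2*(Q1 + Q2)/np.sqrt(2)
# ... equals the exponent of D_1(-z/2) D_2(z*/2)
Y = ((-z/2)*a1.conj().T + (z.conjugate()/2)*a1
     + (z.conjugate()/2)*a2.conj().T + (-z/2)*a2)
assert np.allclose(X, Y)
```

Since the two exponents agree as matrices, so do their exponentials, which is the content of the last equality in Eq. (\ref{e12}).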
Once the state vector $\left \langle \psi \right \vert $ serving as the mother wavelet is chosen, for any state $\left \vert g\right \rangle $ the matrix element $\left \langle \psi \right \vert U_{2}\left( \mu,z\right) \left \vert g\right \rangle $ is just the wavelet transform of $g(\eta)$ with respect to $\left \langle \psi \right \vert .$ Therefore, various quantum optical field states can then be analyzed by their wavelet transforms. \subsubsection{Relation between CWT and EHDF} In the following we shall show that the EHDF of a quantum state $\left \vert \psi \right \rangle $ can be obtained by making a complex wavelet transform of the Gaussian function $e^{-\left \vert \eta \right \vert ^{2}/2},$ i.e., \begin{equation} \left \langle \psi \right \vert \Delta_{h}\left( \sigma,\gamma,\kappa \right) \left \vert \psi \right \rangle =e^{-\frac{1}{\kappa}\left \vert \gamma \right \vert ^{2}}\left \vert \int \frac{d^{2}\eta}{\sqrt{\kappa}\pi}e^{-\left \vert \eta \right \vert ^{2}/2}\psi^{\ast}\left( \frac{\eta-z}{\sqrt{\kappa}}\right) \right \vert ^{2}, \label{e14} \end{equation} where $\mu=e^{\lambda}=\sqrt{\kappa},$ $z=z_{1}+iz_{2},$ and \begin{align} z_{1} & =\frac{\cosh \lambda}{1+\kappa}\left[ \gamma^{\ast}-\gamma -\kappa \left( \sigma^{\ast}+\sigma \right) \right] ,\label{e15}\\ z_{2} & =\frac{i\cosh \lambda}{1+\kappa}\left[ \gamma+\gamma^{\ast} +\kappa \left( \sigma-\sigma^{\ast}\right) \right] , \label{e16} \end{align} and $\Delta_{h}\left( \sigma,\gamma,\kappa \right) $ is the operator which we name the entangled Husimi operator, \begin{align} \Delta_{h}\left( \sigma,\gamma,\kappa \right) & =\frac{4\kappa}{\left( 1+\kappa \right) ^{2}}\colon \exp \left \{ -\frac{\left( a_{1}+a_{2}^{\dag }-\gamma \right) \left( a_{1}^{\dag}+a_{2}-\gamma^{\ast}\right) }{1+\kappa }\right. \nonumber \\ & -\left. \frac{\kappa \left( a_{1}-a_{2}^{\dag}-\sigma \right) \left( a_{1}^{\dag}-a_{2}-\sigma^{\ast}\right) }{1+\kappa}\right \} \colon.
\label{e17} \end{align} $\left \langle \psi \right \vert \Delta_{h}\left( \sigma,\gamma,\kappa \right) \left \vert \psi \right \rangle $ is the entangled Husimi distribution function. \textbf{Proof of Eq.(\ref{e14}).} When the state to be transformed is $\left \vert g\right \rangle =\left \vert 00\right \rangle $ (the two-mode vacuum state), by noticing that $\left \langle \eta \right. \left \vert 00\right \rangle =e^{-\left \vert \eta \right \vert ^{2}/2},$ we can express Eq.(\ref{e8}) as \begin{equation} \frac{1}{\mu}\int \frac{d^{2}\eta}{\pi}e^{-\left \vert \eta \right \vert ^{2} /2}\psi^{\ast}\left( \frac{\eta-z}{\mu}\right) =\left \langle \psi \right \vert U_{2}\left( \mu,z\right) \left \vert 00\right \rangle . \label{e18} \end{equation} To combine the CWTs with transforms of quantum states more tightly and clearly, using the IWOP technique we can directly perform the integral in Eq.(\ref{e9}) \cite{r42} \begin{align} U_{2}\left( \mu,z\right) & =\operatorname{sech}\lambda \exp \left[ -\frac{1}{2\left( 1+\mu^{2}\right) }\left \vert z\right \vert ^{2} +a_{1}^{\dagger}a_{2}^{\dagger}\tanh \lambda+\frac{1}{2}\left( z^{\ast} a_{2}^{\dagger}-za_{1}^{\dagger}\right) \operatorname{sech}\lambda \right] \nonumber \\ & \times \exp \left[ \left( a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}\right) \ln \operatorname{sech}\lambda \right] \exp \left( \frac{z^{\ast}a_{1}-za_{2} }{1+\mu^{2}}-a_{1}a_{2}\tanh \lambda \right) . \label{e19} \end{align} where we have set $\mu=e^{\lambda}$, $\operatorname{sech}\lambda=\frac{2\mu }{1+\mu^{2}}$, $\tanh \lambda=\frac{\mu^{2}-1}{\mu^{2}+1}$, and we have used the operator identity $e^{ga^{\dagger}a}=\colon \exp \left[ \left( e^{g}-1\right) a^{\dagger}a\right] \colon$. In particular, when $z=0,$ $U_{2}\left( \mu,z=0\right) $ becomes the usual normally ordered two-mode squeezing operator $S_{2}\left( \mu \right) $.
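Two ingredients of Eq. (\ref{e19}) can be checked directly: the operator identity $e^{ga^{\dagger}a}=\colon \exp \left[ \left( e^{g}-1\right) a^{\dagger}a\right] \colon$, which on Fock states reduces to the binomial theorem, and the hyperbolic parametrization $\operatorname{sech}\lambda=\frac{2\mu}{1+\mu^{2}}$, $\tanh \lambda=\frac{\mu^{2}-1}{\mu^{2}+1}$ with $\mu=e^{\lambda}$. A sketch (the sample value of $g$ is our choice):

```python
from math import comb, exp, isclose
import sympy as sp

# Operator identity e^{g a^dag a} = :exp[(e^g - 1) a^dag a]: on Fock states |n>:
# a^dag^k a^k |n> = n!/(n-k)! |n>, so the normally ordered series sums to
# sum_k (e^g - 1)^k C(n,k) = e^{g n}  (binomial theorem)
g = 0.37                                      # sample value (our choice)
for n in range(15):
    series = sum((exp(g) - 1)**k * comb(n, k) for k in range(n + 1))
    assert isclose(series, exp(g*n), rel_tol=1e-12)

# Hyperbolic parametrization used in Eq. (e19), with mu = e^lambda
lam = sp.symbols('lambda', real=True)
mu = sp.exp(lam)
assert sp.simplify(sp.sech(lam).rewrite(sp.exp) - 2*mu/(1 + mu**2)) == 0
assert sp.simplify(sp.tanh(lam).rewrite(sp.exp) - (mu**2 - 1)/(mu**2 + 1)) == 0
```
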
From Eq.(\ref{e19}) it then follows that \begin{align} U_{2}\left( \mu,z\right) \left \vert 00\right \rangle & =\operatorname{sech} \lambda \exp \left \{ -\frac{\left( z_{1}-iz_{2}\right) \left( z_{1} +iz_{2}\right) }{2\left( 1+\mu^{2}\right) }+a_{1}^{\dagger}a_{2}^{\dagger }\tanh \lambda \right. \nonumber \\ & \left. +\frac{1}{2}\left[ \left( z_{1}-iz_{2}\right) a_{2}^{\dagger }-\left( z_{1}+iz_{2}\right) a_{1}^{\dagger}\right] \operatorname{sech} \lambda \right \} \left \vert 00\right \rangle . \label{e20} \end{align} Substituting Eqs.(\ref{e15}), (\ref{e16}) and $\tanh \lambda=\frac{\kappa -1}{\kappa+1},$ $\cosh \lambda=\frac{1+\kappa}{2\sqrt{\kappa}}$ into Eq.(\ref{e20}) yields \begin{align} & e^{-\frac{1}{2\kappa}\left \vert \gamma \right \vert ^{2}-\frac{\sigma \gamma^{\ast}-\gamma \sigma^{\ast}}{2\left( \kappa+1\right) }}U_{2}\left( \mu,z_{1},z_{2}\right) \left \vert 00\right \rangle \nonumber \\ & =\frac{2\sqrt{\kappa}}{1+\kappa}\exp \left \{ -\frac{\left \vert \gamma \right \vert ^{2}+\kappa \left \vert \sigma \right \vert ^{2}}{2\left( \kappa+1\right) }+\frac{\kappa \sigma+\gamma}{1+\kappa}a_{1}^{\dagger} +\frac{\gamma^{\ast}-\kappa \sigma^{\ast}}{1+\kappa}a_{2}^{\dagger} +a_{1}^{\dagger}a_{2}^{\dagger}\frac{\kappa-1}{\kappa+1}\right \} \allowbreak \left \vert 00\right \rangle \left. \equiv \right. \left \vert \sigma,\gamma \right \rangle _{\kappa}, \label{e21} \end{align} then the CWT of Eq.(\ref{e18}) can be further expressed as \begin{equation} e^{-\frac{1}{2\kappa}\left \vert \gamma \right \vert ^{2}-\frac{\sigma \gamma^{\ast}-\gamma \sigma^{\ast}}{2\left( \kappa+1\right) }}\int \frac {d^{2}\eta}{\mu \pi}e^{-\left \vert \eta \right \vert ^{2}/2}\psi^{\ast}\left( \frac{\eta-z_{1}-iz_{2}}{\mu}\right) =\left \langle \psi \right. \left \vert \sigma,\gamma \right \rangle _{\kappa}. 
\label{e22} \end{equation} Using the normally ordered form of the vacuum state projector $\left \vert 00\right \rangle \left \langle 00\right \vert =\colon e^{-a_{1}^{\dagger} a_{1}-a_{2}^{\dagger}a_{2}}\colon,$ and the IWOP method as well as Eq.(\ref{e21}) we have \begin{align} \left \vert \sigma,\gamma \right \rangle _{\kappa \kappa}\left \langle \sigma,\gamma \right \vert & =\frac{4\kappa}{\left( 1+\kappa \right) ^{2} }\colon \exp \left[ -\frac{\left \vert \gamma \right \vert ^{2}+\kappa \left \vert \sigma \right \vert ^{2}}{\kappa+1}+\frac{\kappa \sigma+\gamma}{1+\kappa} a_{1}^{\dagger}+\frac{\gamma^{\ast}-\kappa \sigma^{\ast}}{1+\kappa} a_{2}^{\dagger}\right. \nonumber \\ & \left. +\frac{\kappa \sigma^{\ast}+\gamma^{\ast}}{1+\kappa}a_{1} +\frac{\gamma-\kappa \sigma}{1+\kappa}a_{2}+\frac{\kappa-1}{\kappa+1}\left( a_{1}^{\dagger}a_{2}^{\dagger}+a_{1}a_{2}\right) -a_{1}^{\dagger}a_{1} -a_{2}^{\dagger}a_{2}\right] \colon \nonumber \\ & =\frac{4\kappa}{\left( 1+\kappa \right) ^{2}}\colon \exp \left \{ -\frac{\left( a_{1}+a_{2}^{\dag}-\gamma \right) \left( a_{1}^{\dag} +a_{2}-\gamma^{\ast}\right) }{1+\kappa}\right. \nonumber \\ & -\left. \frac{\kappa \left( a_{1}-a_{2}^{\dag}-\sigma \right) \left( a_{1}^{\dag}-a_{2}-\sigma^{\ast}\right) }{1+\kappa}\right \} \colon \left. =\right. \Delta_{h}\left( \sigma,\gamma,\kappa \right) . \label{e23} \end{align} Now we explain why $\Delta_{h}\left( \sigma,\gamma,\kappa \right) $ is the entangled Husimi operator.
Using the formula for converting an operator $A$ into its Weyl ordering form \cite{Weyl} \begin{equation} A=4\int \frac{d^{2}\alpha d^{2}\beta}{\pi^{2}}\left \langle -\alpha ,-\beta \right \vert A\left \vert \alpha,\beta \right \rangle \genfrac{}{}{0pt}{}{:}{:} \exp \{2\left( \alpha^{\ast}a_{1}-a_{1}^{\dagger}\alpha+\beta^{\ast} a_{2}-a_{2}^{\dagger}\beta+a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}\right) \} \genfrac{}{}{0pt}{}{:}{:} , \label{e24} \end{equation} where the symbol $ \genfrac{}{}{0pt}{}{:}{:} \genfrac{}{}{0pt}{}{:}{:} $ denotes the Weyl ordering and $\left \vert \beta \right \rangle $ is the usual coherent state. Substituting Eq.(\ref{e23}) into Eq.(\ref{e24}) and performing the integration by virtue of the technique of integration within a Weyl ordered product of operators, we obtain \begin{align} \left \vert \sigma,\gamma \right \rangle _{\kappa \kappa}\left \langle \sigma,\gamma \right \vert & =\frac{16\kappa}{\left( 1+\kappa \right) ^{2} }\int \frac{d^{2}\alpha d^{2}\beta}{\pi^{2}}\left \langle -\alpha,-\beta \right \vert \colon \exp \left \{ -\frac{\left( a_{1}+a_{2}^{\dag} -\gamma \right) \left( a_{1}^{\dag}+a_{2}-\gamma^{\ast}\right) }{1+\kappa }\right. \nonumber \\ & \left.
-\frac{\kappa \left( a_{1}-a_{2}^{\dag}-\sigma \right) \left( a_{1}^{\dag}-a_{2}-\sigma^{\ast}\right) }{1+\kappa}\right \} \colon \left \vert \alpha,\beta \right \rangle \nonumber \\ & \times \genfrac{}{}{0pt}{}{:}{:} \exp \{2\left( \alpha^{\ast}a_{1}-a_{1}^{\dagger}\alpha+\beta^{\ast} a_{2}-a_{2}^{\dagger}\beta+a_{1}^{\dagger}a_{1}+a_{2}^{\dagger}a_{2}\right) \} \genfrac{}{}{0pt}{}{:}{:} \nonumber \\ & =4 \genfrac{}{}{0pt}{}{:}{:} \exp \left \{ -\kappa \left( a_{1}-a_{2}^{\dag}-\sigma \right) \left( a_{1}^{\dag}-a_{2}-\sigma^{\ast}\right) -\frac{1}{\kappa}\left( a_{1} +a_{2}^{\dag}-\gamma \right) \left( a_{1}^{\dag}+a_{2}-\gamma^{\ast}\right) \right \} \genfrac{}{}{0pt}{}{:}{:} , \label{e25} \end{align} where we have used the integral formula \begin{equation} \int \frac{d^{2}z}{\pi}\exp \left( \zeta \left \vert z\right \vert ^{2}+\xi z+\eta z^{\ast}\right) =-\frac{1}{\zeta}e^{-\frac{\xi \eta}{\zeta}},\text{Re}\left( \zeta \right) <0. \label{e26} \end{equation} Eq.(\ref{e25}) is the Weyl ordering form of $\left \vert \sigma,\gamma \right \rangle _{\kappa \kappa}\left \langle \sigma,\gamma \right \vert .$ Then according to Weyl quantization scheme we know the Weyl ordering form of two-mode Wigner operator is given by \begin{equation} \Delta_{w}\left( \sigma,\gamma \right) = \genfrac{}{}{0pt}{}{:}{:} \delta \left( a_{1}-a_{2}^{\dag}-\sigma \right) \delta \left( a_{1}^{\dag }-a_{2}-\sigma^{\ast}\right) \delta \left( a_{1}+a_{2}^{\dag}-\gamma \right) \delta \left( a_{1}^{\dag}+a_{2}-\gamma^{\ast}\right) \genfrac{}{}{0pt}{}{:}{:} , \label{e27} \end{equation} thus the classical corresponding function of a Weyl ordered operator is obtained by just replacing $a_{1}-a_{2}^{\dag}\rightarrow \sigma^{\prime} ,a_{1}+a_{2}^{\dag}\rightarrow \gamma^{\prime},$ i.e., \begin{align} & 4 \genfrac{}{}{0pt}{}{:}{:} \exp \left \{ -\kappa \left( a_{1}-a_{2}^{\dag}-\sigma \right) \left( a_{1}^{\dag}-a_{2}-\sigma^{\ast}\right) -\frac{1}{\kappa}\left( a_{1} +a_{2}^{\dag}-\gamma \right) 
\left( a_{1}^{\dag}+a_{2}-\gamma^{\ast}\right) \right \} \genfrac{}{}{0pt}{}{:}{:} \nonumber \\ & \rightarrow4\exp \left \{ -\kappa \left \vert \sigma^{\prime}-\sigma \right \vert ^{2}-\frac{1}{\kappa}\left \vert \gamma^{\prime}-\gamma \right \vert ^{2}\right \} , \label{e28} \end{align} and in this case the Weyl rule is expressed as \begin{align} \left \vert \sigma,\gamma \right \rangle _{\kappa \kappa}\left \langle \sigma,\gamma \right \vert & =4\int d^{2}\sigma^{\prime}d^{2}\gamma^{\prime} \genfrac{}{}{0pt}{}{:}{:} \delta \left( a_{1}-a_{2}^{\dag}-\sigma \right) \delta \left( a_{1}^{\dag }-a_{2}-\sigma^{\ast}\right) \delta \left( a_{1}+a_{2}^{\dag}-\gamma \right) \nonumber \\ & \times \delta \left( a_{1}^{\dag}+a_{2}-\gamma^{\ast}\right) \genfrac{}{}{0pt}{}{:}{:} \exp \left \{ -\kappa \left \vert \sigma^{\prime}-\sigma \right \vert ^{2}-\frac {1}{\kappa}\left \vert \gamma^{\prime}-\gamma \right \vert ^{2}\right \} \nonumber \\ & =4\int d^{2}\sigma^{\prime}d^{2}\gamma^{\prime}\Delta_{w}\left( \sigma^{\prime},\gamma^{\prime}\right) \exp \left \{ -\kappa \left \vert \sigma^{\prime}-\sigma \right \vert ^{2}-\frac{1}{\kappa}\left \vert \gamma^{\prime}-\gamma \right \vert ^{2}\right \} . \label{e29} \end{align} In reference to Eq.(\ref{e5}) in which the relation between the entangled Husimi function and the two-mode Wigner function is shown, we know that the right-hand side of Eq. (\ref{e29}) should be just the entangled Husimi operator, i.e. \begin{equation} \left \vert \sigma,\gamma \right \rangle _{\kappa \kappa}\left \langle \sigma,\gamma \right \vert =4\int d^{2}\sigma^{\prime}d^{2}\gamma^{\prime} \Delta_{w}\left( \sigma^{\prime},\gamma^{\prime}\right) \exp \left \{ -\kappa \left \vert \sigma^{\prime}-\sigma \right \vert ^{2}-\frac{1}{\kappa }\left \vert \gamma^{\prime}-\gamma \right \vert ^{2}\right \} =\Delta_{h}\left( \sigma,\gamma,\kappa \right) , \label{e30} \end{equation} thus Eq. (\ref{e14}) is proved by combining Eqs.(\ref{e30}) and (\ref{e22}). 
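The complex Gaussian integral formula (\ref{e26}), on which the IWOP integrations above rely, is easy to check numerically. A minimal sketch (the parameter values are illustrative assumptions, and the quadrature is a plain midpoint sum over a truncated plane):

```python
import cmath

def complex_gaussian(zeta, xi, eta, L=8.0, n=320):
    """Midpoint-rule estimate of (1/pi) * integral over the plane z = x + iy of
    exp(zeta*|z|^2 + xi*z + eta*conj(z)); requires Re(zeta) < 0 for convergence."""
    h = 2.0 * L / n
    total = 0j
    for i in range(n):
        x = -L + (i + 0.5) * h
        for j in range(n):
            y = -L + (j + 0.5) * h
            z = complex(x, y)
            total += cmath.exp(zeta * (x * x + y * y) + xi * z + eta * z.conjugate())
    return total * h * h / cmath.pi

# illustrative parameters (any Re(zeta) < 0 works)
zeta, xi, eta = -1.0, 0.3 + 0.2j, 0.1 - 0.4j
numeric = complex_gaussian(zeta, xi, eta)
closed_form = -cmath.exp(-xi * eta / zeta) / zeta   # right-hand side of Eq. (e26)
```

The two values agree to high accuracy; note that the condition $\text{Re}\left( \zeta \right) <0$ stated in Eq. (\ref{e26}) is exactly what makes the integrand decay.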
Thus we have further extended the relation between the wavelet transformation and the Wigner-Husimi distribution function to the entangled case. That is to say, we have proved that the entangled Husimi distribution function of a two-mode quantum state $\left \vert \psi \right \rangle $\ is just the modulus square of the complex wavelet transform of $e^{-\left \vert \eta \right \vert ^{2}/2}$\ with $\psi \left( \eta \right) $\ as the mother wavelet, up to a Gaussian factor, i.e., $\left \langle \psi \right \vert \Delta_{h}\left( \sigma ,\gamma,\kappa \right) \left \vert \psi \right \rangle =e^{-\frac{1}{\kappa }\left \vert \gamma \right \vert ^{2}}\left \vert \int \frac{d^{2}\eta} {\sqrt{\kappa}\pi}e^{-\left \vert \eta \right \vert ^{2}/2}\psi^{\ast}\left( \left( \eta-z\right) /\sqrt{\kappa}\right) \right \vert ^{2}$. This is a convenient approach for calculating the entangled Husimi distribution functions of miscellaneous quantum states. \section{Symplectic Wavelet transformation (SWT)} In this section we generalize the usual wavelet transform to the symplectic wavelet transformation (SWT) by using the coherent state representation \cite{r43}. \subsection{Single-mode SWT} We are motivated to generalize the usual wavelet transform, which involves dilation, to the optical Fresnel transform (explained in detail in the subsection below); i.e., we shall use the symplectic-transformed---translated versions of the mother wavelet \begin{equation} \psi_{r,s;\kappa}\left( z\right) =\sqrt{s^{\ast}}\psi \left[ s\left( z-\kappa \right) -r\left( z^{\ast}-\kappa^{\ast}\right) \right] \label{17.1} \end{equation} as a weighting function to synthesize the original complex signal $f\left( z\right) $, \begin{align} W_{\psi}f\left( r,s;\kappa \right) & =\int \frac{d^{2}z}{\pi}f\left( z\right) \psi_{r,s;\kappa}^{\ast}\left( z\right) ,\text{ }\label{17.2}\\ d^{2}z & =dxdy,\text{ }z=x+iy,\nonumber \end{align} which we name the symplectic-transformed---translated wavelet transform. 
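As a numerical sanity check of the definition (\ref{17.2}), the sketch below evaluates the transform by a brute-force midpoint sum for a Gaussian mother wavelet $\psi(z)=e^{-\left \vert z\right \vert ^{2}}$ and a plane-wave signal $f(z)=e^{z\beta^{\ast}-z^{\ast}\beta}$ (both illustrative choices of ours, with real $s,r$ obeying $s^{2}-r^{2}=1$); the translation parameter $\kappa$ should then enter only through the phase $e^{\kappa \beta^{\ast}-\kappa^{\ast}\beta}$, in line with Eq. (\ref{17.4}) below:

```python
import cmath

def swt(f, psi, s, r, kappa, L=14.0, n=320):
    """Midpoint-rule estimate of Eq. (17.2):
    (1/pi) * integral of f(z) * sqrt(s) * conj(psi(s*(z-k) - r*conj(z-k)))."""
    h = 2.0 * L / n
    total = 0j
    for i in range(n):
        x = -L + (i + 0.5) * h
        for j in range(n):
            y = -L + (j + 0.5) * h
            z = complex(x, y)
            w = s * (z - kappa) - r * (z - kappa).conjugate()
            total += f(z) * psi(w).conjugate()
    return cmath.sqrt(s) * total * h * h / cmath.pi

s, r = 1.25, 0.75                        # real symplectic parameters: s^2 - r^2 = 1
beta, kappa = 0.5 - 0.4j, 0.3 + 0.2j     # illustrative values
f = lambda z: cmath.exp(z * beta.conjugate() - z.conjugate() * beta)
psi = lambda w: cmath.exp(-abs(w) ** 2)  # illustrative Gaussian mother wavelet

W0 = swt(f, psi, s, r, 0j)
Wk = swt(f, psi, s, r, kappa)
phase = cmath.exp(kappa * beta.conjugate() - kappa.conjugate() * beta)
```

Numerically $W_{\psi}f(r,s;\kappa)\approx e^{\kappa \beta^{\ast}-\kappa^{\ast}\beta}\,W_{\psi}f(r,s;0)$, the translation covariance that the derivation below makes explicit.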
One can see that the mother wavelet $\psi$ generates the other wavelets of the family $\psi^{\ast}\left[ s\left( z-\kappa \right) -r\left( z^{\ast} -\kappa^{\ast}\right) \right] $ through a translating transform followed by a symplectic transform ($r,s$ are the symplectic transform parameters with $|s|^{2}-|r|^{2}=1,$ $\kappa$ is a translation parameter, and $s$, $r$, $\kappa \in \mathrm{C}$); this can be seen more clearly by writing the second transform in matrix form \begin{equation} \left( \begin{array} [c]{c} z-\kappa \\ z^{\ast}-\kappa^{\ast} \end{array} \right) \rightarrow M\left( \begin{array} [c]{c} z-\kappa \\ z^{\ast}-\kappa^{\ast} \end{array} \right) ,\text{ }M\equiv \left( \begin{array} [c]{cc} s & -r\\ -r^{\ast} & s^{\ast} \end{array} \right) , \label{17.3} \end{equation} where $M$ is a symplectic matrix satisfying $M^{T}JM=J$, $J=\left( \begin{array} [c]{cc} 0 & I\\ -I & 0 \end{array} \right) $. Symplectic matrices in Hamiltonian dynamics correspond to canonical transformations and keep the Poisson bracket invariant, while in matrix optics they represent ray transfer matrices of optical instruments, such as lenses and fibers. 
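The symplectic condition $M^{T}JM=J$ stated after Eq. (\ref{17.3}) can be checked directly for complex $s,r$ with $|s|^{2}-|r|^{2}=1$ (for the single-mode case the blocks of $J$ are scalars). A minimal numerical sketch, with illustrative values of $s$ and $r$:

```python
import cmath, math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# any complex r works; choose |s| = sqrt(1 + |r|^2) so that |s|^2 - |r|^2 = 1
r = 0.6 - 0.3j
s = math.sqrt(1.0 + abs(r) ** 2) * cmath.exp(0.4j)

M = [[s, -r], [-r.conjugate(), s.conjugate()]]   # Eq. (17.3)
J = [[0, 1], [-1, 0]]                            # J with scalar unit blocks
MT = [[M[j][i] for j in range(2)] for i in range(2)]
MTJM = matmul(MT, matmul(J, M))
```

The product reproduces $J$ entrywise, confirming that the translated-then-symplectically-transformed wavelet argument is a canonical transformation.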
\subsubsection{Properties of symplectic-transformed---translated WT} It is straightforward to evaluate this transform and its reciprocal transform when $f\left( z\right) $ is the complex Fourier exponential, $f\left( z\right) =\exp \left( z\beta^{\ast}-z^{\ast}\beta \right) $ (note that $z\beta^{\ast}-z^{\ast}\beta$ is pure imaginary): \begin{align} W_{\psi}f & =\sqrt{s}\int \frac{d^{2}z}{\pi}\exp \left( z\beta^{\ast} -z^{\ast}\beta \right) \psi^{\ast}\left[ s\left( z-\kappa \right) -r\left( z^{\ast}-\kappa^{\ast}\right) \right] \nonumber \\ & =\sqrt{s}\int \frac{d^{2}z}{\pi}\exp \left[ \left( z+\kappa \right) \beta^{\ast}-\left( z^{\ast}+\kappa^{\ast}\right) \beta \right] \psi^{\ast }\left( sz-rz^{\ast}\right) \nonumber \\ & =\sqrt{s}\int \frac{d^{2}z^{\prime}}{\pi}\exp \left[ \left( s^{\ast }z^{\prime}+rz^{\prime \ast}+\kappa \right) \beta^{\ast}-\left( sz^{\prime \ast}+r^{\ast}z^{\prime}+\kappa^{\ast}\right) \beta \right] \psi^{\ast }\left( z^{\prime}\right) \nonumber \\ & =\exp \left[ \kappa \beta^{\ast}-\kappa^{\ast}\beta \right] \sqrt{s} \int \frac{d^{2}z^{\prime}}{\pi}\exp \left[ z^{\prime}\left( s^{\ast} \beta^{\ast}-r^{\ast}\beta \right) -z^{\prime \ast}\left( s\beta-r\beta^{\ast }\right) \right] \psi^{\ast}\left( z^{\prime}\right) \nonumber \\ & =\sqrt{s}\exp \left[ \kappa \beta^{\ast}-\kappa^{\ast}\beta \right] \Phi \left( s^{\ast}\beta^{\ast}-r^{\ast}\beta \right) , \label{17.4} \end{align} where $\Phi$ is the complex Fourier transform of $\psi^{\ast}.$ Then we form the adjoint operation \begin{align} W_{\psi}^{\ast}\left( W_{\psi}f\right) \left( z\right) & =\sqrt{s^{\ast }}\int \frac{d^{2}\kappa}{\pi}\left( W_{\psi}f\right) \left( r,s;\kappa \right) \psi \left[ s\left( z-\kappa \right) -r\left( z^{\ast}-\kappa ^{\ast}\right) \right] \nonumber \\ & =\left \vert s\right \vert \Phi \left( s^{\ast}\beta^{\ast}-r^{\ast} \beta \right) \int \frac{d^{2}\kappa}{\pi}\exp \left[ \left( \kappa+z\right) \beta^{\ast}-\left( \kappa^{\ast}+z^{\ast}\right) \beta
\right] \psi \left[ -s\kappa+r\kappa^{\ast}\right] \nonumber \\ & =\left \vert s\right \vert \exp \left( z\beta^{\ast}-z^{\ast}\beta \right) \Phi \left( s^{\ast}\beta^{\ast}-r^{\ast}\beta \right) \nonumber \\ & \times \int \frac{d^{2}\kappa^{\prime}}{\pi}\exp \left[ -\kappa^{\prime }\left( s^{\ast}\beta^{\ast}-r^{\ast}\beta \right) +\kappa^{\prime \ast }\left( s\beta-r\beta^{\ast}\right) \right] \psi \left( \kappa^{\prime }\right) \nonumber \\ & =\left \vert s\right \vert \exp \left( z\beta^{\ast}-z^{\ast}\beta \right) \left \vert \Phi \left( s^{\ast}\beta^{\ast}-r^{\ast}\beta \right) \right \vert ^{2}, \label{17.5} \end{align} from which we have \begin{equation} \int \frac{W_{\psi}^{\ast}\left( W_{\psi}f\right) \left( z\right) d^{2} s}{\left \vert s\right \vert ^{2}}=\exp \left( z\beta^{\ast}-z^{\ast} \beta \right) \int d^{2}s\frac{\left \vert \Phi \left( s^{\ast}\beta^{\ast }-r^{\ast}\beta \right) \right \vert ^{2}}{\left \vert s\right \vert } \label{17.6} \end{equation} so we get the inversion formula \begin{equation} f\left( z\right) =\exp \left( z\beta^{\ast}-z^{\ast}\beta \right) =\frac{\int d^{2}sW_{\psi}^{\ast}\left( W_{\psi}f\right) \left( z\right) /\left \vert s\right \vert ^{2}}{\int d^{2}s\left \vert \Phi \left( s^{\ast} \beta^{\ast}-r^{\ast}\beta \right) \right \vert ^{2}/\left \vert s\right \vert }. \label{17.7} \end{equation} Eq.(\ref{17.7}) leads us to impose the normalization \begin{equation} \int d^{2}s\left \vert \Phi \left( s^{\ast}\beta^{\ast}-r^{\ast}\beta \right) \right \vert ^{2}/\left \vert s\right \vert =1, \label{17.8} \end{equation} in order to get the wavelet representation \begin{equation} f\left( z\right) =\int d^{2}sW_{\psi}^{\ast}\left( W_{\psi}f\right) \left( z\right) /\left \vert s\right \vert ^{2}. 
\label{17.9} \end{equation} Then we can have a form of Parseval's theorem for this new wavelet transform: \textbf{Proposition:} For any $f$ and $f^{\prime}$ we have \begin{equation} \int \int W_{\psi}f\left( r,s;\kappa \right) W_{\psi}f^{\prime \ast}\left( r,s;\kappa \right) \frac{d^{2}\kappa d^{2}s}{\left \vert s\right \vert ^{2} }=\int \frac{d^{2}z}{2\pi}f\left( z\right) f^{\prime \ast}\left( z\right) . \label{17.10} \end{equation} Proof: Let $F(\beta)$ and $F^{\prime}(\beta)$ be the complex Fourier transforms of $f\left( z\right) $ and $f^{\prime}\left( z\right) $, respectively, \begin{equation} F(\beta)=\int \frac{d^{2}z}{2\pi}f\left( z\right) \exp \left( z\beta^{\ast }-z^{\ast}\beta \right) \label{17.11} \end{equation} and recall the convolution theorem for the complex Fourier transform, \begin{align} & \int d^{2}zf\left( \alpha-z,\alpha^{\ast}-z^{\ast}\right) f^{\prime }\left( z\right) \nonumber \\ & =\int d^{2}z\int \frac{d^{2}\beta}{2\pi}F(\beta)e^{\left( \alpha^{\ast }-z^{\ast}\right) \beta-\left( \alpha-z\right) \beta^{\ast}}\int \frac {d^{2}\beta^{\prime}}{2\pi}F^{\prime}(\beta^{\prime})\exp \left( z^{\ast} \beta^{\prime}-z\beta^{\prime \ast}\right) \nonumber \\ & =\int \int d^{2}\beta d^{2}\beta^{\prime}F^{\prime}(\beta^{\prime} )F(\beta)e^{\alpha^{\ast}\beta-\alpha \beta^{\ast}}\delta \left( \beta -\beta^{\prime}\right) \delta \left( \beta^{\ast}-\beta^{\prime \ast}\right) \nonumber \\ & =\int d^{2}\beta F(\beta)F^{\prime}(\beta)e^{\alpha^{\ast}\beta-\alpha \beta^{\ast}} \label{17.12} \end{align} so from (\ref{17.12}) and (\ref{17.1}), (\ref{17.2}) we see that $W_{\psi }f\left( r,s;\kappa \right) =\int \frac{d^{2}z}{\pi}f\left( z\right) \psi_{r,s;\kappa}^{\ast}\left( z\right) $ can be considered as a convolution in the form \begin{align} & \int d^{2}zf\left( z\right) \psi^{\ast}\left[ s\left( z-\kappa \right) -r\left( z^{\ast}-\kappa^{\ast}\right) \right] \nonumber \\ & =\int d^{2}\beta F(\beta)\Phi^{\ast}\left( s\beta-r\beta^{\ast}\right)
\exp \left( \kappa \beta^{\ast}-\kappa^{\ast}\beta \right) \label{17.13} \end{align} It then follows from (\ref{17.12}) that \begin{align} & \int W_{\psi}f\left( r,s;\kappa \right) W_{\psi}f^{\prime \ast}\left( r,s;\kappa \right) d^{2}\kappa \nonumber \\ & =\left \vert s\right \vert \int \frac{d^{2}\beta d^{2}\beta^{\prime}}{2\pi }F(\beta)\Phi^{\ast}\left( s\beta-r\beta^{\ast}\right) \nonumber \\ & \times F^{\prime \ast}(\beta^{\prime})\Phi^{\prime}\left( s\beta^{\prime }-r\beta^{\prime \ast}\right) \delta \left( \beta-\beta^{\prime}\right) \delta \left( \beta^{\ast}-\beta^{\prime \ast}\right) \nonumber \\ & =\left \vert s\right \vert \int \frac{d^{2}\beta}{2\pi}F(\beta)F^{\prime \ast }(\beta)\left \vert \Phi \left( s\beta-r\beta^{\ast}\right) \right \vert ^{2}, \label{17.14} \end{align} Therefore, using (\ref{17.8}) we see that the further integration yields \begin{align} & \int \frac{d^{2}s}{\left \vert s\right \vert ^{2}}\int W_{\psi}f\left( r,s;\kappa \right) W_{\psi}f^{\prime \ast}\left( r,s;\kappa \right) d^{2}\kappa \nonumber \\ & =\int \frac{d^{2}\beta}{2\pi}F(\beta)F^{\prime \ast}(\beta)\int \frac{d^{2} s}{\left \vert s\right \vert }\left \vert \Phi \left( s\beta-r\beta^{\ast }\right) \right \vert ^{2}\nonumber \\ & =\int \frac{d^{2}\beta}{2\pi}F(\beta)F^{\prime \ast}(\beta)=\int \frac{d^{2} z}{2\pi}f\left( z\right) f^{\prime \ast}\left( z\right) , \label{17.15} \end{align} which completes the proof. \textbf{Theorem}: From the Proposition (\ref{17.10}) we have \begin{equation} \int \int W_{\psi}f\left( r,s;\kappa \right) \psi_{r,s;\kappa}\left( z\right) \frac{d^{2}\kappa d^{2}s}{\left \vert s\right \vert ^{2}}=f\left( z\right) , \label{17.16} \end{equation} that is, there exists an inversion formula for arbitrary function $f\left( z\right) $. In fact, in Eq. 
(\ref{17.2}) when we take $f\left( z\right) =\delta \left( z-z^{\prime}\right) ,$ then \begin{equation} W_{\psi}f\left( r,s;\kappa \right) =\int \frac{d^{2}z}{\pi}f\left( z\right) \psi_{r,s;\kappa}^{\ast}\left( z\right) =\psi_{r,s;\kappa}^{\ast}\left( z^{\prime}\right) . \label{17.17} \end{equation} Substituting (\ref{17.17}) into (\ref{17.15}) we obtain (\ref{17.16}). \subsubsection{Relation between $W_{\psi}f\left( r,s;\kappa \right) $ and optical Fresnel transform} Now we explain why the idea of $W_{\psi}f\left( r,s;\kappa \right) $ originates from the optical Fresnel transform. We can visualize the symplectic-transformed---translated wavelet transform in the context of quantum mechanics, letting $f\left( z\right) \equiv \left \langle z\right \vert \left. f\right \rangle $, where $\left \langle z\right \vert $ is the coherent state, $\left \vert z\right \rangle =\exp \left[ za^{\dagger}-z^{\ast}a\right] \equiv \left \vert \left( \begin{array} [c]{c} z\\ z^{\ast} \end{array} \right) \right \rangle $, and $|0\rangle$ is the vacuum state in Fock space; then Eq. (\ref{17.2}) can be expressed as \begin{align} W_{\psi}f\left( r,s;\kappa \right) & =\sqrt{s}\int \frac{d^{2}z}{\pi} \psi^{\ast}\left[ s\left( z-\kappa \right) -r\left( z^{\ast}-\kappa^{\ast }\right) \right] f\left( z\right) \nonumber \\ & =\sqrt{s}\int \frac{d^{2}z}{\pi}\left \langle \psi \right \vert \left. \left( \begin{array} [c]{cc} s & -r\\ -r^{\ast} & s^{\ast} \end{array} \right) \left( \begin{array} [c]{c} z-\kappa \\ z^{\ast}-\kappa^{\ast} \end{array} \right) \right \rangle \left \langle z\right \vert \left. 
f\right \rangle \nonumber \\ & =\left \langle \psi \right \vert F_{1}\left( r,s,\kappa \right) \left \vert f\right \rangle , \label{17.18} \end{align} where $F_{1}\left( r,s,\kappa \right) $ is defined as \begin{equation} F_{1}\left( r,s,\kappa \right) =\sqrt{s}\int \frac{d^{2}z}{\pi}\left \vert sz-rz^{\ast}\right \rangle \left \langle z+\kappa \right \vert ,\; \; \label{17.19} \end{equation} and $\left \vert sz-rz^{\ast}\right \rangle \equiv \left \vert \left( \begin{array} [c]{cc} s & -r\\ -r^{\ast} & s^{\ast} \end{array} \right) \left( \begin{array} [c]{c} z\\ z^{\ast} \end{array} \right) \right \rangle .$ To know the explicit form of $F_{1}\left( r,s,\kappa \right) $, we employ the normal ordering of the vacuum projector $\left \vert 0\right \rangle \left \langle 0\right \vert =:\exp \left( -a^{\dagger}a\right) :$ and the IWOP technique to perform the integration in (\ref{17.19}), which leads to \begin{align} F_{1}\left( r,s,\kappa \right) & =\exp(\frac{1}{4}\left \vert \kappa \right \vert ^{2}+\frac{r}{8s}\kappa^{\ast2}+\frac{r^{\ast}\kappa^{2}} {8s^{\ast}})\exp \left( -\frac{r}{2s^{\ast}}a^{\dagger2}-\frac{\kappa }{2s^{\ast}}\left( \left \vert s\right \vert ^{2}+\left \vert r\right \vert ^{2}\right) a^{\dagger}\right) \nonumber \\ & \exp \left[ \left( a^{\dagger}a+\frac{1}{2}\right) \ln \frac{1}{s^{\ast} }\right] \exp \left( \frac{r^{\ast}}{2s^{\ast}}a^{2}-\frac{1}{2s^{\ast} }\left( s^{\ast}\kappa^{\ast}+r^{\ast}\kappa \right) a\right) . \label{17.20} \end{align} The transformation matrix element of $F_{1}\left( r,s,\kappa=0\right) $ in the coordinate representation $\left \vert x\right \rangle $ is just the kernel of the optical diffraction integration (\ref{5.10}) (Fresnel transform); this explains our motivation to introduce $W_{\psi}f\left( r,s;\kappa \right) $. 
In particular, when $r^{\ast}=r\equiv \sinh \lambda$, $s=s^{\ast}\equiv \cosh \lambda,$ $F_{1}\left( r,s,\kappa=0\right) $ reduces to the well-known single-mode squeezing operator $\exp[\frac{\lambda}{2}\left( a^{2}-a^{\dagger2}\right) ]$ which corresponds to dilation in the usual WT. \subsection{Entangled SWT} In the above subsection, the mother wavelet is obtained through a translating transform followed by a symplectic transform. This motivation arises from the consideration that symplectic transforms are more general than dilation, and are useful in the Fresnel transform of Fourier optics, e.g. ray transfer matrices of optical instruments, such as lenses and fibers in matrix optics, while in quantum optics symplectic transforms correspond to the single-mode Fresnel operator (or generalized SU(1,1) squeezing operator). Recalling that in section 9 we introduced the 2-mode entangled Fresnel operator, which is a mapping of the classical mixed transformation $\left( z,z^{\prime}\right) \rightarrow \left( sz+rz^{\prime \ast},sz^{\prime }+rz^{\ast}\right) $ in the 2-mode coherent state $\left \vert z,z^{\prime }\right \rangle $ representation onto the quantum operator $F_{2}\left( r,s\right) $, we are naturally led to develop the SWT in (\ref{17.2}) into the so-called entangled SWT (ESWT) \cite{r44} for signals $g\left( z,z^{\prime}\right) $ defined in two complex planes, \begin{equation} W_{\phi}g\left( r,s;k,k^{\prime}\right) = {\displaystyle \iint} \frac{d^{2}zd^{2}z^{\prime}}{\pi^{2}}g\left( z,z^{\prime}\right) \phi_{r,s;k,k^{\prime}}^{\ast}\left( z,z^{\prime}\right) , \label{17.21} \end{equation} here \begin{equation} \phi_{r,s;k,k^{\prime}}\left( z,z^{\prime}\right) \equiv s^{\ast}\phi \left[ s\left( z-k\right) +r\left( z^{\prime \ast}-k^{\prime \ast}\right) ,s\left( z^{\prime}-k^{\prime}\right) +r\left( z^{\ast}-k^{\ast}\right) \right] , \label{17.22} \end{equation} is used as a weighting function to synthesize the signal $g\left( z,z^{\prime}\right) $
defined on two complex planes. One can see that the mother wavelet $\phi$ generates the family $\phi^{\ast}\left[ s\left( z-k\right) +r\left( z^{\prime \ast}-k^{\prime \ast}\right) ,s\left( z^{\prime}-k^{\prime}\right) +r\left( z^{\ast}-k^{\ast}\right) \right] $ through a translating transform followed by an entangled symplectic transform. We emphasize that this transform mixes the two complex planes, which is different from the tensor product of two independent transforms $\left( z,z^{\prime}\right) \rightarrow \left[ s\left( z-k\right) -r\left( z^{\ast}-k^{\ast}\right) ,s\left( z^{\prime}-k^{\prime}\right) -r\left( z^{\prime \ast}-k^{\prime \ast}\right) \right] $ given by (\ref{17.18}). The new symplectic transform can be seen more clearly by writing it in matrix form: \begin{equation} \left( \begin{array} [c]{c} z-k\\ z^{\ast}-k^{\ast}\\ z^{\prime}-k^{\prime}\\ z^{\prime \ast}-k^{\prime \ast} \end{array} \right) \longrightarrow \mathcal{M}\left( \begin{array} [c]{c} z-k\\ z^{\ast}-k^{\ast}\\ z^{\prime}-k^{\prime}\\ z^{\prime \ast}-k^{\prime \ast} \end{array} \right) \text{, }\mathcal{M}=\left[ \begin{array} [c]{cccc} s & 0 & 0 & r\\ 0 & s^{\ast} & r^{\ast} & 0\\ 0 & r & s & 0\\ r^{\ast} & 0 & 0 & s^{\ast} \end{array} \right] \label{17.23} \end{equation} where $\mathcal{M}$ is a symplectic matrix satisfying $\mathcal{M}^{T}\mathcal{J} \mathcal{M}=\mathcal{J}$, $\mathcal{J}=\left[ \begin{array} [c]{cc} 0 & \mathcal{I}\\ -\mathcal{I} & 0 \end{array} \right] $, and $\mathcal{I}$ is the $2\times2$ unit matrix. For Eq. (\ref{17.21}) to qualify as a new wavelet transform, we must prove that it possesses the fundamental properties of the usual wavelet transforms, such as the \textit{admissibility condition}, Parseval's theorem and the inversion formula. 
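The symplectic condition $\mathcal{M}^{T}\mathcal{J}\mathcal{M}=\mathcal{J}$ can also be checked numerically. A minimal sketch for the real-parameter case $s=\cosh \lambda$, $r=\sinh \lambda$ (so that $s^{2}-r^{2}=1$; for complex $s,r$ the entries of $\mathcal{M}$ carry the conjugates shown in Eq. (\ref{17.23})):

```python
import math

lam = 0.7
s, r = math.cosh(lam), math.sinh(lam)    # real case: s^2 - r^2 = 1

# M of Eq. (17.23), acting on (z-k, z*-k*, z'-k', z'*-k'*); real s, r here
M = [[s, 0, 0, r],
     [0, s, r, 0],
     [0, r, s, 0],
     [r, 0, 0, s]]
# J = [[0, I], [-I, 0]] with I the 2x2 unit matrix
J = [[0, 0, 1, 0],
     [0, 0, 0, 1],
     [-1, 0, 0, 0],
     [0, -1, 0, 0]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

MT = [[M[j][i] for j in range(4)] for i in range(4)]
MTJM = matmul(MT, matmul(J, M))
```

The product reproduces $\mathcal{J}$ entrywise; note that $\mathcal{M}$ genuinely couples $z$ with $z^{\prime \ast}$, unlike a tensor product of two single-mode matrices.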
When $g\left( z,z^{\prime}\right) $ is the complex Fourier exponential, \begin{equation} g_{1}\left( z,z^{\prime}\right) =\exp \left( z\beta^{\ast}-z^{\ast} \beta+z^{\prime}\gamma^{\ast}-z^{\prime \ast}\gamma \right) , \label{17.24} \end{equation} according to (\ref{17.21})-(\ref{17.22}) we evaluate its ESWT \begin{align} W_{\phi}g_{1} & = {\displaystyle \iint} \frac{d^{2}zd^{2}z^{\prime}}{\pi^{2}}\exp \left( z\beta^{\ast}-z^{\ast} \beta+z^{\prime}\gamma^{\ast}-z^{\prime \ast}\gamma \right) \phi _{r,s;k,k^{\prime}}^{\ast}\left( z,z^{\prime}\right) \nonumber \\ & =s {\displaystyle \iint} \frac{d^{2}zd^{2}z^{\prime}}{\pi^{2}}\phi^{\ast}\left[ sz+rz^{\prime \ast },sz^{\prime}+rz^{\ast}\right] \nonumber \\ & \times \exp \left[ \left( z+k\right) \beta^{\ast}-\left( z^{\ast} +k^{\ast}\right) \beta+\left( z^{\prime}+k^{\prime}\right) \gamma^{\ast }-\left( z^{\prime \ast}+k^{\prime \ast}\right) \gamma \right] . \label{17.25} \end{align} Making the change of integration variables $sz+rz^{\prime \ast}\rightarrow w,$ $sz^{\prime}+rz^{\ast}\rightarrow w^{\prime},$ Eq. (\ref{17.25}) becomes \begin{align} W_{\phi}g_{1} & =s\exp \left( k\beta^{\ast}-k^{\ast}\beta+k^{\prime} \gamma^{\ast}-k^{\prime \ast}\gamma \right) {\displaystyle \iint} \frac{d^{2}wd^{2}w^{\prime}}{\pi^{2}}\phi^{\ast}\left( w,w^{\prime}\right) \nonumber \\ & \times \exp \left[ w\left( s^{\ast}\beta^{\ast}+r^{\ast}\gamma \right) -w^{\ast}\left( s\beta+r\gamma^{\ast}\right) +w^{\prime}\left( s^{\ast }\gamma^{\ast}+r^{\ast}\beta \right) -w^{\prime \ast}\left( s\gamma +r\beta^{\ast}\right) \right] , \label{17.26} \end{align} the last integral is just the complex Fourier transform (CFT) of $\phi^{\ast};$ denoting it by $\Phi^{\ast},$ we have \begin{equation} W_{\phi}g_{1}=s\exp \left( k\beta^{\ast}-k^{\ast}\beta+k^{\prime}\gamma^{\ast }-k^{\prime \ast}\gamma \right) \Phi^{\ast}\left( s^{\ast}\beta^{\ast} +r^{\ast}\gamma,\text{ }s^{\ast}\gamma^{\ast}+r^{\ast}\beta \right) . 
\label{17.27} \end{equation} Then we form the adjoint operation of (\ref{17.27}), \begin{align} & W_{\phi}^{\ast}\left( W_{\phi}g_{1}\right) \left( z,z^{\prime}\right) \nonumber \\ & =s^{\ast} {\displaystyle \iint} \frac{d^{2}kd^{2}k^{\prime}}{\pi^{2}}\{ \left( W_{\phi}g\right) \left( r,s;k,k^{\prime}\right) \} \phi \left[ s\left( z-k\right) +r\left( z^{\prime \ast}-k^{\prime \ast}\right) ,s\left( z^{\prime}-k^{\prime}\right) +r\left( z^{\ast}-k^{\ast}\right) \right] \nonumber \\ & =\left \vert s\right \vert ^{2}\Phi^{\ast}\left( s^{\ast}\beta^{\ast }+r^{\ast}\gamma,\text{ }s^{\ast}\gamma^{\ast}+r^{\ast}\beta \right) {\displaystyle \iint} \frac{d^{2}kd^{2}k^{\prime}}{\pi^{2}}\exp \left( k\beta^{\ast}-k^{\ast} \beta+k^{\prime}\gamma^{\ast}-k^{\prime \ast}\gamma \right) \nonumber \\ & \times \phi \left[ s\left( z-k\right) +r\left( z^{\prime \ast} -k^{\prime \ast}\right) ,s\left( z^{\prime}-k^{\prime}\right) +r\left( z^{\ast}-k^{\ast}\right) \right] \nonumber \\ & =\left \vert s\right \vert ^{2}\Phi^{\ast}\left( s^{\ast}\beta^{\ast }+r^{\ast}\gamma,\text{ }s^{\ast}\gamma^{\ast}+r^{\ast}\beta \right) {\displaystyle \iint} \frac{d^{2}kd^{2}k^{\prime}}{\pi^{2}}\phi \left[ -sk-rk^{\prime \ast },-sk^{\prime}-rk^{\ast}\right] \nonumber \\ & \times \exp \left[ \left( k+z\right) \beta^{\ast}-\left( k^{\ast} +z^{\ast}\right) \beta+\left( k^{\prime}+z^{\prime}\right) \gamma^{\ast }-\left( k^{\prime \ast}+z^{\prime \ast}\right) \gamma \right] \nonumber \\ & =\left \vert s\right \vert ^{2}\Phi^{\ast}\left( s^{\ast}\beta^{\ast }+r^{\ast}\gamma,\text{ }s^{\ast}\gamma^{\ast}+r^{\ast}\beta \right) \exp \left( z\beta^{\ast}-z^{\ast}\beta+z^{\prime}\gamma^{\ast}-z^{\prime \ast }\gamma \right) {\displaystyle \iint} \frac{d^{2}vd^{2}v^{\prime}}{\pi^{2}}\phi \left( v,v^{\prime}\right) \nonumber \\ & \times \exp \left[ -v\left( s^{\ast}\beta^{\ast}+r^{\ast}\gamma \right) +v^{\ast}\left( s\beta+r\gamma^{\ast}\right) -v^{\prime}\left( s^{\ast }\gamma^{\ast}+r^{\ast}\beta \right) +v^{\prime 
\ast}\left( s\gamma +r\beta^{\ast}\right) \right] , \label{17.28} \end{align} where the integration in the last line is just the CFT of $\phi$ (comparing with (\ref{17.26})), thus (\ref{17.28}) leads to \begin{equation} W_{\phi}^{\ast}\left( W_{\phi}g_{1}\right) \left( z,z^{\prime}\right) =\left \vert s\right \vert ^{2}\exp \left( z\beta^{\ast}-z^{\ast}\beta +z^{\prime}\gamma^{\ast}-z^{\prime \ast}\gamma \right) \left \vert \Phi \left( s^{\ast}\beta^{\ast}+r^{\ast}\gamma,\text{ }s^{\ast}\gamma^{\ast}+r^{\ast }\beta \right) \right \vert ^{2}. \label{17.29} \end{equation} From Eq. (\ref{17.29}) we have \begin{align} & \int d^{2}sW_{\phi}^{\ast}\left( W_{\phi}g_{1}\right) \left( z,z^{\prime}\right) /\left \vert s\right \vert ^{4}\nonumber \\ & =\exp \left( z\beta^{\ast}-z^{\ast}\beta+z^{\prime}\gamma^{\ast} -z^{\prime \ast}\gamma \right) \nonumber \\ & \times \int d^{2}s\left \vert \Phi \left( s^{\ast}\beta^{\ast}+r^{\ast} \gamma,\text{ }s^{\ast}\gamma^{\ast}+r^{\ast}\beta \right) \right \vert ^{2}/\left \vert s\right \vert ^{2}, \label{17.30} \end{align} which together with (\ref{17.24}) lead to \begin{equation} g_{1}\left( z,z^{\prime}\right) =\frac{\int d^{2}sW_{\phi}^{\ast}\left( W_{\phi}g_{1}\right) \left( z,z^{\prime}\right) /\left \vert s\right \vert ^{4}}{\int d^{2}s\left \vert \Phi \left( s^{\ast}\beta^{\ast}+r^{\ast} \gamma,\text{ }s^{\ast}\gamma^{\ast}+r^{\ast}\beta \right) \right \vert ^{2}/\left \vert s\right \vert ^{2}}. \label{17.31} \end{equation} Eq. (\ref{17.31}) implies that we should impose the normalization \begin{equation} \int d^{2}s\left \vert \Phi \left( s^{\ast}\beta^{\ast}+r^{\ast}\gamma,\text{ }s^{\ast}\gamma^{\ast}+r^{\ast}\beta \right) \right \vert ^{2}/\left \vert s\right \vert ^{2}=1, \label{17.32} \end{equation} such that the reproducing process\textit{\ }exists \begin{equation} g_{1}\left( z,z^{\prime}\right) =\int d^{2}sW_{\phi}^{\ast}\left( W_{\phi }g_{1}\right) \left( z,z^{\prime}\right) /\left \vert s\right \vert ^{4}. 
\label{17.33} \end{equation} Eq. (\ref{17.32}) may be named the generalized admissibility condition. Now we can have the corresponding \emph{Parseval theorem}: For any $g$ and $g^{\prime}$ we have \begin{equation} {\displaystyle \iiint} W_{\phi}g\left( r,s;k,k^{\prime}\right) W_{\phi}^{\ast}g^{\prime}\left( r,s;k,k^{\prime}\right) \frac{d^{2}kd^{2}k^{\prime}d^{2}s}{\left \vert s\right \vert ^{4}}= {\displaystyle \iint} d^{2}zd^{2}z^{\prime}g\left( z,z^{\prime}\right) g^{\prime \ast}\left( z,z^{\prime}\right) . \label{17.34} \end{equation} \emph{Proof:} Let $F\left( \beta,\gamma \right) $ and $F^{\prime}\left( \beta,\gamma \right) $ be the CFTs of $g\left( z,z^{\prime }\right) $ and $g^{\prime}\left( z,z^{\prime}\right) $, respectively, \begin{equation} F\left( \beta,\gamma \right) = {\displaystyle \iint} \frac{d^{2}zd^{2}z^{\prime}}{\pi^{2}}g\left( z,z^{\prime}\right) \exp \left( z\beta^{\ast}-z^{\ast}\beta+z^{\prime}\gamma^{\ast}-z^{\prime \ast} \gamma \right) , \label{17.35} \end{equation} recalling the corresponding convolution theorem \begin{align} & {\displaystyle \iint} d^{2}zd^{2}z^{\prime}g\left( \alpha-z,\alpha^{\ast}-z^{\ast};\alpha^{\prime }-z^{\prime},\alpha^{\prime \ast}-z^{\prime \ast}\right) g^{\prime}\left( z,z^{\prime}\right) \nonumber \\ & = {\displaystyle \iint} d^{2}\beta d^{2}\gamma F\left( \beta,\gamma \right) F^{\prime}\left( \beta,\gamma \right) \exp \left( \alpha^{\ast}\beta-\alpha \beta^{\ast} +\alpha^{\prime \ast}\gamma-\alpha^{\prime}\gamma^{\ast}\right) , \label{17.36} \end{align} so from Eqs. 
(\ref{17.21}) and (\ref{17.35})-(\ref{17.36}) we see that $W_{\phi}g\left( r,s;k,k^{\prime}\right) = {\displaystyle \iint} \frac{d^{2}zd^{2}z^{\prime}}{\pi^{2}}g\left( z,z^{\prime}\right) \phi _{r,s;k,k^{\prime}}^{\ast}\left( z,z^{\prime}\right) $ can be considered as a convolution in the form (noting that the CFT of $\phi^{\ast}$ is $\Phi ^{\ast},$ see (\ref{17.26})-(\ref{17.27})) \begin{align} & {\displaystyle \iint} d^{2}zd^{2}z^{\prime}g\left( z,z^{\prime}\right) \phi^{\ast}\left[ s\left( z-k\right) +r\left( z^{\prime \ast}-k^{\prime \ast}\right) ,s\left( z^{\prime}-k^{\prime}\right) +r\left( z^{\ast}-k^{\ast}\right) \right] \nonumber \\ & = {\displaystyle \iint} d^{2}\beta d^{2}\gamma F\left( \beta,\gamma \right) \Phi^{\ast}\left( s^{\ast}\beta^{\ast}+r^{\ast}\gamma,\text{ }s^{\ast}\gamma^{\ast}+r^{\ast }\beta \right) \exp \left( k\beta^{\ast}-k^{\ast}\beta+k^{\prime}\gamma^{\ast }-k^{\prime \ast}\gamma \right) . \label{17.37} \end{align} Using Eq. (\ref{17.37}) we calculate \begin{align} & {\displaystyle \iint} W_{\phi}g\left( r,s;k,k^{\prime}\right) W_{\phi}^{\ast}g^{\prime}\left( r,s;k,k^{\prime}\right) d^{2}kd^{2}k^{\prime}\nonumber \\ & =\left \vert s\right \vert ^{2} {\displaystyle \iiiint} d^{2}\beta d^{2}\gamma d^{2}\beta^{\prime}d^{2}\gamma^{\prime}F\left( \beta,\gamma \right) \Phi^{\ast}\left( s^{\ast}\beta^{\ast}+r^{\ast} \gamma,\text{ }s^{\ast}\gamma^{\ast}+r^{\ast}\beta \right) \nonumber \\ & \times F^{\prime \ast}\left( \beta^{\prime},\gamma^{\prime}\right) \Phi \left( s^{\ast}\beta^{\ast}+r^{\ast}\gamma,\text{ }s^{\ast}\gamma^{\ast }+r^{\ast}\beta \right) \delta \left( \beta-\beta^{\prime}\right) \delta \left( \beta^{\ast}-\beta^{\prime \ast}\right) \delta \left( \gamma-\gamma^{\prime}\right) \delta \left( \gamma^{\ast}-\gamma^{\prime \ast }\right) \nonumber \\ & =\left \vert s\right \vert ^{2} {\displaystyle \iint} d^{2}\beta d^{2}\gamma F\left( \beta,\gamma \right) F^{\prime \ast}\left( \beta,\gamma \right) \left \vert \Phi \left( 
s^{\ast}\beta^{\ast}+r^{\ast }\gamma,\text{ }s^{\ast}\gamma^{\ast}+r^{\ast}\beta \right) \right \vert ^{2}. \label{17.38} \end{align} As a consequence of (\ref{17.32}) and (\ref{17.38}) the further integration yields \begin{align} & \int \frac{d^{2}s}{\left \vert s\right \vert ^{4}} {\displaystyle \iint} W_{\phi}g\left( r,s;k,k^{\prime}\right) W_{\phi}^{\ast}g^{\prime}\left( r,s;k,k^{\prime}\right) d^{2}kd^{2}k^{\prime}\nonumber \\ & = {\displaystyle \iint} d^{2}\beta d^{2}\gamma F\left( \beta,\gamma \right) F^{\prime \ast}\left( \beta,\gamma \right) \int d^{2}s\left \vert \Phi \left( s^{\ast}\beta^{\ast }+r^{\ast}\gamma,\text{ }s^{\ast}\gamma^{\ast}+r^{\ast}\beta \right) \right \vert ^{2}/\left \vert s\right \vert ^{2}\nonumber \\ & = {\displaystyle \iint} d^{2}\beta d^{2}\gamma F\left( \beta,\gamma \right) F^{\prime \ast}\left( \beta,\gamma \right) = {\displaystyle \iint} d^{2}zd^{2}z^{\prime}g\left( z,z^{\prime}\right) g^{\prime \ast}\left( z,z^{\prime}\right) , \label{17.39} \end{align} which completes the proof. \emph{Inversion Formula}:\emph{\ }From Eq. (\ref{17.34}) we have \begin{equation} g\left( z,z^{\prime}\right) = {\displaystyle \iiint} W_{\phi}g\left( r,s;k,k^{\prime}\right) \phi_{r,s;k,k^{\prime}}\left( z,z^{\prime}\right) \frac{d^{2}kd^{2}k^{\prime}d^{2}s}{\pi^{2}\left \vert s\right \vert ^{4}}, \label{17.40} \end{equation} that is, there exists an inversion formula for $g\left( z,z^{\prime}\right) $ which represents the original signal $g\left( z,z^{\prime}\right) $ as a superposition of wavelet functions $\phi_{r,s;k,k^{\prime}},$ with the value of entangled wavelet transform $W_{\phi}g\left( r,s;k,k^{\prime}\right) $ serving as coefficients. In fact, in Eq. 
(\ref{17.21}) when we take \begin{equation} g\left( z,z^{\prime}\right) =\delta \left( z-u\right) \delta \left( z^{\ast}-u^{\ast}\right) \delta \left( z^{\prime}-u^{\prime}\right) \delta \left( z^{\prime \ast}-u^{\prime \ast}\right) , \label{17.41} \end{equation} then \begin{equation} W_{\phi}g\left( r,s;k,k^{\prime}\right) =\frac{1}{\pi^{2}}\phi _{r,s;k,k^{\prime}}^{\ast}\left( u,u^{\prime}\right) . \label{17.42} \end{equation} Substituting (\ref{17.41})-(\ref{17.42}) into (\ref{17.39}), we obtain (\ref{17.40}). We can visualize the ESWT in the context of quantum mechanics, letting $g\left( z,z^{\prime}\right) =\left \langle z,z^{\prime}\right \vert \left. g\right \rangle \ $and using Eqs. (\ref{17.22})-(\ref{17.23}), Eq. (\ref{17.21}) is expressed as \begin{align} W_{\phi}g\left( r,s;k,k^{\prime}\right) & =s {\displaystyle \iint} \frac{d^{2}zd^{2}z^{\prime}}{\pi^{2}}\phi^{\ast}\left[ s\left( z-k\right) +r\left( z^{\prime \ast}-k^{\prime \ast}\right) ,s\left( z^{\prime} -k^{\prime}\right) +r\left( z^{\ast}-k^{\ast}\right) \right] g\left( z,z^{\prime}\right) \nonumber \\ & =s {\displaystyle \iint} \frac{d^{2}zd^{2}z^{\prime}}{\pi^{2}}\left \langle \phi \right. \left \vert \mathcal{M}\left( \begin{array} [c]{c} z-k\\ z^{\ast}-k^{\ast}\\ z^{\prime}-k^{\prime}\\ z^{\prime \ast}-k^{\prime \ast} \end{array} \right) \right \rangle \left \langle z,z^{\prime}\right \vert \left. 
g\right \rangle =\left \langle \phi \right \vert F_{2}\left( r,s;k,k^{\prime }\right) \left \vert g\right \rangle \label{17.43} \end{align} where $F_{2}\left( r,s;k,k^{\prime}\right) $ is defined as \begin{equation} \begin{array} [c]{c} F_{2}\left( r,s;k,k^{\prime}\right) =s {\displaystyle \iint} \frac{d^{2}zd^{2}z^{\prime}}{\pi^{2}}\left \vert sz+rz^{\prime \ast},sz^{\prime }+rz^{\ast}\right \rangle \left \langle z+k,z^{\prime}+k^{\prime}\right \vert ,\\ \left \vert sz+rz^{\prime \ast},sz^{\prime}+rz^{\ast}\right \rangle =\left \vert sz+rz^{\prime \ast}\right \rangle _{1}\otimes \left \vert sz^{\prime}+rz^{\ast }\right \rangle _{2}. \end{array} \label{17.44} \end{equation} When $k=0$ and $k^{\prime}=0$, $F_{2}\left( r,s;k=k^{\prime}=0\right) $ is just the 2-mode Fresnel operator. Thus, we have extended the SWT of signals in one complex plane to the ESWT of signals defined in two complex planes; the latter is not the tensor product of two independent SWTs. This generalization is natural, since it resembles the extension from the single-mode squeezing transform (or Fresnel operator) to the two-mode squeezing transform (or entangled Fresnel operator) in quantum optics. \subsection{Symplectic-dilation mixed WT} Next we shall introduce a new kind of WT, i.e., the symplectic-dilation mixed WT \cite{r45}. Recall that in Ref. \cite{r46} we constructed a new entangled-coherent state representation (ECSR) $\left \vert \alpha ,x\right \rangle $, \begin{align} \left \vert \alpha,x\right \rangle & =\exp \left[ -\frac{1}{2}x^{2}-\frac {1}{4}\left \vert \alpha \right \vert ^{2}+(x+\frac{\alpha}{2})a_{1}^{\dagger }\right. \nonumber \\ & \left.
+(x-\frac{\alpha}{2})a_{2}^{\dagger}-\frac{1}{4}(a_{1}^{\dagger }+a_{2}^{\dagger})^{2}\right] \left \vert 00\right \rangle , \label{17.45} \end{align} which is the common eigenvector of the operators $\left( X_{1}+X_{2}\right) /2$ and $a_{1}-a_{2},$ i.e., $\left( a_{1}-a_{2}\right) \left \vert \alpha,x\right \rangle =\alpha \left \vert \alpha,x\right \rangle $ and $\frac {1}{2}(X_{1}+X_{2})\left \vert \alpha,x\right \rangle =\frac{1}{\sqrt{2} }x\left \vert \alpha,x\right \rangle ,$ where $X_{i}=\frac{1}{\sqrt{2}} (a_{i}+a_{i}^{\dagger})$ is the coordinate operator ($i=1,2)$. $\left \vert \alpha,x\right \rangle $ is complete, \begin{equation} \int_{-\infty}^{\infty}\frac{\mathtt{d}x}{\sqrt{\pi}}\int \frac{\mathtt{d} ^{2}\alpha}{2\pi}\left \vert \alpha,x\right \rangle \left \langle \alpha ,x\right \vert =1, \label{17.46} \end{equation} and exhibits a partly non-orthogonal property (for $\alpha)$ and an orthonormal property (for $x),$ \begin{align} & \left \langle \alpha^{\prime},x^{\prime}\right. \left \vert \alpha ,x\right \rangle \nonumber \\ & =\sqrt{\pi}\exp \left[ -\frac{1}{4}(\left \vert \alpha \right \vert ^{2}+\left \vert \alpha^{\prime}\right \vert ^{2})+\frac{1}{2}\alpha \alpha^{\prime \ast}\right] \delta \left( x^{\prime}-x\right) , \label{17.47} \end{align} so $\left \vert \alpha,x\right \rangle $ possesses the behavior of both the coherent state and the entangled state. An interesting question is: can we introduce a new kind of continuous WT underlain by the $\left \vert \alpha,x\right \rangle $ representation? The answer is affirmative. Our motivation for this comes from the mixed lens-Fresnel transform in classical optics \cite{r47} (see (\ref{44}) below).
By synthesizing (\ref{15.3}) and (\ref{17.2}) and in reference to (\ref{17.46}), we propose the mixed WT for $g\left( \alpha,x\right) $ ($\alpha=\alpha_{1}+\mathtt{i}\alpha_{2}$): \begin{equation} W_{\psi}g\left( s,r,\kappa;\mathrm{a},b\right) \equiv \int_{-\infty}^{\infty }\frac{\mathtt{d}x}{\sqrt{\pi}}\int \frac{\mathtt{d}^{2}\alpha}{2\pi}g\left( \alpha,x\right) \psi_{s,r,\kappa;\mathrm{a},b}^{\ast}\left( \alpha,x\right) , \label{17.48} \end{equation} where $\mathtt{d}^{2}\alpha=\mathtt{d}\alpha_{1}\mathtt{d}\alpha_{2}$ and the family of mother wavelets $\psi$ involves both the symplectic transform of $\alpha$ and the dilation transform of $x$, \begin{equation} \psi_{s,r,\kappa;\mathrm{a},b}\left( \alpha,x\right) =\sqrt{\frac{s^{\ast} }{\left \vert \mathrm{a}\right \vert }}\psi \left[ s\left( \alpha -\kappa \right) -r\left( \alpha^{\ast}-\kappa^{\ast}\right) ,\frac {x-b}{\mathrm{a}}\right] . \label{17.49} \end{equation} Letting $g\left( \alpha,x\right) \equiv \left \langle \alpha,x\right \vert \left. g\right \rangle ,$ (\ref{17.48}) can be expressed in its quantum mechanical version \begin{equation} W_{\psi}g\left( s,r,\kappa;\mathrm{a},b\right) =\left \langle \psi \right \vert U\left( s,r,\kappa;\mathrm{a},b\right) \left \vert g\right \rangle , \label{17.50} \end{equation} where $U\left( s,r,\kappa;\mathrm{a},b\right) $ is defined as \begin{align} U\left( s,r,\kappa;\mathrm{a},b\right) & =\sqrt{\frac{s}{\left \vert \mathrm{a}\right \vert }}\int_{-\infty}^{\infty}\frac{\mathtt{d}x}{\sqrt{\pi} }\int \frac{\mathtt{d}^{2}\alpha}{2\pi}\nonumber \\ & \times \left \vert s\alpha-r\alpha^{\ast},\frac{x-b}{\mathrm{a}}\right \rangle \left \langle \alpha+\kappa,x\right \vert . \label{17.51} \end{align} $U\left( s,r,\kappa=0;\mathrm{a},b=0\right) $ is just the generalized squeezing operator, which causes a lens-Fresnel mixed transform. For Eq.
(\ref{17.48}) to qualify as a new WT, we must prove that it possesses the fundamental properties of the usual WTs, such as the admissibility condition, the Parseval theorem and the inversion formula. It is straightforward to evaluate the transform (\ref{17.48}) and its reciprocal transform when $g\left( \alpha,x\right) $\ is the exponential $g_{1}\left( \alpha,x\right) =\exp \left( \alpha^{\ast}\beta-\alpha \beta^{\ast} -\mathtt{i}px\right) ,$ \begin{align} W_{\psi}g_{1} & =\sqrt{\frac{s}{\left \vert \mathrm{a}\right \vert }} e^{\kappa^{\ast}\beta-\kappa \beta^{\ast}-\mathtt{i}pb}\int_{-\infty}^{\infty }\frac{\mathtt{d}x}{\sqrt{\pi}}\int \frac{\mathtt{d}^{2}\alpha}{2\pi }\nonumber \\ & \times \psi^{\ast}(s\alpha-r\alpha^{\ast},\frac{x}{\mathrm{a}} )e^{\alpha^{\ast}\beta-\alpha \beta^{\ast}-\mathtt{i}px}. \label{17.52} \end{align} Making the integration-variable transformations $s\alpha-r\alpha^{\ast}\rightarrow w$, $\frac{x}{\mathrm{a}}\rightarrow x^{\prime},$ which lead to $\mathtt{d} ^{2}\alpha \rightarrow \mathtt{d}^{2}w$ and $\int_{-\infty}^{\infty} \mathtt{d}x\rightarrow \left \vert \mathrm{a}\right \vert \int_{-\infty}^{\infty }\mathtt{d}x^{\prime}$, we find that (\ref{17.52}) becomes \begin{equation} W_{\psi}g_{1}=\sqrt{s\left \vert \mathrm{a}\right \vert }\Phi^{\ast}\left( s^{\ast}\beta^{\ast}-r^{\ast}\beta,\text{ }\mathrm{a}p\right) e^{\kappa ^{\ast}\beta-\kappa \beta^{\ast}-\mathtt{i}pb}, \label{17.53} \end{equation} where $\Phi^{\ast}$ is just the Fourier transform of $\psi^{\ast},$ \begin{align} \Phi^{\ast}\left( s^{\ast}\beta^{\ast}-r^{\ast}\beta,\text{ }\mathrm{a} p\right) & =\int_{-\infty}^{\infty}\frac{\mathtt{d}x^{\prime}}{\sqrt{\pi} }\int \frac{\mathtt{d}^{2}w}{2\pi}\psi^{\ast}\left( w,x^{\prime}\right) \nonumber \\ & \times e^{w^{\ast}\left( s\beta-r\beta^{\ast}\right) -w\left( s^{\ast }\beta^{\ast}-r^{\ast}\beta \right) -\mathtt{i}\mathrm{a}px^{\prime}}.
\label{17.54} \end{align} Then we perform the adjoint WT of (\ref{17.48}), using (\ref{17.49})\ and (\ref{17.53}) we see \begin{align} & W_{\psi}^{\ast}\left( W_{\psi}g_{1}\right) \left( \alpha,x\right) \nonumber \\ & =\sqrt{\frac{s^{\ast}}{\left \vert \mathrm{a}\right \vert }}\int_{-\infty }^{\infty}\frac{\mathtt{d}b}{\sqrt{\pi}}\int \frac{\mathtt{d}^{2}\kappa}{2\pi }W_{\psi}g_{1}\nonumber \\ & \times \psi \left[ s\left( \alpha-\kappa \right) -r\left( \alpha^{\ast }-\kappa^{\ast}\right) ,\frac{x-b}{\mathrm{a}}\right] \nonumber \\ & =\left \vert s\right \vert \left \vert \mathrm{a}\right \vert g_{1}\left( \alpha,x\right) \Phi^{\ast}\left( s^{\ast}\beta^{\ast}-r^{\ast}\beta,\text{ }\mathrm{a}p\right) \int_{-\infty}^{\infty}\frac{\mathtt{d}b^{\prime}} {\sqrt{\pi}}\nonumber \\ & \times \int \frac{\mathtt{d}^{2}\kappa^{\prime}}{2\pi}e^{\kappa^{\prime} \beta^{\ast}-\kappa^{\prime \ast}\beta+\mathtt{i}\mathrm{a}pb^{\prime}} \psi \left( s\kappa^{\prime}-r\kappa^{\prime \ast},b^{\prime}\right) \nonumber \\ & =\left \vert s\right \vert \left \vert \mathrm{a}\right \vert g_{1}\left( \alpha,x\right) \left \vert \Phi \left( s^{\ast}\beta^{\ast}-r^{\ast} \beta,\text{ }\mathrm{a}p\right) \right \vert ^{2}. \label{17.55} \end{align} From Eq. 
(\ref{17.55}) we obtain \begin{align} & \int_{-\infty}^{\infty}\frac{\mathtt{d}\mathrm{a}}{\mathrm{a}^{2}}\int \frac{\mathtt{d}^{2}s}{\left \vert s\right \vert ^{2}}W_{\psi}^{\ast}\left( W_{\psi}g_{1}\right) \left( \alpha,x\right) \nonumber \\ & =g_{1}\left( \alpha,x\right) \int_{-\infty}^{\infty}\frac{\mathtt{d} \mathrm{a}}{\left \vert \mathrm{a}\right \vert }\int \frac{\mathtt{d}^{2} s}{\left \vert s\right \vert }\left \vert \Phi \left( s^{\ast}\beta^{\ast }-r^{\ast}\beta,\text{ }\mathrm{a}p\right) \right \vert ^{2}, \label{17.56} \end{align} which leads to \begin{equation} g_{1}\left( \alpha,x\right) =\frac{\int_{-\infty}^{\infty}\frac {\mathtt{d}\mathrm{a}}{\mathrm{a}^{2}}\int \frac{\mathtt{d}^{2}s}{\left \vert s\right \vert ^{2}}W_{\psi}^{\ast}\left( W_{\psi}g_{1}\right) \left( \alpha,x\right) }{\int_{-\infty}^{\infty}\frac{\mathtt{d}\mathrm{a} }{\left \vert \mathrm{a}\right \vert }\int \frac{\mathtt{d}^{2}s}{\left \vert s\right \vert }\left \vert \Phi \left( s^{\ast}\beta^{\ast}-r^{\ast}\beta,\text{ }\mathrm{a}p\right) \right \vert ^{2}}. \label{17.57} \end{equation} Eq. (\ref{17.57}) implies that we should impose the normalization \begin{equation} \int_{-\infty}^{\infty}\frac{\mathtt{d}\mathrm{a}}{\left \vert \mathrm{a} \right \vert }\int \frac{\mathtt{d}^{2}s}{\left \vert s\right \vert }\left \vert \Phi \left( s^{\ast}\beta^{\ast}-r^{\ast}\beta,\text{ }\mathrm{a}p\right) \right \vert ^{2}=1, \label{17.58} \end{equation} such that the reproducing process\textit{\ }exists \begin{equation} g_{1}\left( \alpha,x\right) =\int_{-\infty}^{\infty}\frac{\mathtt{d} \mathrm{a}}{\mathrm{a}^{2}}\int \frac{\mathtt{d}^{2}s}{\left \vert s\right \vert ^{2}}W_{\psi}^{\ast}\left( W_{\psi}g_{1}\right) \left( \alpha,x\right) . \label{17.59} \end{equation} (\ref{17.58}) may be named the generalized \textit{admissibility condition}. 
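The content of the generalized admissibility condition can be seen in its simplest one-dimensional setting: the substitution $u=\mathrm{a}p$ shows that $\int_{0}^{\infty}\mathtt{d}\mathrm{a}\,\left\vert \Phi \left( \mathrm{a}p\right) \right\vert ^{2}/\mathrm{a}$ is independent of $p>0$, which is why a single normalization suffices for all signals. The Python sketch below (illustrative only; the profile $\Phi(\omega)=\omega e^{-\omega^{2}/2}$ is an arbitrary choice that vanishes at $\omega=0$, not a wavelet used in the text) checks this numerically.

```python
# 1D analogue of the admissibility condition: the scale integral of
# |Phi(a p)|^2 / a over a > 0 does not depend on the frequency p.
import numpy as np

def Phi(w):
    # illustrative wavelet spectrum; it must vanish at w = 0 for the
    # scale integral to converge
    return w * np.exp(-w**2 / 2)

def scale_integral(p, n=200001, amax=40.0):
    a = np.linspace(1e-8, amax, n)
    y = np.abs(Phi(a * p))**2 / a
    # trapezoid rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(a)) / 2)

# substitution u = a*p maps both to int_0^inf u e^{-u^2} du = 1/2
print(scale_integral(1.0), scale_integral(2.7))
```

Both calls return the same value, confirming that the scale integral is a property of the mother wavelet alone.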
Now we can state the corresponding \emph{Parseval theorem}: for any $g$ and $g^{\prime}$ we have \begin{align} & \int_{-\infty}^{\infty}\frac{\mathtt{d}\mathrm{a\mathtt{d}}b} {\mathrm{a}^{2}}\int \frac{\mathtt{d}^{2}\kappa \mathtt{d}^{2}s}{\left \vert s\right \vert ^{2}}W_{\psi}g\left( s,r,\kappa;\mathrm{a},b\right) W_{\psi }^{\ast}g^{\prime}\left( s,r,\kappa;\mathrm{a},b\right) \nonumber \\ & =\int_{-\infty}^{\infty}\mathtt{d}x\int \mathtt{d}^{2}\alpha g\left( \alpha,x\right) g^{\prime \ast}\left( \alpha,x\right) . \label{17.60} \end{align} \emph{Proof:}\textbf{\ }Let $F\left( \beta,p\right) $ and $F^{\prime }\left( \beta,p\right) $ be the Fourier transforms of $g\left( \alpha,x\right) $ and $g^{\prime}\left( \alpha,x\right) $, respectively, \begin{equation} F\left( \beta,p\right) =\int_{-\infty}^{\infty}\frac{\mathtt{d}x}{\sqrt {2\pi}}\int \frac{\mathtt{d}^{2}\alpha}{\pi}g\left( \alpha,x\right) e^{\alpha \beta^{\ast}-\alpha^{\ast}\beta+\mathtt{i}px}. \label{17.61} \end{equation} In order to prove (\ref{17.60}), we first calculate $W_{\psi}g\left( s,r,\kappa;\mathrm{a},b\right) $.
Similarly to the derivation of Eq. (\ref{19}), using (\ref{14}), (\ref{15}) and the inversion formula of (\ref{17.61}), we have \begin{align} & W_{\psi}g\left( s,r,\kappa;\mathrm{a},b\right) \nonumber \\ & =\sqrt{\frac{s}{\left \vert \mathrm{a}\right \vert }}\int_{-\infty}^{\infty }\frac{\mathtt{d}p}{\sqrt{2\pi}}\int \frac{\mathtt{d}^{2}\beta}{\pi}F\left( \beta,p\right) \int_{-\infty}^{\infty}\frac{\mathtt{d}x}{\sqrt{\pi}}\int \frac{\mathtt{d}^{2}\alpha}{2\pi}\nonumber \\ & \times e^{\alpha^{\ast}\beta-\alpha \beta^{\ast}-\mathtt{i}px}\psi^{\ast }\left[ s\left( \alpha-\kappa \right) -r\left( \alpha^{\ast}-\kappa^{\ast }\right) ,\frac{x-b}{\mathrm{a}}\right] \nonumber \\ & =\sqrt{s\left \vert \mathrm{a}\right \vert }\int_{-\infty}^{\infty} \frac{\mathtt{d}p}{\sqrt{2\pi}}\int \frac{\mathtt{d}^{2}\beta}{\pi}F\left( \beta,p\right) \nonumber \\ & \times \Phi^{\ast}\left( s^{\ast}\beta^{\ast}-r^{\ast}\beta,\text{ }\mathrm{a}p\right) e^{\kappa^{\ast}\beta-\kappa \beta^{\ast}-\mathtt{i}pb}. \label{17.62} \end{align} It then follows that \begin{align} & \int_{-\infty}^{\infty}\mathtt{d}b\int \mathtt{d}^{2}\kappa W_{\psi}g\left( s,r,\kappa;\mathrm{a},b\right) W_{\psi}^{\ast}g^{\prime}\left( s,r,\kappa;\mathrm{a},b\right) \nonumber \\ & =\left \vert \mathrm{a}s\right \vert \int_{-\infty}^{\infty}\mathtt{d} p\mathtt{d}p^{\prime}\int \mathtt{d}^{2}\beta \mathtt{d}^{2}\beta^{\prime }F\left( \beta,p\right) F^{\prime \ast}\left( \beta^{\prime},p^{\prime }\right) \nonumber \\ & \times \Phi^{\ast}\left( s^{\ast}\beta^{\ast}-r^{\ast}\beta,\text{ }\mathrm{a}p\right) \Phi \left( s^{\ast}\beta^{\prime \ast}-r^{\ast} \beta^{\prime},\mathrm{a}p^{\prime}\right) \nonumber \\ & \times \int_{-\infty}^{\infty}\frac{\mathtt{d}b}{2\pi}\int \frac {\mathtt{d}^{2}\kappa}{\pi^{2}}e^{\kappa^{\ast}\left( \beta-\beta^{\prime }\right) -\kappa \left( \beta^{\ast}-\beta^{\prime \ast}\right) +\mathtt{i}\left( p^{\prime}-p\right) b}\nonumber \\ & =\left \vert \mathrm{a}s\right \vert \int_{-\infty}^{\infty}\mathtt{d} p\int
\mathtt{d}^{2}\beta F\left( \beta,p\right) F^{\prime \ast}\left( \beta,p\right) \left \vert \Phi \left( s^{\ast}\beta^{\ast}-r^{\ast} \beta,\mathrm{a}p\right) \right \vert ^{2}. \label{17.63} \end{align} Substituting (\ref{17.63}) into the left-hand side (LHS) of (\ref{17.60}) and using (\ref{17.58}) we see \begin{align} \text{LHS of (\ref{17.60})} & =\int_{-\infty}^{\infty}\mathtt{d}p\int \mathtt{d}^{2}\beta F\left( \beta,p\right) F^{\prime \ast}\left( \beta,p\right) \nonumber \\ & \times \int_{-\infty}^{\infty}\frac{\mathtt{d}\mathrm{a}}{\left \vert \mathrm{a}\right \vert }\int \frac{\mathtt{d}^{2}s}{\left \vert s\right \vert }\left \vert \Phi^{\ast}\left( s^{\ast}\beta^{\ast}-r^{\ast}\beta ,\mathrm{a}p\right) \right \vert ^{2}\nonumber \\ & =\int_{-\infty}^{\infty}\mathtt{d}p\int \mathtt{d}^{2}\beta F\left( \beta,p\right) F^{\prime \ast}\left( \beta,p\right) . \label{17.64} \end{align} Thus we complete the proof of Eq.(\ref{17.60}). \emph{Inversion Formula}:\emph{\ }From Eq. (\ref{17.60}) we have \begin{equation} g\left( \alpha,x\right) =\int_{-\infty}^{\infty}\frac{\mathtt{d\mathrm{a} d}b}{\sqrt{\pi}\mathrm{a}^{2}}\int \frac{\mathtt{d}^{2}\kappa \mathtt{d}^{2} s}{2\pi \left \vert s\right \vert ^{2}}W_{\psi}g\left( s,r,\kappa;\mathrm{a} ,b\right) \psi_{s,r,\kappa;\mathrm{a},b}\left( \alpha,x\right) , \label{17.65} \end{equation} that is, the inversion formula represents the original signal $g\left( \alpha ,x\right) $ as a superposition of the wavelet functions $\psi _{s,r,\kappa;\mathrm{a},b}\left( \alpha,x\right) ,$ with the values of the continuous WT $W_{\psi}g\left( s,r,\kappa;\mathrm{a},b\right) $ serving as coefficients. In fact, in Eq.
(\ref{17.48}) when we take $g\left( \alpha,x\right) =\delta \left( \alpha-\alpha^{\prime}\right) \delta \left( \alpha^{\ast}-\alpha^{\prime \ast}\right) \delta \left( x-x^{\prime}\right) ,$ then \begin{equation} W_{\psi}g\left( s,r,\kappa;\mathrm{a},b\right) =\frac{1}{2\pi \sqrt{\pi}} \psi_{s,r,\kappa;\mathrm{a},b}^{\ast}\left( \alpha^{\prime},x^{\prime }\right) . \label{17.66} \end{equation} Substituting (\ref{17.66}) into (\ref{17.64}) yields (\ref{17.65}). We can visualize the new WT $W_{\psi}g\left( s,r,\kappa;\mathrm{a},b\right) $ in the context of quantum optics. Noticing that the generalized squeezing operator $U\left( s,r,\kappa=0;\mathrm{a},b=0\right) $ in (\ref{17.51}) is an image of the combined mapping of\ the classical real dilation transform $x\rightarrow$ $x/\mathrm{a}$ ($\mathrm{a}>0$) and the classical complex symplectic transform $\left( \alpha,\alpha^{\ast}\right) \rightarrow \left( s\alpha-r\alpha^{\ast},s^{\ast}\alpha^{\ast}-r^{\ast}\alpha \right) $ in $\left \vert \alpha,x\right \rangle $ representation, one can use the technique of integration within normal product of operators to perform the integration in (\ref{17.51}) to derive its explicit form (see Eq. (15) in Ref. \cite{r12a}). The transform matrix element of $U\left( s,r,\kappa =0;\mathrm{a},b=0\right) $ in the entangled state representation $\left \vert \eta \right \rangle $ is \begin{align} & \left \langle \eta \right \vert U\left( s,r,\kappa=0;\mathrm{a},b=0\right) \left \vert \eta^{\prime}\right \rangle \nonumber \\ & =\sqrt{\frac{s}{\mathrm{a}}}\int_{-\infty}^{\infty}\frac{\mathtt{d}x} {\sqrt{\pi}}\int \frac{\mathtt{d}^{2}\alpha}{2\pi}\left \langle \eta \right. \left \vert s\alpha-r\alpha^{\ast},\frac{x}{\mathrm{a}}\right \rangle \left \langle \alpha,x\right \vert \left. \eta^{\prime}\right \rangle . \label{17.67} \end{align} In Fock space $\left \vert \eta=\eta_{1}+\mathtt{i}\eta_{2}\right \rangle $ is two-mode EPR entangled state in (\ref{3.11}). 
Then using (\ref{17.45}) and (\ref{3.11}), we obtain \begin{equation} \left \langle \eta \right. \left \vert \alpha,x\right \rangle =\frac{1}{\sqrt{2} }\exp \left[ -\frac{\alpha^{2}+\left \vert \alpha \right \vert ^{2}}{4}-\frac {1}{2}\eta_{1}^{2}+\eta_{1}\alpha-\mathtt{i}\eta_{2}x\right] . \label{17.68} \end{equation} Substituting (\ref{17.68}) into (\ref{17.67}) and using (\ref{5.2})$,$ we obtain \begin{align} & \left \langle \eta \right \vert U\left( s,r,\kappa=0;\mathrm{a},b=0\right) \left \vert \eta^{\prime}\right \rangle \nonumber \\ & =\frac{\pi}{\sqrt{\mathrm{a}}}\delta \left( \eta_{2}^{\prime}-\eta _{2}/\mathrm{a}\right) \frac{1}{\sqrt{2\mathtt{i}\pi B}}\nonumber \\ & \times \exp \left[ \frac{\mathtt{i}}{2B}\left( A\eta_{1}^{\prime2} -2\eta_{1}\eta_{1}^{\prime}+D\eta_{1}^{2}\right) \right] . \label{17.69} \end{align} which is just the kernel of a mixed lens$-$Fresnel transform, i.e., the variable $\eta_{1}$ of the object experiences a generalized Fresnel transform, while $\eta_{2}$ undergoes a lens transformation. Thus, based on $\left \vert \alpha,x\right \rangle $ we have introduced SDWT which involves both the real variable dilation-transform\ and complex variable symplectic transform, corresponding to the lens-Fresnel mixed transform in classical optics. \section{Fresnel-Hadamard combinatorial transformation} In the theoretical study of quantum computer, of great importance is the Hadamard transform. This operation is $n$ Hadamard gates acting in parallel on $n$ qubits. The Hadamard transform produces an equal superposition of all computational basis states. From the point of view of Deutsch-Jozsa quantum algorithm, the Hadamard transform is an example of the $N=2^{n}$ quantum Fourier transform, which can be expressed as \cite{r48} \begin{equation} \left \vert j\right \rangle =\frac{1}{\sqrt{2^{n}}} {\displaystyle \sum \limits_{k=0}^{2^{n}-1}} e^{2\pi ijk/2^{n}}\left \vert k\right \rangle . 
\label{j18.1} \end{equation} Now the continuous Hadamard transform, used to go from the coordinate basis $\left \vert x\right \rangle $ to the momentum basis, is defined as \cite{r49} \begin{equation} \mathfrak{F}\left \vert x\right \rangle =\frac{1}{\sqrt{\pi}\sigma}\int _{-\infty}^{\infty}dy\exp \left( 2ixy/\sigma^{2}\right) \left \vert y\right \rangle , \label{j18.2} \end{equation} where $\sigma$ is the scale length. $\mathfrak{F}$ is named the Hadamard operator. Using the completeness of $\int_{-\infty}^{\infty}dx\left \vert x\right \rangle \left \langle x\right \vert =1,$ we have \begin{equation} \mathfrak{F}=\frac{1}{\sqrt{\pi}\sigma} {\displaystyle \iint_{-\infty}^{\infty}} dxdy\exp \left( 2ixy/\sigma^{2}\right) \left \vert y\right \rangle \left \langle x\right \vert . \label{j18.3} \end{equation} The above two transforms (the Fresnel transform and the Hadamard transform) are independent of each other; an interesting question thus naturally arises: can we combine the two transforms together? To put it another way, can we construct a combinatorial operator which plays the roles of both the Fresnel transform and the Hadamard transform for two independent optical modes? The answer is affirmative; in this section we construct the so-called Fresnel-Hadamard combinatorial transform. \subsection{The Hadamard-Fresnel combinatorial operator} Based on the coherent-entangled representation $\left \vert \alpha ,x\right \rangle $, and enlightened by Eqs. (\ref{5.4}) and (\ref{j18.2}), we now construct the following ket-bra integration \cite{r50} \begin{equation} U=\frac{\sqrt{s}}{\sqrt{\pi}\sigma}\int \frac{d^{2}\alpha}{\pi} {\displaystyle \iint} dxdy\exp \left( 2ixy/\sigma^{2}\right) \left \vert s\alpha-r\alpha^{\ast },y\right \rangle \left \langle \alpha,x\right \vert , \label{j18.4} \end{equation} we name $U$ the Hadamard-Fresnel combinatorial operator.
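As an aside (an illustrative check, not part of the original text), the kernel of the continuous Hadamard transform can be probed numerically: a standard Gaussian integration gives $\frac{1}{\sqrt{\pi}\sigma}\int_{-\infty}^{\infty}dx\,e^{2ixy/\sigma^{2}}e^{-x^{2}/\sigma^{2}}=e^{-y^{2}/\sigma^{2}}$, i.e. the Gaussian $e^{-x^{2}/\sigma^{2}}$ is a fixed point of $\mathfrak{F}$. The Python sketch below verifies this by direct quadrature (the value of $\sigma$ and the sample points are arbitrary choices).

```python
# Check that the Gaussian exp(-x^2/sigma^2) is reproduced by the
# Hadamard kernel exp(2ixy/sigma^2)/(sqrt(pi)*sigma).
import numpy as np

sigma = 1.3
x = np.linspace(-12 * sigma, 12 * sigma, 400001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / sigma**2)            # Gaussian profile

for y in (0.0, 0.7, -1.9):
    kernel = np.exp(2j * x * y / sigma**2)
    Fpsi = np.sum(kernel * psi) * dx / (np.sqrt(np.pi) * sigma)
    print(y, Fpsi, np.exp(-y**2 / sigma**2))   # transform equals original
```

The agreement at every sampled $y$ reflects the familiar fact that a Fourier-type kernel matched to the Gaussian's width leaves it invariant.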
Substituting Eq.(\ref{j18.2}) into Eq.(\ref{j18.4}), and using the two-mode vacuum projector's normally ordered form $\left \vert 00\right \rangle \left \langle 00\right \vert =\colon \exp \left[ -a_{1}^{+}a_{1}-a_{2}^{+} a_{2}\right] \colon$ as well as the IWOP technique, we get \begin{equation} U=\frac{\sqrt{s}}{\sqrt{\pi}\sigma}\colon \int \frac{d^{2}z}{\pi}A\left( z,z^{\ast}\right) {\displaystyle \iint} dxdyB(x,y)e^{C}\colon, \end{equation} where \[ C\equiv-\frac{\left( a_{1}^{+}+a_{2}^{+}\right) ^{2}+\left( a_{1} +a_{2}\right) ^{2}}{4}-a_{1}^{+}a_{1}-a_{2}^{+}a_{2}, \] \[ B(x,y)\equiv \exp \left[ -\frac{y^{2}+x^{2}}{2}+y\left( a_{1}^{+}+a_{2} ^{+}\right) +x\left( a_{1}+a_{2}\right) +\frac{2ixy}{\sigma^{2}}\right] , \] and \[ A\left( z,z^{\ast}\right) \equiv \exp \left[ -\frac{\left \vert sz-rz^{\ast }\right \vert ^{2}+\left \vert z\right \vert ^{2}}{4}+\frac{sz-rz^{\ast}} {2}\left( a_{1}^{\dagger}-a_{2}^{\dagger}\right) +\frac{z^{\ast}\left( a_{1}-a_{2}\right) }{2}\right] , \] all of which lie within the normal ordering symbol $\colon \colon$. Now, performing the integration over $dxdy$ within $\colon \colon$ and remembering that all creation operators commute with all annihilation operators within the normal ordering symbol (the essence of the IWOP technique) and can thus be treated as c-numbers during the integration, we finally obtain \begin{align} U & =\frac{1}{\sqrt{s^{\ast}}}\frac{4\sqrt{\pi}\sigma}{\sqrt{\sigma^{4}+4} }\colon \exp \left \{ -\frac{1}{2}\frac{r}{s^{\ast}}\left( \frac{a_{1}^{\dag }-a_{2}^{\dag}}{\sqrt{2}}\right) ^{2}+\frac{\sigma^{4}-4}{2\left( \sigma ^{4}+4\right) }\left( \frac{a_{1}^{\dag}+a_{2}^{\dag}}{\sqrt{2}}\right) ^{2}\right. \nonumber \\ & +\left( \frac{1}{s^{\ast}}-1\right) \frac{a_{1}^{\dag}-a_{2}^{\dag} }{\sqrt{2}}\frac{a_{1}-a_{2}}{\sqrt{2}}+\left( \frac{4i\sigma^{2}}{\sigma ^{4}+4}-1\right) \frac{a_{1}^{\dag}+a_{2}^{\dag}}{\sqrt{2}}\frac{a_{1}+a_{2} }{\sqrt{2}}\nonumber \\ & +\left.
\frac{1}{2}\frac{r^{\ast}}{s^{\ast}}\left( \frac{a_{1}-a_{2} }{\sqrt{2}}\right) ^{2}+\frac{\sigma^{4}-4}{2\left( \sigma^{4}+4\right) }\left( \frac{a_{1}+a_{2}}{\sqrt{2}}\right) ^{2}\right \} \colon, \label{j18.5} \end{align} which is the normally ordered form of the Hadamard-Fresnel combinatorial operator. \subsection{The properties of the Hadamard-Fresnel operator} Note that \begin{equation} \left[ \frac{a_{1}-a_{2}}{\sqrt{2}},\frac{a_{1}^{\dag}+a_{2}^{\dag}}{\sqrt {2}}\right] =0, \label{j18.6} \end{equation} \ and \begin{equation} \left[ \frac{a_{1}-a_{2}}{\sqrt{2}},\frac{a_{1}^{\dag}-a_{2}^{\dag}}{\sqrt {2}}\right] =1,\text{ \ }\left[ \frac{a_{1}+a_{2}}{\sqrt{2}},\frac {a_{1}^{\dag}+a_{2}^{\dag}}{\sqrt{2}}\right] =1, \label{j18.7} \end{equation} so $\frac{a_{1}-a_{2}}{\sqrt{2}}$ can be considered as a mode independent of the mode $\frac{a_{1}^{\dag}+a_{2}^{\dag}}{\sqrt {2}}$; thus we have the operator identity \begin{equation} \text{exp}\left[ f\left( a_{1}^{\dag}\pm a_{2}^{\dag}\right) \left( a_{1}\pm a_{2}\right) \right] =\colon \exp[\frac{1}{2}\left( e^{2f} -1\right) \left( a_{1}^{\dag}\pm a_{2}^{\dag}\right) \left( a_{1}\pm a_{2}\right) ]\colon.
\label{j18.8} \end{equation} Using (\ref{j18.8}) we can rewrite Eq.(\ref{j18.5}) as \begin{equation} U=U_{2}U_{1}=U_{1}U_{2}, \label{j18.9} \end{equation} where \begin{align} U_{1} & =\frac{4\sqrt{\pi}\sigma}{\sqrt{\sigma^{4}+4}}\exp \left[ \frac{\sigma^{4}-4}{2\left( \sigma^{4}+4\right) }\left( \frac{a_{1}^{\dag }+a_{2}^{\dag}}{\sqrt{2}}\right) ^{2}\right] \nonumber \\ & \exp \left[ \frac{a_{1}^{\dag}+a_{2}^{\dag}}{\sqrt{2}}\frac{a_{1}+a_{2} }{\sqrt{2}}\ln \frac{4i\sigma^{2}}{\left( \sigma^{4}+4\right) }\right] \exp \left[ \frac{\sigma^{4}-4}{2\left( \sigma^{4}+4\right) }\left( \frac{a_{1}+a_{2}}{\sqrt{2}}\right) ^{2}\right] \label{j18.10} \end{align} and \begin{align} U_{2} & =\exp \left[ -\frac{r}{2s^{\ast}}\left( \frac{a_{1}^{\dag} -a_{2}^{\dag}}{\sqrt{2}}\right) ^{2}\right] \exp \left[ \left( \frac {a_{1}^{\dag}-a_{2}^{\dag}}{\sqrt{2}}\frac{a_{1}-a_{2}}{\sqrt{2}}+\frac{1} {2}\right) \ln \frac{1}{s^{\ast}}\right] \nonumber \\ & \exp \left[ \frac{r^{\ast}}{2s^{\ast}}\left( \frac{a_{1}-a_{2}}{\sqrt{2} }\right) ^{2}\right] . \label{j18.11} \end{align} Here $U_{2}$ is the Fresnel operator for the mode $\frac{a_{1}-a_{2}}{\sqrt{2}},$ while $U_{1}$ is named the Hadamard operator for the mode $\frac{a_{1}+a_{2}}{\sqrt{2} }.$ It then follows that \begin{align} U\frac{a_{1}-a_{2}}{\sqrt{2}}U^{-1} & =U_{2}\frac{a_{1}-a_{2}}{\sqrt{2} }U_{2}^{-1}=s^{\ast}\frac{a_{1}-a_{2}}{\sqrt{2}}+r\frac{a_{1}^{\dag} -a_{2}^{\dag}}{\sqrt{2}},\nonumber \\ U\frac{a_{1}^{\dag}-a_{2}^{\dag}}{\sqrt{2}}U^{-1} & =U_{2}\frac{a_{1}^{\dag }-a_{2}^{\dag}}{\sqrt{2}}U_{2}^{-1}=r^{\ast}\frac{a_{1}-a_{2}}{\sqrt{2} }+s\frac{a_{1}^{\dag}-a_{2}^{\dag}}{\sqrt{2}}, \label{j18.12} \end{align} from which we see that the Hadamard-Fresnel combinatorial operator can play the role of the Fresnel transformation for $\frac{a_{1}-a_{2}}{\sqrt{2}}.$ Physically, $\frac{a_{1}-a_{2}}{\sqrt{2}}$ and $\frac{a_{1}+a_{2}}{\sqrt{2}}$ can be two output fields of a beamsplitter.
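Since, by (\ref{j18.6})-(\ref{j18.7}), the combined modes $\frac{a_{1}\pm a_{2}}{\sqrt{2}}$ behave as two independent single modes, the operator identity (\ref{j18.8}) is the single-mode formula $e^{fa^{\dagger}a}=\colon \exp[(e^{f}-1)a^{\dagger}a]\colon$ applied to each of them. The Python sketch below (illustrative; the truncation dimension and the value of $f$ are arbitrary) verifies the single-mode identity in a truncated Fock space, expanding the normally ordered exponential as $\sum_{k}\frac{(e^{f}-1)^{k}}{k!}a^{\dagger k}a^{k}$.

```python
# Truncated-Fock-space check of exp(f a^dag a) = : exp[(e^f - 1) a^dag a] :
import numpy as np
from math import factorial

D = 25                                    # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, D)), 1)  # annihilation operator
adag = a.T

f = 0.37
lhs = np.diag(np.exp(f * np.arange(D)))   # exp(f N) is diagonal: e^{f n}

lam = np.exp(f) - 1.0
rhs = np.zeros((D, D))
for k in range(D):
    # k-th term of the normally ordered series: lam^k a^dag^k a^k / k!
    op = np.linalg.matrix_power(adag, k) @ np.linalg.matrix_power(a, k)
    rhs += (lam**k / factorial(k)) * op

print(np.max(np.abs(lhs - rhs)))          # agreement up to rounding
```

The agreement follows from the diagonal matrix elements: $\langle n|a^{\dagger k}a^{k}|n\rangle = n!/(n-k)!$, so the series sums to $(1+\lambda)^{n}=e^{fn}$ with $\lambda=e^{f}-1$.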
In a similar way, we have \begin{align} U\frac{a_{1}+a_{2}}{\sqrt{2}}U^{-1} & =U_{1}\frac{a_{1}+a_{2}}{\sqrt{2} }U_{1}^{-1}=\frac{1}{4i\sigma^{2}}\left[ \left( \sigma^{4}+4\right) \frac{a_{1}+a_{2}}{\sqrt{2}}-\left( \sigma^{4}-4\right) \frac{a_{1}^{\dag }+a_{2}^{\dag}}{\sqrt{2}}\right] ,\nonumber \\ U\frac{a_{1}^{\dag}+a_{2}^{\dag}}{\sqrt{2}}U^{-1} & =U_{1}\frac{a_{1}^{\dag }+a_{2}^{\dag}}{\sqrt{2}}U_{1}^{-1}=\frac{1}{4i\sigma^{2}}\left[ -\left( \sigma^{4}+4\right) \frac{a_{1}^{\dag}+a_{2}^{\dag}}{\sqrt{2}}+\left( \sigma^{4}-4\right) \frac{a_{1}+a_{2}}{\sqrt{2}}\right] \label{j18.13} \end{align} which for the quadratures $X_{i}=\left( a_{i}+a_{i}^{\dag}\right) /\sqrt{2},$ \ $P_{i}=\left( a_{i}-a_{i}^{\dag}\right) /\sqrt{2}i,$ $\left( i=1,2\right) ,$ leads to \begin{equation} U\frac{X_{1}+X_{2}}{2}U^{-1}=\frac{\sigma^{2}}{4}\left( P_{1}+P_{2}\right) ,\text{ \ \ }U\left( P_{1}+P_{2}\right) U^{-1}=-\frac{4}{\sigma^{2}} \frac{X_{1}+X_{2}}{2}, \label{j18.14} \end{equation} from which we see that the Hadamard-Fresnel combinatorial operator also plays the role of exchanging the total momentum and the average position, followed by a squeezing transform with squeezing parameter $\frac{\sigma^{2}} {4}.$ The mutual transform in (\ref{j18.14}) can be realized by \begin{align} e^{i\frac{\pi}{2}\left( a_{1}^{\dag}a_{1}+a_{2}^{\dag}a_{2}\right) }\left( X_{1}+X_{2}\right) e^{-i\frac{\pi}{2}\left( a_{1}^{\dag}a_{1}+a_{2}^{\dag }a_{2}\right) } & =P_{1}+P_{2},\text{ }\label{j18.15}\\ \text{\ }e^{i\frac{\pi}{2}\left( a_{1}^{\dag}a_{1}+a_{2}^{\dag}a_{2}\right) }\left( P_{1}+P_{2}\right) e^{-i\frac{\pi}{2}\left( a_{1}^{\dag}a_{1} +a_{2}^{\dag}a_{2}\right) } & =-\left( X_{1}+X_{2}\right) \text{\ } \label{j18.16} \end{align} while the two-mode squeezing operator is $S_{2}=\exp \left[ \ln \frac{2} {\sigma^{2}}\left( a_{1}^{\dag}a_{2}^{\dag}-a_{1}a_{2}\right) \right] ,$ therefore \begin{equation} U_{1}=S_{2}^{-1}e^{i\frac{\pi}{2}\left( a_{1}^{\dag}a_{1}+a_{2}^{\dag} a_{2}\right) }.
\label{j18.17} \end{equation} From Eq.(\ref{j18.9}) and Eq.(\ref{j18.17}), we see that the Hadamard-Fresnel combinatorial operator can be decomposed as \begin{equation} U=U_{2}S_{2}^{-1}e^{i\frac{\pi}{2}\left( a_{1}^{\dag}a_{1}+a_{2}^{\dag} a_{2}\right) }=S_{2}^{-1}e^{i\frac{\pi}{2}\left( a_{1}^{\dag}a_{1} +a_{2}^{\dag}a_{2}\right) }U_{2}. \label{j18.18} \end{equation} It can also be seen that $U$ is unitary, $U^{+}U=UU^{+}=1$. In this section, we have introduced the Fresnel-Hadamard combinatorial operator by virtue of the IWOP technique. This unitary operator plays the roles of both the Fresnel transformation for the mode $\frac{a_{1}-a_{2}}{\sqrt{2}}$ and the Hadamard transformation for the mode $\frac{a_{1}+a_{2}}{\sqrt{2}},$ and the two transformations commute. We have shown that the two transformations are concisely expressed in the coherent-entangled state representation as a projective operator in integration form. We also found that the Fresnel-Hadamard operator can be decomposed as $U_{2}S_{2} ^{-1}e^{i\frac{\pi}{2}\left( a_{1}^{\dag}a_{1}+a_{2}^{\dag}a_{2}\right) },$ i.e., into a Fresnel operator $U_{2}$, a two-mode squeezing operator $S_{2}^{-1}$ and the total momentum-average position exchanging operator. Physically, $\frac {a_{1}-a_{2}}{\sqrt{2}}$ and $\frac{a_{1}+a_{2}}{\sqrt{2}}$ can be two output fields of a beamsplitter. If an optical device can be designed for the Fresnel-Hadamard combinatorial transform, then it can be directly applied to these two output fields of the beamsplitter. In summary, although quantum optics and classical optics are very different, both in the mathematical tools they employ and in their conceptual viewpoints (quantum optics concerns the wave-particle duality of the optical field with an emphasis on its nonclassical properties, whereas classical optics deals with the distribution and propagation of light waves), it is a new exploration to link them systematically.
In this review, via the route of developing Dirac's symbolic method, we have revealed some links between them by mapping classical symplectic transformations in the coherent state representation onto quantum unitary operators (GFO); throughout our discussion the IWOP technique has been indispensable for the derivations. We have resorted to the quantum optical interpretation of various classical optical transformations by adopting quantum optics concepts such as coherent states, squeezed states, and entangled states. Remarkably, we have endowed the complex fractional Fourier transform and the Hankel transform with quantum optical representation-transform interpretations. Our formalism, starting from quantum optics theory, not only provides a quantum mechanical account of various classical optical transformations, but also leads back to some new classical transformations, e.g., the entangled Fresnel transform and the Fresnel-wavelet transform, which may have realistic optical interpretations in the future. As Dirac predicted, functions that have been applied in classical optical problems may be translated into an operator language in quantum mechanics, and vice versa. We expect that the content of this work may play some role in quantum state engineering, i.e., the preparation and design of optical field states. Once the correspondence in this respect between the two distinct fields is established, the power of Dirac's symbolic method can be fully displayed to solve some new problems in classical optics, e.g., to find new eigenmodes of some optical transforms, or to extend the research region of classical optics theoretically by introducing new transforms (for example, the entangled Fresnel transforms), which may attract the attention of experimentalists, who may get new ideas to implement these new classical optical transformations. \textbf{Acknowledgement:} This work was supported by the National Natural Science Foundation of China, Grant No.
10775097 and 10874174, and a grant from the Key Programs Foundation of Ministry of Education of China (No. 210115), and the Research Foundation of the Education Department of Jiangxi Province of China (grant no. GJJ10097). \end{document}
Person authentication based on eye-closed and visual stimulation using EEG signals. Hui Yen Yap (ORCID: orcid.org/0000-0002-1367-3226), Yun-Huoy Choo, Zeratul Izzah Mohd Yusoh & Wee How Khoh. The study of Electroencephalogram (EEG)-based biometrics has gained the attention of researchers due to the unique electrical activity of an individual's neurons. However, the practical application of EEG-based biometrics is not currently widespread and there are some challenges to its implementation. Nowadays, the evaluation of a biometric system is user driven. Usability is one of the concerning issues that determine the success of the system. The basic elements of the usability of a biometric system are effectiveness, efficiency and user satisfaction. Apart from the mandatory consideration of the biometric system's performance, users also need an easy-to-use and easy-to-learn authentication system. Thus, to satisfy these user requirements, this paper proposes a reasonable acquisition period and employs a consumer-grade EEG device to authenticate an individual, comparing the performance of two acquisition protocols: eyes-closed (EC) and visual stimulation. A self-collected database of eight subjects was utilized in the analysis. The recording process was divided into two sessions, a morning and an afternoon session. In each session, the subject was requested to perform two different tasks: EC and visual stimulation. The pairwise correlation of the preprocessed EEG signals of each electrode channel was determined and a feature vector was formed. A support vector machine (SVM) was then used for classification. In the performance analysis, promising results were obtained: the EC protocol achieved an accuracy of 83.70–96.42%, while the visual stimulation protocol attained an accuracy of 87.64–99.06%. These results demonstrate the feasibility and reliability of our acquisition protocols with consumer-grade EEG devices.
The growing interest in brain-computer interfaces (BCI) has increased the importance of understanding brain functions. BCI refers to a communication pathway between an external device and the human brain without involving any physical movements, and covers both medical and nonmedical uses [1]. Authentication study is one example of BCI, which uses brain signals as a biometric identifier. Authentication is essential in our daily lives and is performed in almost all human-to-computer interactions to verify a user's identity through passwords, pin codes, fingerprints, card readers, retina scanners, etc. With the growth of technology, advanced biometric authentication has been developed. Physiological biometrics use a person's physical characteristics to identify an individual, such as face, fingerprint, palm print, retina, iris, etc. This type of biometric can hardly be replaced once it has been compromised. On the other hand, behavioral biometrics analyze the digital patterns produced when performing a specific task during authentication. They are harder to mimic compared with the former type, and they are revocable and replaceable when compromised [2]. Beyond these traditional types of biometrics, human cognitive characteristics can be used to develop an alternative to conventional physiological and behavioral biometrics [3]. This approach analyzes an individual's cognitive behavior (biosignals), such as a person's emotional and cognitive state, for the purposes of identification and verification. The motivation for choosing brain signals for authentication lies in the desire for a more privacy-compliant solution compared to other biometric traits. Brain signals possess specific characteristics which are not present in most of the widely used biometrics. They are unique and difficult for an imposter to capture from a distance, thus increasing their resistance against spoofing attacks.
One of the commonly used methods for recording brain signals is Electroencephalography, also known as EEG. It records the brain's electrical activity by measuring voltage variations within the brain [1]. It is also a straightforward and non-invasive method to record brain electrical activity, as it only requires placing electrodes on the scalp's surface. Brain activity can be obtained through EEG recordings using specifically designed protocols, including the resting state, motor imaginary, non-motor imaginary and stimulation protocols [4]. The resting-state protocol is easy to operate as it only requires users to rest for a few minutes in either an eyes-closed (EC) or eyes-open (EO) state while the EEG data are recorded. On the other hand, motor imaginary requires the users to mentally simulate a physical action, such as movements of the right hand, left hand, foot and others. EEG data can also be acquired by asking the user to perform non-motor imaginary tasks, for instance, mental calculation, internal speech or singing. Finally, the stimulation protocol presents the users with a series of stimuli and the electrical response of the users is recorded. Various stimuli have been proposed and applied in the literature for this protocol, such as pictures, words, audio, etc. Despite promising results being reported in the literature, the utilization of EEG-based biometric systems is not currently widespread in practical applications. One of the reasons lies in the implementation and operation of this biometric approach. The performance relies on the design of the acquisition protocol [5]. Such approaches require a long period of time for the users to undergo EEG data recording, which is impractical in real life, as users would not be willing to spend that much time on the authentication process. Moreover, Ruiz Blondet et al. and Wu et al.
[6, 7] argued that most studies used high-density EEG devices, which were very costly, and the setup process was time-consuming. Typically, a biometric system is expected to be accessed by users frequently. Its fundamental usability elements are effectiveness, efficiency, and user satisfaction [8]. Effectiveness refers to how well a user can perform a task. Efficiency measures how quickly a user can perform the task with a reasonably low error rate. Finally, satisfaction measures the users' perceptions and feelings towards the application. With these requirements, users may not only need a reliable system, but also a user-friendly and affordable EEG device during the acquisition process. A consumer-grade wireless EEG device with fewer channels can be a potential alternative to the clinical-grade device. It should also strike a balance between security and user-friendliness in real-life applications [9]. Thus, this paper aims to propose an acquisition protocol that employs a consumer-grade EEG device with a reasonable enrolment period. In addition, the reliability of the EEG signals recorded via a consumer-grade device is also examined through different sessions with regard to two acquisition protocols, namely the eyes-closed (EC) and visual stimulation protocols. The rest of the paper is organized as follows. Section 2 discusses the literature review. Section 3 presents the proposed approach. Section 4 shows the experiment results and performance evaluation, and Sect. 5 discusses the findings of the proposed system. Finally, Sect. 6 provides concluding remarks and suggests some future work. From the beginning of the twentieth century, EEG analysis has been mainly employed in the medical field to study brain diseases such as stroke, brain tumor, epilepsy, Alzheimer's, Parkinson's, etc. [10]. In particular, it has been heavily employed in BCI in the last decade, where the main objective is to help patients with severe neuromuscular disorders.
BCI applications function by either observing the users' state or allowing the users to express their intentions; meanwhile, the users' brain signals are recorded and sent to a computer system for further analysis. The result is then translated into a command and the system is instructed to complete the intended task [1]. Recently, BCI research has been extended further to cover several applications, including authentication and security [4]. Cognitive biometrics is a new technology that utilizes brain activity to authenticate an individual. The brain's activity can be recorded by measuring the blood flow in the brain or by measuring the electrical activity of the brain's neurons. EEG is widely considered for use in security areas as the signals are unique and possess distinctive characteristics which are not present in other commonly used biometrics such as face, iris, palm prints and fingerprints. Due to its high privacy compliance, EEG-based biometrics are robust against spoofing attacks, as it is practically impossible for an imposter to capture the brain signal from a distance [10]. EEG signals are also sensitive to stress; thus, it is hard to force a person to reproduce brain activity when they are panicked. In general, biometrics must fulfill four requirements: universality, permanence, uniqueness and collectability [11]. Universality refers to the requirement that each person should naturally possess the characteristic being measured. Permanence requires that the characteristics of a person stay the same over time for the purpose of matching. Uniqueness is the requirement that the characteristics of a person be unique and distinguishable from one another. Finally, collectability requires that the characteristics of a person be measurable with a capturing device. Previous studies have made significant efforts to prove the viability of EEG as a biometric identifier [10, 12,13,14]. Ruiz Blondet et al.
[6] further emphasized that, in terms of collectability, the design of the EEG acquisition protocol should be user-friendly. This can be done by reducing the number of electrodes to make the design more feasible and closer to real-world applications. Several EEG acquisition protocols have been designed and proposed in the literature to obtain specific brain responses of interest. The main objective has been to study the neural mechanisms of information processing in environmental perception and during complex cognitive operations [15]. These acquisition protocols can generally be divided into two categories: resting state and stimulation [16]. For the resting state protocol, the user is required to sit on a chair and rest for a few minutes in either an eyes-closed (EC) or eyes-open (EO) state as instructed, while the brain signals of the user are recorded. To the best of our knowledge, [17] was the first work to propose an EEG-based biometric using a resting state protocol. The authors recorded EEG signals from four subjects while they performed an EC activity that lasted for 3 continuous minutes. The spectral values of the signals were calculated using the Fast Fourier Transform (FFT). The Alpha frequency band (7–12 Hz) was obtained and further sub-divided into three overlapping sub-bands. Classification scores ranging from 80 to 95% were obtained, which proved that EEG signals can be used as a biometric trait. All sub-bands were informative and no frequency band was reported to have an extra benefit over the others. In La Rocca et al. [10], the repeatability of the EEG signal was addressed. A 'resting state' protocol with both EC and EO was designed to acquire raw EEG signals from nine healthy subjects in two different sessions spaced 1 to 3 weeks apart. The signals from the 54 electrodes attached to the scalp of each subject were continuously recorded.
The raw EEG signals were filtered by an anti-aliasing FIR filter before being presented in four sub-bands from 0.5 to 30 Hz. A common average referencing (CAR) filter was then employed to minimize the artifacts. Each preprocessed signal was modelled with an autoregressive model, using reflection coefficients to generate the feature vector, and a linear classifier was then employed for classification. In the evaluation, different electrode combinations were tested and the results showed a high degree of repeatability over the time interval. In Ma et al. [18], the EEG data were adopted from a public data set. A total of 10 subjects were enrolled and they were asked to perform 55 s of EC and EO tasks, respectively, using a device with 64 electrodes. The recorded EEG signals of each task were segmented into 55 trials with a 1-s frame length. 50 trials were used for training and the rest were used for testing purposes. A convolutional neural network (CNN) was applied for feature extraction and classification. The findings showed that the suggested approach yielded a high accuracy of 88% for a 10-class classification. Besides, inter-personal differences could be discovered using a very low-frequency band of 0 to 2 Hz. The second EEG acquisition protocol is based on the stimulation of the subjects by an external event. After stimulation, the electrical response of the subjects is recorded through the nervous system. A typically employed stimulation protocol in EEG-based biometrics is the Event-Related Potential (ERP). It is a time-locked deflection in the ongoing brain activity after exposure to an external event. The event can be a sensory, visual or audio stimulus [1]. In Palaniappan and Ravi [19], a study was conducted to assess the feasibility of ERP using visual stimuli. 20 subjects participated in the study.
Their signals were obtained from 61 electrodes placed on the scalp while they looked at typical black images with white lines of drawn objects such as an aeroplane, a banana, a ball, etc. Recorded signals containing eye-blink artifacts with magnitudes above 100 µV were removed. Besides, those signals were also de-noised through Principal Component Analysis (PCA). Spectral features consisting of power in the gamma band (30 to 50 Hz) were extracted and classified through a Simplified Fuzzy ARTMAP (SFA) neural network (NN). The results showed an average classification accuracy of 94.18%, which proved the proposed method's potential for recognizing individuals. The stability of the EEG signals was evaluated in [14] using a visual stimulation protocol to record raw EEG signals from 45 subjects. The subjects were presented with several acronyms (for example: DVD, TV and TN) intermixed with other lexical types. The experiments consisted of three different sessions carried out over 6 months. For the third session, only nine subjects returned for data acquisition. A hardware filter was applied to reduce the influence of DC shifts and bootstrapping was used to generate extra features. Different classifiers such as cross-correlation, support vector machine (SVM) and divergent autoencoder (DIVA) were adopted. The findings verified the permanence of the EEG characteristics, and it was found that the brain signals of the subjects could remain stable over a relatively long period of time. Besides, Ruiz-Blondet et al. [20] suggested that using ERP may provide more accurate results in EEG-based biometrics, as its elicitation process allows for some control over the user's cognitive state during EEG data recording sessions. The EEG data from 50 subjects were acquired using 30 sensors. The Cognitive Event-Related Potential Biometric Recognition (CEREBRE) protocol was designed to obtain unique responses from the subjects' brain systems.
This protocol includes different categories of stimuli such as sine gratings, low-frequency words, food images, words, celebrities and oddballs. Besides, subjects were also asked to remain in a resting state and undergo pass-thought sessions. The duration of the entire experiment was roughly one and a half hours. The study did not apply any artifact rejection or feature extraction method; only simple cross-correlation was used for classification. The results showed that all stimulus types achieved high accuracy. In a recent study, Sabeti et al. [21] investigated the subjects' features using resting (EO) and ERP acquisition protocols. Each subject was required to perform a task in the EO state for 2 min, where no stimulus was imposed for the first task. For the second task, audio stimuli were randomly applied and the subjects were requested to discriminate between different pitch levels. The EEG recording for the second task took around 20 min. The EEG signals were filtered using a bandpass filter ranging from 0.5 to 45 Hz. Several features such as spectral coherence, wavelet coefficients and correlation were extracted and evaluated using SVM, K-Nearest Neighbors (KNN) and Random Forest classifiers. Results showed that correlation was the most discriminative of the extracted features for user authentication. The implementation of the resting protocol in previous studies has shown that the procedure is convenient, but an individual's mental state is uncontrollable when EEG data are acquired in different sessions. Thus, visual stimulation is proposed to provide more reliable biometric authentication, as this approach allows the experimenter to control the individual's cognitive state at the time of acquisition. However, due to the small size of an ERP, a large number of trials is needed to reach the desired accuracy, which leads to the users undergoing a lengthy EEG acquisition period [20].
EEG-based systems are still far from being commercialized as they still face several challenges [5]. Usability is one of the challenges that should gain more attention, as it is an important principle determining the success of the system. Users tend to use a system if it is convenient and easy to use. However, most current data acquisition processes require a lengthy setup time, especially for a wired EEG recording device. Besides that, the user has to place a large number of electrodes on their scalp using conductive gel to reduce skin impedance. As an alternative, [7] suggested replacing the cumbersome wired devices with consumer-grade wireless EEG devices, which could be more practicable in real life. However, these devices possess a limitation that needs to be considered: the signal quality can be relatively inferior compared to research-grade devices. Moreover, the lengthy acquisition period is another issue that needs to be addressed, as the participants could lose patience during the acquisition process, which can lead to distortion of the signal or reluctance to take part in the data enrolment process. Therefore, acquisition protocols that utilize a consumer-grade device to acquire EEG signals within a reasonably short period of time are proposed in this work. The performance of an EEG-based biometric depends on a proper design of the acquisition protocol. The portability of the EEG device and the acquisition period are considered to improve the usability and practicability of the system. The proposed system comprises 5 components: data acquisition, preprocessing, signal segmentation, feature extraction, and classification. Figure 1 illustrates the flowchart of the proposed method (Fig. 1 caption: Overview of proposed method). Acquisition protocol: Conventionally, EEG signals are recorded using clinical-grade EEG equipment. This equipment is expensive and inconvenient, as the setting up can take a tremendous amount of time.
Hence, in this work, a consumer-grade type of EEG device is used as an alternative to improve the user experience. The EEG signals are collected from 8 healthy volunteers (2 female, 6 male, aged from 18 to 33) using the Emotiv EPOC+ wireless headset, as illustrated in Fig. 2. It comprises 14 integrated electrodes with two reference sensors, where each sensor is located at the standard positions of the International 10–20 system, as shown in Fig. 3. (Fig. 2 caption: EEG Emotiv EPOC+ wireless headset. Fig. 3 caption: Framework of brainwave user recognition.) Before the acquisition process, a brief introduction to the purpose of the study was given to the subject. In addition, the subject was also allowed to see the changes in their EEG signal when they blinked their eyes or moved their body. The purpose of this demonstration was to show the subject that any eye movements and muscle tension can impact their brain waves. Thus, they were requested to avoid big movements and remain as still as possible. The entire data acquisition process was conducted in a standard enclosed room. The recording process was divided into morning and afternoon sessions to assess the stability of the consumer-grade EEG equipment when recording EEG signals over different sessions. In each session, the subject was required to perform two different tasks (eyes-closed and visual stimulation), while data were recorded at a 256 Hz sampling rate. Task 1: Eyes-closed (EC)—the subject was seated on a chair with both arms resting. Before the enrollment, the subject was instructed to keep the mind as calm as possible and remain in a resting state with eyes closed. The recording started 10 s after the subject closed the eyes and remained resting. EEG signals were recorded for 30 s continuously and then the recording process was stopped. Task 2: Visual stimulation—the subject was requested to remain seated on the same chair without any major movements after completing Task 1. An LED screen of size 17″ was placed in front of the subject.
The subject was guided to sit comfortably at a certain distance from the screen. During the recording process, a series of stimuli with 120 single words was displayed to the subject. The subject was requested to focus on and interpret each stimulus silently at all times, and no big body movements were allowed. However, they were allowed to blink their eyes to reduce tiredness during the enrolment process. The stimulation design was mainly focused on word presentation, as the subject's semantic memory might provide distinctive biometric properties. Each stimulus was a word consisting of four to seven letters that the subject could easily understand. A stimulus was displayed on the computer screen for 1 s followed by a 1-s black screen, as illustrated in Fig. 4. It took approximately 4 min to show all 120 words to the user (including the black screens), after which the recording process was stopped. Along the process, each Inter-Stimulus Interval (ISI) could be segmented into parts (coined as trials in this work) consisting of 0.5 s of black screen, followed by 1 s of displayed stimulus and another 0.5 s of black screen, as illustrated in Fig. 4. (Fig. 4 caption: Visual stimulation using word presentation.) A total of eight subjects contributed to the EEG data acquisition process, and a total of four well-collected data sets were obtained from the two sessions as follows: Session 1: eyes-closed data set, S1ec; Session 1: visual stimulation data set, S1s; Session 2: eyes-closed data set, S2ec; Session 2: visual stimulation data set, S2s. Preprocessing and segmentation: EEGLAB, an interactive MATLAB toolbox, was used in this study for preprocessing and segmentation purposes. Before performing feature extraction, unwanted artifacts and unnecessary information were removed from the collected EEG signals, thereby improving the signal-to-noise ratio. Filtering was applied to the continuous EEG data before epoching and artifact removal.
A Finite Impulse Response (FIR) linear filter was adopted to remove the direct-current shifts of the recorded EEG signals, with the passband set from 1 to 55 Hz. An Automatic Artifact Removal (AAR) procedure was then applied to data sets S1s and S2s to remove the ocular artifacts in the recorded EEG signals. The AAR is one of the toolboxes available as an EEGLAB plug-in [22] and is used to correct the ocular effects within EEG signals. No artifact rejection was applied to the S1ec and S2ec data sets, as the EEG signals in these data sets were collected in the resting state without eye and muscle movements. After the removal of the artifacts, the EEG signals were segmented into small parts, named trials. For the eyes-closed data sets (S1ec and S2ec), the first 5 s of the signal, which contained inconsistencies, were discarded. The remaining EEG signals were then segmented into 25 trials, with each trial containing a 1-s frame length (256 sample points). The frame length was selected experimentally based on an existing study [10]. On the other hand, for the visual stimulation data sets (S1s and S2s), the signals were epoched and ERPs were formed for each stimulus over a 2-s window around its presentation (refer to Fig. 4), resulting in 512 sample points for each trial. In other words, each trial contained the 1-s stimulus embedded between 0.5 s of black screen at the beginning and 0.5 s at the end of the trial. After this, epoch rejection was applied to remove some trials that appeared to contain significant artifacts, resulting in a range of 100–120 trials for each subject after the segmentation process. Feature extraction: Cross-correlation was considered for processing the EEG signals. Cross-correlation is a measure of the degree to which two series are correlated. It measures how closely two different observables are related to each other at the same or different times by considering a time lag [23].
If \(x\left[ n \right]\) and \(y\left[ n \right]\) are two discrete signals of length \(N\), then the correlation of \(x\left[ n \right]\) with respect to \(y\left[ n \right]\) is given as: $$r_{xy} \left[ l \right] = \mathop \sum \limits_{t = - \infty }^{\infty } x\left[ t \right]y\left[ {t - l} \right],$$ where \(l\) is the lag or delay, which indicates the time shift, and \(t\) indexes the samples of the signal. If both signals are discrete functions of period \(N\), then the range from − ∞ to ∞ can be replaced by an interval of length \(N\) from t0 = 0 to t0 + N. The correlation values between the 14 channels of each trial were computed in a pairwise manner. The maximum of the cross-correlation over all trials for each pair was extracted from the correlation values, denoted \(\max\). The variation of the value ranges of the features can deteriorate the performance of the overall system. Thus, all features were normalized to the range between 0 and 1, so that each feature contributed proportionately to the final distance. Assuming that a feature is denoted as \(x\), the normalization can be defined as: $$x_{{{\text{norm}}}} = \frac{{x - x_{\min } }}{{x_{\max } - x_{\min } }},$$ where \(x_{{{\text{norm}}}}\) is the normalized feature. A feature vector \(v\) is then constructed by concatenating the maximum, mean \(\mu\) and variance \(\sigma^{2}\) of the normalized features: $$v = \left( {\max \left( {x_{{{\text{norm}}}} } \right),\mu \left( {x_{{{\text{norm}}}} } \right),\sigma^{2} \left( {x_{{{\text{norm}}}} } \right)} \right).$$ SVM classification: A good classification method is essential to accept or reject a claimed person's access to the system based on an input. An efficient and effective model is necessary for predicting the classes from the data. In general, the learning process is carried out using training data chosen from the sample data together with their class labels. Researchers have widely used SVM to classify EEG signals.
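The feature-extraction steps just described (pairwise cross-correlation between channels, the maximum over lags, and min-max normalization) can be sketched in Python. This is an illustrative sketch, not the authors' MATLAB/EEGLAB code, and the toy channel values are hypothetical:

```python
# Illustrative sketch of the feature pipeline described above (not the
# authors' code): pairwise cross-correlation between channels, the
# maximum over all lags, and min-max normalization to [0, 1].

def cross_correlation(x, y, lag):
    """r_xy[l] = sum_t x[t] * y[t - lag] over the valid overlap."""
    n = len(x)
    return sum(x[t] * y[t - lag] for t in range(max(lag, 0), min(n, n + lag)))

def max_cross_correlation(x, y):
    """Maximum of r_xy[l] over all possible lags."""
    n = len(x)
    return max(cross_correlation(x, y, l) for l in range(-(n - 1), n))

def normalize(features):
    """Min-max normalization so each feature lies in [0, 1]."""
    lo, hi = min(features), max(features)
    return [(f - lo) / (hi - lo) for f in features]

# Three toy "channels" (hypothetical values); the study uses 14 channels,
# giving 14 * 13 / 2 = 91 channel pairs per trial.
a = [0.0, 1.0, 2.0, 1.0, 0.0]
b = [1.0, 2.0, 1.0, 0.0, 0.0]   # roughly `a` shifted by one sample
c = [0.0, 0.0, 1.0, 0.0, 0.0]
channels = [a, b, c]

# One max-cross-correlation feature per unordered channel pair.
feats = [max_cross_correlation(channels[i], channels[j])
         for i in range(len(channels)) for j in range(i + 1, len(channels))]
print(feats)             # [6.0, 2.0, 2.0]
print(normalize(feats))  # [1.0, 0.0, 0.0]
```

In the study itself, the maximum is taken over all trials for each pair, and the mean and variance of the normalized features are concatenated into the final vector \(v\).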
SVM is a classification method that separates test data with different class labels by learning the structure of the training data and constructing hyperplanes in a multidimensional space based on those data [24]. SVM adopts a set of mathematical functions known as kernels. The function of a kernel is to receive the data as input and transform them into the desired form. The polynomial kernel is one of the common kernels used in the classification of non-linear models. It has good generalization ability and a low learning capacity when the data are not linearly separable. Since EEG signals are non-stationary and polynomial SVM has shown good classification performance in previous EEG studies [25, 26], polynomial SVM was used in this study as the classifier for EEG patterns. Since this work aims to recognize an individual using EEG signals, it leads to a multi-class SVM prediction. Multi-class prediction is more complex than binary prediction, because the classification algorithm has to consider more separation boundaries or relations [27]. The present study considered two decomposition strategies: one-vs-one (OVO) and one-vs-all (OVA). OVO is a pairwise strategy that splits a multi-class classification data set into binary classification problems, one per pair of classes. The number of generated models depends on the number of classes: it is given by n(n − 1)/2, where n is the number of classes. If n equals 5, the total number of generated models is 10. OVA, in contrast, splits a multi-class classification data set into one binary classification problem per class. OVA produces the same number of learned models as the number of classes: if the number of classes is 5, the number of generated models is also 5 [28].
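The model counts for the two decomposition strategies can be checked with a short sketch (illustrative only; the function names are ours, not from the paper):

```python
# Number of binary models trained by each SVM decomposition strategy.
# OVO: one classifier per unordered pair of classes -> n(n - 1)/2.
# OVA: one classifier per class -> n.

def n_models_ovo(n_classes: int) -> int:
    return n_classes * (n_classes - 1) // 2

def n_models_ova(n_classes: int) -> int:
    return n_classes

print(n_models_ovo(5), n_models_ova(5))  # 10 5  (the example in the text)
print(n_models_ovo(8), n_models_ova(8))  # 28 8  (the study's 8 subjects)
```

For the 8 subjects enrolled in this study, OVO therefore trains 28 binary classifiers while OVA trains 8.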
Experiment results: In the experimental analysis, the k-fold cross-validation technique was adopted to generate fair and averaged performance results, where k was set to 5 in this study. In this cross-validation, the data were divided into 5 distinct subsets and the procedure was repeated for 5 iterations. In each iteration, one subset was selected for testing, while the remaining subsets (k − 1) were used for training. It is noted that the distribution of trials among the subsets was randomized and the selection of each subset in each iteration for training and testing purposes was mutually exclusive. The average accuracy was determined for each fold. The average accuracy and its standard deviation, which describes the amount of variability or dispersion around the average, are reported in this section. The experiments were conducted on both the morning and afternoon sessions' data sets. In addition, to assess the stability of the signals across the different sessions, the trials from both sessions were also merged to produce another data set, named the combined sessions, during the evaluation process. The performance metrics, including accuracy, precision, sensitivity, specificity and F1-score, are reported. These metrics were computed based on four parameters: true positive (TP), false positive (FP), true negative (TN) and false negative (FN), as follows: $${\text{Accuracy}} = \frac{{{\text{TP}} + {\text{TN}}}}{{{\text{TP}} + {\text{TN}} + {\text{FP}} + {\text{FN}}}},$$ $${\text{Precision}} = \frac{{{\text{TP}}}}{{{\text{TP}} + {\text{FP}}}},$$ $${\text{Sensitivity}} = \frac{{{\text{TP}}}}{{{\text{TP}} + {\text{FN}}}},$$ $${\text{Specificity}} = \frac{{{\text{TN}}}}{{{\text{TN}} + {\text{FP}}}},$$ $$F1\;{\text{score}} = \frac{{2*{\text{precision}}*{\text{sensitivity}}}}{{{\text{precision}} + {\text{sensitivity}}}}.$$ The averaged classification results of the 4 data sets across the different sessions are summarized in Tables 1, 2 and 3.
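These metrics follow directly from the four confusion-matrix counts; the sketch below uses the standard definitions (F1 as the harmonic mean of precision and sensitivity, i.e. with a factor of 2 in the numerator). The counts used in the example are toy values, not data from the study:

```python
# Evaluation metrics from confusion-matrix counts (TP, FP, TN, FN),
# following the standard definitions.

def metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)          # a.k.a. recall
    specificity = tn / (tn + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, precision, sensitivity, specificity, f1

# Toy counts for illustration only.
acc, prec, sens, spec, f1 = metrics(tp=90, fp=10, tn=85, fn=15)
print(round(acc, 3), round(prec, 3), round(sens, 3), round(spec, 3), round(f1, 3))
# 0.875 0.9 0.857 0.895 0.878
```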
As observed from Tables 1, 2 and 3, the visual stimulation task outperformed the EC task in all three experiments, including the morning, afternoon and combined sessions. It achieved a very promising accuracy performance, especially for the morning and afternoon sessions (S1S,OVO = 96.91%, S1S,OVA = 99.06%, S2S,OVO = 97.71%, S2S,OVA = 99.05%), as compared to the EC task (S1EC,OVO = 83.70%, S1EC,OVA = 82.73%, S2EC,OVO = 86.69%, S2EC,OVA = 96.42%). The accuracy performance of visual stimulation for the combined sessions also outperformed the EC task, with accuracies of 87.64% (S1 + S2S, OVO) and 96.56% (S1 + S2S, OVA), while the EC task had accuracies of 86.61% (S1 + S2EC, OVO) and 96.41% (S1 + S2EC, OVA) for OVO and OVA, respectively. Table 1: Experimental results for Task 1 and Task 2 in the morning session, S1. Table 2: Experimental results for Task 1 and Task 2 in the afternoon session, S2. Table 3: Experimental results for Task 1 and Task 2 in the combined sessions, S1 + S2. Based on this comparison, it is noticed that the visual stimulation task performed better than the EC task. Statistical tests were carried out in this study to measure the significance of the difference between the tasks. First, the Shapiro–Wilk test was used to evaluate the acquisition methods' results and examine whether they obeyed a normal distribution. If the data are normally distributed, the paired t test is applied to assess the consistency of classification performance; otherwise, the Wilcoxon Rank Sum test is considered. These calculations were performed using the SPSS software. The Shapiro–Wilk test showed that only the average classification accuracies for OVO–SVM obeyed the normal distribution in the morning, afternoon and combined sessions. The probabilities are summarized in Table 4. As these data were normally distributed, a paired t test was conducted to compare the differences in the OVO classification measurements between the EC task and the visual stimulation task.
The paired t test showed that visual stimulation performed better than EC in the morning and afternoon sessions, with both classification measurements significant (p < 0.05). Table 4 Normal distribution results On the other hand, based on Table 4, the distribution of the OVA classification accuracy did not resemble a normal distribution. Thus, the Wilcoxon Rank Sum test, the non-parametric alternative to the paired t test, was applied. The Wilcoxon Rank Sum test results indicate that visual stimulation performed better than EC in both the morning and afternoon sessions, with a significant difference at the 0.05 level. Meanwhile, visual stimulation also outperformed EC when the morning session was combined with the afternoon session; however, the difference was not significant. The results are reported in Table 5. Table 5 Paired t test and Wilcoxon Rank Sum test for classification measurements (p values) The results in Table 5 show p values ranging from 0.000 to 0.017 in the morning and afternoon sessions. Therefore, there is sufficient evidence to conclude that the classification accuracy achieves better performance on average when the EEG data are acquired through the visual stimulation task in separate time sessions. The visual stimulation task appears to be effective in terms of better capabilities in recognizing the claimed users. However, it was also observed that the combined sessions did not inherit similar characteristics to both previous sessions, where the p values were not significant (p > 0.05). These results imply that the intra-class data variability does not significantly impact the signal's stability, thus leading to similar results between visual stimulation and EC. On the other hand, it was also observed that OVA outperformed OVO in most performance metrics for both the EC and visual stimulation tasks in all experiments. The best accuracy for OVA, considering both the visual stimulation and EC tasks, was 99.06%.
OVA's average accuracy was higher than OVO's by 12.5% for the EC task and 4.59% for the visual stimulation task. The comparison results between the two decomposition strategies in SVM are reported in Table 6. However, a degradation in specificity was noticed in OVA for all experiment tasks. This may be due to the OVA strategy, which builds a single binary classifier per class: the samples of a particular class are assigned as positives while the samples of all remaining classes are assigned as negatives in each iteration. Assuming n is the number of classes, OVA repeats n times, and each time one class is defined as the positive class while the remaining (n − 1) classes are denoted as negatives. This creates class imbalance, as the number of negative samples is significantly larger than that of positive samples. It increases the chance for the system to rule out negative samples mistakenly, thus decreasing the true-negative rate. Table 6 Classification accuracy of OVO and OVA The overall results obtained in this study reveal that EEG signals are an effective biometric identifier for user authentication. As shown in Figs. 5, 6 and 7, the visual stimulation task had better accuracy performance than the EC task. In addition, the results for the EC task had a higher standard deviation than those for the visual stimulation task. It is believed that the subjects' minds in the EC protocol are uncontrollable without the existence of a stimulus, leading to instability of the produced signal. The findings also reveal that the visual stimulation with ERP protocol is better than the EC protocol, as ERP allows the experimenter to tightly control the user's cognitive state. Although performance degradation was observed when combining the morning and afternoon sessions, the specificity was still sustained within 81.68–98.23% and 80.53–98.08% for the visual stimulation and EC tasks, respectively.
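The source of the OVA imbalance described above is easy to make concrete. The sketch below (illustrative only; function names and the 8-subject/30-trial setup are ours, not from the paper) counts the classifiers each decomposition strategy trains and the positive/negative split an OVA classifier sees.

```python
def ovo_pairs(n_classes):
    """One-vs-one: one binary classifier per unordered pair of classes."""
    return [(i, j) for i in range(n_classes) for j in range(i + 1, n_classes)]

def ova_split(labels, positive_class):
    """One-vs-all: the chosen class is positive, every other class is negative."""
    n_pos = sum(1 for y in labels if y == positive_class)
    n_neg = sum(1 for y in labels if y != positive_class)
    return n_pos, n_neg

# Illustrative setup: 8 subjects, 30 trials each
labels = [c for c in range(8) for _ in range(30)]
pairs = ovo_pairs(8)                         # 28 OVO classifiers
n_pos, n_neg = ova_split(labels, 0)          # 30 positives vs 210 negatives
```

With n classes, OVO trains n(n − 1)/2 balanced pairwise classifiers, while each of the n OVA classifiers faces a 1 : (n − 1) positive-to-negative ratio, which is exactly the imbalance blamed for the specificity drop above.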
These results indicate that the proposed system sustains a high ratio of true negatives to total negatives in the data set. A comparison of the proposed method with existing works is listed in Table 7. Fig. 5 Comparison results of EEG acquisition protocols for the morning session, S1 Fig. 6 Comparison results of EEG acquisition protocols for the afternoon session, S2 Fig. 7 Comparison of EEG acquisition protocols for the morning and afternoon sessions, S1 + S2 Table 7 Performance comparison of the existing works As seen from Table 7, the proposed method obtained better results than most existing works. Although [20] reported perfect accuracy, it is the least practical approach, as its acquisition process took one and a half hours to retrieve EEG responses from individuals based on six types of stimulus. In terms of EEG recording devices, most reported studies preferred research-grade devices for their reliability. The proposed method uses a consumer-grade device, which is shown to be capable of recognizing individuals even in separate sessions; it is also cost-effective in practical applications. Furthermore, the acquisition duration is one of the key reasons that makes the proposed protocol more applicable in a real-world environment. Past works required a minimum of 55 s for the EO or EC task and 20 min for the ERP task. The proposed study reduced the duration to 30 s for the EC task and 4 min for the ERP task, while still achieving very promising results in both cases. Moreover, tests across different sessions were conducted to assess the stability of the EEG signals, and the results demonstrate the suitability of the proposed acquisition protocol in the authentication field. This paper discussed an EEG-based recognition system's acquisition protocols and a performance comparison between the EC and visual stimulation protocols.
We proposed using a consumer-grade EEG device for individual authentication in our study. A reasonable acquisition period was proposed to ensure the feasibility of EEG-based biometrics in the future. In this study, cross-correlation was computed to measure the correlation between two different EEG channel signals, and good results were obtained when the classification was carried out using cross-correlation together with SVM. The results show that the visual stimulation protocol achieved better performance in terms of classification and consistency than the EC protocol. There is, however, potential to apply incremental learning to model intra-class variability over time. In addition, OVA performed better than OVO. It can be noted that the distribution of OVA's classification accuracy did not resemble a normal distribution due to the small sample size; therefore, a non-parametric test was needed to compare the differences in the classification measurements between the proposed methods. The results indicate that visual stimulation performed better than EC in both the morning and afternoon sessions, with a significant difference at the 0.05 level. Larger sample classes are recommended for further comparison between OVO and OVA. Future works include investigating the extraction and selection of more reliable features from EEG signals with a larger sample size and applying other classification methods to improve intra- and inter-individual EEG stability.
Abbreviations
AAR: Automatic Artifact Removal
BCI: Brain–computer interface
DIVA: Divergent Auto Encoder
EC: Eyes-closed
EEG: Electroencephalography
EO: Eyes-open
ERP: Event-Related Potential
FFT: Fast Fourier Transform
FN: False negative
FP: False positive
KNN: K-Nearest Neighbors
NN: Neural network
OVA: One-vs-all
OVO: One-vs-one
PCA: Principal component analysis
SVM: Support vector machine
TN: True negative
TP: True positive
Abdulkader SN, Atia A, Mostafa MSM (2015) Brain computer interfacing: applications and challenges. Egypt Inform J 16(2):213–230.
https://doi.org/10.1016/j.eij.2015.06.002 Khoh WH, Pang YH, Teoh ABJ (2019) In-air hand gesture signature recognition system based on 3-dimensional imagery. Multimed Tools Appl 78(6):6913–6937. https://doi.org/10.1007/s11042-018-6458-7 Traore I, Alshahrani M, Obaidat MS (2018) State of the art and perspectives on traditional and emerging biometrics: a survey. Secur Priv. https://doi.org/10.1002/spy2.44 Yap HY, Choo YH, Khoh WH (2017) Overview of acquisition protocol in EEG based recognition system. In: Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics), vol 10654 LNAI, pp 129–138. https://doi.org/10.1007/978-3-319-70772-3_12 Chan HL, Kuo PC, Cheng CY, Chen YS (2018) Challenges and future perspectives on electroencephalogram-based biometrics in person recognition. Front Neuroinform. https://doi.org/10.3389/fninf.2018.00066 Ruiz Blondet MV, Laszlo S, Jin Z (2015) Assessment of permanence of non-volitional EEG brainwaves as a biometric. In: 2015 IEEE international conference on identity, security and behavior analysis, ISBA 2015. https://doi.org/10.1109/ISBA.2015.7126359 Wu Q, Zeng Y, Zhang C, Tong L, Yan B (2018) An EEG-based person authentication system with open-set capability combining eye blinking signals. Sensors 18(2):335. https://doi.org/10.3390/s18020335 Theofanos M, Stanton B, Wolfson C (2008) Usability and biometrics: ensuring successful biometric systems. In: International workshop on usability and biometrics Zeynali M, Seyedarabi H (2019) EEG-based single-channel authentication systems with optimum electrode placement for different mental activities. Biomed J 42(4):261–267. https://doi.org/10.1016/j.bj.2019.03.005 La Rocca D, Campisi P, Scarano G (2013) On the repeatability of EEG features in a biometric recognition framework using a resting state protocol. In: Biosignals. (January), pp 419–428 Jain AK, Ross A, Prabhakar S (2004) An introduction to biometric recognition. 
IEEE Trans Circuits Syst Video Technol 14(1):4–20. https://doi.org/10.1109/TCSVT.2003.818349 Campisi P, Scarano G, Babiloni F, DeVico Fallani F, Colonnese S, Maiorana E, Forastiere L (2011) Brain waves based user recognition using the "eyes closed resting conditions" protocol. In: 2011 IEEE international workshop on information forensics and security, WIFS 2011. https://doi.org/10.1109/WIFS.2011.6123138 Brigham K, Kumar BVKV (2010) Subject identification from Electroencephalogram (EEG) signals during imagined speech. In: IEEE 4th international conference on biometrics: theory, applications and systems, BTAS 2010. https://doi.org/10.1109/BTAS.2010.5634515 Armstrong BC, Ruiz-Blondet MV, Khalifian N, Kurtz KJ, Jin Z, Laszlo S (2015) Brainprint: assessing the uniqueness, collectability, and permanence of a novel method for ERP biometrics. Neurocomputing 166:59–67. https://doi.org/10.1016/j.neucom.2015.04.025 Campisi P, Rocca DL (2014) Brain waves for automatic biometric-based user recognition. IEEE Trans Inf Forensics Secur 9(5):782–800. https://doi.org/10.1109/TIFS.2014.2308640 Huang H, Hu L, Xiao F, Du A, Ye N, He F (2019) An EEG-based identity authentication system with audiovisual paradigm in IoT. Sensors. https://doi.org/10.3390/s19071664 Poulos M, Rangoussi M, Alexandris N (1999) Neural network based person identification using EEG features. In: 1999 IEEE international conference on acoustics, speech, and signal processing. Proceedings. ICASSP99 (Cat. No. 99CH36258), vol 2, pp 1117–1120. https://doi.org/10.1109/ICASSP.1999.759940 Ma L, Minett JW, Blu T, Wang WSY (2015) Resting state EEG-based biometrics for individual identification using convolutional neural networks. In: Proceedings of the annual international conference of the IEEE engineering in medicine and biology society, EMBS, vol 2015-Novem, pp 2848–2851. https://doi.org/10.1109/EMBC.2015.7318985 Palaniappan R, Ravi KVR (2003) A new method to identify individuals using signals from the brain. 
In ICICS-PCM 2003—proceedings of the 2003 joint conference of the 4th international conference on information, communications and signal processing and 4th Pacific-Rim conference on multimedia, vol 3, pp 1442–1445. https://doi.org/10.1109/ICICS.2003.1292704 Ruiz-Blondet MV, Jin Z, Laszlo S (2016) CEREBRE: a novel method for very high accuracy event-related potential biometric identification. IEEE Trans Inf Forensics Secur 11(7):1618–1629. https://doi.org/10.1109/TIFS.2016.2543524 Sabeti M, Boostani R, Moradi E (2020) Event related potential (ERP) as a reliable biometric indicator: a comparative approach. Array 6:1–7. https://doi.org/10.1016/j.array.2020.100026 Gomez-Herrero G (2007) Automatic artifact removal (AAR) toolbox for MATLAB Mayor D, Davey N, Mayor D (2018) The correlation between EEG signals as measured in different positions on scalp varying with distance. Procedia Comput Sci 123:92–97. https://doi.org/10.1016/j.procs.2018.01.015 Burges CJC (1998) A tutorial on support vector machines for pattern recognition. Data Min Knowl Discov 2(2):121–167. https://doi.org/10.1023/A:1009715923555 Zhang Z, Parhi KK (2015) Seizure prediction using polynomial SVM classification. In: Proceedings of the annual international conference of the IEEE engineering in medicine and biology society, EMBS, pp 5748–5751. https://doi.org/10.1109/EMBC.2015.7319698 Ghumman MK, Singh S, Singh N, Jindal B (2021) Optimization of parameters for improving the performance of EEG-based BCI system. J Reliab Intell Environ 10(1):523–531. https://doi.org/10.1007/s40860-020-00117-y Joseph SJ, Robbins KR, Zhang W, Rekaya R (2010) Comparison of two output-coding strategies for multi-class tumor classification using gene expression data and latent variable model as binary classifier. Cancer Inform 9:39–48. https://doi.org/10.4137/cin.s3827 Abdul Raziff AR, Sulaiman MN, Mustapha N, Perumal T, Mohd Pozi MS (2017) Multiclass classification method in handheld based smartphone gait identification. 
J Telecommun Electron Comput Eng 9(2–12):59–65 Yap, H.Y., Choo, YH., Mohd Yusoh, Z.I. et al. Person authentication based on eye-closed and visual stimulation using EEG signals. Brain Inf. 8, 21 (2021). https://doi.org/10.1186/s40708-021-00142-4
\begin{document} \title{One smoothing property of the scattering map of the KdV on $\mathbb{R}$} \author{A. Maspero\footnote{Institut f\"ur Mathematik, Universit\"at Z\"urich, Winterthurerstrasse 190, CH-8057 Z\"urich, \texttt{[email protected]}}, B. Schaad \footnote{Department of Mathematics, University of Kansas, 405 Snow Hall, 1460 Jayhawk Blvd, Lawrence, Kansas 66045-7594, \texttt{[email protected]} }} \maketitle \begin{abstract} In this paper we prove that in appropriate weighted Sobolev spaces, in the case of no bound states, the scattering map of the Korteweg-de Vries (KdV) on $\mathbb{R}$ is a perturbation of the Fourier transform by a regularizing operator. As an application of this result, we show that the difference of the KdV flow and the corresponding Airy flow is 1-smoothing. \end{abstract} \section{Introduction} In the last decades the problem of a rigorous analysis of the theory of infinite dimensional integrable Hamiltonian systems in 1-space dimension has been widely studied. These systems come up in two setups: (i) on compact intervals (finite volume) and (ii) on infinite intervals (infinite volume). The dynamical behaviour of the systems in the two setups have many similar features, but also distinct ones, mostly due to the different manifestation of dispersion. The analysis of the finite volume case is now quite well understood. Indeed, Kappeler with collaborators introduced a series of methods in order to construct rigorously Birkhoff coordinates (a cartesian version of action-angle variables) for 1-dimensional integrable Hamiltonian PDE's on $\mathbb{T}$. The program succeeded in many cases, like Korteweg-de Vries (KdV) \cite{kamkdv}, defocusing and focusing Nonlinear Schr\"odinger (NLS) \cite{kappeler_grebert,kappeler_lohrmann_topalov_zung}. 
In each case considered, it has been proved that there exists a real analytic symplectic diffeomorphism, the {\em Birkhoff map}, between two scales of Hilbert spaces which conjugate the nonlinear dynamics to a linear one.\\ An important property of the Birkhoff map $\Phi$ of the KdV on $\mathbb{T}$ and its inverse $\Phi^{-1}$ is the semi-linearity, i.e., the nonlinear part of $\Phi$ respectively $\Phi^{-1}$ is $1$-smoothing. A local version of this result was first proved by Kuksin and Perelman \cite{kuksinperelman} and later extended globally by Kappeler, Schaad and Topalov \cite{beat2}. It plays an important role in the perturbation theory of KdV -- see \cite{kuksin_damped_kdv} for randomly perturbed KdV equations and \cite{Erdogan_Tzirakis_forced} for forced and weakly damped problems. The semi-linearity of $\Phi$ and $\Phi^{-1}$ can be used to prove $1$-smoothing properties of the KdV flow in the periodic setup \cite{beat2}. The analysis of the infinite volume case was developed mostly during the '60-'70 of the last century, starting from the pioneering works of Gardner, Greene, Kruskal and Miura \cite{gardner1,gardner6} on the KdV on the line. In these works the authors showed that the KdV can be integrated by a {\em scattering transform} which maps a function $q$, decaying sufficiently fast at infinity, into the spectral data of the operator $L(q) := -\partial_x^2 + q$. Later, similar results were obtained by Zakharov and Shabat for the NLS on $\mathbb{R}$ \cite{Zakharov_Shabat}, by Ablowitz, Kaup, Newell and Segur for the Sine-Gordon equation \cite{AKNS}, and by Flaschka for the Toda lattice with infinitely many particles \cite{flaschka}. Furthermore, using the spectral data of the corresponding Lax operators, action-angle variables were (formally) constructed for each of the equations above \cite{zakharov_faddeev,zakharov_manakov,mcLaughlin,mcLaughlin_erratum}. See also \cite{Novikov_book,Faddeev_Takhtajan,ablowitz_book} for monographs about the subject. 
Despite so much work, the analytic properties of the scattering transform and of the action-angle variables in the infinite volume setup are not yet completely understood. In the present paper we discuss these properties, at least for a special class of potentials. \\ The aim of this paper is to show that for the KdV on the line, the scattering map is an analytic perturbation of the Fourier transform by a 1-smoothing nonlinear operator. With the applications we have in mind, we choose a setup for the scattering map so that the spaces considered are left invariant under the KdV flow. Recall that the KdV equation on $\mathbb{R}$ \begin{equation} \label{KdV} \begin{cases} \partial_t u(t,x) = -\partial_x^3u(t,x) - 6u(t,x) \partial_x u(t,x) \ , \\ u(0,x) = q(x) \ , \end{cases} \end{equation} is globally in time well-posed in various function spaces such as the Sobolev spaces $H^N\equiv H^N(\mathbb{R},\mathbb{R}), N\in \mathbb{Z}_{\geq 2}$ ( e.g. \cite{bona,kato79,kenig_ponce_vega0}), as well as on the weighted spaces $H^{2N}\cap L^2_M,$ with integers $ N\geq M \geq 1$ \cite{kato}, endowed with the norm $\|\cdot\|_{H^{2N}}+\|\cdot\|_{L^2_M}$. Here $L^2_M \equiv L^2_M(\mathbb{R}, \mathbb{C})$ denotes the space of complex valued $L^2$-functions satisfying $ \|q\|_{L^2_M}:=\left(\int_{-\infty}^\infty( 1+|x|^2)^{ M} |q(x)|^2dx\right)^{\frac{1}{2}}< \infty.$ Introduce for $q \in L^2_M$ with $M \geq 4$ the Schr\"odinger operator $L(q):= -\partial_x^2 + q$ with domain $H^2_\mathbb{C}$, where, for any integer $N \in \mathbb{Z}_{\geq 0}$, $H^N_\mathbb{C}:= H^N(\mathbb{R}, \mathbb{C})$. For $k \in \mathbb{R}$ denote by $f_1(q,x,k)$ and $f_2(q,x,k)$ the Jost solutions, i.e. solutions of $L(q)f=k^2 f $ with asymptotics $f_1(q,x,k)\sim e^{ikx},\; x\to \infty, \; f_2(q,x,k)\sim e^{-ikx},\; x\to -\infty$. 
As $f_i(q,\cdot, k), \; f_i(q,\cdot, -k)$, $i= 1,2$, are linearly independent for $k \in \mathbb{R} \setminus \{0\}$, one can find coefficients $S(q,k)$, $W(q,k)$ such that for $k \in \mathbb{R} \setminus \{0\}$ one has \begin{equation} \begin{aligned} \label{refl_tras_rel2} f_2(q,x,k)=& \frac{S(q,-k)}{2ik}f_1(q,x,k) + \frac{W(q,k)}{2ik}f_1(q,x,-k) \ , \\ f_1(q,x,k)=& \frac{S(q,k)}{2ik}f_2(q,x,k) + \frac{W(q,k)}{2ik}f_2(q,x,-k) \ . \end{aligned} \end{equation} It's easy to verify that the functions $W(q,\cdot)$ and $S(q, \cdot)$ are given by the wronskian identities \begin{equation} \label{wronskian_W} W(q,k):=\left[f_2, f_1\right](q,k):= f_2(q, x,k)\partial_x f_1(q,x,k) - \partial_x f_2(q, x,k) f_1(q, x,k) \ , \end{equation} and \begin{equation} \begin{aligned} \label{wronskian} &S(q, k):=\left[f_1(q,x,k), f_2(q,x,-k)\right], \end{aligned} \end{equation} which are independent of $x \in \mathbb{R}$. The functions $S(q,k)$ and $W(q,k)$ are related to the more often used reflection coefficients $r_\pm(q,k)$ and transmission coefficient $t(q,k)$ by the formulas \begin{equation} \label{r.S.rel} r_+(q,k) = \frac{S(q, - k)}{W(q,k)}, \quad r_-(q,k) = \frac{S(q,k)}{W(q,k)}, \quad t(q,k) = \frac{2ik}{W(q,k)} \quad \forall \, k \in \mathbb{R} \setminus \{0\} \ . \end{equation} It is well known that for $q$ real valued the spectrum of $L(q)$ consists of an absolutely continuous part, given by $[0, \infty)$, and a finite number of eigenvalues referred to as bound states, $-\lambda_n < \cdots < -\lambda_1<0$ (possibly none). Introduce the set \begin{equation} \mathcal{Q}:= \left\{ q:\mathbb{R} \to \mathbb{R} \ ,\ q \in L^2_4: W(q,0)\neq 0, \, q \mbox{ without bound states} \right\}. \end{equation} We remark that the property $W(q,0)\neq 0$ is generic. In the sequel we refer to elements in $\mathcal{Q}$ as generic potentials without bound states. 
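As a sanity check not spelled out in the text, the wronskian formulas \eqref{wronskian_W}, \eqref{wronskian} and the relations \eqref{r.S.rel} can be evaluated explicitly for the zero potential, where the Jost solutions are the plane waves $f_1(0,x,k) = e^{ikx}$ and $f_2(0,x,k) = e^{-ikx}$:

```latex
W(0,k) = [f_2, f_1] = e^{-ikx}\,(ik\, e^{ikx}) - (-ik\, e^{-ikx})\, e^{ikx} = 2ik \ , \qquad
S(0,k) = \left[ e^{ikx}, e^{ikx} \right] = 0 \ ,
```

so that by \eqref{r.S.rel} one recovers full transmission, $t(0,k) = 2ik / W(0,k) = 1$, and no reflection, $r_\pm(0,k) = 0$, as expected for a free particle.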
Finally we define $$\mathcal{Q}^{N,M}:=\mathcal{Q}\cap H^{N}\cap L^2_M, \quad N\in \mathbb{Z}_{\geq 0},\quad M\in \mathbb{Z}_{\geq 4}.$$ We will see in Lemma \ref{class_open} that for any integers $N\geq 0$, $M\geq 4$, $\mathcal{Q}^{N,M}$ is open in $H^N \cap L^2_M$. Our main theorem analyzes the properties of the scattering map $q \mapsto S(q,\cdot)$ which is known to linearize the KdV flow \cite{gardner6}. To formulate our result on the scattering map in more details let $\mathscr{S}$ denote the set of all functions $\sigma: \mathbb{R} \to \mathbb{C}$ satisfying \begin{enumerate} \item[(S1)] $\sigma(-k) = \overline{\sigma(k)}, \quad \forall k \in \mathbb{R}$; \item[(S2)] $\sigma(0) >0$. \end{enumerate} For $M \in \mathbb{Z}_{\geq 1}$ define the \textit{real} Banach space \begin{equation} \begin{aligned} \label{H^N*} &H^M_{\zeta} := \lbrace f \in H^{M-1}_\mathbb{C} : \quad \overline{f(k)}= f(-k), \quad \zeta \partial_k^M f \in L^2 \rbrace \ , \end{aligned} \end{equation} where $\zeta: \mathbb{R} \to \mathbb{R}$ is an odd monotone $C^\infty$ function with \begin{equation} \label{zeta} \zeta(k)=k \ \mbox{ for } \ |k|\leq 1/2 \quad \mbox{ and } \quad \zeta(k)=1 \ \mbox{ for } \ k\geq 1 \ . \end{equation} The norm on $H^M_{\zeta} $ is given by $$ \norm{f}_{H^M_{\zeta}}^2 := \norm{f}_{H^{M-1}_\mathbb{C}}^2 + \norm{\zeta \partial_k^M f}_{ L^2}^2.$$ For any $N, M \in \mathbb{Z}_{\geq 0}$ let \begin{align} \mathscr{S}^{M, N} := \mathscr{S} \cap H^M_{\zeta} \cap L^2_{N} \ . \label{reflspaceNM0} \end{align} Different choices of $\zeta$, with $\zeta$ satisfying \eqref{zeta}, lead to the same Hilbert space with equivalent norms. We will see in Lemma \ref{lem:S.open} that for any integers $N\geq 0$, $M \geq 4$, $\mathscr{S}^{M,N}$ is an open subset of $H^M_{\zeta} \cap L^2_N $. 
Moreover let $\mathcal{F}_{\pm}$ be the Fourier transformations defined by $\mathcal{F}_\pm(f) = \int_{-\infty}^{+\infty} e^{\mp 2ikx} f(x) \,dx.$ In this setup, the scattering map $S$ has the following properties -- see Appendix \ref{analytic_map} for a discussion of the notion of real analytic. \begin{theorem}\label{reflthm} For any integers $N \geq 0$, $M \geq 4$, the following holds: \begin{enumerate}[(i)] \item The map $$S: \mathcal{Q}^{N, M}\to \mathscr{S}^{M, N},\quad q \mapsto S(q, \cdot) $$ is a real analytic diffeomorphism. \item The maps $A := S - \mathcal{F}_{-} $ and $B := S^{-1} - \mathcal{F}^{-1}_{-} $ are 1-smoothing, i.e. $$A: \mathcal{Q}^{N, M}\rightarrow H^M_{\zeta} \cap L^2_{N+1}\quad\text{and} \quad B : \mathscr{S}^{M, N}\rightarrow H^{N+1} \cap L^2_{M-1} \ . $$ Furthermore they are real analytic maps. \end{enumerate} \end{theorem} As a first application of Theorem \ref{reflthm} we prove analytic properties of the action variable for the KdV on the line. For a potential $q \in \mathcal{Q}$, the action-angle variables were formally defined for $k \neq 0$ by Zakharov and Faddeev \cite{zakharov_faddeev} as the densities \begin{equation} \label{action_angle} I(q,k) := \frac{k}{\pi} \log \left(1+\frac{|S(q,k)|^2}{4k^2} \right) \ , \quad \theta(q,k):= \arg \left(S(q,k) \right) , \quad k \in \mathbb{R}\setminus\{0\}\ . \end{equation} We can write the action as \begin{equation} \label{action} I(q,k) := -\frac{k}{\pi} \log \left(\frac{4k^2}{4k^2 + S(q,k)S(q,-k)} \right) \ , \quad k \in \mathbb{R}\setminus\{0\} \ . \end{equation} By Theorem \ref{reflthm}, $S(q, \cdot) \in \mathscr{S}$, thus property (S2) implies that $\lim_{k \to 0} I(q,k)$ exists and equals $0$. Furthermore, by (S1), the action $I(q, \cdot)$ is an odd function in $k$, and strictly positive for $k >0$. Thus we will consider just the case $k \in [0, +\infty)$. The properties of $I(q,\cdot)$ for $k$ near $0$ and $k$ large are described separately.
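A one-line verification of the limit claimed above may be helpful. Note first that by (S1) one has $S(q,-k) = \overline{S(q,k)}$, so $S(q,k)S(q,-k) = |S(q,k)|^2$ and \eqref{action} agrees with \eqref{action_angle}. Then, for $k > 0$,

```latex
I(q,k) = \frac{k}{\pi}\,\log\!\Big(1+\frac{|S(q,k)|^2}{4k^2}\Big)
       = \frac{k}{\pi}\Big(\log\big(4k^2+|S(q,k)|^2\big) - 2\log(2k)\Big)
       \;\xrightarrow[\;k\to 0^+\;]{}\; 0 \ ,
```

since by (S2) the first term in the bracket converges to $2\log S(q,0)$, which is finite, while $k \log(2k) \to 0$.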
\begin{cor} \label{thm:actions} For any integers $N \geq 0$, $M \geq 4$, the maps $$ \mathcal{Q}^{N,M} \to L^1_{2N+1}([1,+\infty), \mathbb{R}) \ , \quad q \mapsto \left.I(q, \cdot)\right|_{[1,\infty)} $$ and $$ \mathcal{Q}^{N,M} \to H^M([0,1], \mathbb{R}) \ , \quad q \mapsto \left.I(q, \cdot)\right|_{[0,1]} + \frac{k}{\pi} \ln \left(\frac{4k^2}{4(k^2+1)} \right) $$ are real analytic. Here $\left.I(q, \cdot)\right|_{[1,\infty)}$ (respectively $\left.I(q, \cdot)\right|_{[0,1]}$) denotes the restriction of the function $k\mapsto I(q,k)$ to the interval $[1, \infty)$ (respectively $[0,1]$). \end{cor} Finally we compare solutions of \eqref{KdV} to solutions of the Cauchy problem for the Airy equation on $\mathbb{R}$, \begin{equation} \label{Airy} \begin{cases} \partial_t v(t,x) = -\partial_x^3v(t,x) \\ v(0,x) = p(x) \end{cases} \end{equation} Being a linear equation with constant coefficients, one sees that the Airy equation is globally in time well-posed on $H^N$ and $H^{2N}\cap L^2_M,$ with integers $N \geq M \geq 1$ (see Remark \ref{rem.airy.flow} below). Denote the flows of \eqref{Airy} and \eqref{KdV} by $U_{Airy}^t(p):=v(t,\cdot)$ respectively $U_{KdV}^t(q):= u(t,\cdot)$. Our third result is to show that for $q\in H^{2N}\cap L^2_M$ with no bound states and $W(q,0)\neq 0$, the difference $U_{KdV}^t(q)-U_{Airy}^t(q)$ is 1-smoothing, i.e. it takes values in $H^{2N+1}$. More precisely we prove the following theorem. \begin{theorem}\label{firstapprox} Let $N$, $M$ be integers with $N \geq 2M \geq 8$. Then the following holds true: \begin{enumerate}[(i)] \item $\mathcal{Q}^{N, M}$ is invariant under the KdV flow. \item For any $q\in \mathcal{Q}^{N, M}$ the difference $U_{KdV}^t(q)-U_{Airy}^t(q)$ takes values in $H^{N+1}\cap L^2_M$. Moreover the map \begin{align*} \mathcal{Q}^{N,M} \times \mathbb{R}_{\geq 0}\to& H^{N+1}\cap L_M^2, \qquad (q,t)\mapsto U_{KdV}^t(q)-U_{Airy}^t(q) \end{align*} is continuous and for any fixed $t$ real analytic in $q$. 
\end{enumerate} \end{theorem} \noindent{\em Outline of the proof: } In Section 2 we study analytic properties of the Jost functions $f_j(q,x,k)$, $j=1,2$, in appropriate Banach spaces. We use these results in Section \ref{sec:dir.scat} to prove the direct scattering part of Theorem \ref{reflthm}. The inverse scattering part of Theorem \ref{reflthm} is proved in Section \ref{sec:inv.scat}. Finally in Section 5 we prove Corollary \ref{thm:actions} and Theorem \ref{firstapprox}. \noindent{\em Related works: } As we mentioned above, this paper is motivated in part from the study of the $1$-smoothing property of the KdV flow in the periodic setup, established recently in \cite{babin_ilyin_titi,erdogan_tzirakis2,beat2}. In \cite{beat2} the one smoothing property of the Birkhoff map has been exploited to prove that for $q \in H^N(\mathbb{T}, \mathbb{R})$, $N\geq 1$, the difference $U_{KdV}^t(q) - U_{Airy}^t(q)$ is bounded in $H^{N+1}(\mathbb{T}, \mathbb{R})$ with a bound which grows linearly in time. Kappeler and Trubowitz \cite{kapptrub, kapptrub2} studied analytic properties of the scattering map $S$ between weighted Sobolev spaces. More precisely, define the spaces \begin{align*} & H^{n,\alpha} := \left\{ f \in L^2 : x^\beta \partial_x^j f \in L^2 , 0 \leq j \leq n, 0 \leq \beta \leq \alpha \right\} \ , \\ & H^{n,\alpha}_{\sharp}:= \left\{ f \in H^{n,\alpha} : x^\beta \partial_x^{n+1} f \in L^2 , 1 \leq \beta \leq \alpha \right\} \ . \end{align*} In \cite{kapptrub}, Kappeler and Trubowitz showed that the map $q \mapsto S(q, \cdot)$ is a real analytic diffeomorphism from $\mathcal{Q} \cap H^{N,N}$ to $\mathscr{S} \cap H^{N-1,N}_{\sharp}$, $N \in \mathbb{Z}_{\geq 3}$. They extend their results to potentials with finitely many bound states in \cite{kapptrub2}. Unfortunately, $\mathcal{Q} \cap H^{N,N}$ is not left invariant under the KdV flow. 
Results concerning the 1-smoothing property of the inverse scattering map were obtained previously in \cite{novikovR}, where it is shown that for a potential $q$ in the space $ W^{n,1}(\mathbb{R}, \mathbb{R})$ of real-valued functions with weak derivatives up to order $n$ in $L^1$ $$ q(x) - \frac{1}{\pi} \int_\mathbb{R} e^{-2ikx} \chi_c(k) 2ik r_+(q,k) dk \in W^{n+1,1}(\mathbb{R}, \mathbb{R}) \ . $$ Here $c$ is an arbitrary number with $c > \norm{q}_{L^1}$ and $\chi_c(k)=0$ for $|k|\leq c\,$, $\chi_c(k)=|k|-c$ for $c\leq |k|\leq c+1$, and 1 otherwise. The main difference between the result in \cite{novikovR} and ours concerns the function spaces considered. For the application to the KdV we need to choose function spaces such as $H^N \cap L^2_M$ for which KdV is well posed. To the best of our knowledge it is not known if KdV is well posed in $W^{n,1}(\mathbb{R}, \mathbb{R})$. Furthermore in \cite{novikovR} the question of analyticity of the map $q \mapsto r_+(q)$ and its inverse is not addressed. We remark that Theorem \ref{reflthm} treats just the case of regular potentials. In \cite{perry1,perry2} a special class of distributions is considered. In particular the authors study Miura potentials $q \in H^{-1}_{loc}(\mathbb{R}, \mathbb{R})$ such that $q = u' + u^2$ for some $u \in L^1(\mathbb{R}, \mathbb{R}) \cap L^2(\mathbb{R}, \mathbb{R})$, and prove that the map $q \mapsto r_+$ is bijective and locally bi-Lipschitz continuous between appropriate spaces. Finally we point out the work of Zhou \cite{zhou}, in which $L^2$-Sobolev space bijectivity for the scattering and inverse scattering transforms associated with the ZS-AKNS system are proved. \section{Jost solutions} In this section we assume that the potential $q$ is complex-valued. Often we will assume that $q \in L^2_{M}$ with $M \in \mathbb{Z}_{\geq 4}$. 
Consider the normalized Jost functions $m_1(q,x,k):= e^{-ikx}f_1(q,x,k)$ and $m_2(q,x,k):= e^{ikx}f_2(q,x,k)$ which satisfy the following integral equations \begin{align} \label{defm} &m_1(q, x,k)=1+\int_x^{+\infty} D_k(t-x)\, q(t) \, m_1(q,t,k) dt \\ &m_2(q, x,k)=1+\int_{-\infty}^{x} D_k(x-t)\, q(t) \, m_2(q,t,k) dt \label{defm2} \end{align} where $D_k(y):= \int_0^y e^{2iks} ds$. The purpose of this section is to analyze the solutions of the integral equations \eqref{defm} and \eqref{defm2} in spaces needed for our application to KdV. We adapt the corresponding results of \cite{kapptrub} to these spaces. As \eqref{defm} and \eqref{defm2} are analyzed in a similar way we concentrate on \eqref{defm} only. For simplicity we write $m(q,x,k)$ for $m_1(q,x,k)$. For $1 \leq p \leq \infty$, $M \geq 1$ and $ a \in \mathbb{R}, \ 1\leq \alpha<\infty$, $1 \leq \beta \leq \infty$ we introduce the spaces $$L^p_{M} := \left\{f: \mathbb{R} \to \mathbb{C}: \; \langle x \rangle^M f \in L^p \right\} \ , \quad L^\alpha_{x\geq a} L^\beta:=\left\{ f: [a, +\infty) \times \mathbb{R} \rightarrow \mathbb{C}: \norm{f}_{L^\alpha_{x\geq a} L^\beta} < +\infty \right\}$$ where $\langle x \rangle:= (1+x^2)^{1/2}$, $L^p$ is the standard $L^p$ space, and $$\norm{f}_{L^\alpha_{x\geq a} L^\beta} := \Big(\int_{a}^{+ \infty} \norm{f(x, \cdot)}^\alpha_{L^\beta} \,dx \Big)^{1/\alpha} $$ whereas for $\alpha=\infty$, $\norm{f}_{L^{\infty}_{x\geq a} L^\beta}:= \sup_{ x \geq a } \norm{f(x, \cdot)}_{L^\beta}.$ We consider also the space $C^0_{x\geq a} L^\beta := C^0\left( [a, +\infty), L^\beta \right)$ with $\norm{f}_{C^0_{x\geq a} L^\beta} := \sup_{ x\geq a} \norm{f(x, \cdot)}_{L^\beta}< \infty$. We will use also the space $ L^\alpha_{x\leq a} L^\beta$ of functions $ f: (-\infty, a]\times \mathbb{R} \to \mathbb{C}$ with finite norm $\norm{f}_{L^\alpha_{x\leq a} L^\beta} := \Big(\int_{-\infty}^{a} \norm{f(x, \cdot)}^\alpha_{L^\beta} \,dx \Big)^{1/\alpha} $. 
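An elementary observation, not stated explicitly above but useful to keep in mind, is that the kernel $D_k$ can be evaluated in closed form:

```latex
D_k(y) = \int_0^y e^{2iks}\,ds = \frac{e^{2iky}-1}{2ik} \quad (k \neq 0) \ , \qquad D_0(y) = y \ ,
```

whence $|D_k(y)| \leq \min\big(y, 1/|k|\big)$ for $y \geq 0$ and $k \in \mathbb{R}$. This bound is consistent with the estimates on $m$ recalled in Theorem \ref{deift_jost} below and underlies the convergence of the iteration scheme for \eqref{defm} when $q \in L^1_1$.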
Moreover given any Banach spaces $X$ and $Y$ we denote by $\mathcal{L}(X,Y)$ the Banach space of linear bounded operators from $X$ to $Y$ endowed with the operator norm. If $X=Y$, we simply write $\mathcal{L}(X)$. \\ For the notion of an analytic map between complex Banach spaces we refer to Appendix \ref{analytic_map}.\\ We begin by stating a well-known result about the properties of $m$. \begin{theorem}[ \cite{deift}] \label{deift_jost} Let $q \in L^1_1$. For each $k, \, \operatorname{Im} k \geq 0$, the integral equation $$ m(x,k) = 1 + \intx{x} D_k(t-x) q(t) m(t,k) dt \ , \qquad x \in \mathbb{R} $$ has a unique solution $m \in C^2(\mathbb{R}, \mathbb{C})$ which solves the equation $m'' + 2ik m' = q(x) m $ with $m(x,k) \to 1$ as $x \to +\infty$. If in addition $q$ is real valued, the function $m$ satisfies the reality condition $\overline{m(q,k)}= m(q,-k)$. Moreover, there exists a constant $K>0$ which can be chosen uniformly on bounded subsets of $L^1_{1}$ such that the following estimates hold for any $x \in \mathbb{R}$ \begin{enumerate}[(i)] \item $|m(x,k) - 1| \leq e^{\eta(x) / |k|} \eta(x)/|k|, \quad k \neq 0 $; \item $|m(x,k) - 1| \leq K \Big((1+ \max(-x,0))\intx{x}(1+|t|) |q(t)| dt \Big)/ (1+ |k|)$; \item $|m'(x,k)| \leq K \Big(\intx{x}(1+|t|) |q(t)| dt\Big)/(1+ |k|) $ \end{enumerate} where $\eta(x) = \intx{x} |q(t)| dt$. For each $x$, $m(x,k)$ is analytic in $\operatorname{Im} k >0$ and continuous in $\operatorname{Im} k \geq 0$. In particular, for every $x$ fixed, $k \mapsto m(x,k) -1 \in H^{2+},$ where $H^{2+}$ is the Hardy space of functions $h$ analytic in the upper half plane such that $\sup_{y >0}\int\limits_{-\infty}^{+\infty} |h(k+iy)|^2 \,dk < \infty$. 
\end{theorem} \noindent {\em Estimates on the Jost functions.} \begin{proposition} \label{prop_minLit} For any $q \in L^2_M$ with $M \geq 2$, $a \in \mathbb{R}$ and $2 \leq \beta \leq +\infty$, the solution $m(q)$ of \eqref{defm} satisfies $m(q)-1 \in C^{0}_{x \geq a} L^\beta \cap L^{2}_{x\geq a} L^2$. The map $L^2_M \ni q \mapsto m(q) -1 \in C^{0}_{x \geq a} L^\beta \cap L^{2}_{x\geq a} L^2$ is analytic. Moreover there exist constants $C_1, C_2 >0$, depending only on $a$ and $\beta$, such that \begin{equation} \norm{m(q)-1}_{C^{0}_{x \geq a} L^\beta}\leq C_1 e^{\norm{q}_{L^1_1}} \norm{q}_{L^2_1}, \quad \norm{m(q)-1}_{L^{2}_{x\geq a} L^2}\leq C_2 \norm{q}_{L^2_2}\left( 1+ \norm{q}_{L^2_{3/2}} e^{\norm{q}_{L^1_1}}\right). \end{equation} \end{proposition} \begin{remark} In comparison with \cite{kapptrub}, the novelty of Proposition \ref{prop_minLit} consists in the choice of spaces. \end{remark} To prove Proposition \ref{prop_minLit} we first need to establish some auxiliary results. \begin{lemma} \begin{enumerate} \item[(i)] For any $q \in L^1_1$, $a \in \mathbb{R}$ and $1 \leq \beta \leq +\infty$, the linear operator \begin{equation} \mathcal{K}(q): C^{0}_{x \geq a} L^\beta \to C^{0}_{x \geq a} L^\beta, \quad f \mapsto \mathcal{K}(q)[f](x,k) := \intx{x} D_k(t-x) q(t) f(t,k) dt \label{operK} \end{equation} is bounded. Moreover for any $n \geq 1$, the $n^{th}$ composition $\mathcal{K}(q)^n$ satisfies $\norm{\mathcal{K}(q)^n}_{\mathcal{L}(C^{0}_{x \geq a} L^\beta)} \leq C^n \norm{q}^n_{L^1_1}/n!\,$ where $C >0$ is a constant depending only on $a$. \item[(ii)] The map $\mathcal{K}: L^1_1 \to \mathcal{L}\left(C^{0}_{x \geq a} L^\beta\right), \; q \mapsto \mathcal{K}(q),$ is linear and bounded, and $Id - \mathcal{K}(q)$ is invertible for every $q \in L^1_1$. 
More precisely, \begin{align*} \left(Id - \mathcal{K} \right)^{-1}: \, L^1_1 & \to \mathcal{L}\left(C^{0}_{x \geq a} L^\beta\right), \quad q \mapsto \left(Id- \mathcal{K}(q)\right)^{-1} \end{align*} is analytic and $\norm{\left(Id- \mathcal{K}(q)\right)^{-1}}_{\mathcal{L}\left(C^{0}_{x \geq a} L^\beta\right)}\leq e^{C \norm{q}_{L^1_1}}.$ \end{enumerate} \label{KinLit} \end{lemma} \begin{proof} Let $h \in L^\alpha$ with $\frac{1}{\alpha}+ \frac{1}{\beta}=1$. Using $\mmod{D_k(t-x)}\leq |t-x|$, one has \begin{align*} \mmod{\int\limits_{-\infty}^{+\infty} h(k) \mathcal{K}(q)[f](x,k) dk}& \leq \intx{x} dt \, |t-x| |q(t)| \norm{f(t, \cdot)}_{L^\beta} \norm{h}_{L^\alpha} \\ & \leq \left(\intx{a} |t-a| |q(t)| dt \right) \norm{f}_{C^{0}_{x \geq a} L^\beta} \norm{h}_{L^\alpha}, \end{align*} and hence $\norm{\mathcal{K}(q)}_{\mathcal{L}(C^{0}_{x \geq a} L^\beta)}\leq \intx{a} |t-a| |q(t)| dt\leq C \norm{q}_{L^1_1}$, where $C>0$ is a constant depending just on $a$. To compute the norm of the iterations of the map $\mathcal{K}(q)$ it is enough to proceed as above and exploit the fact that the integration in $t$ is over a simplex, yielding $\norm{\mathcal{K}(q)^n}_{\mathcal{L}(C^{0}_{x \geq a} L^\beta)}\leq C^n \norm{q}_{L^1_1}^n/n!$ for any $n \geq 1$. Therefore the Neumann series of the operator $ \Big(Id - \mathcal{K}(q) \Big)^{-1}=\sum_{n \geq 0} \mathcal{K}(q)^n$ converges absolutely in $\mathcal{L}\left(C^{0}_{x \geq a} L^\beta \right)$. Since $q \mapsto \mathcal{K}(q)$ is linear and bounded, the analyticity and, by item $(i)$, the claimed estimate for $(Id - \mathcal{K})^{-1}$ follow. \end{proof} \begin{lemma} Let $a \in \mathbb{R}$. \begin{enumerate}[(i)] \item For any $q \in L^2_{3/2}$, $\mathcal{K}(q)$ defines a bounded linear operator $L^{2}_{x\geq a} L^2 \to L^{2}_{x\geq a} L^2$. 
Moreover the $n^{th}$ composition $\mathcal{K}(q)^n$ satisfies $$ \norm{\mathcal{K}(q)^n}_{\mathcal{L}(L^{2}_{x\geq a} L^2)} \leq C^n \norm{q}_{L^2_{3/2}}\norm{q}^{n-1}_{L^1_1}/(n-1)!$$ where $C>0$ depends only on $a$. \item The map $\mathcal{K}: L^2_{3/2} \to \mathcal{L}\left(L^{2}_{x\geq a} L^2\right), \quad q \mapsto \mathcal{K}(q)$ is linear and bounded; the map \begin{align*} \left(Id - \mathcal{K} \right)^{-1}: \,L^2_{3/2} & \to \mathcal{L}\left(L^{2}_{x\geq a} L^2\right) \quad q \mapsto \left(Id- \mathcal{K}(q)\right)^{-1} \end{align*} is analytic and $\norm{\left(Id- \mathcal{K}(q)\right)^{-1}}_{\mathcal{L}(L^{2}_{x\geq a} L^2)}\leq C \left( 1+ \norm{q}_{L^2_{3/2}}e^{\norm{q}_{L^1_1}}\right).$ \end{enumerate} \label{KinLtt} \end{lemma} \begin{proof} Proceeding as in the proof of the previous lemma, one gets for $x \geq a$ the estimate $$\norm{\mathcal{K}(q)[f] (x, \cdot)}_{L^2} \leq \intx{x}|t-x| |q(t)| \norm{f(t, \cdot)}_{L^2} \,dt \leq \Big( \intx{x} (t-x)^2 |q(t)|^2 \,dt \Big)^{1/2} \norm{f}_{L^{2}_{x\geq a} L^2},$$ from which it follows that \begin{align*} \norm{\mathcal{K}(q)[f]}_{L^{2}_{x\geq a} L^2} & \leq \norm{\intx{x} (t-x)^2 |q(t)|^2 \,dt}_{L^1_{x\geq a}}^{1/2} \norm{f}_{L^{2}_{x\geq a} L^2} \leq C \norm{ q}_{L^2_{3/2}} \norm{f}_{L^{2}_{x\geq a} L^2} \end{align*} proving item $(i)$. To estimate the composition $\mathcal{K}(q)^n$ viewed as an operator on $L^{2}_{x\geq a} L^2$, remark that \begin{align*} & \norm{\mathcal{K}(q)^n [f](x, \cdot)}_{L^2} \leq \int\limits_{x\leq t_1 \leq \ldots \leq t_n} |t_1-x| |q(t_1)| \cdots |t_n - t_{n-1}| |q(t_n)| \norm{f(t_n, \cdot)}_{L^2} dt_1 \cdots dt_n \\ & \quad \leq \int\limits_{x\leq t_1 \leq \ldots \leq t_{n-1}} |t_1-x| |q(t_1)| \cdots |t_{n-1}-t_{n-2}| |q(t_{n-1})| \Big( \intx{t_{n-1}}dt_n\, (t_n-t_{n-1})^2\, |q(t_n)|^2\Big)^{1/2} \norm{f}_{L^{2}_{x\geq a} L^2} dt_1 \cdots dt_{n-1} \\ & \quad \leq \Big( \intx{x} (t-x)^2 |q(t)|^2 \,dt \Big)^{1/2} \norm{f}_{L^{2}_{x\geq a} L^2} \Big( \intx{x} |t-x| |q(t)| \,dt \Big)^{n-1}/(n-1)! \ . 
\end{align*} Therefore \begin{align*} \norm{\mathcal{K}(q)^n [f]}_{L^{2}_{x\geq a} L^2} \leq & \norm{\intx{x} (t-x)^2 |q(t)|^2 \,dt}_{L^1_{x\geq a}}^{1/2} \norm{f}_{L^{2}_{x\geq a} L^2} \frac{C^{n-1} \norm{q}^{n-1}_{L^1_1}}{(n-1)!}\\ \leq & \norm{q}_{L^2_{3/2}} \norm{f}_{L^{2}_{x\geq a} L^2} C^n \frac{\norm{q}^{n-1}_{L^1_1}}{(n-1)!} \end{align*} from which item $(i)$ follows. Item $(ii)$ is then proved as in the previous lemma. \end{proof} Note that for $f \equiv 1$ the expression \eqref{operK} for $\mathcal{K}(q)[f]$ remains well defined, namely $\mathcal{K}(q)[1](x,k) = \intx{x} D_k(t-x)\, q(t)\, dt$. \begin{lemma} For any $2 \leq \beta \leq +\infty$ and $a \in \mathbb{R}$, the map $L^2_2 \ni q \mapsto \mathcal{K}(q)[1] \in C^{0}_{x \geq a} L^\beta \cap L^{2}_{x\geq a} L^2$ is analytic. Furthermore $$\norm{\mathcal{K}(q)[1]}_{C^{0}_{x \geq a} L^\beta} \leq C_1 \norm{q}_{L^2_2} ,\quad \norm{\mathcal{K}(q)[1]}_{L^{2}_{x\geq a} L^2} \leq C_2 \norm{q}_{L^2_2},$$ where $C_1, C_2 >0$ are constants depending on $a$ and $\beta$. \label{K1} \end{lemma} \begin{proof} Since the map $q \mapsto \mathcal{K}(q)[1]$ is linear in $q$, it suffices to prove its continuity in $q$. Moreover, it is enough to prove the result for $\beta = 2$ and $\beta = +\infty$, as the general case then follows by interpolation. For any $k \in \mathbb{R}$, the bound $|D_k(y)|\leq |y|$ shows that the map $k \mapsto D_k(y)$ is in $L^\infty$. Thus $$ \norm{\mathcal{K}(q)[1](x, \cdot)}_{L^\infty} \leq \intx{x}(t-x) |q(t)| dt \leq \intx{a} |t-a| |q(t)| \, dt \leq C \norm{q}_{L^1_1}, $$ where $C>0$ is a constant depending only on $a \in \mathbb{R}$. The claimed estimate follows by noting that $\norm{q}_{L^1_1}\leq C \norm{q}_{L^2_2}$. \newline Using that $|D_k(y)| \leq \frac{1}{|k|}$ for $|k| \geq 1$, one sees that $k \mapsto D_k(y)$ is in $L^2$. Hence $k \mapsto D_k(t-x) D_{-k}(s-x)$ is integrable. 
Actually, since the Fourier transform $\mathcal{F}_+ (D_k (y))$ in the $k$-variable of the function $k \mapsto D_k(y)$ is the function $\eta \mapsto \mathbbm{1}_{[0, y]} (\eta) $, by Plancherel's Theorem $$\int^{\infty}_{-\infty} D_k(t-x) \overline{D_{k}(s-x)} \;dk = \frac{1}{\pi}\int^{\infty}_{-\infty} \mathbbm{1}_{[0, t-x]} (\eta) \mathbbm{1}_{[0, s-x]} (\eta) \; d\eta = \frac{1}{\pi}\min(t-x, s-x).$$ For any $x \geq a$ one thus has \begin{align*} \norm{\mathcal{K}(q)[1](x, \cdot)}^2_{L^2} & = \int^{\infty}_{-\infty} \mathcal{K}(q)[1](x, \cdot) \cdot\overline{\mathcal{K}(q)[1]}(x, \cdot)\, dk \\ & = \iint\limits_{[x, \infty)\times [x, \infty)} dt\,ds\; q(t)\, \overline{q(s)} \int\limits_{-\infty}^{+\infty} D_k(t-x) D_{-k}(s-x)\, dk \ . \end{align*} and hence \begin{equation} \norm{\mathcal{K}(q)[1](x, \cdot)}^2_{L^2} \leq \frac{2}{\pi} \intx{x} (t-x) |q(t)| \intx{t} |q(s)|\, ds \, dt \leq \frac{2}{\pi} \intx{a} ds\, |q(s)| \int_a^s |t-a|\, |q(t)| \, dt \leq C \norm{q}_{L^2_1}^2 \ , \label{eq:l1} \end{equation} where the last inequality follows from the Hardy-Littlewood inequality. The continuity in $x$ follows from the Lebesgue convergence theorem. \newline To prove the second inequality, integrate the first bound in \eqref{eq:l1} over $x \in [a, +\infty)$ and change the order of integration to obtain $$ \norm{\mathcal{K}(q)[1]}_{L^{2}_{x\geq a} L^2}^2 \leq \frac{1}{\pi} \intx{a} (t-a)^2\, |q(t)| \intx{t} |q(s)|\, ds\, dt \leq C \norm{q}_{L^2_2} \norm{q}_{L^2_1} \ ,$$ where the last step follows from the Cauchy-Schwarz inequality and estimate $(A3)$ of Appendix \ref{techLemma}. \end{proof} {\em Proof of Proposition \ref{prop_minLit}.} Formally, the solution of equation \eqref{defm} is given by \begin{equation} m(q)-1 = \Big(Id - \mathcal{K}(q) \Big)^{-1}\mathcal{K}(q)[1]. \label{defmK} \end{equation} By Lemmas \ref{KinLit}, \ref{KinLtt} and \ref{K1} it follows that the r.h.s. 
of \eqref{defmK} is an element of $C^{0}_{x \geq a} L^\beta \cap L^{2}_{x\geq a} L^2$, $2 \leq \beta \leq \infty$, and analytic as a function of $q$, since it is the composition of analytic maps. \qed \\ {\em Properties of $\partial_k^n m(q,x,k)$ for $1\leq n \leq M-1$.} In order to study $\partial_k^n m(q,x,k)$, we deduce from \eqref{defm} an integral equation for $\partial_k^n m(q, x, \cdot)$ and solve it. Recall that for any $M \in \mathbb{Z}_{\geq 0}$, $H^M_\mathbb{C} \equiv H^M(\mathbb{R}, \mathbb{C})$ denotes the Sobolev space of functions $\{ f \in L^2 \vert \ \hat{f} \in L^2_M \} $. The result is summarized in the following \begin{proposition} Fix $M \in \mathbb{Z}_{\geq 4}$ and $a \in \mathbb{R}$. For any integer $1\leq n \leq M-1$ the following holds: \begin{enumerate}[(i)] \item for $q \in L^2_{M}$ and $x \geq a$ fixed, the function $k \mapsto m(q, x, k)-1$ is in $H^{M-1}_\mathbb{C}$; \item the map $L^2_{M} \ni q \mapsto \partial_k^n m(q) \in C^{0}_{x\geq a} L^2$ is analytic. Moreover $\norm{\partial_k^n m(q)}_{C^{0}_{x\geq a} L^2}\leq K \norm{q}_{L^2_{M}},$ where $K$ can be chosen uniformly on bounded subsets of $ L^2_{M}$. \end{enumerate} \label{prop_derminLit} \end{proposition} \begin{remark} In \cite{KaCo2} it is proved that if $q \in L^1_{M-1}$ then for every fixed $x \geq a$ the map $k \mapsto m(q,x,k)$ is in $C^{M-2}$; note that since $L^2_M \subset L^1_{M-1}$, we obtain the same regularity result by the Sobolev embedding theorem. \end{remark} To prove Proposition \ref{prop_derminLit} we first need to derive some auxiliary results. Assuming that $m(q, x, \cdot)-1$ has appropriate regularity and decay properties, the $n^{th}$ derivative $\partial_k^n m(q, x, k)$ satisfies the following integral equation \begin{equation} \partial_k^n m(q, x,k) = \sum_{j=0}^n \binom{n}{j} \intx{x} \partial_k^j D_k(t-x)\, q(t)\, \partial_k^{n-j} m(q, t,k)\; dt \ . 
\label{defderkm} \end{equation} To write \eqref{defderkm} in a more convenient form, introduce for $1 \leq j \leq n$ and $q \in L^2_{n+1}$ the operators \begin{equation} \mathcal{K}_j(q): C^{0}_{x\geq a} L^2 \to C^{0}_{x\geq a} L^2, \quad f \mapsto \mathcal{K}_j(q)[f](x,k):= \intx{x} \partial_k^j D_k(t-x)\, q(t)\, f(t,k)\; dt \label{defkj} \end{equation} leading to \begin{equation} \Big(Id - \mathcal{K}(q) \Big) \partial_k^n m(q) = \left( \sum_{j=1}^{n-1}\binom{n}{j} \mathcal{K}_j(q)[\partial_k^{n-j} m(q)] + \mathcal{K}_n(q)[m(q)-1] + \mathcal{K}_n(q)[1] \right). \label{defmKj} \end{equation} In order to prove the claimed properties for $\partial_k^n m(q)$ we must show in particular that the r.h.s. of \eqref{defmKj} is in $C^{0}_{x\geq a} L^2$. This is accomplished by the following \begin{lemma} Fix $M \in \mathbb{Z}_{\geq 4}$ and $a \in \mathbb{R}$. Then there exists a constant $C>0$, depending only on $a, M$, such that the following holds: \begin{enumerate}[(i)] \item for any integers $1 \leq n \leq M-1$ \begin{enumerate} \item[(i1)] the map $L^2_{M}\ni q \mapsto \mathcal{K}_n(q)[1] \in C^{0}_{x\geq a} L^2$ is analytic, and $ \norm{\mathcal{K}_n(q)[1]}_{C^{0}_{x\geq a} L^2} \leq C \norm{q}_{L^2_{M}}$. \item[(i2)] the map $ L^2_{M}\ni q \mapsto \mathcal{K}_n(q)\in \mathcal{L}\left(L^{2}_{x\geq a} L^2, C^{0}_{x\geq a} L^2\right)$ is analytic. Moreover $$\norm{\mathcal{K}_n(q)[f]}_{C^{0}_{x\geq a} L^2} \leq C \norm{q}_{L^2_{M}}\norm{f}_{L^{2}_{x\geq a} L^2} \ .$$ \end{enumerate} \item For any $1\leq n \leq M-2$, the map $ L^2_M \ni q \mapsto \mathcal{K}_n(q) \in \mathcal{L}\left(C^{0}_{x\geq a} L^2\right)$ is analytic. 
Moreover one has $ \norm{\mathcal{K}_n(q)[f]}_{C^{0}_{x\geq a} L^2}\leq C \norm{q}_{L^2_{M}}\norm{f}_{C^{0}_{x\geq a} L^2}.$ \item As an application of items $(i)$ and $(ii)$, for any integers $1 \leq n \leq M-1$ the map $ L^2_{M}\ni q \mapsto \mathcal{K}_n(q)[m(q)-1] \in C^{0}_{x\geq a} L^2$ is analytic, and $$\norm{\mathcal{K}_n(q)[m(q)-1]}_{C^{0}_{x\geq a} L^2} \leq K_0' \norm{q}_{L^2_{M}}^2 \ ,$$ where $K_0'>0$ can be chosen uniformly on bounded subsets of $L^2_{M}$. \end{enumerate} \label{Kn1} \end{lemma} \begin{proof} First, remark that all the operators $q \mapsto \mathcal{K}_n(q)$ are linear in $q$; therefore continuity in $q$ implies analyticity in $q$. We begin by proving item $(i)$. \begin{enumerate} \item[$(i1)$] Let $ \varphi (x,k):= \intx{x} \partial_k^n D_k(t-x)\, q(t) \; dt$ and compute the Fourier transform $\mathcal{F}_+(\varphi(x, \cdot))$ with respect to the $k$ variable for $x\geq a$ fixed, which we denote by $ \hat{\varphi}(x,\xi) \equiv \int_{-\infty}^{\infty} dk \, e^{2ik\xi}\varphi(x,k)$. Explicitly $$ \hat{\varphi} (x,\xi)= \intx{x} dt\, q(t) \int\limits_{-\infty}^{+\infty} dk \, e^{2ik\xi}\, \partial_k^n D_k(t-x) = \intx{x} q(t) \,\xi^n \, \mathbbm{1}_{[0, t-x]}(\xi)\, dt.$$ By Parseval's Theorem $\norm{\varphi(x,\cdot)}_{L^2}=\frac{1}{\sqrt{\pi}}\norm{\hat{\varphi}(x,\cdot)}_{L^2}$. 
By changing the order of integration one has \begin{align*} \norm{\hat{\varphi}(x,\cdot)}^2_{L^2}& = \int\limits_{-\infty}^{+\infty} \hat{\varphi}(x,\xi)\, \overline{\hat{\varphi}(x,\xi)} \; d\xi = \iint\limits_{[x,\infty) \times [x,\infty)} dt\, ds\; q(t)\, \overline{q(s)} \int\limits_{-\infty}^{+\infty} |\xi|^{2n}\, \mathbbm{1}_{[0, t-x]}(\xi)\, \mathbbm{1}_{[0, s-x]}(\xi) d\xi\leq \\ & \leq 2 \intx{x} dt\; |q(t)|\, |t-x|^{2n+1} \intx{t} |q(s)| \; ds \leq 2\, \norm{(t-a)^{n+1} q}_{L^2_{t\geq a}}\norm{(t-a)^n \intx{t} |q(s)| ds}_{L^2_{t\geq a}} \\ & \leq C \norm{q}_{L^2_{n+1}}^2, \end{align*} where we used that by $(A3)$ in Appendix \ref{techLemma}, $\norm{(t-a)^n \intx{t} |q(s)| \, ds}_{L^2_{t \geq a}} \leq C \norm{q}_{L^2_{n+1}}.$ \item[$(i2)$] Let $f\in L^{2}_{x\geq a} L^2$. Using $\mmod{\partial_k^n D_k(t-x)} \leq 2^{n}|t-x|^{n+1}$, it follows that \begin{align*} \norm{\mathcal{K}_n(q)[f](x,\cdot)}_{L^2} \leq C \intx{x} |q(t)| \,|t-x|^{n+1}\, \norm{f(t, \cdot)}_{L^2}\; dt \leq C \norm{q}_{L^2_{n+1}} \norm{f}_{L^{2}_{x\geq a} L^2}; \end{align*} by taking the supremum in the $x$ variable one has $\mathcal{K}_n(q) \in \mathcal{L}\left(L^{2}_{x\geq a} L^2, C^{0}_{x\geq a} L^2\right)$, where the continuity in $x$ follows by Lebesgue's convergence theorem. The map $q \mapsto \mathcal{K}_n(q)$ is linear and continuous, therefore also analytic. \end{enumerate} We prove now item $(ii)$. Let $g \in C^{0}_{x\geq a} L^2$. From $\norm{\mathcal{K}_n(q)[g](x,\cdot)}_{L^2} \leq \intx{x} |q(t)|\, |t-x|^{n+1}\, \norm{g(t, \cdot)}_{L^2}\; dt $ it follows that \begin{align*} \sup_{x \geq a} \norm{\mathcal{K}_n(q)[g](x, \cdot)}_{L^2} & \leq \norm{g}_{C^{0}_{x\geq a} L^2} \intx{a} |q(t)|\, |t-a|^{n+1}\; dt \leq C \norm{g}_{C^{0}_{x\geq a} L^2} \norm{q}_{L^2_{n+2}} \ , \end{align*} which implies the claimed estimate. The analyticity follows from the linearity and continuity of the map $q \mapsto \mathcal{K}_n(q)$. Finally we prove item $(iii)$. 
By Proposition \ref{prop_minLit}, the map $L^2_{n+1} \ni q\mapsto m(q)-1 \in L^{2}_{x\geq a} L^2$ is analytic. By item $(i2)$ above the bilinear map $L^2_{n+1}\times L^{2}_{x\geq a} L^2 \ni (q, f)\mapsto \mathcal{K}_n(q)[f] \in C^{0}_{x\geq a} L^2$ is analytic; since the composition of analytic maps is analytic, the map $L^2_{n+1} \ni q \mapsto \mathcal{K}_n(q)[m(q)-1] \in C^{0}_{x\geq a} L^2$ is analytic. By $(i2)$ and Proposition \ref{prop_minLit} one has $$ \norm{\mathcal{K}_n(q)[m(q)-1]}_{C^{0}_{x\geq a} L^2} \leq C \norm{q}_{L^2_{n+1}} \norm{m(q) - 1}_{L^{2}_{x\geq a} L^2} \leq K_0' \norm{q}_{L^2_{n+1}}^2 $$ where $K_0'$ can be chosen uniformly on bounded subsets of $L^2_{M}$. \end{proof} {\em Proof of Proposition \ref{prop_derminLit}.} The proof is carried out by a recursive argument in $n$. We assume that $q \mapsto \partial_k^r m(q)$ is analytic as a map from $L^2_M$ to $C^{0}_{x\geq a} L^2$ for $0 \leq r \leq n-1$, and prove that $L^2_M \to C^{0}_{x\geq a} L^2 : $ $q \mapsto \partial_k^{n} m(q)$ is analytic, provided that $n \leq M-1$. The case $n=0$ is proved in Proposition \ref{prop_minLit}. \newline We begin by showing that for every $x \geq a$ fixed $k \mapsto \partial_k^{n-1} m(q, x,k)$ is a function in $H^1$, therefore it has one more (weak) derivative in the $k$-variable. We use the following characterization of $H^1$ functions \cite{brezis}: \begin{equation} \label{H1_char} f \in H^1 \mbox{ iff there exists a constant } C >0 \mbox{ such that } \norm{\tau_h f - f }_{L^2} \leq C |h|, \quad \forall h \in \mathbb{R}, \end{equation} where $(\tau_h f)(k) := f(k+h)$ is the translation operator. Moreover the constant $C$ above can be chosen to be $C = \norm{\partial_k f}_{L^2}$. 
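For the reader's convenience we sketch why the constant in \eqref{H1_char} can be chosen to be $\norm{\partial_k f}_{L^2}$: for $f \in H^1$ and $h > 0$, writing the increment as an integral of the weak derivative and using Minkowski's integral inequality together with the translation invariance of the $L^2$ norm, $$ \norm{\tau_h f - f}_{L^2} = \norm{\int_0^h \partial_k f(\cdot + s)\, ds}_{L^2} \leq \int_0^h \norm{\partial_k f}_{L^2}\, ds = h\, \norm{\partial_k f}_{L^2} \ , $$ and similarly for $h < 0$. It is the converse implication of \eqref{H1_char} that is used below in order to gain one more weak derivative in $k$.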
Starting from \eqref{defmKj} (with $n-1$ instead of $n$), an easy computation shows that for every $x\geq a$ fixed, $\tau_h \partial_k^{n-1} m(q) \equiv \partial_k^{n-1} m(q, x, k+h)$ satisfies the integral equation \begin{equation} \label{2.10bis} \begin{aligned} \left( Id - \mathcal{K}(q) \right)&(\tau_h \partial_k^{n-1} m(q) - \partial_k^{n-1} m(q) ) \\ = & \int_x^{+\infty} (\tau_h \partial_k^{n-1} D_k(t-x) - \partial_k^{n-1} D_k(t-x)) q(t) ( m(q, t, k+h)-1) \, dt \\ & + \int_x^{+\infty} (\tau_h \partial_k^{n-1} D_k(t-x) - \partial_k^{n-1} D_k(t-x)) q(t) \, dt \\ & + \int_x^{+\infty} (\partial_k^{n-1} D_k(t-x)) \, q(t) \, \left(m(q,t, k+h) - m(q,t,k) \right) \, dt \\ & +\sum_{j=1}^{n-2} \binom{n-1}{j} \Big( \int_x^{+\infty} (\tau_h \partial_k^j D_k(t-x) - \partial_k^j D_k(t-x)) q(t) \partial_k^{n-1-j} m(q, t, k+h) \, dt \\ & + \int_x^{+\infty} \partial_k^j D_k(t-x) \,q(t)\, (\tau_h \partial_k^{n-1-j} m(q,t,k) - \partial_k^{n-1-j} m(q,t,k)) \, dt \Big) \\ & + \int_x^{+\infty} (\tau_h D_k(t-x)- D_k(t-x)) \, q(t) \, \partial_k^{n-1} m(q,t, k+h) \, dt. \end{aligned} \end{equation} In order to estimate the term in the fourth line on the right hand side of the latter identity, use item $(i1)$ of Lemma \ref{Kn1} and the characterization \eqref{H1_char} of $H^1$. To estimate all the remaining lines, use the induction hypothesis, the estimates of Lemma \ref{Kn1}, the fact that the operator norm of $(Id - \mathcal{K}(q))^{-1}$ is bounded uniformly in $k$ and the estimate $$ \mmod{\tau_h \partial_k^j D_k(t-x) - \partial_k^j D_k(t-x)} \leq C |t-x|^{j+2} \, |h|, \quad \forall h \in \mathbb{R}, $$ to deduce that for every $n \leq M-1$ $$ \norm{\tau_h \partial_k^{n-1} m(q) - \partial_k^{n-1} m(q)}_{L^2} \leq C |h|, \quad \forall h \in \mathbb{R}, $$ which is exactly condition \eqref{H1_char}. This shows that $k \mapsto \partial_k^{n-1} m(q,x,k)$ admits a weak derivative in $L^2$. Formula \eqref{defderkm} is therefore justified. 
We prove now that the map $ L^2_{M}\ni q \mapsto \partial_k^{n} m(q)\in C^{0}_{x\geq a} L^2 $ is analytic for $1 \leq n \leq M-1$. Indeed equation \eqref{defmKj} and Lemma \ref{Kn1} imply that $$ \norm{\partial_k^{n} m(q)}_{C^{0}_{x\geq a} L^2} \leq K' \Big( \norm{q}_{L^2_{M}}+ \norm{q}_{L^2_{M}}^2 + \sum_{j=1}^{n-1}\norm{q}_{L^2_{M}} \norm{\partial_k^{n-j}m(q)}_{C^{0}_{x\geq a} L^2} \Big)$$ where $K'$ can be chosen uniformly on bounded subsets of $q$ in $L^2_M$. Therefore $\partial_k^n m(q) \in C^{0}_{x\geq a} L^2$ and one gets recursively $\norm{\partial_k^n m(q)}_{C^{0}_{x\geq a} L^2}\leq K \norm{q}_{L^2_{M}} $, where $K$ can be chosen uniformly on bounded subsets of $q$ in $L^2_M$. The analyticity of the map $q \mapsto \partial_k^{n} m(q)$ follows by formula \eqref{defmKj} and the fact that composition of analytic maps is analytic. \qed \newline \newline {\em Properties of $k\partial_k^n m(q,x,k)$ for $1\leq n \leq M$.} The analysis of the $M^{th}$ $k$-derivative of $m(q,x,k)$ requires separate attention. It turns out that the distributional derivative $\partial_k^M m(q,x,\cdot)$ is not necessarily $L^2$-integrable near $k=0$, but the product $k\partial_k^M m(q,x,\cdot)$ is. This is due to the fact that $\partial_k^M D_k(x)q(x) \sim x^{M+1}q(x)$, which might not be $L^2$-integrable. However, by integration by parts, it is easy to see that $k \partial_k^M D_k(x)q(x) \sim x^{M}q(x) \in L^2$. The main result of this section is the following \begin{proposition} Fix $M \in \mathbb{Z}_{\geq 4}$ and $a \in \mathbb{R}$. Then for every integer $1 \leq n \leq M$ the following holds: \begin{enumerate}[(i)] \item for every $q \in L^2_M$ and $x \geq a$ fixed, the function $k \mapsto k \partial_k^n m(q,x,k)$ is in $L^2$; \item the map $ L^2_{M}\ni q \mapsto k\partial_k^n m(q) \in C^{0}_{x\geq a} L^2$ is analytic. Moreover $\norm{k \partial_k^n m(q)}_{C^{0}_{x\geq a} L^2}\leq K_1 \norm{q}_{L^2_{M}}$ where $K_1$ can be chosen uniformly on bounded subsets of $L^2_M$. 
\end{enumerate} \label{prop_dermiNLit} \end{proposition} Formally, multiplying equation \eqref{defderkm} by $k$, the function $k\partial_k^n m(q)$ solves \begin{equation} \left(Id - \mathcal{K}(q) \right)( k\partial_k^n m(q))= \left( \sum_{j=1}^{n-1} \binom{n}{j} \tilde{\mathcal{K}}_j(q)[\partial_k^{n-j} m(q)] + \tilde{\mathcal{K}}_n(q)[m(q)-1] + \tilde{\mathcal{K}}_n(q)[1] \right) \label{kderk^Nm_formula} \end{equation} where we have introduced for $0 \leq j \leq M$ and $q \in L^2_M$ the operators \begin{equation} \label{Ktilde.def} \tilde{\mathcal{K}}_j(q): C^{0}_{x\geq a} L^2 \to C^{0}_{x\geq a} L^2, \quad f \mapsto \tilde{\mathcal{K}}_j(q)[f](x,k):= \intx{x} k \partial_k^j D_k(t-x)\, q(t)\, f(t,k)\; dt. \end{equation} We begin by proving that each term of the r.h.s. of \eqref{kderk^Nm_formula} is well defined and analytic as a function of $q$. The following lemma is analogous to Lemma \ref{Kn1}: \begin{lemma} \label{lem:derk^Nm(q)} Fix $M \in \mathbb{Z}_{\geq 4}$ and $a \in \mathbb{R}$. There exists a constant $C>0$ such that the following holds: \begin{enumerate}[(i)] \item for any integers $1 \leq n \leq M$ \begin{enumerate} \item[(i1)] the map $ L^2_{M}\ni q \mapsto \tilde{\mathcal{K}}_n(q)[1]\in C^{0}_{x\geq a} L^2$ is analytic, and $\norm{\tilde{\mathcal{K}}_n(q)[1] }_{C^{0}_{x\geq a} L^2} \leq C \norm{q}_{L^2_M}$; \item[(i2)] the map $L^2_{M} \ni q \mapsto \tilde{\mathcal{K}}_n(q)\in \mathcal{L}\left(L^{2}_{x\geq a} L^2, \, C^{0}_{x\geq a} L^2\right)$ is analytic. 
Moreover $$\norm{\tilde{\mathcal{K}}_n(q)[f]}_{C^{0}_{x\geq a} L^2} \leq C \norm{q}_{L^2_{M}}\norm{f}_{L^{2}_{x\geq a} L^2} \ ;$$ \end{enumerate} \item for any $ 1 \leq j \leq M-1$ the map $L^2_{M}\ni q \mapsto \tilde{\mathcal{K}}_j(q)\in \mathcal{L}\left(C^{0}_{x\geq a} L^2 \right)$ is analytic, and $$\norm{\tilde{\mathcal{K}}_j(q)[f]}_{C^{0}_{x\geq a} L^2} \leq C \norm{q}_{L^2_{M}} \norm{f}_{C^{0}_{x\geq a} L^2} \ .$$ \item As an application of items $(i)$ and $(ii)$ we get \begin{enumerate} \item[$(iii1)$] for any $1 \leq n \leq M$, the map $L^2_M \ni q \mapsto \tilde{\mathcal{K}}_n(q)[m(q)-1] \in C^{0}_{x\geq a} L^2$ is analytic with \begin{equation} \norm{\tilde{\mathcal{K}}_n(q)[m(q)-1]}_{C^{0}_{x\geq a} L^2} \leq K_1' \norm{q}_{L^2_M}^2, \label{eq:derk^Nm(q)} \end{equation} where $K_1'$ can be chosen uniformly on bounded subsets of $L^2_M$; \item[$(iii2)$] for any $1 \leq j \leq n-1$, the map $L^2_M \ni q \mapsto \tilde{\mathcal{K}}_j(q)[\partial_k^{n-j}m(q)] \in C^{0}_{x\geq a} L^2 $ is analytic with \begin{equation} \norm{\tilde{\mathcal{K}}_j(q)[\partial_k^{n-j}m(q)]}_{C^{0}_{x\geq a} L^2} \leq K_2' \norm{q}_{L^2_M}^2, \label{eq:derk^Nm(q).2} \end{equation} where $K_2'$ can be chosen uniformly on bounded subsets of $L^2_M$. \end{enumerate} \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[$(i)$] \item Since the maps $q \mapsto \tilde{\mathcal{K}}_n(q)$, $0\leq n \leq M$, are linear, it is enough to prove that these maps are continuous. \begin{enumerate} \item[$(i1)$] Introduce $ \varphi (x,k):= \intx{x} k\partial_k^n D_k(t-x)\, q(t) \; dt$. 
The Fourier transform $\mathcal{F}_+(\varphi(x, \cdot))$ of $\varphi$ with respect to the $k$-variable is given by $\mathcal{F}_+(\varphi(x, \cdot))\equiv \hat{\varphi} (x,\xi)$, where $$\hat{\varphi} (x,\xi)= \intx{x} dt\, q(t) \int\limits_{-\infty}^{+\infty} dk\; e^{-2ik\xi}\, k\partial_k^n D_k(t-x) = -(2i)^{n-1} \intx{x} dt\; q(t)\,\partial_\xi( \xi^n\, \mathbbm{1}_{[0, t-x]}(\xi)),$$ where $\partial_\xi (\xi^n \mathbbm{1}_{[0, t-x]}(\xi))$ is to be understood in the distributional sense. By Parseval's Theorem $\norm{\varphi(x,\cdot)}_{L^2}=\frac{1}{\sqrt{\pi}} \norm{\hat{\varphi}(x,\cdot)}_{L^2}.$ Let $C^\infty_0$ be the space of smooth, compactly supported functions. Since $$\norm{\hat{\varphi}(x,\cdot)}_{L^2_\xi}=\sup_{\substack{\chi \in C^{\infty}_0 \\ \norm{\chi}_{L^2}\leq 1}}\mmod{\int\limits_{-\infty}^{\infty} \chi(\xi)\,\hat{\varphi}(x,\xi) \;d\xi}, $$ one computes \begin{align*} \mmod{\int\limits_{-\infty}^{\infty} \chi(\xi)\, \hat{\varphi}(x,\xi)\; d\xi} &= \mmod{\intx{x} dt \; q(t) \int\limits_{-\infty}^{\infty} \chi(\xi)\,\partial_\xi \left( \xi^n \mathbbm{1}_{[0, t-x]}(\xi)\right)\; d\xi} = \mmod{\intx{x} dt\; q(t) \int\limits_0^{t-x}d\xi\; \xi^n \partial_\xi \chi(\xi) } \\ & \leq \mmod{\intx{x} dt \; q(t) \chi(t-x) (t-x)^n} + n \mmod{\intx{x} dt\; q(t) \int\limits_0^{t-x}d\xi \, \chi(\xi) \xi^{n-1}} \\ &\leq \norm{q}_{L^2_M} \norm{\chi}_{L^2} + n \mmod{\intx{x} dt\; |q(t)| |t-x|^{n-1} \int\limits_0^{t-x}d\xi \, |\chi(\xi)|} \\ & \leq \norm{q}_{L^2_M} \norm{\chi}_{L^2} + n \mmod{\intx{x} dt\; |q(t)| |t-x|^{n} \frac{\int\limits_0^{t-x}d\xi \, |\chi(\xi)|}{|t-x|}} \leq C \norm{q}_{L^2_M} \norm{\chi}_{L^2} \end{align*} where the last inequality follows from the Cauchy-Schwarz and Hardy inequalities, and $C >0$ is a constant depending on $a$ and $M$. 
\item[$(i2)$] As $\mmod{k\partial_k^n D_k(t-x)} \leq 2^n |t-x|^{n}$, which follows by integration by parts, one obtains, for some constant $C>0$ depending only on $a$ and $M$, \begin{align*} \norm{\tilde{\mathcal{K}}_n(q)[f](x,\cdot)}_{L^2} & \leq C \intx{x} |t-x|^n \,|q(t)|\, \norm{f(t,\cdot)}_{L^2}\; dt \leq C \norm{q}_{L^2_{M}} \norm{f}_{L^{2}_{x\geq a} L^2} \ . \end{align*} Now take the supremum over $x\geq a$ in the expression above and use Lebesgue's dominated convergence theorem to prove item $(i2)$. \end{enumerate} \item The claim follows by the estimate \begin{align*} \norm{\tilde{\mathcal{K}}_j(q)[f](x,\cdot)}_{L^2} & \leq C \intx{x} |t-x|^j \,|q(t)|\, \norm{f(t,\cdot)}_{L^2}\; dt \leq C \norm{q}_{L^1_{j}} \norm{f}_{C^{0}_{x\geq a} L^2} \end{align*} and the remark that $\norm{q}_{L^1_{j}} \leq C \norm{q}_{L^2_M}$ for $0 \leq j\leq M-1$. \item By Propositions \ref{prop_minLit} and \ref{prop_derminLit} the maps $L^2_M \ni q \mapsto m(q)-1 \in C^{0}_{x\geq a} L^2 \cap L^{2}_{x\geq a} L^2$ and $L^2_M \ni q \mapsto \partial_k^{n-j}m(q) \in C^{0}_{x\geq a} L^2$ are analytic; by item $(ii)$, for any $1 \leq n \leq M-1$ the bilinear map $(q, f) \mapsto \tilde{\mathcal{K}}_n(q)[f] $ is analytic from $L^2_M \times C^{0}_{x\geq a} L^2$ to $C^{0}_{x\geq a} L^2$; for $n = M$ one uses instead item $(i2)$ together with the fact that $m(q)-1 \in L^{2}_{x\geq a} L^2$. Since the composition of two analytic maps is again analytic, item $(iii)$ follows. Moreover $\tilde{\mathcal{K}}_n(q)[m(q)-1]$, $\tilde{\mathcal{K}}_j(q)[\partial_k^{n-j} m(q)] \in C^{0}_{x\geq a} L^2$ since $m(q,x,k)$ and $\partial_k^n m(q,x,k)$ are continuous in the $x$-variable. The estimate \eqref{eq:derk^Nm(q)} follows from item $(ii)$ and Propositions \ref{prop_minLit} and \ref{prop_derminLit}. \end{enumerate} \end{proof} {\em Proof of Proposition \ref{prop_dermiNLit}.} One proceeds in the same way as in the proof of Proposition \ref{prop_derminLit}. 
Given any $1 \leq n \leq M$, we assume that $q \mapsto k \partial_k^r m(q)$ is analytic as a map from $L^2_M$ to $C^{0}_{x\geq a} L^2$ for $1 \leq r \leq n-1$, and deduce that $q \mapsto k\partial_k^{n} m(q)$ is analytic as a map from $L^2_M$ to $C^{0}_{x\geq a} L^2$ and satisfies equation \eqref{kderk^Nm_formula}. \newline We begin by showing that for every $x \geq a$ fixed, $k \mapsto k\partial_k^{n-1} m(q, x,k)$ is a function in $H^1$. Our argument again uses the characterization \eqref{H1_char} of $H^1$. Arguing as for the derivation of \eqref{2.10bis} one gets the integral equation \begin{align*} \left( Id - \mathcal{K}(q) \right)&(\tau_h (k\partial_k^{n-1} m(q)) - k\partial_k^{n-1} m(q) ) = \\ & = \intx{x} \left(\tau_h (k \partial_k^{n-1} D_k(t-x)) - k \partial_k^{n-1} D_k(t-x)\right) q(t) ( m(q, t, k+h)-1) \, dt \\ &+\intx{x} \left(\tau_h (k\partial_k^{n-1} D_k(t-x)) - k \partial_k^{n-1} D_k(t-x)\right) q(t) \, dt \\ & + \intx{x} (k \partial_k^{n-1} D_k(t-x) ) q(t) \left( m(q,t,k+h) - m(q, t, k) \right) \, dt \\ &+\sum_{j=1}^{n-2} \binom{n-1}{j} \Big( \intx{x} \left(\tau_h (k\partial_k^j D_k(t-x)) - k\partial_k^j D_k(t-x)\right) q(t)\, \partial_k^{n-1-j} m(q, t, k+h) \, dt \\ & + \intx{x} k \partial_k^j D_k(t-x) \,q(t)\, \left(\tau_h \partial_k^{n-1-j} m(q,t,k) - \partial_k^{n-1-j} m(q,t,k)\right) \, dt \Big) \\ & + \intx{x} \left(\tau_h D_k(t-x)- D_k(t-x)\right) \, q(t) \, (k+h)\partial_k^{n-1} m(q,t, k+h) \, dt \ . 
\end{align*} Using the estimates $$ \mmod{\tau_h D_k(t-x) - D_k(t-x)} \leq C |t-x|^2 |h| $$ and $$ \mmod{\tau_h (k\partial_k^j D_k(t-x)) - k\partial_k^j D_k(t-x)} \leq C |t-x|^{j+1} \, |h|, \quad \forall h \in \mathbb{R} \ , $$ obtained by integration by parts, the characterization \eqref{H1_char} of $H^1$, the inductive hypothesis, the estimates of Lemma \ref{Kn1} and Lemma \ref{KinLtt}, one deduces that for every $n \leq M$ $$ \norm{\tau_h (k\partial_k^{n-1} m(q)) - k\partial_k^{n-1} m(q)}_{L^2} \leq C |h|, \quad \forall h \in \mathbb{R}. $$ This shows that $k \mapsto k\partial_k^{n-1} m(q,x,k)$ admits a weak derivative in $L^2$. Since $$k\partial_k^n m(q,x,k) = \partial_k (k\partial_k^{n-1}m(q,x,k)) - \partial_k^{n-1} m(q,x,k) \ , $$ the estimate above and Proposition \ref{prop_derminLit} show that $k \mapsto k\partial_k^n m(q,x,k)$ is an $L^2$ function. Formula \eqref{kderk^Nm_formula} is therefore justified. \newline The proof of the analyticity of the map $q \mapsto k\partial_k^n m(q)$ is analogous to the one of Proposition \ref{prop_derminLit} and is omitted. \qed {\em Analysis of $\partial_x m(q,x,k)$.} Introduce an odd smooth monotone function $\zeta: \mathbb{R} \to \mathbb{R}$ with $\zeta(k)=k$ for $|k|\leq 1/2$ and $\zeta(k)=1$ for $k\geq 1$. We prove the following \begin{proposition} Fix $M \in \mathbb{Z}_{\geq 4}$ and $a \in \mathbb{R}$. Then the following holds: \begin{enumerate}[(i)] \item for any integer $0 \leq n \leq M-1$, the map $ L^2_M \ni q \mapsto \partial_k^n \partial_x m(q) \in C^{0}_{x\geq a} L^2$ is analytic, and $\norm{ \partial_k^n \partial_x m(q)}_{C^{0}_{x\geq a} L^2} \leq K_2 \norm{q}_{L^2_{M}}$ where $K_2$ can be chosen uniformly on bounded subsets of $L^2_M$. \item the map $L^2_M \ni q \mapsto \zeta \partial_k^M \partial_x m(q) \in C^{0}_{x\geq a} L^2$ is analytic, and $\norm{ \zeta \partial_k^M \partial_x m(q)}_{C^{0}_{x\geq a} L^2} \leq K_3 \norm{q}_{L^2_{M}}$ where $K_3$ can be chosen uniformly on bounded subsets of $L^2_M$. 
\end{enumerate} \label{prop:derxderkm} \end{proposition} The integral equation for $\partial_x m(q,x,k)$ is obtained by taking the derivative in the $x$-variable of \eqref{defm}: \begin{equation} \partial_x m(q, x,k) = -\intx{x} e^{2ik(t-x)}\, q(t)\, m(q, t,k)\; dt. \label{derxm.equation} \end{equation} Taking the derivative with respect to the $k$-variable one obtains, for $0 \leq n \leq M-1$, \begin{equation} \partial_k^n \partial_x m(q, x,k) = -\sum_{j=0}^n \binom{n}{j} \intx{x} e^{2ik(t-x)} \,(2i(t-x))^j \, q(t)\, \partial_k^{n-j} m(q, t,k) \;dt. \label{eq:derkderxm} \end{equation} For $0 \leq j \leq M$ introduce the integral operators \begin{equation} \mathcal{G}_j(q):C^{0}_{x\geq a} L^2 \to C^{0}_{x\geq a} L^2, \quad f \mapsto \mathcal{G}_j(q)[f](x,k) := - \intx{x} e^{2ik(t-x)}\, (2i(t-x))^j\, q(t)\, f(t,k)\; dt \end{equation} and rewrite \eqref{eq:derkderxm} in the more compact form \begin{equation} \label{2.17bis} \partial_k^n \partial_x m(q) = \sum_{j=0}^{n-1} \binom{n}{j} \mathcal{G}_j(q) [\partial_k^{n-j} m(q)] + \mathcal{G}_n(q)[m(q)-1] + \mathcal{G}_n(q)[1]. \end{equation} Proposition \ref{prop:derxderkm} $(i)$ follows from Lemma \ref{lem:derxderkm} below. The $M^{th}$ derivative requires a separate treatment, as $\partial_k^M m$ might not be well defined at $k=0$. Indeed for $n=M$ the integral $ \intx{x}e^{2ik(t-x)}\, q(t)\, \partial_k^M m(q,t,k)\; dt$ in \eqref{eq:derkderxm} might not be well defined near $k=0$ since we only know that $k\partial_k^M m(q, x, \cdot)\in L^2$. To deal with this issue we use the function $\zeta$ described above. Multiplying \eqref{2.17bis} with $n=M$ by $\zeta$ we formally obtain \begin{align*} \zeta \partial_k^M \partial_x m(q)& = \sum_{j=1}^{M-1}\binom{M}{j} \zeta \, \mathcal{G}_j(q)[\partial_k^{M-j} m(q)] + \zeta\, \mathcal{G}_M(q)[m(q)-1]+ \zeta\, \mathcal{G}_M(q)[1] + \mathcal{G}_0(q)[\zeta \partial_k^{M} m(q)].
\end{align*} Proposition \ref{prop:derxderkm} $(ii)$ follows from item $(iii)$ of Lemma \ref{lem:derxderkm} and the fact that $\zeta \in L^\infty$: \begin{lemma} \label{lem:derxderkm} Fix $M \in \mathbb{Z}_{\geq 4}$ and $a \in \mathbb{R}$. There exists a constant $C>0$ such that \begin{enumerate}[(i)] \item for any integer $0\leq n \leq M$ the following holds: \begin{enumerate} \item[(i1)] the map $L^2_M \ni q \mapsto \mathcal{G}_n(q)[1] \in C^{0}_{x\geq a} L^2 $ is analytic. Moreover $\norm{\mathcal{G}_n(q)[1]}_{C^{0}_{x\geq a} L^2}\leq C \norm{q}_{L^2_{M}}.$ \item[(i2)] The map $L^2_{M}\ni q \mapsto \mathcal{G}_n(q)\in \mathcal{L}\left(L^{2}_{x\geq a} L^2, C^{0}_{x\geq a} L^2\right)$ is analytic and $$\norm{\mathcal{G}_n(q)[f]}_{C^{0}_{x\geq a} L^2} \leq C \norm{q}_{L^2_{M}}\norm{f}_{L^{2}_{x\geq a} L^2} \ .$$ \end{enumerate} \item For any $0 \leq j \leq M-1$, the map $L^2_{M}\ni q \mapsto \mathcal{G}_j(q)\in \mathcal{L}\left(C^{0}_{x\geq a} L^2 \right)$ is analytic, and $$\norm{\mathcal{G}_j(q)[f]}_{C^{0}_{x\geq a} L^2} \leq C \norm{q}_{L^2_{M}} \norm{f}_{C^{0}_{x\geq a} L^2} \ .$$ \item For any $1 \leq n \leq M-1$, $0 \leq j \leq n-1$ and $\zeta: \mathbb{R} \to \mathbb{R}$ odd smooth monotone function with $\zeta(k)=k$ for $|k|\leq 1/2$ and $\zeta(k)=1$ for $k\geq 1$, the following holds: \begin{enumerate} \item[(iii1)] the maps $L^2_M \ni q \rightarrow \mathcal{G}_j(q)[\partial_k^{n-j}m(q)] \in C^{0}_{x\geq a} L^2 $ and $L^2_M \ni q \rightarrow \mathcal{G}_n(q)[m(q)-1] \in C^{0}_{x\geq a} L^2$ are analytic. Moreover $$ \norm{\mathcal{G}_j(q)[\partial_k^{n-j}m(q)]}_{C^{0}_{x\geq a} L^2}, \quad \norm{\mathcal{G}_n(q)[m(q)-1]}_{C^{0}_{x\geq a} L^2} \leq K_2' \norm{q}_{L^2_M}^2, $$ where $K_2'$ can be chosen uniformly on bounded subsets of $L^2_M$. 
\item[(iii2)] The map $L^2_M \ni q \rightarrow \mathcal{G}_0(q)[\zeta \partial_k^M m(q)] \in C^{0}_{x\geq a} L^2$ is analytic and $\norm{\mathcal{G}_0(q)[\zeta \partial_k^M m(q)]}_{C^{0}_{x\geq a} L^2} \leq K_3' \norm{q}_{L^2_M}^2$ where $K_3'$ can be chosen uniformly on bounded subsets of $L^2_M$. \end{enumerate} \end{enumerate} \end{lemma} \begin{proof} As before, it is enough to prove the continuity in $q$ of the maps considered to conclude that they are analytic. \begin{enumerate} \item[$(i1)$] For $x \geq a$ and any $0 \leq n \leq M$ one has $\norm{\mathcal{G}_n(q)[1](x, \cdot)}_{L^2}^2 \leq C \intx{x} |t-x|^{2n} |q(t)|^2 dt \leq C \norm{q}_{L^2_M}^2.$ The claim follows by taking the supremum over $x \geq a$ in the inequality above. \item[$(i2)$] For $x \geq a $ and $0 \leq n \leq M$ one has the bound $\norm{\mathcal{G}_n(q)[f]}_{C^{0}_{x\geq a} L^2} \leq C \norm{q}_{L^2_{n}} \norm{f}_{L^{2}_{x\geq a} L^2}$, which implies the claimed estimate. \item[$(ii)$] For $x \geq a$ and $0 \leq j \leq M-1$ one has the bound $$\norm{\mathcal{G}_j(q)[f]}_{C^{0}_{x\geq a} L^2} \leq C \norm{q}_{L^1_{M-1}} \norm{f}_{C^{0}_{x\geq a} L^2} \leq C \norm{q}_{L^2_{M}} \norm{f}_{C^{0}_{x\geq a} L^2} \ .$$ \item[$(iii1)$] By Proposition \ref{prop_derminLit} one has that for any $1 \leq n \leq M-1$ and $0 \leq j \leq n-1$ the map $L^2_M \ni q \mapsto \partial_k^{n-j} m(q) \in C^{0}_{x\geq a} L^2$ is analytic. Since composition of analytic maps is again an analytic map, the claim regarding the analyticity follows. The first estimate follows from item $(ii)$. A similar argument can be used to prove the second estimate. \item[$(iii2)$] By Proposition \ref{prop_dermiNLit}, the map $L^2_M \ni q \mapsto \zeta \partial_k^M m(q) \in C^{0}_{x\geq a} L^2$ is analytic, implying the claim regarding the analyticity.
The estimate follows from $ \norm{\mathcal{G}_0[\zeta \partial_k^{M} m(q)]}_{C^{0}_{x\geq a} L^2} \leq \norm{q}_{L^2_M} \norm{\zeta\partial_k^M m(q)}_{C^{0}_{x\geq a} L^2}.$ \end{enumerate} \end{proof} The following corollary follows from the results obtained so far: \begin{cor} \label{m(q,0,k)} Fix $M \in \mathbb{Z}_{\geq 4}$. Then the normalized Jost functions $m_j(q,x,k)$, $j=1,2$, satisfy: \begin{enumerate}[(i)] \item the maps $L^2_{M} \ni q \mapsto m_j(q,0, \cdot) -1 \in L^2$ and $L^2_{M} \ni q \mapsto k^\alpha \partial_k^n m_j(q,0,\cdot) \in L^2 $ are analytic for $1 \leq n \leq M-1 $ $[1 \leq n \leq M]$ if $\alpha=0$ $[\alpha = 1]$. Moreover $$ \norm{m_j(q,0, \cdot) -1}_{L^2}, \, \norm{k^\alpha \partial_k^n m_j(q,0,\cdot)}_{L^2} \leq K_1 \norm{q}_{L^2_{M}}, $$ where $K_1 >0$ can be chosen uniformly on bounded subsets of $L^2_{M}$. \item For $0 \leq n \leq M-1$, the maps $L^2_{M} \ni q \mapsto \partial_k^n \partial_x m_j(q,0, \cdot) \in L^2$ and $L^2_{M} \ni q \mapsto \zeta \partial_k^M \partial_x m_j(q,0, \cdot) \in L^2$ are analytic. Moreover $$ \norm{\partial_k^n \partial_x m_j(q,0, \cdot)}_{L^2}, \, \norm{\zeta \partial_k^M \partial_x m_j(q,0, \cdot)}_{L^2} \leq K_2 \norm{q}_{L^2_{M}}, $$ where $K_2 >0$ can be chosen uniformly on bounded subsets of $L^2_{M}$. \end{enumerate} \end{cor} \begin{proof} The Corollary follows by evaluating formulas \eqref{defm}, \eqref{defderkm}, \eqref{eq:derkderxm} at $x=0$ and using the results of Propositions \ref{prop_minLit}, \ref{prop_derminLit}, \ref{prop_dermiNLit} and \ref{prop:derxderkm}. \end{proof} \section{One-smoothing properties of the scattering map.} \label{sec:dir.scat} The aim of this section is to prove the part of Theorem \ref{reflthm} related to the direct problem.
To begin, note that by Theorem \ref{deift_jost}, for $q \in L^2_4$ real valued one has $\overline{m_1(q,x,k)}= m_1(q,x,-k)$ and $\overline{m_2(q,x,k)}= m_2(q,x,-k)$; hence \begin{equation} \label{S.conj} \overline{S(q,k)} = S(q,-k) \ , \qquad \overline{W(q,k)}= W(q, -k) \ . \end{equation} Moreover one has for any $q \in L^2_{4}$ \begin{equation} \label{W&S} W(q,k) W(q,-k) = 4k^2 + S(q,k) S(q,-k) \qquad \forall \, k \in \mathbb{R}\setminus \{0 \} \end{equation} which by continuity holds for $k=0$ as well. In the case where $q \in \mathcal{Q}$, the latter identity implies that $S(q,0) \neq 0$. \\ Recall that for $q \in L^2_{4}$ the Jost solutions $f_1(q,x,k)$ and $f_2(q,x,k)$ satisfy the following integral equations \begin{align} \label{duhamelformula} & f_1(x,k)= e^{i k x} + \intx{x} \frac{\sin k(t-x)}{k}q(t)f_1(t,k)dt \ , \\ \label{duhamelformula2} & f_2(x,k) = e^{-ikx} + \int\limits_{-\infty}^x \frac{\sin k(x-t)}{k}q(t)f_2(t,k)dt \ . \end{align} Substituting \eqref{duhamelformula} and \eqref{duhamelformula2} into \eqref{wronskian}, \eqref{wronskian_W}, one verifies that $S(q,k), \, W(q,k)$ satisfy for $k \in \mathbb{R}$ and $q \in L^2_{4}$ \begin{align} & S(q, k)= \int\limits_{-\infty}^{+\infty} e^{ikt} q(t) f_1(q, t,k) dt \ , \label{S.1}\\ & W(q, k) = 2ik - \int\limits_{-\infty}^{+\infty} e^{-ikt} q(t) f_1(q, t,k) dt \ . \label{W.1} \end{align} Note that the integrals above are well defined thanks to the estimate in item $(ii)$ of Theorem \ref{deift_jost}. \newline Inserting formula \eqref{duhamelformula} into \eqref{S.1}, one gets that $$ S(q,k) = \mathcal{F}_-(q,k) +O\left( \tfrac{1}{k}\right) \ . $$ The main result of this section is an estimate of \begin{equation} \label{map.A} A(q,k) := S(q,k) - \mathcal{F}_-(q,k) \ , \end{equation} saying that $A$ is $1$-smoothing. To formulate the result in a precise way, we need to introduce the following Banach spaces. 
For $M \in \mathbb{Z}_{\geq 1}$ define \begin{equation*} \begin{aligned} &H^M_* := \lbrace f \in H^{M-1}_\mathbb{C} : \quad \overline{f(k)}= f(-k), \quad k \partial_k^M f \in L^2 \rbrace \ , \end{aligned} \end{equation*} endowed with the norm $$ \norm{f}_{H^M_*}^2 := \norm{f}_{H^{M-1}_\mathbb{C}}^2 + \norm{k\partial_k^M f}_{ L^2}^2 \ .$$ Note that $H^M_{*}$ is a \textit{real} Banach space. We will also use the complexifications of the Banach spaces $H^M_*$ and $H^M_{\zeta}$ (the latter defined in \eqref{H^N*}), in which the reality condition $\overline{f(k)} = f(-k)$ is dropped: \begin{equation*} \begin{aligned} &H^M_{*,\mathbb{C}} := \lbrace f \in H^{M-1}_\mathbb{C} : \quad k \partial_k^M f \in L^2 \rbrace, \qquad H^{M}_{\zeta,\C} := \lbrace f \in H^{M-1}_\mathbb{C}: \quad \zeta \partial_k^M f \in L^2 \rbrace. \end{aligned} \end{equation*} Note that for any $M\geq 2$ \begin{equation} \label{lemHN*} (i)\, H^M_\mathbb{C} \subset H^{M}_{\zeta,\C} \mbox{ and } H^M_{*,\mathbb{C}} \subset H^{M}_{\zeta,\C}, \quad (ii)\, fg \in H^{M}_{\zeta,\C} \qquad \forall \, f \in H^M_{*, \mathbb{C}}, \, g \in H^{M}_{\zeta,\C}. \end{equation} We can now state the main theorem of this section. Let $L^2_{M, \mathbb{R}} := \left\{ f \in L^2_M \ \vert \ f \mbox{ real valued } \right\} $. \begin{theorem} \label{A.prop} Let $N \in \mathbb{Z}_{\geq 0}$ and $M \in \mathbb{Z}_{\geq 4}$. Then one has: \begin{enumerate}[(i)] \item The map $q \mapsto A(q, \cdot)$ is analytic as a map from $L^2_{M}$ to $H^{M}_{\zeta,\C}$. \item The map $q \mapsto A(q, \cdot)$ is analytic as a map from $H^N_\mathbb{C} \cap L^2_4$ to $L^2_{N+1}$. Moreover $$\norm{A(q, \cdot)}_{L^2_{N+1}} \leq C_A \norm{q}^2_{H^N_\mathbb{C} \cap L^2_4}$$ where the constant $C_A>0$ can be chosen uniformly on bounded subsets of $H^N_\mathbb{C} \cap L^2_4$. Furthermore for $q \in L^2_{4, \mathbb{R}}$ the map $A(q, \cdot)$ satisfies $\overline{A(q,k)}= A(q,-k)$ for every $k \in \mathbb{R}$.
Thus its restrictions $A: L^2_{M,\mathbb{R}} \to H^M_{\zeta}$ and $A: H^N \cap L^2_4 \to L^2_{N+1}$ are real analytic. \end{enumerate} \end{theorem} The following corollary follows immediately from identity \eqref{map.A}, item $(ii)$ of Theorem \ref{A.prop} and the properties of the Fourier transform: \begin{cor} \label{S.decay} Let $N \in \mathbb{Z}_{\geq 0}$. Then the map $q \mapsto S(q, \cdot)$ is analytic as a map from $H^N_\mathbb{C} \cap L^2_4$ to $L^2_{N}$. Moreover $$\norm{S(q, \cdot)}_{L^2_{N}} \leq C_S \norm{q}_{H^N_\mathbb{C} \cap L^2_4}$$ where the constant $C_S>0$ can be chosen uniformly on bounded subsets of $H^N_\mathbb{C} \cap L^2_4$. \end{cor} In \cite{beat2}, it is shown that in the periodic setup, the Birkhoff map of KdV is 1-smoothing. As the map $q \mapsto S(q, \cdot)$ on the spaces considered can be viewed as a version of the Birkhoff map in the scattering setup of KdV, Theorem \ref{A.prop} confirms that a result analogous to the one on the circle holds also on the line. The proof of Theorem \ref{A.prop} consists of several steps. We begin by proving item $(i)$. Since $\mathcal{F}_-: L^2_{M} \to H^M_\mathbb{C}$ is bounded, item $(i)$ will follow from the following proposition: \begin{proposition} \label{prop:scatt} Let $M \in \mathbb{Z}_{\geq 4}$, then the map $L^2_{M} \ni q \mapsto S(q, \cdot) \in H^{M}_{\zeta,\C}$ is analytic and $$\norm{ S(q, \cdot)}_{ H^{M}_{\zeta,\C}}\leq K_S \norm{q}_{L^2_{M}},$$ where $K_S>0$ can be chosen uniformly on bounded subsets of $ L^2_{M}$. \end{proposition} \begin{proof} Recall that $f_1(q,x,k) = e^{ikx}\,m_1(q,x,k)$ and $f_2(q,x,k) = e^{-ikx} \, m_2(q,x,k)$. The $x$-independence of $S(q, k)$ implies that \begin{equation} \label{SWwronskian} S(q, k) = [m_1(q, 0,k),\, m_2(q, 0,-k)] \ . 
\end{equation} As by Corollary \ref{m(q,0,k)}, $m_j(q,0, \cdot)-1 \in H^M_{*, \mathbb{C}}$ and $\partial_x m_j(q,0, \cdot) \in H^{M}_{\zeta,\C}$, $j=1,2$, the identity \eqref{SWwronskian} yields \begin{align*} S(q,k) = & (m_1(q,0,k)-1)\, \partial_x m_2(q,0,-k) - (m_2(q,0,-k)-1)\, \partial_x m_1(q,0,k) \\ & + \partial_x m_2(q,0,-k) - \partial_x m_1(q,0,k) \ , \end{align*} thus $S(q, \cdot) \in H^{M}_{\zeta,\C} $ by \eqref{lemHN*}. The estimate on the norm $\norm{ S(q, \cdot)}_{ H^{M}_{\zeta,\C}}$ follows by Corollary \ref{m(q,0,k)}. \end{proof} {\em Proof of Theorem \ref{A.prop} $(i)$.} The claim is a direct consequence of Proposition \ref{prop:scatt} and the fact that for any real valued potential $q$, $\overline{S(q,k)} = S(q, -k)$, $\overline{\mathcal{F}_-(q,k)}= \mathcal{F}_-(q,-k)$ and hence $\overline{A(q,k)}= A(q,-k)$ for any $k \in \mathbb{R}$. \qed \newline In order to prove the second item of Theorem \ref{A.prop}, we expand the map $q \mapsto A(q)$ as a power series in $q$. More precisely, iterate formula \eqref{duhamelformula} and insert the formal expansion obtained in this way in the integral term of \eqref{S.1}, to get \begin{equation} \label{b0series} S(q,k)= \mathcal{F}_-(q,k) + \sum_{n \geq 1} \frac{s_n(q,k)}{k^n} \end{equation} where, with $dt = dt_0 \cdots dt_n$, \begin{equation} \label{expansionSn} s_n(q,k):=\int_{\Delta_{n+1}} e^{ikt_0} q(t_0) \prod_{j=1}^n \Big(q(t_j) \, \sin k(t_j-t_{j-1})\Big)e^{ikt_n} \, dt \end{equation} is a polynomial of degree $n+1$ in $q$ (cf.\ Appendix \ref{analytic_map}) and $\Delta_{n+1}$ is given by $$ \Delta_{n+1} := \left\{(t_0, \cdots, t_n) \in \mathbb{R}^{n+1}: \quad t_0\leq \cdots \leq t_n \right\} . $$ Since by Proposition \ref{prop:scatt} $S(q, \cdot)$ is in $L^2$, it remains to control the decay of $A(q, \cdot) $ in $k$ at infinity.
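To make the structure of \eqref{b0series} concrete, the first iteration of \eqref{duhamelformula} already exhibits the terms with $n=0,1$; the following computation is only a sketch, with the remainder left implicit:

```latex
% One iteration of the Duhamel formula inside (S.1):
%   f_1(t_0,k) = e^{ik t_0}
%              + \int_{t_0}^{+\infty} \frac{\sin k(t_1-t_0)}{k}\, q(t_1)\, f_1(t_1,k)\, dt_1 ,
% so that
\begin{align*}
S(q,k) &= \int\limits_{-\infty}^{+\infty} e^{2ikt_0}\, q(t_0)\, dt_0
        + \frac{1}{k}\int_{\Delta_2} e^{ikt_0}\, q(t_0)\,
          \sin k(t_1-t_0)\, q(t_1)\, f_1(q,t_1,k)\, dt_0\, dt_1 \\
       &= \mathcal{F}_-(q,k) + \frac{s_1(q,k)}{k} + O\!\left(\frac{1}{k^2}\right),
\end{align*}
% where the last line follows by replacing f_1(q,t_1,k) with e^{ik t_1}
% and iterating once more on the remainder.
```

The same bookkeeping, iterated $N'$ times, produces the remainder term $S_{N'+1}$ used in the proof of Theorem \ref{A.prop} $(ii)$ below.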
Introduce a cut off function $\chi$ with $\chi(k)=0$ for $|k|\leq 1$ and $\chi(k)=1$ for $|k| > 2$ and consider the series \begin{equation} \label{b1series} \chi(k) S(q,k)= \chi(k) \mathcal{F}_-(q,k) + \sum_{n \geq 1} \frac{\chi(k) s_n(q,k)}{k^n}. \end{equation} Item $(ii)$ of Theorem \ref{A.prop} follows once we show that each term $\tfrac{\chi(k) s_n(q,k)}{k^n}$ of the series is bounded as a map from $H^N_\mathbb{C} \cap L^2_{4}$ into $L^2_{N+1}$ and the series has an infinite radius of convergence in $L^2_{N+1}$. Indeed the analyticity of the map then follows from general properties of analytic maps in complex Banach spaces, see Remark \ref{entire.func}.\\ In order to estimate the terms of the series, we need estimates on the maps $k \mapsto s_n(q,k)$. A first trivial bound is given by \begin{equation} \label{sn.l.inf} \norm{s_n(q, \cdot)}_{L^\infty} \leq \tfrac{1}{(n+1)!} \norm{q}_{L^1}^{n+1}. \end{equation} However, in order to prove convergence of \eqref{b1series}, one needs more refined estimates of the norm of $k\mapsto s_n(q,k)$ in $L^2_{N}$. In order to derive such estimates, we begin with a preliminary lemma about oscillatory integrals: \begin{lemma} \label{hyp_red} Let $f \in L^1(\mathbb{R}^n, \mathbb{C})\cap L^2(\mathbb{R}^n, \mathbb{C})$. Let $\alpha \in \mathbb{R}^n$, $\alpha \neq 0$ and $$ g: \mathbb{R} \rightarrow \mathbb{C}, \quad g(k):= \int_{\mathbb{R}^n}e^{ik \alpha \cdot t} f(t)\; dt. $$ Then $g \in L^2$ and for any component $\alpha_i \neq 0$ one has \begin{equation} \norm{g}_{L^2} \leq \sqrt{\frac{2\pi}{|\alpha_i|}}\, \int\limits_{\mathbb{R}^{n-1}} \Big(\int\limits_{-\infty}^{+\infty} |f(t)|^2 \, d t_i \Big)^{1/2} dt_1 \ldots \widehat{d t_i} \ldots dt_n.
\end{equation} \end{lemma} \begin{proof} The lemma is a variant of Parseval's theorem for the Fourier transform; indeed \begin{equation} \label{hr1} \norm{g}_{L^2}^2 = \int_{\mathbb{R}} g(k)\, \overline{g(k)}\, dk = \int\limits_{\mathbb{R} \times \mathbb{R}^n \times \mathbb{R}^n} e^{ik \alpha \cdot (t-s)} f(t) \overline{f(s)} \, dt \, ds \, dk. \end{equation} Integrating first in the $k$ variable and using the distributional identity $\int_{\mathbb{R}}e^{ikx} \, dk= 2\pi\, \delta_{0}(x)$, where $\delta_0$ denotes the Dirac delta function, one gets \begin{equation} \norm{g}_{L^2}^2 = 2\pi \int\limits_{\mathbb{R}^{n}\times \mathbb{R}^{n}} f(t)\, \overline{f(s)}\, \delta(\alpha \cdot (t-s)) \,dt \, ds. \end{equation} Choose an index $i$ such that $\alpha_i \neq 0$; then $\alpha \cdot (t-s) = 0 $ implies that $s_i = t_i + c_i / \alpha_i$, where $c_i = \sum_{j \neq i} \alpha_j(t_j - s_j)$, and, as a distribution in the variable $s_i$, $\delta(\alpha \cdot (t-s)) = \frac{1}{|\alpha_i|}\, \delta(s_i - t_i - c_i/\alpha_i)$. Denoting $d\sigma_i= dt_1 \cdots \widehat{dt_i} \cdots dt_n$ and $d\tilde{\sigma_i}= ds_1 \cdots \widehat{ds_i} \cdots ds_n$, one has, integrating first in the variables $s_i$ and $t_i$, \begin{equation} \begin{aligned} \norm{g}_{L^2}^2 & =\frac{2\pi}{|\alpha_i|} \int\limits_{\mathbb{R}^{n-1}\times \mathbb{R}^{n-1} } d\sigma_i \, d\tilde{\sigma_i} \, \int_\mathbb{R} f(t_1, \ldots, t_i, \ldots, t_n) \overline{f(s_1, \ldots, t_i + c_i/\alpha_i, \ldots, s_n)} dt_i \\ & \quad \leq \frac{2\pi}{|\alpha_i|} \int\limits_{\mathbb{R}^{n-1}\times \mathbb{R}^{n-1} } d\sigma_i \, d\tilde{\sigma_i} \Big(\int\limits_{-\infty}^{+\infty} |f(t)|^2 \, dt_i \Big)^{1/2} \cdot \Big(\int\limits_{-\infty}^{+\infty} |f(s)|^2 \, ds_i \Big)^{1/2} \\ & \quad \leq \frac{2\pi}{|\alpha_i|} \Big(\int\limits_{\mathbb{R}^{n-1} } d\tilde\sigma_i \, \Big(\int\limits_{-\infty}^{+\infty} |f(s)|^2 \, ds_i \Big)^{1/2} \Big)^2 \end{aligned} \end{equation} where in the second line we have used the Cauchy-Schwarz inequality and the invariance of the integral $\int\limits_{-\infty}^{+\infty} |f(s_1, \ldots, t_i + c_i/\alpha_i, \ldots, s_n)|^2 $ by translation.
\end{proof} To get bounds on the norm of the polynomials $k \mapsto s_n(q,k)$ in $L^2_{N}$ it is convenient to study the multilinear maps associated with them: \begin{align*} \label{multisn} \tilde s_n \ : \ & \left(H^N_\mathbb{C} \cap L^1 \right)^{n+1} \to L^2_N \ , \\ & (f_0, \cdots, f_n) \mapsto \tilde{s}_n(f_0, \cdots, f_n):= \int_{\Delta_{n+1}} e^{ikt_0}f_0(t_0) \prod_{j=1}^{n} \Big(f_j(t_j) \, \sin (k(t_j-t_{j-1})) \Big)\,e^{ikt_n} \; dt \ . \end{align*} The boundedness of these multilinear maps is given by the following \begin{lemma} \label{tilde_s_n} For each $n \geq 1$ and $N \in \mathbb{Z}_{\geq 0}$, $\tilde{s}_n: (H^N_\mathbb{C} \cap L^1)^{n+1} \to L^2_{N}$ is bounded. In particular there exist constants $C_{n,N}>0$ such that \begin{equation} \label{s_n_tilde_estim} \norm{\tilde{s}_n(f_0, \ldots, f_n)}_{L^2_{N}} \leq C_{n,N} \norm{f_0}_{H^N_\mathbb{C} \cap L^1} \cdots \norm{f_n}_{H^N_\mathbb{C} \cap L^1}. \end{equation} \end{lemma} For the proof, introduce the operators $I_j : L^1 \to L^\infty$, $j= 1,2$, defined by \begin{equation} I_1(f)(t):= \intx{t} f(s)\, ds \qquad I_2(f)(t):= \int\limits^t_{-\infty} f(s)\, ds. \end{equation} It is easy to prove that if $u, v \in H^N_\mathbb{C}\cap L^1$, then $u\,I_j(v) \in H^N_\mathbb{C}\cap L^1$ and the estimate $\norm{u\, I_j(v)}_{H^N_\mathbb{C}}\leq C\, \norm{u}_{H^N_\mathbb{C} \cap L^1} \norm{v}_{H^N_\mathbb{C} \cap L^1}$ holds for $j=1,2$, with $C>0$ depending only on $N$. \\ {\em Proof of Lemma \ref{tilde_s_n}.} As $\sin x = (e^{ix}- e^{-ix})/2i$ we can write $e^{ikt_0} \Big( \prod_{j=1}^n \sin k(t_j-t_{j-1})\Big) e^{ik t_n} $ as a sum of complex exponentials. Note that the arguments of the exponentials are obtained by taking all the possible combinations of $\pm$ in the expression $t_0 \pm (t_1-t_0) \pm \ldots \pm (t_n-t_{n-1}) + t_n$.
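For orientation, the smallest case can be written out explicitly; a one-line check for $n=1$, where the two sign choices in $t_0 \pm (t_1-t_0) + t_1$ give the exponents $2t_1$ and $2t_0$:

```latex
\begin{align*}
e^{ikt_0}\,\sin k(t_1-t_0)\,e^{ikt_1}
  = e^{ik(t_0+t_1)}\,\frac{e^{ik(t_1-t_0)} - e^{-ik(t_1-t_0)}}{2i}
  = \frac{1}{2i}\Big( e^{2ikt_1} - e^{2ikt_0} \Big) \ .
\end{align*}
```

Integrating this identity against $f_0(t_0)f_1(t_1)$ over $\Delta_2$ is what produces the two Fourier-type terms in the formula for $\tilde s_1$ in the proof below.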
To handle these combinations, define the set \begin{equation} \begin{aligned} \Lambda_n := \Big\{ \sigma=(\sigma_j)_{1 \leq j \leq n}: \, \sigma_j \in \{\pm 1 \} \Big\} \label{index_set} \end{aligned} \end{equation} and introduce $$ \delta_\sigma := \# \{ 1 \leq j \leq n : \, \sigma_j = -1 \}. $$ For any $\sigma \in \Lambda_n$, define $\alpha_\sigma = (\alpha_j)_{0 \leq j \leq n}$ as $$ \alpha_0 = (1- \sigma_1), \quad \alpha_j= \sigma_j - \sigma_{j+1} \mbox{ for } 1 \leq j \leq n-1, \quad \alpha_n = 1 + \sigma_n. $$ Note that for any $t=(t_0, \ldots, t_n)$, one has $\alpha_\sigma \cdot t = t_0 + \sum_{j=1}^n \sigma_j (t_j - t_{j-1}) + t_n$.\\ For every $\sigma \in \Lambda_n$, $\alpha_\sigma$ satisfies the following properties: \begin{equation} \label{set.ind.prop} (i) \; \alpha_0,\, \alpha_n \in \left\{2, 0 \right\}, \; \alpha_j \in \left\{ 0, \pm 2 \right\}\, \forall 1 \leq j \leq n-1; \quad (ii)\;\#\left\{j \middle| \alpha_j \neq 0 \right\} \mbox{ is odd.} \end{equation} Property $(i)$ is obviously true; we now prove $(ii)$ by induction. For $n=1$, property $(ii)$ is trivial. To prove the induction step $n \leadsto n+1$, let $\alpha_0= 1-\sigma_1, \ldots, \alpha_n=\sigma_n - \sigma_{n+1},\, \alpha_{n+1}=1+\sigma_{n+1}$, and define $\tilde \alpha_n:= 1+ \sigma_n \in \{0, 2 \}$. By the induction hypothesis the vector $\tilde \alpha_\sigma = (\alpha_0, \ldots, \alpha_{n-1}, \tilde{\alpha}_n)$ has an odd number of nonzero elements. Case $\tilde{\alpha}_n =0 $: in this case the vector $(\alpha_0, \ldots, \alpha_{n-1})$ has an odd number of nonzero elements. Then, since $\alpha_n = \sigma_n - \sigma_{n+1} = \tilde{\alpha}_n- \alpha_{n+1}= - \alpha_{n+1}$, one has that $(\alpha_n, \alpha_{n+1}) \in \{ (0,0), \, (-2, 2)\}$. Therefore the vector $\alpha_\sigma$ has an odd number of nonzero elements. Case $\tilde{\alpha}_n =2$: in this case the vector $(\alpha_0, \ldots, \alpha_{n-1})$ has an even number of nonzero elements.
As $\alpha_n = 2 - \alpha_{n+1}$, it follows that $(\alpha_n, \alpha_{n+1}) \in \{ (2,0), \, (0, 2)\}$. Therefore the vector $\alpha_\sigma$ has an odd number of nonzero elements. This proves \eqref{set.ind.prop}.\\ As $$e^{ikt_0} \Big( \prod_{j=1}^n \sin k(t_j-t_{j-1})\Big) e^{ik t_n} = \sum_{\sigma \in \Lambda_n} \frac{(-1)^{\delta_\sigma}}{(2i)^n}e^{ik \alpha_\sigma \cdot t}$$ $\tilde{s}_n$ can be written as a sum of complex exponentials, $\tilde{s}_n(f_0, \ldots, f_n)(k)= \sum_{\sigma \in \Lambda_n } \frac{(-1)^{\delta_\sigma}}{(2i)^n} \tilde{s}_{n,\sigma}(f_0, \ldots, f_n)(k)$ where \begin{equation} \tilde{s}_{n,\sigma}(f_0, \ldots, f_n)(k)= \int_{\Delta_{n+1}}e^{ik \alpha_\sigma \cdot t} f_0(t_0) \cdots f_n(t_n) dt. \end{equation} The case $N=0$ follows directly from Lemma \ref{hyp_red}, since for each $\sigma \in \Lambda_n$ one has by \eqref{set.ind.prop} that there exists $m$ with $\alpha_m \neq 0$ implying $\norm{\tilde{s}_{n,\sigma}(f_0, \ldots, f_n)}_{L^2} \leq C \norm{f_m}_{L^2} \prod_{j \neq m} \norm{f_j}_{L^1}$, which leads to \eqref{s_n_tilde_estim}. We now prove by induction that $\tilde{s}_n: (H^N_\mathbb{C} \cap L^1)^{n+1} \to L^2_{N} $ for any $N \geq 1$. We start with $n=1$. Since we have already proved that $\tilde{s}_1$ is a bounded map from $(L^2 \cap L^1)^2$ to $L^2$, it is enough to establish the stated decay at $\infty$. One verifies that \begin{align*} \tilde{s}_1(f_0, f_1) & = \frac{1}{2i}\int\limits_{-\infty}^{+\infty} e^{2ikt}\, f_1(t) \, I_2(f_0)(t) \,dt - \frac{1}{2i} \int\limits_{-\infty}^{+\infty} e^{2ikt}\, f_0(t)\, I_1(f_1)(t) \;dt \\ & = \frac{1}{2i}\mathcal{F}_-(f_1\, I_2(f_0)) - \frac{1}{2i}\mathcal{F}_-(f_0\, I_1(f_1)). \end{align*} Hence, for each $N \in \mathbb{Z}_{\geq 0}$, $(f_0, f_1) \mapsto \tilde{s}_1(f_0,f_1)$ is bounded as a map from $(H^N_\mathbb{C} \cap L^1)^2$ to $L^2_{N}$.
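The combinatorics above lend themselves to a quick computational sanity check. The following throwaway script (ours, not part of the proof; all names are ad hoc) verifies, for small $n$, both the expansion of $e^{ikt_0}\big(\prod_j \sin k(t_j-t_{j-1})\big)e^{ikt_n}$ into exponentials indexed by $\Lambda_n$ and the parity property \eqref{set.ind.prop}:

```python
import cmath
import itertools
import random

def alpha(sigma):
    """Exponent vector alpha_sigma for sigma in Lambda_n = {+1, -1}^n."""
    n = len(sigma)
    a = [1 - sigma[0]]                                    # alpha_0 = 1 - sigma_1
    a += [sigma[j] - sigma[j + 1] for j in range(n - 1)]  # alpha_j = sigma_j - sigma_{j+1}
    a.append(1 + sigma[-1])                               # alpha_n = 1 + sigma_n
    return a

def lhs(t, k):
    """e^{ik t_0} * prod_j sin(k (t_j - t_{j-1})) * e^{ik t_n}."""
    n = len(t) - 1
    val = cmath.exp(1j * k * (t[0] + t[n]))
    for j in range(1, n + 1):
        val *= cmath.sin(k * (t[j] - t[j - 1]))
    return val

def rhs(t, k):
    """Sum over sigma of (-1)^{delta_sigma} / (2i)^n * e^{ik alpha_sigma . t}."""
    n = len(t) - 1
    total = 0j
    for sigma in itertools.product((1, -1), repeat=n):
        delta = sum(1 for s in sigma if s == -1)
        phase = sum(ai * ti for ai, ti in zip(alpha(sigma), t))
        total += (-1) ** delta / (2j) ** n * cmath.exp(1j * k * phase)
    return total

random.seed(0)
for n in range(1, 7):
    # properties (i) and (ii) of (set.ind.prop)
    for sigma in itertools.product((1, -1), repeat=n):
        a = alpha(sigma)
        assert all(x in (0, 2, -2) for x in a) and a[0] in (0, 2) and a[-1] in (0, 2)
        assert sum(x != 0 for x in a) % 2 == 1
    # exponential expansion at random ordered points t and random real k
    t = sorted(random.uniform(-3.0, 3.0) for _ in range(n + 1))
    k = random.uniform(-2.0, 2.0)
    assert abs(lhs(t, k) - rhs(t, k)) < 1e-9
```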
Moreover $$\norm{\tilde{s}_1(f_0, f_1)}_{L^2_{N}}\leq C_1 \left(\norm{f_0\, I_1(f_1)}_{H^N_\mathbb{C}} + \norm{f_1\, I_2(f_0)}_{H^N_\mathbb{C}}\right) \leq C_{1,N} \norm{f_0}_{H^N_\mathbb{C} \cap L^1 }\norm{f_1}_{H^N_\mathbb{C}\cap L^1}.$$ We prove the induction step $n \leadsto n+1$ with $n \geq 1$ for any $N \geq 1$ (the case $N=0$ has already been treated). The term $\tilde{s}_{n+1}(f_0, \ldots, f_{n+1})$ equals \begin{align*} \int_{\Delta_{n+2}} e^{ikt_0}f_0(t_0) \prod_{j=1}^{n} \Big(\sin k(t_j-t_{j-1}) f_j(t_j) \Big)e^{ikt_n} \sin k(t_{n+1}-t_n)e^{ik(t_{n+1}-t_n)} f_{n+1}(t_{n+1})\, dt \end{align*} where we multiplied and divided by the factor $e^{ikt_n}$. Writing $$\sin k(t_{n+1}-t_n) = (e^{ik(t_{n+1}- t_n)} - e^{-ik(t_{n+1}- t_n)})/2i \ , $$ the integral term $\intx{t_{n}} \, e^{ik(t_{n+1}-t_n)} \sin k(t_{n+1}-t_{n})\, f_{n+1}(t_{n+1})\; dt_{n+1}$ equals \begin{align*} \frac{1}{2i}\intx{t_n}e^{2ik(t_{n+1}-t_n)}f_{n+1}(t_{n+1}) \,dt_{n+1} - \frac{1}{2i} I_1(f_{n+1})(t_{n}). \end{align*} Since $f_{n+1}\in H^N_\mathbb{C}$, for $0 \leq j \leq N-1$ one gets $f_{n+1}^{(j)}(t)\rightarrow 0$ as $t\to \infty$, where we wrote $f_{n+1}^{(j)} \equiv \partial_t^j f_{n+1}$. Integrating by parts $N$ times in the integral expression displayed above one has \begin{align*} \frac{1}{2i}\sum_{j=0}^{N-1} \frac{(-1)^{j+1} }{(2ik)^{j+1}}\, f_{n+1}^{(j)}(t_n) + \frac{(-1)^N}{2i (2ik)^N}\intx{t_n}e^{2ik(t_{n+1}-t_n)} f_{n+1}^{(N)}(t_{n+1}) \, dt_{n+1} - \frac{1}{2i} I_1(f_{n+1})(t_{n}).
\end{align*} Inserting the formula above in the expression for $\tilde{s}_{n+1}$, and using the multilinearity of $\tilde{s}_{n+1}$ one gets \begin{align} & \tilde{s}_{n+1}(f_0, \ldots, f_{n+1})=\frac{1}{2i}\sum_{j=0}^{N-1}\frac{(-1)^{j+1}}{(2ik)^{j+1}} \tilde{s}_n(f_0, \ldots, f_n \cdot f_{n+1}^{(j)}) - \frac{1}{2i}\tilde{s}_n(f_0, \ldots, f_n\, I_1(f_{n+1})) \label{line_s_n_1} \\ & \qquad + \frac{(-1)^N}{2i(2ik)^N} \int_{\Delta_{n+2}} e^{ikt_0} f_0(t_0) \prod_{j=1}^n \Big(\sin k(t_j-t_{j-1}) \, f_j(t_j)\Big) e^{-ikt_n}\, e^{2ik t_{n+1}} f_{n+1}^{(N)}(t_{n+1}) \; dt. \label{line_s_n_2} \end{align} We analyze the first term in the r.h.s. of \eqref{line_s_n_1}. For $0 \leq j \leq N-1$, the function $f_{n+1}^{(j)} \in H^{N-j}_\mathbb{C} $ is in $ L^\infty$ by the Sobolev embedding theorem. Therefore $f_n \cdot f_{n+1}^{(j)} \in H^{N-j}_\mathbb{C} \cap L^1$. By the inductive assumption applied to $N-j$, $\tilde{s}_n(f_0, \ldots,f_n \cdot f_{n+1}^{(j)}) \in L^2_{N-j} $. Therefore $\frac{\chi}{(2ik)^{j+1}} \tilde{s}_n(f_0, \ldots, f_n \cdot f_{n+1}^{(j)})\in L^2_{N}$, where $\chi$ is chosen as in \eqref{b1series}. For the second term in \eqref{line_s_n_1} it is enough to note that $f_n \, I_1(f_{n+1}) \in H^N_\mathbb{C} \cap L^1$ and by the inductive assumption it follows that $\tilde{s}_n(f_0, \ldots, f_n \, I_1(f_{n+1})) \in L^2_{N}$. \newline We are left with \eqref{line_s_n_2}. Due to the factor $(2ik)^N$ in the denominator, we just need to prove that the integral term is $L^2$ integrable in the $k$-variable. Since the oscillatory factor $e^{2ik t_{n+1}}$ does not get canceled when we express the sine functions with exponentials, we can apply Lemma \ref{hyp_red}, integrating first in $L^2$ w.r.
to the variable $t_{n+1}$, getting $$\norm{\chi \cdot \eqref{line_s_n_2}}_{L^2_{N}} \leq C_{n+1,N} \norm{f_{n+1}^{(N)}}_{L^2} \prod_{j=0}^n \norm{f_j}_{L^1}.$$ Putting everything together, it follows that $\tilde{s}_{n+1}$ is bounded as a map from $(H^N_\mathbb{C} \cap L^1)^{n+2}$ to $L^2_{N}$ for each $N \in \mathbb{Z}_{\geq 0}$ and the estimate \eqref{s_n_tilde_estim} holds. \qed \\ By evaluating the multilinear map $\tilde s_n$ on the diagonal, Lemma \ref{tilde_s_n} shows that for any $ N \geq 0$, \begin{equation} \label{sn.est.2} \norm{s_n(q, \cdot)}_{L^2_{N}} \leq C_{n,N} \norm{q}_{H^N_\mathbb{C} \cap L^1}^{n+1}, \qquad \forall n \geq 1. \end{equation} Combining the $L^\infty$ estimate \eqref{sn.l.inf} with \eqref{sn.est.2} we can now prove item $(ii)$ of Theorem \ref{A.prop}: \\ {\em Proof of Theorem \ref{A.prop} $(ii)$.} Let $\chi$ be the cut off function introduced in \eqref{b1series} and set \begin{equation} \label{A_n.tilde} \tilde A(q,k) := \sum_{n = 1}^{\infty} \frac{\chi(k) s_n(q,k)}{k^n}. \end{equation} We now show that for any $\rho >0 $, $\tilde A(q, \cdot)$ is an absolutely and uniformly convergent series in $L^2_{N+1}$ for $q$ in $ B_\rho(0)$, where $B_\rho(0)$ is the ball in $H^N_\mathbb{C} \cap L^1$ with center $0$ and radius $\rho$. By \eqref{sn.est.2} the map $q \mapsto \sum_{n = 1}^{N+1} \frac{\chi(k) s_n(q,k)}{k^n} $ is analytic as a map from $H^N_\mathbb{C} \cap L^1$ to $ L^2_{N+1}$, being a finite sum of polynomials - cf. Remark \ref{entire.func}.
It remains to estimate the sum $$\tilde{A}_{N+2}(q,k) := \tilde A(q,k) - \sum_{n = 1}^{N+1} \frac{\chi(k) s_n(q,k)}{k^n} \ .$$ It is absolutely convergent since by the $L^\infty$ estimate \eqref{sn.l.inf} \begin{equation} \label{A_N+1_norm} \norm{ \sum_{n\geq N+2} \frac{\chi s_n(q,\cdot)}{k^n} }_{L^2_{N+1}} \leq \sum_{n \geq N+2} \norm{\frac{\chi(k) }{k^n}}_{L^2_{N+1}} \norm{s_n(q,\cdot)}_{L^\infty} \leq C \sum_{n \geq N+2} \frac{\norm{q}_{L^1}^{n+1}}{(n+1)!} \end{equation} for an absolute constant $C>0$. Therefore the series in \eqref{A_n.tilde} converges absolutely and uniformly in $B_\rho(0)$ for every $\rho >0$. The absolute and uniform convergence implies that for any $N \geq 0$, $q \mapsto \tilde A(q, \cdot)$ is analytic as a map from $H^N_\mathbb{C} \cap L^1$ to $L^2_{N+1}$.\\ It remains to show that identity \eqref{b1series} holds, i.e., for every $q \in H^N_\mathbb{C} \cap L^1$ one has $\chi A(q,\cdot) = \tilde A(q,\cdot)$ in $L^2_{N+1}$. Indeed, fix $q \in H^N_\mathbb{C} \cap L^1$ and choose $\rho$ such that $\norm{q}_{H^N_\mathbb{C} \cap L^1} \leq \rho$. Iterate formula \eqref{duhamelformula} $N'\geq 1$ times and insert the result in \eqref{S.1} to get for any $k \in \mathbb{R} \setminus \{0 \}$, $$ S(q,k) = \mathcal{F}_-(q,k) + \sum_{n = 1}^{N'} \frac{ s_n(q,k)}{k^n} + S_{N'+1}(q,k) \ , $$ where $$ S_{N'+1}(q,k) := \frac{1}{k^{N'+1}}\int_{\Delta_{N'+2}} e^{ikt_0} q(t_0) \prod_{j=1}^{N'+1} \Big(q(t_j) \, \sin k(t_j-t_{j-1})\Big)f_1(q, t_{N'+1}, k) \, dt \ . $$ By the definition \eqref{map.A} of $A(q,k)$ and the expression of $S_{N'+1}$ displayed above $$ \chi(k) A(q,k) - \sum_{n = 1}^{N'} \frac{\chi(k) s_n(q,k)}{k^n} = \chi(k) S_{N'+1}(q,k), \qquad \forall N' \geq 1 \ . 
$$ Now let $N' \geq N$; then by Theorem \ref{deift_jost} $(ii)$ there exists a constant $K_\rho$, which can be chosen uniformly on $B_\rho(0)$, such that $$ \norm{\chi S_{N'+1}(q,\cdot)}_{L^2_{N+1}} \leq K_\rho \frac{\norm{q}_{L^1_1}^{N'+2}}{(N'+2)!}\leq K_\rho \frac{\rho^{N'+2}}{(N'+2)!} \to 0, \qquad \mbox{ when } N'\to \infty \ , $$ where for the last inequality we used that $\norm{q}_{L^1_1} \leq C \norm{q}_{L^2_2}$ for some absolute constant $C>0$. Since $\lim_{N' \to \infty} \sum_{n = 1}^{N'} \frac{\chi(k) s_n(q,k)}{k^n} = \tilde{A}(q,k)$ in $L^2_{N+1}$, it follows that $\chi(k) A(q,k) = \tilde A(q,k)$ in $L^2_{N+1}$. \qed For later use we study regularity and decay properties of the map $k \mapsto W(q,k)$. For real valued $q\in L^2_{4}$ with no bound states, classical results in scattering theory give $ W(q,k)\neq 0$ for all $\operatorname{Im} k \geq 0$. We define \begin{equation} \label{class.C.def} \mathcal{Q}_\mathbb{C}:= \left\{ q \in L^2_{4}: W(q,k)\neq 0, \, \forall \, \operatorname{Im} k \geq 0 \right\}, \quad \mathcal{Q}^{N,M}_\mathbb{C}:= \mathcal{Q}_\mathbb{C} \cap H^N_\mathbb{C} \cap L^2_{M} \ . \end{equation} We will prove in Lemma \ref{class_open} below that $\mathcal{Q}^{N,M}_\mathbb{C}$ is open in $ H^N_\mathbb{C} \cap L^2_{M}$. Finally consider the Banach space $W^M_\mathbb{C}$ defined for $M \geq 1$ by \begin{equation} W^M_\mathbb{C} := \lbrace f \in L^\infty : \quad \partial_k f \in H^{M-1}_\mathbb{C} \rbrace \ , \end{equation} endowed with the norm $\norm{f}_{W^M_\mathbb{C}}^2 = \norm{f}_{L^\infty}^2 + \norm{\partial_k f}_{H^{M-1}_\mathbb{C}}^2 $.\\ Note that $H^M_\mathbb{C} \subseteq W^M_\mathbb{C}$ for any $M\geq 1$ and \begin{equation} \label{lemHN*2} gh \in H^{M}_{\zeta,\C} \quad \forall \, g \in H^{M}_{\zeta,\C}, \ \forall \, h \in W^M_\mathbb{C} \ .
\end{equation} The properties of the map $W$ are summarized in the following Proposition: \begin{proposition} \label{Wlem} For $M \in \mathbb{Z}_{\geq 4}$ the following holds: \begin{enumerate}[(i)] \item The map $L^2_{M} \ni q \mapsto W(q,\cdot)-2ik+ \mathcal{F}_-(q,0) \in H^{M}_{\zeta,\C}$ is analytic and $$\norm{W(q, \cdot)-2ik + \mathcal{F}_-(q,0)}_{H^{M}_{\zeta,\C}} \leq C_W \norm{q}_{L^2_M},$$ where the constant $C_W>0$ can be chosen uniformly on bounded subsets of $L^2_M$. \item The map $\mathcal{Q}^{0,M}_\mathbb{C} \ni q \mapsto 1 / W(q, \cdot) \in L^\infty $ is analytic. \item The maps $$ \mathcal{Q}^{0,M}_\mathbb{C} \ni q \mapsto \frac{\partial_k^j W(q,\cdot)}{W(q,\cdot)} \in L^2 \ \mbox{ for } 0 \leq j \leq M-1 \quad \mbox{and} \quad \mathcal{Q}^{0,M}_\mathbb{C} \ni q \mapsto \frac{\zeta \partial_k^M W(q,\cdot)}{W(q,\cdot)} \in L^2$$ are analytic. Here $\zeta$ is a function as in \eqref{zeta}. \end{enumerate} \end{proposition} \begin{proof} The $x$-independence of the Wronskian function \eqref{wronskian_W} implies that \begin{equation} \label{Wwronskian} \begin{aligned} W(q, k) = 2ik \, m_2(q, 0,k)\,m_1(q, 0,k) + [m_2(q, 0,k),\, m_1(q, 0,k)]. \end{aligned} \end{equation} Introduce for $j=1,2$ the functions $\grave{m}_j(q,k) := 2ik \, (m_j(q,0,k) -1)$. By the integral formula \eqref{defm} one verifies that \begin{equation} \begin{aligned} \label{grave.m} \grave{m}_1(q,k) & = \int\limits_0^{+\infty} \left( e^{2ikt}-1 \right)\, q(t)\, (m_1(q,t,k) - 1) \, dt + \int\limits_0^{+\infty} e^{2ikt}\, q(t) \, dt - \int\limits_0^{+\infty} q(t)\, dt;\\ \grave{m}_2(q,k) & = \int\limits_{-\infty}^0 \left( e^{-2ikt}-1 \right)\, q(t)\, (m_2(q,t,k) - 1) \, dt + \int\limits_{-\infty}^0 e^{-2ikt} \, q(t) \, dt - \int\limits_{-\infty}^0 q(t)\, dt. 
\end{aligned} \end{equation} A simple computation using \eqref{Wwronskian} shows that $W(q,k)-2ik + \mathcal{F}_-(q,0) = I + II + III$ where \begin{equation} \label{W_wronskian} \begin{aligned} & I := \grave{m}_1(q,k) + \grave{m}_2(q,k) +\mathcal{F}_-(q,0), \\ & II := \grave{m}_1(q,k) (m_2(q,0,k)-1) \quad \mbox{ and } \quad III := [m_2(q,0,k), m_1(q,0,k)]. \end{aligned} \end{equation} We prove now that each of the terms $I, II$ and $III$ displayed above is an element of $H^{M}_{\zeta,\C}$. We begin by discussing the smoothness of the functions $k \mapsto \grave{m}_j(q,k)$, $j=1,2$. For any $1 \leq n \leq M,$ $$ \partial_k^n \grave{m}_j(q,k) = 2in \, \partial_k^{n-1} (m_j(q,0,k)-1) + 2ik \,\partial_k^n m_j(q,0,k) \ . $$ Thus by Corollary \ref{m(q,0,k)} $(i)$, $\grave{m}_j(q, \cdot) \in W^M_\mathbb{C}$ and $ q \mapsto \grave{m}_j(q, \cdot) $, $j=1,2$, are analytic as maps from $L^2_{M}$ to $W^M_\mathbb{C}$. Consider first the term $III $ in \eqref{W_wronskian}. By Corollary \ref{m(q,0,k)}, $\norm{III(q,\cdot)}_{H^{M}_{\zeta,\C}} \leq K_{III} \norm{q}_{L^2_{M}} $, where $K_{III}>0$ can be chosen uniformly on bounded subsets of $L^2_{M}$. Arguing as in the proof of Proposition \ref{prop:scatt}, one shows that it is an element of $H^{M}_{\zeta,\C}$ and it is analytic as a map $L^2_M \to H^{M}_{\zeta,\C}$. Next consider the term $II$. Since $ \grave{m}_1(q,\cdot)$ is in $W^M_\mathbb{C}$ and $m_2(q,0,\cdot)-1$ is in $H^{M}_{\zeta,\C}$, it follows by \eqref{lemHN*2} that their product is in $H^{M}_{\zeta,\C}$. It is left to the reader to show that $L^2_M \to H^{M}_{\zeta,\C}\,$, $q \mapsto II(q)$ is analytic and furthermore $ \norm{II(q,\cdot)}_{H^{M}_{\zeta,\C}} \leq K_{II} \norm{q}_{L^2_M}$, where $K_{II}>0$ can be chosen uniformly on bounded subsets of $L^2_{M}$.\\ Finally let us consider term $I$. 
By summing the identities for $\grave{m}_1$ and $\grave{m}_2$ in equation \eqref{grave.m}, one gets that \begin{equation} \begin{aligned} \grave{m}_1(q,k) + \grave{m}_2(q,k)+ \mathcal{F}_-(q,0) &= \int\limits_0^{+\infty} e^{2ikt}\, q(t)\, m_1(q,t,k) \, dt - \int\limits_0^{+\infty} q(t)\, (m_1(q,t,k) -1)\, dt \\ &\quad + \int\limits_{-\infty}^0 e^{-2ikt}\, q(t)\, m_2(q,t,k) \, dt - \int\limits_{-\infty}^0 q(t)\, (m_2(q,t,k) -1)\, dt . \end{aligned} \end{equation} We study just the first line displayed above, the second being treated analogously. By equation \eqref{derxm.equation} one has that $\int\limits_0^{+\infty} e^{2ikt}\, q(t)\, m_1(q,t,k) \, dt = \partial_x m_1(q,0,k)$, which by Corollary \ref{m(q,0,k)} is an element of $H^{M}_{\zeta,\C}$ and analytic as a function $L^2_{M} \to H^{M}_{\zeta,\C} $. Furthermore, by Proposition \ref{prop_derminLit} and Proposition \ref{prop_dermiNLit} it follows that $k \mapsto \int\limits_0^{+\infty} q(t)\, (m_1(q,t,k) -1)\, dt$ is an element of $H^{M}_{\zeta,\C}$ and it is analytic as a function $L^2_{M} \to H^{M}_{\zeta,\C} $. Moreover, by Corollary \ref{m(q,0,k)}, $ \norm{I(q, \cdot)}_{H^{M}_{\zeta,\C}} \leq K_{I} \norm{q}_{L^2_{M}}$, where $K_{I}>0$ can be chosen uniformly on bounded subsets of $L^2_{M}$. This proves item $(i)$. \\ We prove now item $(ii)$. By the definition of $\mathcal{Q}_\mathbb{C}$, for $q \in \mathcal{Q}^{0,4}_\mathbb{C}$ the function $W(q,k)$ does not vanish for any $k$ with $ \operatorname{Im} k \geq 0$. By Proposition \ref{prop:scatt} $(ii)$ and the condition $M \geq 4$, it follows that $W(q,\cdot) - 2ik \in L^\infty$; together with the non-vanishing of $W(q,\cdot)$, this implies that the map $\mathcal{Q}^{0,M}_\mathbb{C} \ni q \mapsto 1/W(q,\cdot) \in L^\infty $ is analytic. \\ Item $(iii)$ follows immediately from items $(i)$ and $(ii)$. \end{proof} \begin{lemma} \label{W0<0} For any $q \in \mathcal{Q}^{0,4}$, $W(q,0) < 0$. \end{lemma} \begin{proof} Let $q\in \mathcal{Q}^{0,4}$ and $\kappa \geq 0$.
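As a brief sanity check (not needed for the argument, and using the Jost normalizations $f_1(q,x,k) \sim e^{ikx}$ as $x \to +\infty$ and $f_2(q,x,k) \sim e^{-ikx}$ as $x \to -\infty$): for the free potential $q = 0$ one has $f_1 = e^{ikx}$ and $f_2 = e^{-ikx}$, hence $$ W(0,k) = [f_2, f_1] = f_2 \, \partial_x f_1 - (\partial_x f_2)\, f_1 = 2ik \ , $$ so that $W(0,i\kappa) = -2\kappa \leq 0$ for $\kappa \geq 0$; note however that $q = 0$ itself does not belong to $\mathcal{Q}^{0,4}$, since $W(0,0) = 0$.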
By formulas \eqref{duhamelformula} and \eqref{duhamelformula2} with $k=i\kappa$, it follows that $f_j(q,x,i\kappa)$ ($j=1,2$) is real valued (recall that $q$ is real valued). By the definition $W(q,i\kappa) = \left[ f_2, f_1\right](q,i\kappa)$ it follows that for $\kappa \geq 0$, $W(q,i\kappa)$ is real valued. As $q \in \mathcal{Q}^{0,4}$, $W(q,i\kappa)$ has no zeroes for $\kappa \geq 0$. Furthermore for large $\kappa$ we have $W(q,i\kappa) \sim 2i(i\kappa) = -2\kappa$. Since $\kappa \mapsto W(q,i\kappa)$ is continuous, nowhere vanishing, and negative for $\kappa$ large, it follows that $W(q, i\kappa) < 0 $ for all $\kappa \geq 0$. \end{proof} We are now able to prove the direct scattering part of Theorem \ref{reflthm}. \\ \noindent{\em Proof of Theorem \ref{reflthm}: direct scattering part.} Let $N \geq 0$, $M \geq 4$ be fixed integers. First we show that $S(q, \cdot)$ is an element of $\mathscr{S}^{M,N}$ if $q \in \mathcal{Q}^{N,M}$. By \eqref{S.conj}, $S(q,\cdot)$ satisfies (S1). To see that $S(q,0) >0$ recall that $S(q,0) = -W(q,0)$, and by Lemma \ref{W0<0} $W(q,0)<0$. Thus $S(q,\cdot)$ satisfies (S2). Finally by Corollary \ref{S.decay} and Proposition \ref{prop:scatt} it follows that $S(q,\cdot) \in \mathscr{S}^{M,N}$. The analyticity properties of the maps $q \mapsto S(q,\cdot)$ and $q \mapsto A(q,\cdot)$ follow by Corollary \ref{S.decay}, Proposition \ref{prop:scatt} and Theorem \ref{A.prop}. \qed \\\\ We conclude this section with two lemmas about the openness of $\mathcal{Q}^{N,M}$ and $\mathscr{S}^{M,N}$. \begin{lemma} \label{class_open} For any integers $N \geq 0, \, M \geq 4$, $\mathcal{Q}^{N,M}$ $[ \mathcal{Q}^{N,M}_\mathbb{C}]$ is open in $H^N \cap L^2_M$ $[ H^N_\mathbb{C} \cap L^2_{M} ]$. \end{lemma} \begin{proof} The proof can be found in \cite{kapptrub2}; we sketch it here for the reader's convenience. By a classical result in scattering theory \cite{deift}, $W(q,k)$ admits an analytic extension to the upper half plane $\operatorname{Im} k \geq 0$.
By definition \eqref{class.C.def} one has $\mathcal{Q}_\mathbb{C} = \{ q \in L^2_4: \, W(q,k) \neq 0 \quad \forall \, \operatorname{Im} k \geq 0 \}$. Using that $(q,k)\mapsto W(q,k)$ is continuous on $L^2_4\times \mathbb{R}$ and that, by Proposition \ref{Wlem}, $\| W(q, \cdot)-2ik\|_{L^{\infty}}$ is bounded locally uniformly in $q\in L^2_4$, one sees that $\mathcal{Q}_\mathbb{C}$ is open in $L^2_4$. The remaining statements follow in a similar fashion. \end{proof} Denote by $H^{M}_{\zeta,\C}$ the complexification of the Banach space $H^M_{\zeta}$, in which the reality condition $\overline{f(k)} = f(-k)$ is dropped: \begin{equation} \begin{aligned} \label{H^N*C} &H^{M}_{\zeta,\C} := \lbrace f \in H^{M-1}_\mathbb{C}: \quad \zeta \partial_k^M f \in L^2 \rbrace. \end{aligned} \end{equation} On $H^{M}_{\zeta,\C} \cap L^2_{N}$ with $M \geq 4$, $N \geq 0$, define the linear functional $$\Gamma_0 :H^{M}_{\zeta,\C} \cap L^2_{N} \to \mathbb{C}, \quad h \mapsto h(0).$$ By the Sobolev embedding theorem, $\Gamma_0$ is a bounded, and hence analytic, linear map on $H^{M}_{\zeta,\C} \cap L^2_{N} $. In view of the definition \eqref{reflspaceNM0}, $\mathscr{S}^{M,N} \subseteq H^M_{\zeta}$. Furthermore denote by $\mathscr{S}^{M,N}_\mathbb{C}$ the complexification of $\mathscr{S}^{M,N}$. It consists of functions $\sigma:\mathbb{R} \to \mathbb{C}$ with $\operatorname{Re} (\sigma(0)) > 0$ and $\sigma \in H^{M}_{\zeta,\C} \cap L^2_{N}$.\\ In the following we denote by $C^{n, \gamma}(\mathbb{R}, \mathbb{C})$, with $n \in \mathbb{Z}_{\geq 0}$ and $0 < \gamma \leq 1$, the space of complex-valued functions with $n$ continuous derivatives such that the $n^{th}$ derivative is H\"older continuous with exponent $\gamma$. \begin{lemma} \label{lem:S.open} For any integers $M\geq 4$, $N\geq 0$ the subset $\mathscr{S}^{M,N}$ $\ [\mathscr{S}^{M,N}_\mathbb{C}]$ is open in $H^M_{\zeta} \cap L^2_{N}$ $\ [H^{M}_{\zeta,\C} \cap L^2_{N}]$.
\end{lemma} \begin{proof} Clearly $H^4_{\zeta,\mathbb{C}} \subseteq H^{3}_\mathbb{C}$, and by the Sobolev embedding theorem $H^3_{\mathbb{C}} \hookrightarrow C^{2, \gamma}(\mathbb{R}, \mathbb{C})$ for any $0 < \gamma < 1/2$. It follows that $\sigma \mapsto \sigma(0)$ is a continuous functional on $H^4_{\zeta,\mathbb{C}}$. In view of the definition of $\mathscr{S}^{M,N}$, the claimed statement follows. \end{proof} \section{Inverse scattering map} \label{sec:inv.scat} The aim of this section is to prove the inverse scattering part of Theorem \ref{reflthm}. More precisely we prove the following theorem. \begin{theorem} \label{thm:inv.scat} Let $N \in \mathbb{Z}_{\geq 0}$ and $M \in \mathbb{Z}_{\geq 4}$ be fixed. Then the scattering map $S: \mathcal{Q}^{N,M} \to \mathscr{S}^{M,N}$ is bijective. Its inverse $S^{-1}:\mathscr{S}^{M,N} \to \mathcal{Q}^{N,M}$ is real analytic. \end{theorem} The smoothing and analytic properties of $B:= S^{-1} - \mathcal{F}_{-}^{-1}$ claimed in Theorem \ref{reflthm} now follow in a straightforward way from Theorems \ref{thm:inv.scat} and \ref{A.prop}. \begin{proof}[Proof of Theorem \ref{reflthm}: inverse scattering part.] By Theorem \ref{thm:inv.scat}, $S^{-1}:\mathscr{S}^{M,N} \to \mathcal{Q}^{N,M}$ is well defined and real analytic. As by definition $B = S^{-1} - \mathcal{F}_-^{-1}$ and $S = \mathcal{F}_- + A$, one has $B \circ S = Id - \mathcal{F}_-^{-1} \circ S = - \mathcal{F}_-^{-1}\circ A$ or $$B=-\mathcal{F}_-^{-1}\circ A \circ S^{-1} \ .$$ Hence, by Theorems \ref{A.prop} and \ref{thm:inv.scat}, for any $M \in \mathbb{Z}_{\geq 4}$ and $N \in \mathbb{Z}_{\geq 0}$ the restriction $B:\mathscr{S}^{M,N}\to H^{N+1} \cap L^2_{M-1} $ is a real analytic map. \end{proof} The rest of the section is devoted to the proof of Theorem \ref{thm:inv.scat}. By the direct scattering part of Theorem \ref{reflthm} proved in Section \ref{sec:dir.scat}, $S(\mathcal{Q}^{N,M}) \subseteq \mathscr{S}^{M,N}$.
Furthermore, the map $S: \mathcal{Q} \to \mathscr{S}$ is 1-1, see \cite[Section 4]{kapptrub}. Thus also its restriction $\left.S\right|_{\mathcal{Q}^{N,M}}: \mathcal{Q}^{N,M} \to \mathscr{S}^{M,N}$ is 1-1. Let us denote by $\mathcal{H}: L^2 \to L^2$ the Hilbert transform \begin{equation} \label{def.hilbert.trans} \mathcal{H}(v)(k):=-\frac{1}{\pi} \operatorname{p.v.} \int_{-\infty}^{\infty}\frac{v(k')}{k'-k} dk' \ . \end{equation} We collect in Appendix \ref{Hilbert.transf} some well known properties of the Hilbert transform which will be exploited in the following. In order to prove that $S:\mathcal{Q}^{N,M} \to \mathscr{S}^{M,N}$ is onto, we need some preparation. Following \cite{kapptrub} define for $\sigma \in \mathscr{S}^{M,N}$, \begin{equation} \omega(\sigma, k) := \exp\left(\frac{1}{2} l(\sigma,k) + \frac{i}{2} \mathcal{H}(l(\sigma, \cdot))(k) \right) \ , \quad l(\sigma, k) := \log\left( \frac{4(k^2+1)}{4k^2 + \sigma(k) \sigma(-k)} \right) \ , \quad k \in \mathbb{R} \end{equation} and \begin{equation} \label{inv.scatt.elem} \begin{aligned} &\frac{1}{w(\sigma,k)} := \frac{\omega(\sigma,k)}{2i(k+i)} \ , \qquad \tau(\sigma,k) := \frac{2ik}{w(\sigma,k)} \ , \\ &\rho_+(\sigma,k) := \frac{\sigma(-k)}{w(\sigma,k)}\ , \qquad \rho_-(\sigma, k) := \frac{\sigma(k)}{w(\sigma,k)} \ . \end{aligned} \end{equation} The aim is to show that $\rho_+(\sigma, \cdot)$, $\rho_-(\sigma, \cdot)$ and $\tau(\sigma, \cdot)$ are the scattering data $r_+, r_-$ and $t$ of a potential $q \in \mathcal{Q}^{N,M}$. In the next proposition we discuss the properties of the map $\sigma \to l(\sigma, \cdot)$. 
To this aim we introduce, for $M \in \mathbb{Z}_{\geq 2}$ and $\zeta$ as in \eqref{zeta}, the auxiliary Banach space \begin{equation} W^M_{\zeta} := \lbrace f \in L^\infty : \overline{f(k)} = f(-k) , \quad \partial_k^{n} f \in L^2 \mbox{ for } 1 \leq n \leq M-1 \ , \ \ \ \zeta \partial_k^M f \in L^2 \rbrace \ \end{equation} and its complexification \begin{equation} W^M_{\zeta,\mathbb{C}} := \lbrace f \in L^\infty : \quad \partial_k^{n} f \in L^2 \mbox{ for } 1 \leq n \leq M-1 \ , \ \ \ \zeta \partial_k^M f \in L^2 \rbrace \ , \end{equation} both endowed with the norm $\norm{f}_{W^M_{\zeta,\mathbb{C}}}^2 := \norm{f}_{L^\infty}^2 + \norm{\partial_k f}_{H^{M-2}_\mathbb{C}}^2 + \norm{\zeta \partial_k^M f}_{L^2}^2$. Note that $W^M_{\zeta}$ differs from $H^M_{\zeta}$ since we require that $f$ lies just in $L^\infty$ (and not in $L^2$ as in $H^M_{\zeta}$). \begin{proposition} \label{prop:inv.l} Let $N \in \mathbb{Z}_{\geq 0}$ and $M \in \mathbb{Z}_{\geq 4}$ be fixed. The map $\mathscr{S}^{M,N} \to H^M_{\zeta}$, $\sigma \to l(\sigma, \cdot)$ is real analytic. \end{proposition} \begin{proof} Denote by $$ h(\sigma, k):= \frac{4(k^2+1)}{4k^2 + \sigma(k) \sigma(-k)} \ . $$ We show that the map $\mathscr{S}^{M,N} \to W^M_{\zeta}$, $\sigma \to h(\sigma, \cdot)$ is real analytic. First note that the map $\mathscr{S}^{M,N}_\mathbb{C} \to L^\infty$, assigning to $\sigma$ the function $\sigma(k)\sigma(-k)$ is analytic by the Sobolev embedding theorem. For $\sigma \in \mathscr{S}^{M,N}_\mathbb{C}$ write $\sigma = \sigma_1 + i \sigma_2$, where $\sigma_1:= \operatorname{Re} \sigma$, $\sigma_2 := \operatorname{Im} \sigma$. Then \begin{equation} \label{re.sigma} \operatorname{Re} (\sigma(k)\sigma(-k)) = \sigma_1(k)\sigma_1(-k) - \sigma_2(k)\sigma_2(-k) \ . \end{equation} Now fix $\sigma^0 \in \mathscr{S}^{M,N}$ and recall that $\mathscr{S}^{M,N} = \mathscr{S} \cap H^M_{\zeta} \cap L^2_N$. 
Note that $\sigma_2^0 := \operatorname{Im} \sigma^0 = 0$, while $\sigma_1^0:= \operatorname{Re} \sigma^0$ satisfies $\sigma_1^0(k)\sigma_1^0(-k) \geq 0$ and $\sigma_1^0(0)^2 > 0$. Thus, by formula \eqref{re.sigma} and the Sobolev embedding theorem, there exist a small complex neighborhood $V_{\sigma^0}\subset \mathscr{S}^{M,N}_\mathbb{C}$ of $\sigma^0$ and a constant $C_{\sigma^0}>0$ such that $$\operatorname{Re} (4k^2 + \sigma(k)\sigma(-k)) > C_{\sigma^0} \ , \quad \forall \sigma \in V_{\sigma^0} \ .$$ It follows that there exist constants $C_1, C_2 >0$ such that \begin{equation} \label{h.bound} \operatorname{Re} h(\sigma, k) \geq C_1 \ , \qquad |h(\sigma,k)| \leq C_2 \ , \qquad \forall k \in \mathbb{R}, \ \forall \sigma \in V_{\sigma^0} \ , \end{equation} implying that the map $V_{\sigma^0} \to L^\infty$, $\sigma \to h(\sigma,\cdot)$ is analytic. In a similar way one proves that $V_{\sigma^0} \to W^M_{\zeta,\mathbb{C}}$, $\sigma \mapsto h(\sigma, \cdot)$ is analytic (we omit the details). If $\overline{\sigma(k)} = \sigma(-k)$, the function $h(\sigma, \cdot)$ is real valued. Thus it follows that $\mathscr{S}^{M,N} \to W^M_{\zeta}$, $\sigma \to h(\sigma, \cdot)$ is real analytic. We consider now the map $\sigma \to l(\sigma, \cdot)$. By \eqref{h.bound}, $l(\sigma, k) = \log(h(\sigma,k))$ is well defined for every $k \in \mathbb{R}$. Since the logarithm is a real analytic function on the right half plane, the map $\mathscr{S}^{M,N} \to L^\infty$, $\sigma \to l(\sigma, \cdot)$ is real analytic as well. Furthermore for $|k| >1$ one finds a constant $C_3 >0$ such that $|l(\sigma,k)| \leq C_3/|k|^2$, $\forall \sigma \in V_{\sigma^0}$. Thus $\sigma \to l(\sigma, \cdot)$ is real analytic as a map from $\mathscr{S}^{M,N}$ to $L^2$.
One verifies that $\partial_k \log(h(\sigma,\cdot)) = \frac{\partial_k h(\sigma, \cdot) }{h(\sigma, \cdot)}$ is in $L^2$ and one shows by induction that the map $\mathscr{S}^{M,N} \to H^M_{\zeta}$, $\sigma \mapsto l(\sigma,\cdot)$ is real analytic. \end{proof} In the next proposition we discuss the properties of the map $\sigma \to \omega(\sigma, \cdot)$. \begin{proposition} \label{prop:inv.omega} Let $N \in \mathbb{Z}_{\geq 0}$ and $M \in \mathbb{Z}_{\geq 4}$ be fixed. The map $\mathscr{S}^{M,N} \to W^M_{\zeta}$, $\sigma \to \omega(\sigma, \cdot)$ is real analytic. Furthermore $\omega(\sigma, \cdot)$ has the following properties: \begin{enumerate}[(i)] \item $\omega(\sigma, k)$ extends analytically in the upper half plane $\operatorname{Im} k > 0$, and it has no zeroes in $\operatorname{Im} k \geq 0$. \item $\overline{\omega(\sigma, k)} = \omega(\sigma, -k)$ $\, \forall k \in \mathbb{R}$. \item For every $k \in \mathbb{R}$ $$\omega(\sigma, k) \omega(\sigma,-k) = \frac{4(k^2+1)}{4k^2 + \sigma(k) \sigma(-k)} \ .$$ \end{enumerate} \end{proposition} \begin{proof} By Lemma \ref{lem:hilb.zeta}, the Hilbert transform is a bounded linear operator from $H^{M}_{\zeta,\C}$ to $H^{M}_{\zeta,\C}$. By Proposition \ref{prop:inv.l} it then follows that the map $$\mathscr{S}^{M,N} \to H^M_{\zeta} \ , \quad \sigma \mapsto \mathcal{H}(l(\sigma,\cdot))$$ is real analytic as well. Since the exponential function is real analytic and $\partial_k \omega(\sigma, \cdot) = \frac{1}{2}\partial_k (l(\sigma,\cdot) + i \mathcal{H}(l(\sigma, \cdot))) \omega(\sigma,\cdot)$, one proves by induction that $\mathscr{S}^{M,N} \to W^M_{\zeta}$, $\sigma \to \omega(\sigma, \cdot)$ is real analytic. Properties $(i)$--$(iii)$ are proved in \cite[Section 4]{kapptrub}. \end{proof} Next we consider the map $\sigma \to \frac{1}{w(\sigma, \cdot)}$. The following proposition follows immediately from Proposition \ref{prop:inv.omega} and the definition $\frac{1}{w(\sigma,k)}=\frac{\omega(\sigma,k)}{2i(k+i)}$. 
\begin{proposition} \label{prop:w.sigma} The map $\mathscr{S}^{M,N} \to H^{M-1}_\mathbb{C}$, $\sigma \to \frac{1}{w(\sigma, \cdot)}$ is real analytic. Furthermore the maps $$ \mathscr{S}^{M,N} \to L^2 \ , \qquad \sigma \to \partial_k^n \frac{2ik}{w(\sigma, \cdot)} \ , \quad 1 \leq n \leq M $$ are real analytic. The function $\frac{1}{w(\sigma, \cdot)}$ fulfills \begin{enumerate}[(i)] \item $\overline{\left(\frac{1}{w(\sigma, k)}\right)} = \frac{1}{w(\sigma, -k)}$ for every $k \in \mathbb{R}$. \item $\mmod{ \frac{2ik}{w(\sigma, k)}} \leq 1 $ for every $k \in \mathbb{R}$. \item For every $k \in \mathbb{R}$ $$ w(\sigma, k) w(\sigma, -k) = 4k^2 + \sigma(k) \sigma(-k) \ . $$ In particular $\mmod{w(\sigma, k)} > 0$ for every $k \in \mathbb{R}$ and $\sigma \in \mathscr{S}^{M,N}$. \end{enumerate} \end{proposition} Now we study the properties of $\rho_+(\sigma, \cdot)$ and $\rho_-(\sigma,\cdot)$ defined in formulas \eqref{inv.scatt.elem}. \begin{proposition} \label{prop:inv.r} Let $N \in \mathbb{Z}_{\geq 0}$ and $M \in \mathbb{Z}_{\geq 4}$ be fixed. Then the maps $\mathscr{S}^{M,N} \to H^M_{\zeta} \cap L^2_N$, $\sigma \to \rho_\pm(\sigma,\cdot)$ are real analytic. There exists $C >0$ so that $\norm{\rho_{\pm}(\sigma, \cdot)}_{H^{M}_{\zeta,\C}\cap L^2_N} \leq C \norm{\sigma}_{H^M_{\zeta}\cap L^2_N} $, where $C$ depends locally uniformly on $\sigma \in \mathscr{S}^{M,N}$. Furthermore the following holds: \begin{enumerate}[(i)] \item unitarity: $\tau(\sigma, k) \tau(\sigma,-k) + \rho_\pm(\sigma, k)\rho_\pm(\sigma,-k) = 1$ and $\rho_+(\sigma,k) \overline{\tau(\sigma,k)} + \overline{\rho_{-}(\sigma,k)} \tau(\sigma,k) = 0$ for every $ k\in \mathbb{R}$ . 
\item reality: $\tau(\sigma,k)= \overline{\tau(\sigma,-k)}, \, \rho_\pm(\sigma,k)= \overline{\rho_\pm(\sigma,-k)}$; \item analyticity: $\tau(\sigma,k)$ admits an analytic extension to $\{ \operatorname{Im} k > 0 \}$; \item asymptotics: $\tau(\sigma, z) = 1 + O(1/|z|)$ as $|z| \to \infty$, $\, \operatorname{Im} z \geq 0$, and $\rho_\pm(\sigma,k) = O(1/k)$, as $|k| \to \infty$, $k$ real; \item rate at $k=0$: $|\tau(\sigma,z)|>0$ for $z \neq 0$, $\operatorname{Im} z \geq 0$ and $|\rho_\pm(\sigma,k)|< 1$ for $k \neq 0$. Furthermore \begin{align*} \tau(\sigma,z) =& \alpha z + o(z), \quad \alpha\neq 0, \quad \operatorname{Im} z \geq 0 \\ 1+\rho_\pm(\sigma,k) =& \beta_\pm k + o(k),\quad k\in \mathbb{R}; \end{align*} \end{enumerate} \end{proposition} \begin{proof} The real analyticity of the maps $\mathscr{S}^{M,N} \to H^M_{\zeta} \cap L^2_N$, $\sigma \to \rho_\pm(\sigma,\cdot)$ follows from Proposition \ref{prop:w.sigma} and the definition $\rho_\pm(\sigma,k) = \sigma(\mp k)/w(\sigma,k)$ (see also the proof of Proposition \ref{prop:inv.R}). Since $\sigma \mapsto \frac{1}{w(\sigma,\cdot)}$ is real analytic, it is locally bounded, i.e., there exists $C >0$ so that $\norm{\rho_{\pm}(\sigma, \cdot)}_{H^{M}_{\zeta,\C}\cap L^2_N} \leq C \norm{\sigma}_{H^M_{\zeta}\cap L^2_N} $, where $C$ depends locally uniformly on $\sigma \in \mathscr{S}^{M,N}$. Properties $(i)$, $(ii)$ and $(v)$ follow now by direct computations; for instance, by Proposition \ref{prop:w.sigma} $(iii)$, $$ \tau(\sigma, k) \tau(\sigma,-k) + \rho_\pm(\sigma, k)\rho_\pm(\sigma,-k) = \frac{4k^2 + \sigma(k) \sigma(-k)}{w(\sigma,k)\, w(\sigma,-k)} = 1 \ . $$ Properties $(iii)$--$(iv)$ are proved in \cite[Lemma 4.1]{kapptrub}. \end{proof} Finally define the functions \begin{equation} R_\pm(\sigma, k) := 2ik \rho_\pm(\sigma,k) \ . \end{equation} \begin{proposition} \label{prop:inv.R} Let $N \in \mathbb{Z}_{\geq 0}$ and $M \in \mathbb{Z}_{\geq 4}$ be fixed. Then the maps $\mathscr{S}^{M,N} \to H^M_\mathbb{C} \cap L^2_N$, $\sigma \to R_\pm(\sigma,\cdot)$ are real analytic.
There exists $C >0$ so that $\norm{R_{\pm}(\sigma, \cdot)}_{H^M_\mathbb{C}\cap L^2_N} \leq C \norm{\sigma}_{H^M_{\zeta}\cap L^2_N} $, where $C$ depends locally uniformly on $\sigma \in \mathscr{S}^{M,N}$. Furthermore the following holds: \begin{enumerate}[(i)] \item $\overline{R_\pm(\sigma, k)} = R_\pm(\sigma, -k)$ for every $k \in \mathbb{R}$. \item $|R_\pm(\sigma, k)| < 2|k|$ for any $k \in \mathbb{R} \setminus \{0 \}$. \end{enumerate} \end{proposition} \begin{proof} In order to prove the statements, we will use that $R_\pm(\sigma, k) = 2ik \frac{\sigma(\mp k)}{w(\sigma,k)}$. We will consider just $R_-$, since the analysis for $R_+$ is identical. To simplify the notation, we will denote $R_-(\sigma, \cdot) \equiv R(\sigma,\cdot)$. By Proposition \ref{prop:w.sigma}$(ii)$, $\mmod{R(\sigma,k)}\leq \mmod{\sigma(k)}\,$, thus $R(\sigma,\cdot) \in L^2_{N}$. In order to prove that $R(\sigma, \cdot) \in H^M_\mathbb{C}$, take $n$ derivatives ($1 \leq n \leq M$) of $R(\sigma,\cdot)$ to get the identity \begin{equation} \partial_k^n R(\sigma, k) = \frac{2ik}{w(\sigma,k)} \partial_k^n \sigma(k) + \sum_{j=1}^{n-1} \binom{n}{j} \left(\partial_k^j\frac{2ik}{w(\sigma,k)}\right) \ \partial_k^{n-j} \sigma(k) + \left(\partial_k^n\frac{2ik}{w(\sigma,k)}\right) \sigma(k) \ . \label{derRrecursion} \end{equation} We show now that each term of the r.h.s. of the identity above is in $L^2$. Consider first the term $I_1:= \frac{2ik}{w(\sigma,k)} \partial_k^n \sigma(k)$. If $1 \leq n < M$, then $\partial_k^n \sigma \in L^2$ and $|2ik/w(\sigma,k)| \leq 1$, thus proving that $I_1 \in L^2$. If $n=M$, let $\chi$ be a smooth cut-off function with $\chi(k)\equiv 1$ in $[-1,1]$ and $\chi(k)\equiv 0$ in $\mathbb{R} \setminus [-2,2]$. Then one has $$ I_1 = \frac{1}{w(\sigma,k)}\chi(k) 2ik \partial_k^M \sigma(k) + \frac{ 2ik}{w(\sigma,k)}(1-\chi(k)) \partial_k^M \sigma(k) \ . 
$$ As $\sigma \in \mathscr{S}^{M,N}$, it follows that $k \mapsto \chi(k) 2ik \partial_k^M \sigma(k)$ and $k \mapsto (1-\chi(k))\partial_k^M\sigma(k)$ are in $L^2$. By Proposition \ref{prop:w.sigma}, $ \frac{1}{w(\sigma,\cdot)}$ and $\frac{2ik}{w(\sigma, \cdot)}$ are in $L^\infty$. Altogether it follows that $I_1 \in L^2$ for any $1 \leq n \leq M$. Consider now $I_2 := \sum_{j=1}^{n-1} \binom{n}{j} \left(\partial_k^j\frac{2ik}{w(\sigma,k)}\right) \ \partial_k^{n-j} \sigma(k)$. By Proposition \ref{prop:w.sigma}, $\left(\partial_k^{j} \frac{2ik}{w(\sigma,k)}\right) \in H^{1}_\mathbb{C}$ for every $1 \leq j \leq M-1$, thus by the Sobolev embedding theorem $\left(\partial_k^j\frac{2ik}{w(\sigma,k)}\right) \in L^{\infty}$ for every $1 \leq j \leq M-1$. As $\partial_k^{n-j} \sigma \in L^2$ for $1 \leq j \leq n-1 < M$, it follows that $I_2 \in L^2$ for any $1 \leq n \leq M$. Finally consider $I_3:=\left(\partial_k^n\frac{2ik}{w(\sigma,k)}\right) \sigma(k)$. By Proposition \ref{prop:w.sigma}, $\left(\partial_k^n\frac{2ik}{w(\sigma,k)}\right) \in L^2$ for any $1 \leq n \leq M$. Since $\sigma \in L^\infty$, $I_3 \in L^2$ for any $1 \leq n \leq M$. Altogether we have proved that $R(\sigma, \cdot) \in H^M_\mathbb{C} \cap L^2_N$. The claimed estimate on $\norm{R(\sigma,\cdot)}_{H^M_\mathbb{C} \cap L^2_N}$ and items $(i)$ and $(ii)$ follow in a straightforward way. The real analyticity of the map $\mathscr{S}^{M,N} \to H^M_\mathbb{C} \cap L^2_N$, $\sigma \to R(\sigma,\cdot)$ follows by Proposition \ref{prop:w.sigma}. \end{proof} For $\sigma \in \mathscr{S}^{M,N}$, define the Fourier transforms \begin{equation} \label{F.four} F_{\pm}(\sigma, y) := \mathcal{F}_\pm^{-1}(\rho_\pm(\sigma, \cdot))(y) = \frac{1}{\pi}\int_{\mathbb{R}} \rho_{\pm}(\sigma, k) e^{\pm 2ik y} dk \ .
\end{equation} Then \begin{equation} \label{FR2} \pm \partial_y F_{\pm} (\sigma, y) = \frac{1}{\pi} \int\limits_{-\infty}^{+\infty} 2ik \rho_\pm(\sigma, k) e^{\pm 2iky} \; dk = \mathcal{F}_\pm^{-1}(R_\pm(\sigma, \cdot ))(y) \ . \end{equation} In the next proposition we analyze the properties of the maps $\sigma \mapsto F_{\pm}(\sigma, \cdot)$. \begin{proposition} \label{rem:dec_rel} Let $N \in \mathbb{Z}_{\geq 0}$ and $M \in \mathbb{Z}_{\geq 4}$ be fixed. Then the following holds true: \begin{enumerate} \item[(i)] $\sigma \mapsto F_{\pm}(\sigma, \cdot)$ are real analytic as maps from $\mathscr{S}^{4,0}$ to $H^1 \cap L^2_3$. Moreover there exists $C >0$ so that $\norm{F_{\pm}(\sigma, \cdot)}_{H^1\cap L^2_3} \leq C \norm{\sigma}_{H^M_{\zeta}} $, where $C$ depends locally uniformly on $\sigma \in \mathscr{S}^{M,N}$. \item[(ii)] $\sigma \mapsto F'_{\pm}(\sigma, \cdot)$ are real analytic as maps from $\mathscr{S}^{M,N}$ to $H^N \cap L^2_M$. Moreover there exists $C' >0$ so that $\norm{F'_{\pm}(\sigma,\cdot)}_{H^N\cap L^2_M} \leq C' \norm{\sigma}_{H^M_{\zeta} \cap L^2_{N}} $, where $C'$ depends locally uniformly on $\sigma \in \mathscr{S}^{M,N}$. \end{enumerate} \end{proposition} \begin{proof} By Proposition \ref{prop:inv.r}, the map $\mathscr{S}^{4,0}\to H^3_\mathbb{C} \cap L^2_1$, $\sigma \to \rho_\pm(\sigma, \cdot)$ is real analytic. Thus item $(i)$ follows by the properties of the Fourier transform. By Proposition \ref{prop:inv.r} $(ii)$, $F_{\pm}(\sigma,\cdot) = \mathcal{F}_\pm^{-1}(\rho_\pm)$ is real valued. Item $(ii)$ follows from \eqref{FR2} and the characterizations \begin{equation} \label{cor.R.F.equiv} R_\pm \in H^{M}_\mathbb{C} \ \Longleftrightarrow \ \mathcal{F}_{\pm}^{-1}(R_\pm) \in L^2_M \qquad \mbox{and} \qquad R_\pm \in L^2_{N} \ \Longleftrightarrow \ \mathcal{F}_{\pm}^{-1}(R_\pm) \in H^N_\mathbb{C} \ . \end{equation} The claimed estimates follow from the properties of the Fourier transform, Proposition \ref{prop:inv.r} and Proposition \ref{prop:inv.R}. 
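For the reader's convenience, we sketch why the first equivalence in \eqref{cor.R.F.equiv} holds; this is the standard duality between regularity and decay under the Fourier transform, stated here with the normalization of \eqref{F.four}. Integrating by parts, for $1 \leq n \leq M$ one gets $$ \mathcal{F}_\pm^{-1}\big(\partial_k^n R_\pm\big)(y) = (\mp 2iy)^n \, \mathcal{F}_\pm^{-1}(R_\pm)(y) \ , $$ so by Plancherel's theorem $R_\pm \in H^{M}_\mathbb{C}$ if and only if $\langle y \rangle^{M} \mathcal{F}_\pm^{-1}(R_\pm) \in L^2$, i.e., $\mathcal{F}_\pm^{-1}(R_\pm) \in L^2_{M}$. The second equivalence follows in the same way, with the roles of regularity and decay exchanged.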
\end{proof} We are finally able to prove that there exists a potential $q \in \mathcal{Q}$ with prescribed scattering coefficient $\sigma \in \mathscr{S}^{M,N}$. More precisely the following theorem holds. \begin{theorem} \label{S.onto} Let $N \in \mathbb{Z}_{\geq 0}$, $M \in \mathbb{Z}_{\geq 4}$ and $\sigma \in \mathscr{S}^{M,N}$ be fixed. Then there exists a potential $q \in \mathcal{Q}$ such that $S(q, \cdot) = \sigma$. \end{theorem} \begin{proof} Let $\rho_\pm:=\rho_\pm(\sigma, \cdot)$ and $\tau:= \tau(\sigma, \cdot)$ be given by formula \eqref{inv.scatt.elem}. Let $F_\pm(\sigma, \cdot)$ be defined as in \eqref{F.four}. By Proposition \ref{rem:dec_rel} it follows that $F_\pm(\sigma, \cdot)$ are absolutely continuous and $F'_\pm(\sigma, \cdot) \in H^N \cap L^2_M$. As $M \geq 4$, it follows that \begin{equation} \label{F'.L1} \int_{-\infty}^\infty (1+x^2) |F'_\pm (\sigma, x)| \, dx <\infty \ . \end{equation} The main theorem of inverse scattering \cite{faddeev} ensures that if \eqref{F'.L1} and items $(i)$--$(v)$ of Proposition \ref{prop:inv.r} hold, then there exists a potential $q \in \mathcal{Q}$ such that $r_\pm(q,\cdot) = \rho_\pm$ and $t(q,\cdot) = \tau$, where $r_\pm$ and $t$ are the reflection and transmission coefficients, respectively, defined in \eqref{r.S.rel}. From the formulas \eqref{inv.scatt.elem} it follows that $S(q,\cdot) = \sigma$. \end{proof} It remains to show that $q \in \mathcal{Q}^{N,M}$ and that the map $S^{-1}: \mathscr{S}^{M,N} \to \mathcal{Q}^{N,M}$ is real analytic. We take here a different approach than in \cite{kapptrub}. In \cite{kapptrub} the authors show that the map $S$ is complex differentiable and its differential $d_qS$ is boundedly invertible. Here instead we reconstruct $q$ by solving the Gelfand-Levitan-Marchenko equations and we show that the inverse map $\mathscr{S}^{M,N} \to \mathcal{Q}^{N,M} $, $\sigma \mapsto q$ is real analytic. We outline briefly the procedure.
Given two reflection coefficients $\rho_\pm$ satisfying items $(i)$--$(v)$ of Proposition \ref{prop:inv.r} and arbitrary real numbers $c_+ \leq c_-$, it is possible to construct a potential $q_+$ on $[c_+, \infty)$ using $\rho_+$ and a potential $q_-$ on $(-\infty, c_-]$ using $\rho_-$, such that $q_+$ and $q_-$ coincide on the intersection of their domains, i.e., $\left.q_+\right|_{[c_+, c_-]}= \left.q_-\right|_{[c_+, c_-]}$. Hence $q$ defined on $\mathbb{R}$ by $q\vert_{[c_+, + \infty)} = q_+$ and $q\vert_{(-\infty, c_-]} = q_-$ is well defined, $q \in \mathcal{Q}$ and $r_\pm (q, \cdot) = \rho_\pm$, i.e., $\rho_+$ and $\rho_-$ are the reflection coefficients of the potential $q$ \cite{faddeev,Masturm,deift}. We postpone the details of this procedure to the next section. \subsection{Gelfand-Levitan-Marchenko equation} In this section we show how to construct, for any $\sigma \in \mathscr{S}^{M,N}$, two potentials $q_+$ and $q_-$ with $q_+ \in H^N_{x\geq c} \cap L^2_{M, x \geq c}$ and $q_- \in H^N_{x \leq c} \cap L^2_{M, x \leq c}$, respectively, where for any $c \in \mathbb{R}$ and $1 \leq p \leq \infty$ \begin{equation} L^p_{x \geq c}:= \left\{f : [c, +\infty) \to \mathbb{C}: \, \norm{f}_{L^p_{x\geq c}}< \infty \right\} \ , \end{equation} where $\norm{f}_{L^p_{x\geq c}}:= \left(\int_c^{+\infty} |f(x)|^p \, dx \right)^{1/p} $ for $1 \leq p < \infty$ and $\norm{f}_{L^{\infty}_{x \geq c}} := \esssup_{x \geq c} |f(x)|$.
For any integer $N \geq 1$ define \begin{equation} \label{H^N_c} H^N_{x \geq c }:= \left\{ f: [c, +\infty) \to \mathbb{R}: \, \norm{f}_{H^N_{x \geq c }} < \infty \right\}, \qquad \norm{f}_{H^N_{x \geq c }}^2 := \sum_{j=0}^N \norm{\partial_x^j f}_{L^2_{x \geq c}}^2, \end{equation} and for any real number $M \geq 1$ define \begin{equation} \label{L^M_c} L^2_{M, x \geq c }:= \left\{ f: [c, +\infty) \to \mathbb{C}: \, \norm{f}_{L^2_{M,x \geq c }} < \infty \right\}, \qquad \norm{f}_{L^2_{M, x \geq c }}= \norm{\langle x \rangle^M f}_{L^2_{x \geq c}}, \end{equation} where $\langle x \rangle := (1 + x^2)^{1/2}$. We will write $H^N_{\mathbb{C}, x \geq c}$ for the complexification of $H^N_{x \geq c }$. For $1\leq \alpha, \beta \leq \infty$, we define $$L^\alpha_{x \geq c}L^\beta_{y \geq 0}:=\left\{ f: [c, +\infty) \times [0, +\infty) \rightarrow \mathbb{C}: \, \norm{f}_{L^\alpha_{x \geq c} L^\beta_{y \geq 0}} < \infty \right\} \ , $$ where $\norm{f}_{L^\alpha_{x \geq c}L^\beta_{y \geq 0}} := \Big( \int_{c}^{+\infty} \norm{f(x, \cdot)}^\alpha_{L^\beta_{y \geq 0}} \,dx \Big)^{1/\alpha}.$ Analogously one defines the spaces $L^p_{x \leq c}$, $H^N_{x\leq c}$, $L^2_{M, x\leq c}$ and $L^\alpha_{x \leq c} L^\beta_{y \leq 0}$, \textit{mutatis mutandis}. \\ Let us denote by $C^0_{y \geq 0} := C^0([0, \infty), \mathbb{C})$ and by $C^0_{x \geq c, y \geq 0} := C^0([c, \infty)\times [0, \infty), \mathbb{C})$. Finally we denote by $C^0_{x \geq c} L^2_{y \geq 0} := C^0([c, \infty), L^2_{y \geq 0})$ the set of continuous functions on $[c, \infty)$ taking values in $L^2_{y \geq 0}$. \\ The potentials $q_+$ and $q_-$ mentioned at the beginning of this section are constructed by solving an integral equation, known in the literature as the \textit{Gelfand-Levitan-Marchenko equation}, which we are now going to describe in more detail. Given $\sigma \in \mathscr{S}$, define the functions $F_\pm(\sigma, \cdot)$ as in \eqref{F.four}.
See Proposition \ref{rem:dec_rel} for the analytical properties of the maps $\sigma \to F_\pm(\sigma, \cdot)$. To have a more compact notation, in the following we will denote $F_{\pm, \sigma} := F_\pm(\sigma, \cdot)$. \begin{remark} From the decay properties of $F'_{\pm, \sigma}$ one deduces corresponding decay properties of $F_{\pm, \sigma}$. Indeed one has \begin{equation} \label{dec_rel} \begin{aligned} \langle x \rangle^{m} \, F_\pm' \in L^2_{x \geq c} \Rightarrow \langle x \rangle^{m-1} F_\pm' \in L^1_{ x \geq c} \Rightarrow \langle x \rangle^{m-2} F_\pm \in L^1_{x \geq c} \ , \quad \forall \, m \geq 2 \ . \end{aligned} \end{equation} \end{remark} The Gelfand-Levitan-Marchenko equations are the integral equations given by \begin{align} \label{ME} F_{+, \sigma}(x+y)+E_{+, \sigma}(x,y) + \int\limits_0^{+\infty} F_{+, \sigma}(x+y+z)E_{+, \sigma}(x,z) dz &= 0, \qquad y \geq 0 \\ \label{ME-}F_{-, \sigma}(x+y)+ E_{-, \sigma}(x,y) + \int\limits_{-\infty}^0 F_{-, \sigma}(x+y+z) E_{-, \sigma}(x,z) dz &= 0, \qquad y \leq 0 \end{align} where $ E_{\pm, \sigma}(x,y)$ are the unknown functions and $F_{\pm, \sigma}$ are given and uniquely determined by $\sigma$ through formula \eqref{F.four}. If \eqref{ME} and \eqref{ME-} have solutions with enough regularity, then one defines the potentials $q_+$ and $q_-$ through the well-known formula \cite{faddeev} \begin{equation} \label{ainversedefinition} \begin{aligned} &q_+(x) = - \partial_xE_{+, \sigma}(x,0), \quad \forall \, c_+ \leq x < \infty \ , \qquad q_-(x) = \partial_x E_{-, \sigma}(x,0), \quad \forall \, -\infty < x \leq c_- \ . \end{aligned} \end{equation} The main purpose of this section is to study the maps $\mathcal{R}_{\pm, c} $ defined by \begin{align} \label{ainversedefinition1} \sigma \mapsto \mathcal{R}_{\pm,c}(\sigma), \qquad \mathcal{R}_{\pm,c}(\sigma)(x):= \mp\partial_x E_{\pm, \sigma}(x,0), \qquad x \in [c, \pm \infty) \ .
\end{align} \begin{theorem} \label{inversemain1} Fix $N \in \mathbb{Z}_{\geq 0}$, $M\in \mathbb{Z}_{\geq 4}$ and $c\in \mathbb{R}$. Then the maps $\mathcal{R}_{+,c}$ $\, [\mathcal{R}_{-,c}]$ are well defined on $ \mathscr{S}^{M, N}$ and take values in $H^N_{x \geq c} \cap L^2_{M, x \geq c}$ $\, [ H^N_{x \leq c} \cap L^2_{M, x \leq c} ]$. As such they are real analytic. \end{theorem} In order to prove Theorem \ref{inversemain1} we look for solutions of \eqref{ME} and \eqref{ME-} of the form \begin{equation} \label{decomp} E_{\pm, \sigma}(x,y) \equiv - F_{\pm, \sigma}(x+y) + B_{\pm, \sigma}(x,y) \end{equation} where $ B_{\pm, \sigma}(x,y)$ are to be determined. Inserting the ansatz \eqref{decomp} into the Gelfand-Levitan-Marchenko equations \eqref{ME}, \eqref{ME-}, one gets \begin{align} \label{bGLM} & B_{+, \sigma}(x,y) + \int\limits_0^{+\infty} F_{+, \sigma}(x+y+z) B_{+, \sigma}(x,z) dz = \int\limits_0^{+\infty} F_{+, \sigma}(x+y+z)F_{+, \sigma}(x+z) \, dz, \qquad y \geq 0 \ ,\\ \label{bGLM2} & B_{-, \sigma}(x,y) + \int\limits_{-\infty}^0 F_{-, \sigma}(x+y+z) B_{-, \sigma}(x,z) dz = \int\limits_{-\infty}^0 F_{-, \sigma}(x+y+z) F_{-, \sigma}(x+z)\, dz, \qquad y \leq 0 . \end{align} We will prove in Lemma \ref{prop:AnBn} below that there exists a solution $B_{+, \sigma}$ of \eqref{bGLM} and a solution $B_{-, \sigma}$ of \eqref{bGLM2} with $\partial_x B_{+, \sigma}(\cdot, 0) \in H^{1}_{x \geq c}$ respectively $\partial_x B_{-, \sigma}(\cdot, 0) \in H^{1}_{x \leq c}$. By \eqref{ainversedefinition} we get therefore \begin{equation} \label{structureS^-1} q_{+} = \partial_x F_{+, \sigma} - \partial_x B_{+, \sigma}(\cdot,0) \ \ \ \forall \, c \leq x < \infty , \qquad q_{-} = -\partial_x F_{-, \sigma} + \partial_x B_{-, \sigma}(\cdot,0) \ \ \ \forall \, -\infty < x \leq c \ . 
\end{equation} Define the maps $$\mathcal{B}_{\pm,c}: \sigma \mapsto \mathcal{B}_{\pm,c}(\sigma)$$ as \begin{equation} \mathcal{B}_{+, c}(\sigma)(x) := - \partial_x B_{+, \sigma}(x, 0) \quad \forall \, x \geq c \qquad \mbox{and} \qquad \mathcal{B}_{-, c}(\sigma)(x) := \partial_x B_{-, \sigma}(x, 0) \quad \forall \, x \leq c \ , \label{def_Binv} \end{equation} with $B_{\pm, \sigma}(x,y):= E_{\pm, \sigma}(x,y) + F_{\pm,\sigma}(x+y)$ as in \eqref{decomp}. Now we study the analytic properties of the maps $\mathcal{B}_{\pm , c}$ in case the scattering coefficient $\sigma$ belongs to $\mathscr{S}^{4,N}$ with arbitrary $N \in \mathbb{Z}_{\geq 0}$. Later we will treat the case where $\sigma \in \mathscr{S}^{M,0}$, $M \in \mathbb{Z}_{\geq 4}$. \begin{proposition} \label{Bsmoothing} Fix $N \in \mathbb{Z}_{\geq 0}$ and $c \in \mathbb{R}$. Then $\mathcal{B}_{+, c}$ $\, [\mathcal{B}_{-, c}]$ is real analytic as a map from $ \mathscr{S}^{4, N}$ to $H^{N}_{ x \geq c}$ $\, [H^{N}_{ x \leq c}]$. Moreover $$\norm{\mathcal{B}_{+, c}(\sigma)}_{H^{N}_{ x \geq c}} \ , \ \norm{\mathcal{B}_{-, c}(\sigma)}_{H^{N}_{ x \leq c}} \ \leq K \norm{\sigma}_{H^4_{\zeta,\mathbb{C}} \cap L^2_{N}}^2 $$ where $K>0$ is a constant which can be chosen locally uniformly in $\sigma \in \mathscr{S}^{4,N}$.
\end{proposition} The main ingredient of the proof of Proposition \ref{Bsmoothing} is a detailed analysis of the solutions of the integral equations \eqref{bGLM}-\eqref{bGLM2}, which we rewrite as \begin{align} \label{opma1} & \left(Id + \mathcal{K}_{x,\sigma}^{\pm} \right)[B_{\pm, \sigma}(x,\cdot)](y) = f_{\pm, \sigma}(x,y) \end{align} where for every $x \in \mathbb{R}$ fixed, the two operators $\mathcal{K}_{x, \sigma}^+ : L^2_{y \geq 0} \to L^2_{y \geq 0}$ and $\mathcal{K}_{x, \sigma}^- : L^2_{y \leq 0} \to L^2_{y \leq 0}$ are defined by \begin{align} \label{kxdef} \mathcal{K}_{x, \sigma}^{+} \, [f](y):=& \int\limits_0^{+\infty} F_{+, \sigma}(x+y+z) f(z) \,dz \ , &f\in L^2_{y \geq 0} \ ,\\ \mathcal{K}_{x,\sigma}^{-} \, [f](y) := &\int\limits_{-\infty}^0 F_{-, \sigma}(x+y+z) f(z) \,dz \ , & f\in L^2_{y \leq 0} \ , \end{align} and the functions $f_{\pm, \sigma}$ are defined by \begin{equation} \label{f.def} f_{\pm, \sigma}(x,y):= \pm \int\limits_0^{\pm \infty} F_{\pm, \sigma}(x+y+z) F_{\pm, \sigma}(x+z) \, dz \ . \end{equation} As the claimed statements for $\mathcal{B}_{+, c}$ and $\mathcal{B}_{-,c}$ can be proved in a similar way we consider $\mathcal{B}_{+, c}$ only. To simplify notation, in the following we omit the subscript ``$+$''. In particular we write $B_{\sigma} \equiv B_{+, \sigma}$, $F_{\sigma} \equiv F_{+, \sigma}$, $f_{\sigma} \equiv f_{+, \sigma}$ and $\mathcal{K}_{x,\sigma}\equiv \mathcal{K}_{x,\sigma}^+$. \\ We give the following definition: a function $h_\sigma:[c, \infty) \times [0, \infty) \to \mathbb{R}$, which depends on $\sigma \in \mathscr{S}^{4,N}$, will be said to satisfy $(P)$ if the following holds true: \begin{enumerate} \item[$(P1)$] $ h_\sigma \in C^0_{x \geq c} L^2_{y \geq 0} \cap L^2_{x\geq c} L^2_{y\geq 0} \cap C^0_{x \geq c, y \geq 0}$. Moreover $h_\sigma (\cdot,0) \in L^2_{x \geq c}$.
\item[$(P2)$] There exists a constant $K_c>0$, which depends locally uniformly on $\sigma\in H^4_{\zeta,\mathbb{C}} \cap L^2_{N} $, such that \begin{align} \label{prop:AnBn:est} \norm{ h_\sigma}_{L^2_{x\geq c} L^2_{y\geq 0}} + \norm{ h_\sigma(\cdot, 0)}_{L^2_{x \geq c}} \leq K_c \norm{\sigma}_{H^4_{\zeta,\mathbb{C}} \cap L^2_{N}}^2 . \end{align} \item[$(P3)$] $\sigma \mapsto h_\sigma$ $\, [\sigma \mapsto h_\sigma(\cdot,0)]$ is real analytic as a map from $\mathscr{S}^{4,N}$ to $L^2_{x\geq c} L^2_{y\geq 0}$ $[L^2_{x \geq c}]$. \end{enumerate} We have the following lemma: \begin{lemma} \label{prop:AnBn} Fix $N \geq 0$ and $c \in \mathbb{R}$. For every $ \sigma \in \mathscr{S}^{4,N}$ equation \eqref{bGLM} has a unique solution $B_\sigma \in C^0_{x \geq c} L^2_{y \geq 0} \cap L^2_{x\geq c} L^2_{y\geq 0}$. Moreover for all integers $n_1, n_2\geq 0$ with $n_1 + n_2 \leq N+1$, the function $ \partial_x^{n_1}\partial_y^{n_2}B_\sigma$ satisfies $(P)$. \end{lemma} {\em Proof.} Let $N \in \mathbb{Z}_{\geq 0}$ and $c \in \mathbb{R}$ be fixed. The proof is by induction on $j_1 + j_2 = n$, $0 \leq n \leq N$. For each $n$ we prove that $\partial_x^{j_1}\partial_y^{j_2}B_\sigma$ and its derivatives $\partial_x^{j_1+1}\partial_y^{j_2}B_\sigma$, $\partial_x^{j_1}\partial_y^{j_2+1}B_\sigma$ satisfy $(P)$. Thus the claim follows. {\em Case $n=0$. } Then $j_1 = j_2 = 0$. We need to prove existence and uniqueness of the solution of equation \eqref{opma1}. By Lemma \ref{f.prop} [Proposition \ref{rem:dec_rel} and Lemma \ref{lem:Fx}] the function $f_\sigma$ and its derivatives $\partial_x f_\sigma$, $\partial_y f_\sigma$ $\, [F_\sigma]$ satisfy assumption $(P)$ [$(H)$, cf.\ Appendix \ref{app:ab.eq}]. Thus by Lemma \ref{lem:ab.eq2} $(i)$ it follows that $B_\sigma =\left(Id + \mathcal{K}_\sigma \right)^{-1}f_\sigma$ and its derivatives $\partial_x B_\sigma$, $\partial_y B_\sigma$ satisfy $(P)$. Note that if $N=0$ the lemma is proved. Thus in the following we assume $N \geq 1$.
{\em Case $n-1 \leadsto n $.} Let $j_1 + j_2 = n$. By the induction assumption we already know that $\partial_x^{j_1}\partial_y^{j_2} B_\sigma $ satisfies $(P)$. By Lemma \ref{lem:ab.eq2} it follows that $\partial_x^{j_1}\partial_y^{j_2} B_\sigma $ satisfies \begin{equation} \begin{cases} \label{int.eq.1} (Id + \mathcal{K}_{x, \sigma})[ \partial_x^n B_\sigma(x,\cdot)](y) = f_\sigma^{n,0} (x,y) & \mbox{ if } j_2 = 0 \ , \\ \partial_x^{j_1} \partial_y^{j_2} B_\sigma(x,y) = f_\sigma^{j_1, j_2} (x,y) & \mbox{ if } j_2 > 0 \ , \end{cases} \end{equation} where \begin{equation} \begin{aligned} \label{f_n0} & f_\sigma^{n,0} (x,y) := \partial_x^{n}f_\sigma(x,y) -\sum_{l=1}^{n}\binom{n}{l} \int\limits_0^{+\infty} \partial_x^l F_\sigma(x+y+z) \, \partial_x^{n-l}B_\sigma(x,z) \, dz \ , \\ & f_\sigma^{j_1, j_2} (x,y) := \partial_x^{j_1}\partial_y^{j_2} f_\sigma(x,y) -\sum_{l=0}^{j_1}\binom{j_1}{l} \int\limits_0^{+\infty} \partial_z^{j_2 + l} F_\sigma(x+y+z) \, \partial_x^{j_1-l}B_\sigma(x,z) \, dz \ . \end{aligned} \end{equation} In order to prove the induction step, we show in Lemma \ref{f_sigma.P} that for any $j_1+j_2 = n$, $0 \leq n \leq N$, $f_\sigma^{j_1, j_2}$ and its derivatives $\partial_y f_\sigma^{j_1, j_2}$, $\partial_x f_\sigma^{j_1, j_2}$ satisfy $(P)$. In view of identities \eqref{int.eq.1} and Lemma \ref{lem:ab.eq2} $(i)$, it follows that $\partial_x^{j_1} \partial_y^{j_2} B_\sigma$ and its derivatives $\partial_x^{j_1+1} \partial_y^{j_2} B_\sigma$ and $\partial_x^{j_1} \partial_y^{j_2+1} B_\sigma$ satisfy $(P)$, thus proving the induction step. \qed Lemma \ref{prop:AnBn} implies in a straightforward way Proposition \ref{Bsmoothing}. \begin{proof}[Proof of Proposition \ref{Bsmoothing}] By Lemma \ref{prop:AnBn}, $\partial_x^{n} B_\sigma$ satisfies $(P)$ for every $1 \leq n \leq N+1$. 
In particular for every $1 \leq n \leq N+1$, $\sigma \mapsto \partial_x^{n} B_\sigma(\cdot,0)$ is real analytic as a map from $\mathscr{S}^{4,N}$ to $L^2_{x \geq c}$ and $\norm{ \partial_x^{n} B_\sigma(\cdot, 0)}_{L^2_{x \geq c}} \leq K_c \norm{\sigma}_{H^4_{\zeta,\mathbb{C}} \cap L^2_{N}}^2$. Thus the map $\sigma \mapsto -\partial_x B_\sigma(\cdot,0)$ is real analytic as a map from $\mathscr{S}^{4,N}$ to $H^N_{x \geq c}$. The claimed estimate follows in a straightforward way. \end{proof} In the next result we study the case $\sigma \in \mathscr{S}^{M, 0}$ for arbitrary $M \geq 4$. \begin{proposition} \label{decayinverse} Fix $M \in \mathbb{Z}_{\geq 4}$ and $c \in \mathbb{R}$. For any $\sigma \in \mathscr{S}^{M, 0}$ the equations \eqref{ME} and \eqref{ME-} admit solutions $E_{\pm, \sigma}$. The maps $\mathcal{R}_{+, c}$ $\, [\mathcal{R}_{-, c}]$, defined by \eqref{ainversedefinition1}, are real analytic as maps from $\mathscr{S}^{M,0}$ to $L^2_{M, x \geq c}$ $\,[L^2_{M, x \leq c}]$. Moreover $\norm{\mathcal{R}_{+, c}(\sigma)}_{ L^2_{M, x \geq c}}\ , \ \norm{\mathcal{R}_{-, c}(\sigma)}_{ L^2_{M, x \leq c}} \leq K_c \norm{\sigma}_{H^M_{\zeta,\mathbb{C}}},$ where $K_c>0$ can be chosen locally uniformly in $\sigma \in \mathscr{S}^{M,0}$. \end{proposition} \begin{proof} We prove the result just for $\mathcal{R}_{+,c}$, since for $\mathcal{R}_{-,c}$ the proof is analogous. As before, we suppress the subscript ``$+$'' from the various objects. \\ Consider the Gelfand-Levitan-Marchenko equation \eqref{ME}. Multiply it by $\langle x \rangle^{M-3/2}$ to obtain \begin{equation} \left(Id + \mathcal{K}_{x,\sigma} \right) \left[ \langle x \rangle^{M-3/2} E_\sigma(x,y)\right] = - \langle x \rangle^{M-3/2} F_\sigma(x+y). \label{lineD.1} \end{equation} The function $$ h_\sigma(x,y) := - \langle x \rangle^{M-3/2} F_\sigma(x+y) $$ satisfies $h_\sigma(x, \cdot) \in L^2_{y \geq 0}$ and one checks that $h_\sigma \in C^0_{x \geq c} L^2_{y \geq 0} \cap C^0_{x \geq c, y \geq 0}$.
We show now that $h_\sigma \in L^2_{x\geq c} L^2_{y\geq 0}$. By Lemma \ref{lem:techLemma} $(A3)$ and Proposition \ref{rem:dec_rel} for $N=0$ it follows that $$\norm{h_\sigma}_{L^2_{x\geq c} L^2_{y\geq 0}}^2\leq K_c \intx{c} \langle x \rangle^{2M-2} |F_\sigma(x)|^2 \, dx \leq K_c \norm{\langle x \rangle^M F_\sigma'}_{L^2_{x \geq c}}^2 \leq K_c \norm{\sigma}_{H^M_{\zeta,\mathbb{C}}}^2 \ .$$ Consider now $h_\sigma(x, 0) = - \langle x \rangle^{M-3/2} F_\sigma(x)$. By \eqref{dec_rel} it follows that $h_\sigma(\cdot, 0) \in L^2_{x \geq c}$. Finally the map $\sigma \mapsto h_\sigma$ $\,[\sigma \mapsto h_\sigma(\cdot, 0)]$ is real analytic as a map from $\mathscr{S}^{M,0}$ to $L^2_{x\geq c} L^2_{y\geq 0}$ $[L^2_{M-3/2, x\geq c}]$. Proceeding as in the proof of Lemma \ref{lem:ab.eq}, one shows that there exists a solution $E_\sigma$ of equation \eqref{ME} which satisfies $(i)$ $\langle x \rangle^{M-3/2} E_\sigma \in C^0_{x \geq c} L^2_{y \geq 0} \cap L^2_{x\geq c} L^2_{y\geq 0}$, $\langle x \rangle^{M-3/2} E_\sigma(x, \cdot) \in C^0_{y \geq 0}$, $\langle \cdot \rangle^{M-3/2}E_\sigma(\cdot, 0) \in L^2_{x \geq c}$, $(ii)$ $ \norm{\langle x \rangle^{M-3/2} E_\sigma}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq K_c\norm{\sigma}_{H^M_{\zeta,\mathbb{C}}}$, $(iii)$ $ \sigma \mapsto \langle x \rangle^{M-3/2} E_\sigma$ $\,[ \sigma \mapsto E_\sigma(\cdot,0)]$ is real analytic as a map from $ \mathscr{S}^{M,0}$ to $L^2_{x\geq c} L^2_{y\geq 0}$ $[L^2_{M-3/2, x\geq c}]$. Furthermore its derivative $\partial_x E_\sigma$ satisfies the integral equation \begin{equation} \label{derxB1} (Id + \mathcal{K}_{x,\sigma})\left(\partial_x E_\sigma(x,y)\right) = - F_\sigma'(x+y) - \int\limits_0^{+\infty} F_\sigma'(x+y+z)\; E_\sigma(x,z)\; dz .
\end{equation} Multiply the equation above by $\langle x \rangle^{M-3/2}$, to obtain $(Id + \mathcal{K}_{x,\sigma})\left(\langle x \rangle^{M-3/2}\partial_x E_\sigma \right) = \tilde h_\sigma$, where \begin{equation} \label{tilde.h.R} \tilde h_\sigma(x,y) := -\langle x \rangle^{M-3/2}h_\sigma'(x,y) - \int\limits_0^{+\infty} F_\sigma'(x+y+z)\, \langle x \rangle^{M-3/2} E_\sigma(x,z) \, dz \ , \end{equation} and $h_\sigma'(x,y):= F_\sigma'(x+y)$. We claim that $\tilde h_\sigma \in L^2_{x\geq c} L^2_{y\geq 0}$ and $\sigma \mapsto \tilde h_\sigma$ is real analytic as a map $\mathscr{S}^{M,0} \to L^2_{x\geq c} L^2_{y\geq 0}$. By Lemma \ref{lem:techLemma} $(A0)$ the first term of \eqref{tilde.h.R} satisfies $$\norm{\langle x \rangle^{M-3/2} h_\sigma'}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq K_c \norm{\langle x \rangle^{M-1} F_\sigma'}_{L^2_{x \geq c}} \leq K_c \norm{\sigma}_{H^M_{\zeta,\mathbb{C}}} \ , $$ and by Lemma \ref{lem:techLemma} $(A1)$ the second term of \eqref{tilde.h.R} satisfies $$ \norm{\int\limits_0^{+\infty} F_\sigma'(x+y+z)\, \langle x \rangle^{M-3/2} E_\sigma(x,z) \, dz}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq \norm{F_\sigma'}_{L^1} \norm{\langle x \rangle^{M-3/2} E_\sigma}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq K_c \norm{\sigma}_{H^M_{\zeta,\mathbb{C}}}^2 \ . $$ Moreover $\sigma \mapsto \tilde h_\sigma$ is real analytic as a map from $\mathscr{S}^{M,0}$ to $L^2_{x\geq c} L^2_{y\geq 0}$, being a composition of real analytic maps. Thus, by Lemma \ref{lem:ab.eq}, it follows that $\langle x \rangle^{M-3/2}\partial_x E_\sigma \in L^2_{x\geq c} L^2_{y\geq 0}$, $\norm{\langle x \rangle^{M-3/2}\partial_x E_\sigma}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq K_c \norm{\sigma}_{H^M_{\zeta,\mathbb{C}}}$ and $\sigma \mapsto \langle \cdot \rangle^{M-3/2}\partial_x E_\sigma$ is real analytic as a map from $\mathscr{S}^{M,0}$ to $L^2_{x\geq c} L^2_{y\geq 0}$.\\ Consider now equation \eqref{ME}.
Evaluate it at $y=0$ to get $$ E_\sigma(x,0) = -F_\sigma(x) - \int\limits_0^{+\infty} F_\sigma(x+z) E_\sigma(x,z) \, dz \ . $$ Take the $x$-derivative of the equation above and multiply it by $\langle x \rangle^{M}$ to obtain \begin{align*} \langle x \rangle^M \partial_x E_\sigma(x,0) = & - \langle x \rangle^M F_\sigma'(x) - \int\limits_0^{+\infty} \langle x \rangle^{3/2} F_\sigma'(x+z)\langle x \rangle^{M-3/2} E_\sigma(x,z) \, dz \\ & \quad - \int\limits_0^{+\infty} \langle x \rangle^{3/2} F_\sigma(x+z) \langle x \rangle^{M-3/2}\partial_x E_\sigma(x,z) \, dz \ . \end{align*} We prove now that $\partial_x E_\sigma(\cdot,0) \in L^2_{M, x \geq c}$ and $\sigma \mapsto \partial_x E_\sigma(\cdot,0)$ is real analytic as a map from $\mathscr{S}^{M,0}$ to $L^2_{M, x \geq c}$. The result follows by Proposition \ref{rem:dec_rel} and Lemma \ref{lem:techLemma} $(A2)$. Indeed one has that $\sigma \mapsto F_\sigma'$ $\, [\sigma \mapsto F_\sigma]$ is real analytic as a map from $\mathscr{S}^{M,0}$ to $L^2_M$ $\,[L^2_{3/2} ]$, and we proved above that $\sigma \mapsto \langle \cdot \rangle^{M-3/2} E_\sigma$ and $\sigma \mapsto \langle \cdot \rangle^{M-3/2}\partial_x E_\sigma $ are real analytic as maps from $\mathscr{S}^{M,0}$ to $L^2_{x\geq c} L^2_{y\geq 0}$. \end{proof} Combining the results of Proposition \ref{Bsmoothing} and Proposition \ref{decayinverse}, we can prove Theorem \ref{inversemain1}. \\ \noindent{\em Proof of Theorem \ref{inversemain1}.} It follows from Proposition \ref{rem:dec_rel}, Proposition \ref{Bsmoothing} and Proposition \ref{decayinverse} by restricting the scattering maps $\mathcal{R}_{\pm,c}$ to the spaces $\mathscr{S}^{M,N} = \mathscr{S}^{4,N }\cap \mathscr{S}^{M,0}$. \qed Using the results of Theorem \ref{inversemain1} and Theorem \ref{S.onto} we can prove Theorem \ref{thm:inv.scat}, showing that $S^{-1}: \mathscr{S}^{M,N} \to \mathcal{Q}^{N,M}$ is real analytic. \begin{proof}[Proof of Theorem \ref{thm:inv.scat}] Let $\sigma \in \mathscr{S}^{M,N}$.
By Theorem \ref{S.onto} there exists $q \in \mathcal{Q}$ with $S(q,\cdot) = \sigma$. Now let $c_{+}\leq c \leq c_{-}$ be arbitrary real numbers and consider $\mathcal{R}_{+,c_+}(\sigma)$ and $\mathcal{R}_{-,c_-}(\sigma)$, where $\mathcal{R}_{\pm, c_\pm}$ are defined in \eqref{ainversedefinition1}. By classical inverse scattering theory \cite{faddeev}, \cite{Masturm} the following holds: \begin{enumerate}[(i)] \item $\left.\mathcal{R}_{+,c_+}(\sigma)\right|_{x \in [c_+, c_-]} = \left.\mathcal{R}_{-,c_-}(\sigma)\right|_{x \in [c_+, c_-]} \ ,$ \item the potential $q_c$ defined by \begin{equation} \label{q_a} q_c:= \mathcal{R}_{+,c_+}(\sigma) \mathbbm{1}_{[c,\infty)}+\mathcal{R}_{-,c_-}(\sigma) \mathbbm{1}_{(-\infty, c]} \end{equation} is in $\mathcal{Q} $ and satisfies $r_+(q_c,\cdot)=\rho_{+}(\sigma,\cdot) $, $r_-(q_c,\cdot)=\rho_-(\sigma,\cdot)$ and $t(q_c,\cdot) = \tau(\sigma,\cdot)$. Thus by formulas \eqref{r.S.rel} and \eqref{inv.scatt.elem} it follows that $S(q_c,\cdot) = \sigma$. \end{enumerate} Since $S$ is 1-1 it follows that $q_c \equiv q$. Finally, by Theorem \ref{inversemain1}, $\mathscr{S}^{M,N} \to H^{N}_{x \geq c_+} \cap L^2_{M, x \geq c_+},$ $\sigma \mapsto \mathcal{R}_{+,c_+}(\sigma)$ and $\mathscr{S}^{M,N} \to H^{N}_{x \leq c_-}\cap L^2_{M, x \leq c_-},$ $\sigma \mapsto \mathcal{R}_{-,c_-}(\sigma)$ are real analytic. It follows that $q \in H^N \cap L^2_M $ and the map $S^{-1}: \sigma \mapsto q$ is real analytic. \end{proof} \section{Proof of Corollary \ref{thm:actions} and Theorem \ref{firstapprox}} This section is devoted to the proof of Corollary \ref{thm:actions} and Theorem \ref{firstapprox}. Both results are easy applications of Theorem \ref{reflthm}. \noindent {\em Proof of Corollary \ref{thm:actions}}. Let $N \geq 0$, $M \geq 4$ be fixed integers. Fix $q \in \mathcal{Q}^{N,M}$. By Theorem \ref{reflthm} the scattering map $S(q,\cdot)$ is in $\mathscr{S}^{M,N}$.
Furthermore by the definition \eqref{action_angle} of $I(q,k)$ there exists a constant $C >0$ such that for any $|k| \geq 1$ $$ \mmod{I(q,k)} \leq \frac{C |S(q,k)|^2}{|k|} \ . $$ In particular $I(q,\cdot) \in L^1_{2N+1}([1,\infty), \mathbb{R})$. By the real analyticity of the map $q \mapsto S(q,\cdot)$, it follows that $\mathcal{Q}^{N,M} \to L^1_{2N+1}([1,\infty),\mathbb{R}) $, $q \mapsto \left.I(q,\cdot)\right|_{[1,\infty)}$ is real analytic. Now let us analyze $I(q,k)$ for $0 \leq k \leq 1$. By the definition \eqref{action_angle} of $I(q,k)$ one has $$ I(q,k) + \frac{k}{\pi} \log \left( \frac{4k^2}{4(k^2+1)}\right) = - \frac{k}{\pi} \log \left( \frac{4(k^2 + 1)}{4k^2 + S(q,k) S(q,-k)} \right) \ . $$ By Proposition \ref{prop:inv.l}, the map $\mathscr{S}^{M,N} \to H^M_{\zeta} ([0,1],\mathbb{R})$, $\sigma \to l(\sigma,k):= \log \left( \frac{4(k^2 + 1)}{4k^2 + \sigma(k) \sigma(-k)} \right)$ is real analytic.\\ Thus also the map $\mathcal{Q}^{N,M} \to H^M_{\zeta}([0,1],\mathbb{R})$, $q \to l(S(q),\cdot)$ is real analytic, being composition of real analytic maps. Since the interval $[0,1]$ is bounded, the map $f \mapsto kf $, which multiplies a function by $k$, is analytic as a map $H^M_{\zeta}([0,1],\mathbb{R}) \to H^M_{\zeta}([0,1],\mathbb{R})$. It follows that the map $q \mapsto - \frac{k}{\pi} l(S(q), k)$ is real analytic as a map from $\mathcal{Q}^{N,M}$ to $H^M([0,1],\mathbb{R})$. \qed \\ For $t \in \mathbb{R}$ and $\sigma \in H^1_\mathbb{C}$, let us denote by \begin{equation} \label{def.rot} \Omega^t(\sigma)(k):= e^{- i8 k^3 t} \sigma(k) \ . \end{equation} We prove the following lemma. \begin{lemma} \label{lem:airy.flow.invariant} Let $N, M$ be integers with $N \geq 2M \geq 2$. Let $\sigma \in \mathscr{S}^{M,N}$. Then $\Omega^t(\sigma) \in \mathscr{S}^{M,N},$ $\, \forall t \geq 0$. \end{lemma} \begin{proof} As a first step we show that $\Omega^t(\sigma) \in \mathscr{S}$ for every $t \geq 0$. 
Since $\Omega^t(\sigma)(0) = \sigma(0) >0$ and $\overline{\Omega^t(\sigma)(k)} = \Omega^t(\sigma)(-k)$, $\Omega^t(\sigma) $ satisfies (S1) and (S2) for every $t \geq 0$. Thus $\Omega^t(\sigma) \in \mathscr{S}$, $\forall t \geq 0$. Next we show that $\Omega^t(\sigma) \in H^{M}_{\zeta,\C} \cap L^2_N$. Clearly $|\Omega^t(\sigma)(k)| \leq |\sigma(k)|$, thus $\Omega^t (\sigma) \in L^2_N,$ $\, \forall t \geq 0$. Now we show that $\Omega^t (\sigma) \in H^{M}_{\zeta,\C}, $ $\, \forall t \geq 0$. In particular we prove that $\zeta \partial_k^M \Omega^t (\sigma) \in L^2$, the other cases being analogous. Using the expression \eqref{def.rot} and the Leibniz rule, and keeping for every $j$ only the leading power of $k$ in $\partial_k^j e^{-i8k^3 t}$ (the terms containing lower powers of $k$ are estimated in the same way), one gets that $\zeta(k) \partial_k^M \Omega^t (\sigma)(k)$ equals $$ e^{- i8 k^3 t}\left( \zeta (k) \partial_k^{M}\sigma(k) + \sum_{j=1}^{M-1} \binom{M}{j} \left(-i 24 t k^2 \right)^j \zeta (k) \partial_k^{M-j}\sigma(k) + \left(-i 24 t k^2 \right)^M \zeta (k) \sigma(k) \right) \ . $$ As $\sigma \in \mathscr{S}^{M,N}$, the first and last term above are in $L^2$. Now we show that for $1 \leq j \leq M-1$, $|k|^{2j} \zeta \partial_k^{M-j}\sigma \in L^2$. We will use the following interpolation estimate, proved in \cite[Lemma 4]{nahas_ponce}. Assume that $J^a f := (1-\partial_k^2)^{a/2} f \in L^2$ and $\langle k \rangle^b f := (1+|k|^2)^{b/2} f \in L^2$. Then for any $\theta \in (0,1)$ \begin{equation} \label{inter.est} \norm{\langle k \rangle^{\theta b} J^{(1-\theta)a} f}_{L^2} \leq c \norm{f}_{L^2_b}^{\theta} \norm{f}_{H^a_\mathbb{C}}^{1-\theta} \ . \end{equation} Note that $\zeta \sigma \in H^M_\mathbb{C} \cap L^2_N$, thus we can apply estimate \eqref{inter.est} with $f=\zeta \sigma$, $b=N$, $a=M$, $\theta = \frac{j}{M}$, to obtain that $\langle k \rangle^{\frac{Nj}{M}} \partial_k^{M-j} (\zeta\sigma) \in L^2 $. Since $N \geq 2M$, we have $\langle k \rangle^{2j} \partial_k^{M-j} (\zeta \sigma) \in L^2 $.
By the Leibniz rule $$ \langle k \rangle^{2j} \zeta(k) \partial_k^{M-j} \sigma(k) = \langle k \rangle^{2j} \partial_k^{M-j} (\zeta \sigma) - \sum_{l=1}^{M-j}\binom{M-j}{l} \langle k \rangle^{2j} \partial_k^l \zeta(k) \, \partial_k^{M-j-l}\sigma(k) \ . $$ Since for any $l \geq 1$ the function $\partial_k^l \zeta$ has compact support, it follows that the r.h.s. above is in $L^2$. Thus for every $1 \leq j \leq M-1$ we have $\langle k \rangle^{2j} \zeta(k) \partial_k^{M-j} \sigma \in L^2$ and it follows that $\zeta \partial_k^M \Omega^t (\sigma) \in L^2$ for every $t \geq 0$. \end{proof} \begin{remark} \label{rem.airy.flow} One can adapt the proof above, putting $\zeta(k)\equiv 1$, to show that the spaces $H^N \cap L^2_M$, with integers $N \geq 2M \geq 2$, are invariant under the Airy flow. Indeed the Fourier transform $\mathcal{F}_-$ conjugates the Airy flow with the linear flow $\Omega^t$, i.e., $U_{Airy}^t =\mathcal{F}^{-1}_{-}\circ \Omega^t \circ \mathcal{F}_-$. \end{remark} \noindent {\em Proof of Theorem \ref{firstapprox}}. Recall that by \cite{gardner6} the scattering map $S $ conjugates the KdV flow with the linear flow $\Omega^t(\sigma)(k):= e^{- i8 k^3 t} \sigma(k)$, i.e., \begin{equation} U_{KdV}^t =S^{-1}\circ \Omega^t \circ S \ , \label{kdv_scat} \end{equation} whereas $U_{Airy}^t =\mathcal{F}^{-1}_{-}\circ \Omega^t \circ \mathcal{F}_-$. Take now $q \in \mathcal{Q}^{N, M}$, where $N, M$ are integers with $N \geq 2M \geq 8$. By Theorem \ref{reflthm}, $S(q)\equiv S(q,\cdot) \in \mathscr{S}^{M,N}$. By Lemma \ref{lem:airy.flow.invariant} the flow $\Omega^t$ preserves the space $\mathscr{S}^{M,N}$ for every $t \geq 0$. Thus $\Omega^t \circ S(q) \in \mathscr{S}^{M,N}$, $\ \forall t \geq 0$. By the bijectivity of $S$ it follows that $S^{-1}\circ \Omega^t \circ S(q) \in \mathcal{Q}^{N,M}$ $\, \forall t \geq 0$. Thus item $(i)$ is proved. We prove now item $(ii)$. Remark that by item $(i)$, $U_{KdV}^t(q) \in L^2_M$ for any $t \geq 0$.
Since $U_{Airy}^t$ preserves the space $H^{N} \cap L^2_M$ ($N \geq 2M \geq 8$), it follows that for $q \in \mathcal{Q}^{N, M}$ the difference $U_{KdV}^t(q) - U_{Airy}^t(q) \in H^{N} \cap L^2_M$, $\forall t \geq 0$. We prove now the smoothing property of the difference $U_{KdV}^t(q) - U_{Airy}^t(q)$. Since $S^{-1} = \mathcal{F}_-^{-1} + B$, \begin{align} U_{KdV}^t(q)= &\mathcal{F}_{-}^{-1}\circ \Omega^t \circ S (q)+ B \circ \Omega^t \circ S (q)\label{secondairy} \end{align} and since $S = \mathcal{F}_- + A$, $$\mathcal{F}_{-}^{-1}\circ \Omega^t \circ S (q) = \mathcal{F}_{-}^{-1}\circ \Omega^t \circ \mathcal{F}_- (q) + \mathcal{F}_{-}^{-1}\circ \Omega^t \circ A (q) \ . $$ Hence \begin{align} U_{KdV}^t(q) = & U_{Airy}^t(q) + \mathcal{F}_{-}^{-1}\circ \Omega^t \circ A (q) + B \circ \Omega^t \circ S (q) \label{firstairy}. \end{align} The $1$-smoothing property of the difference $U_{KdV}^t(q) - U_{Airy}^t(q)$ follows now from the smoothing properties of $A$ and $B$ described in item (ii) of Theorem \ref{reflthm}. The real analyticity of the map $q \mapsto U_{KdV}^t(q) - U_{Airy}^t(q)$ follows from formula \eqref{firstairy} and the real analyticity of the maps $A$, $B$ and $S$. \qed \appendix \section{Auxiliary results} \label{techLemma} For the convenience of the reader, in this appendix we collect various known estimates used throughout the paper. \begin{lemma} \label{lem:techLemma} Fix an arbitrary real number $c$. Then the following holds: \begin{enumerate} \item[(A0)] The linear map $T_0: L^2_{1/2, x \geq c}\to L^2_{x\geq c} L^2_{y\geq 0}$ defined by \begin{equation} \label{T4.def} g \mapsto T_0(g)(x,y) := g(x+y) \end{equation} is continuous, and there exists a constant $K_c>0$, depending on $c$, such that \begin{equation} \label{T4.norm} \norm{T_0(g)}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq K_c \norm{g}_{L^2_{1/2, x \geq c}}.
\end{equation} \item[(A1)] The bilinear map $T_1:L^2_{x \geq c} \times L^2_{x \geq c} \to L^2_{x\geq c} L^2_{y\geq 0}$ defined by \begin{equation} \label{T2.def} (g,h) \mapsto T_1(g,h)(x,y):= g(x+y) h(x) \end{equation} is continuous, and \begin{equation} \label{T2.norm} \norm{T_1(g,h)}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq \norm{g}_{L^2_{x \geq c}} \norm{h}_{L^2_{x \geq c}}. \end{equation} \item[(A2)]The bilinear map $T_2: L^2_{x \geq c} \times L^2_{x\geq c} L^2_{y\geq 0} \to L^2_{x \geq c}$ defined by \begin{equation} \label{T3.def} (g,h) \mapsto T_2(g,h)(x):= \int\limits_0^{+\infty} g(x+z) h(x, z) \, dz \end{equation} is continuous, and there exists a constant $K_c>0$, depending on $c$, such that \begin{equation} \label{T3.norm} \norm{T_2(g,h)}_{L^2_{x \geq c}} \leq K_c \norm{g}_{L^2_{x \geq c}} \norm{h}_{L^2_{x\geq c} L^2_{y\geq 0}}. \end{equation} \item[(A3)](Hardy inequality) The linear map $T_3: L^2_{m+1, x \geq c} \to L^2_{m, x \geq c}$ defined by $$ g \mapsto T_3(g) (x):= \intx{x} g(z) dz $$ is continuous, and there exists a constant $K_c >0$, depending on $c$, such that $$\norm{T_3(g)}_{L^2_{m, x \geq c}}\leq K_c \norm{ g}_{L^2_{m+1, x \geq c}} \ . $$ \item[(A4)] The bilinear map $T_4: L^1_{x \geq c} \times L^2_{x\geq c} L^2_{y\geq 0} \to L^2_{x\geq c} L^2_{y\geq 0}$ defined by \begin{equation} (g, h) \mapsto T_4(g,h)(x,y) := \int\limits_0^{+\infty} g(x+y+z) h(x,z) dz \end{equation} is continuous, and there exists a constant $K_c>0$, depending on $c$, such that \begin{equation} \label{T1.norm} \norm{T_4(g,h)}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq K_c \norm{g}_{L^1_{x \geq c}} \norm{h}_{L^2_{x\geq c} L^2_{y\geq 0}}. 
\end{equation} \item[(A5)]The bilinear map $T_5: L^2_{x \geq c} \times L^2_{1, x \geq c} \to L^2_{x\geq c} L^2_{y\geq 0}$ defined by \begin{equation} (g, h) \mapsto T_5(g, h)(x,y):= \int\limits_0^{+\infty} g(x+y+z) h(x+z) dz \end{equation} is continuous, and there exists a constant $K_c>0$, depending on $c$, such that \begin{equation} \label{T1.norm.2} \norm{T_5(g,h)}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq K_c \norm{g}_{L^2_{x \geq c}} \norm{h}_{L^2_{1, x \geq c}}. \end{equation} \end{enumerate} \end{lemma} \begin{proof} Inequality $(A1)$ can be verified in a straightforward way. To prove $(A0)$ make the change of variable $\xi=x+y$ and remark that $$ \intx{c} \int\limits_0^{+\infty} |g(x+y)|^2 \,dy \,dx \leq K_c \intx{c} |\xi-c| \, |g(\xi)|^2 \, d\xi \ . $$ We prove now $(A2)$: using Cauchy-Schwarz, one gets \begin{align*} \norm{\intx{0} g(x+z)h(x,z) \, dz}_{L^2_{x \geq c}}^2 & \leq \int\limits_c^{+\infty} \left( \intx{x}|g(z)|^2 \, dz \right) \, \left( \int\limits_0^{+\infty} |h(x,z)|^2 \, dz \right) \, dx \leq \norm{g}_{L^2_{x \geq c}}^2 \norm{h}_{L^2_{x\geq c} L^2_{y\geq 0}}^2 \ . \end{align*} In order to prove $(A3)$ take a function $h \in L^2_{x \geq c}$ and remark that \begin{align*} \mmod{\intx{c} dx\, h(x)\, \langle x \rangle^m \intx{x} g(z)\, dz} & = \mmod{\intx{c} dz\, g(z) \int_{c}^z \langle x \rangle^m h(x) dx} \leq \tilde K_c \intx{c} dz\, \langle z \rangle^m |g(z)| \int_{c}^z |h(x)| \,dx \\ & \leq K_c \intx{c} dz\,\langle z \rangle^{m+1} |g(z)| \, \frac{\int_{c}^z |h(x)| dx}{|z-c|} \leq K_c \norm{\langle z \rangle^{m+1} g}_{L^2_{x \geq c}} \norm{h}_{L^2_{x \geq c}} \end{align*} where for the last inequality we used the Hardy inequality.
\\ To prove $(A4)$ take a function $f \in L^2_{x\geq c} L^2_{y\geq 0}$, define $\Omega_c= [c, \infty)\times \mathbb{R}^+ \times \mathbb{R}^+$ and remark that \begin{align*} &\int\limits_{\Omega_c } |g(x+y+z)| \, |h(x,z)| \, |f(x,y)| \, dx \, dy \, dz \leq \\ & \qquad \qquad \leq \Big(\int\limits_{\Omega_c} |g(x+y+z)| \, |h(x,z)|^2 \, dx \, dy \, dz \Big)^{1/2} \Big(\int\limits_{\Omega_c} |g(x+y+z)| \, |f(x,y)|^2 \, dx \, dy \, dz \Big)^{1/2} \\ & \qquad \qquad \leq \norm{g}_{L^1_{x \geq c}} \norm{h}_{L^2_{x\geq c, z \geq 0}} \norm{f}_{L^2_{x\geq c} L^2_{y\geq 0}}, \end{align*} where the first inequality follows by writing $ |g|= |g|^{1/2}\cdot |g|^{1/2}$ and applying Cauchy-Schwarz.\\ To prove $(A5)$ note that $$\norm{\intx{0} g(x+y+z) h(x+z) \, dz}_{L^2_{y \geq 0}} \leq \norm{g}_{L^2_{x \geq c}} \intx{x}\mmod{ h(z)} \, dz \ .$$ By $(A3)$ one has that $\norm{\intx{x}\mmod{ h(z)} \, dz}_{L^2_{x \geq c}} \leq K_c \norm{\langle x \rangle h }_{L^2_{x \geq c}}$, then $(A5)$ follows. \end{proof} \section{Analytic maps in complex Banach spaces} \label{analytic_map} In this appendix we recall the definition of an analytic map from \cite{mujica}. Let $E$ and $F$ be complex Banach spaces. A map $\tilde P^k:E^k\to F$ is said to be $k$-multilinear if $\tilde P^k(u^1,\ldots,u^k)$ is linear in each variable $u^j$; a multilinear map is said to be bounded if there exists a constant $C$ such that $$\norm{\tilde P^k(u^1,\cdots,u^k)}\leq C \norm{u^1}\cdots \norm{u^k} \quad \forall u^1,\ldots, u^k \in E.
$$ Its norm is defined by $$\norm{\tilde P^k}:=\sup_{u^j \in E,\;\norm{u^j}\leq 1}{\norm{\tilde P^k(u^1,\ldots,u^k)}}.$$ A map $P^k: E \to F $ is said to be a polynomial of order $k$ if there exists a $k$-multilinear map $\tilde{P}^k: E^k \to F$ such that $$P^k(u)=\tilde{P}^k(u,\ldots,u)\quad \forall u\in E.$$ The polynomial is bounded if it has finite norm $$\norm{P^k}:=\sup_{\norm{u}\leq 1}\norm{P^k(u)}.$$ We denote by $\mathcal{P}^k(E, F)$ the vector space of all bounded polynomials of order $k$ from $E$ into $F$. \begin{definition} \label{def.analytic_map} Let $E$ and $F$ be complex Banach spaces. Let $U$ be an open subset of $E$. A mapping $f: U \to F$ is said to be \textit{analytic} if for each $a \in U$ there exist a ball $B_r(a)\subset U$ with center $a$ and radius $r>0$ and a sequence of polynomials $P^k \in \mathcal{P}^k(E, F)$, $k \geq 0$, such that $$ f(u) = \sum_{k=0}^{\infty} P^k(u-a)$$ converges uniformly for $u \in B_r(a)$; i.e., for any $\epsilon >0$ there exists $K >0$ so that $$\norm{f(u) - \sum_{k=0}^K P^k(u-a)} \leq \epsilon$$ for any $u \in B_r(a)$. \end{definition} Finally let us recall the notion of a real analytic map. \begin{definition}Let $E,\; F$ be real Banach spaces and denote by $ E_{\mathbb{C}}$ and $F_{\mathbb{C}}$ their complexifications. Let $U \subset E$ be open. A map $f: U \to F$ is called \textit{real analytic} on $U$ if for each point $u \in U$ there exists a neighborhood $V$ of $u$ in $E_{\mathbb{C}}$ and an analytic map $g: V \to F_{\mathbb{C}}$ such that $f = g$ on $U \cap V$. \end{definition} \begin{remark} The notion of an analytic map in Definition \ref{def.analytic_map} is equivalent to the notion of a $\mathbb{C}$-differentiable map. 
Recall that a map $f: U \to F$, where $U$, $E$ and $F$ are given as in Definition \ref{def.analytic_map}, is said to be \textit{$\mathbb{C}$-differentiable} if for each point $a \in U$ there exists a linear, bounded operator $A: E \to F$ such that $$ \lim_{u \to a}\frac{\norm{f(u) - f(a) - A(u-a)}_F}{\norm{u-a}_E} = 0. $$ Therefore analytic maps inherit the properties of $\mathbb{C}$-differentiable maps; in particular the composition of analytic maps is analytic. For a proof of the equivalence of the two notions see \cite{mujica}, Theorem 14.7. \end{remark} \begin{remark} Any $P^k \in \mathcal{P}^k(E, F)$ is an analytic map. Let $f(u)=\sum_{m=0}^{\infty} P^m(u)$ be a power series from $E$ into $F$ with infinite radius of convergence with $P^m \in \mathcal{P}^m(E,F)$. Then $f$ is analytic (\cite{mujica}, example 5.3, 5.4). \label{entire.func} \end{remark} \section{Properties of the solutions of integral equation \eqref{opma1}} \label{app:ab.eq} In this section we discuss some properties of the solution of equation \eqref{opma1} which we rewrite as \begin{equation} \label{int.eq} g(x,y) + \intx{0} F_\sigma(x+y+z) \, g(x,z) \, dz = h_\sigma(x,y) \ . \end{equation} Here $\sigma \in \mathscr{S}^{4,N}$, $N \geq 0$, $h_\sigma$ is a function $h_\sigma: [c, +\infty) \times [0, +\infty) \to \mathbb{R}$, with $c$ arbitrary, which satisfies $(P)$. We denote by \begin{equation} \label{norm_0} \norm{h}_0 := \norm{h}_{L^2_{x\geq c} L^2_{y\geq 0}} + \norm{h(\cdot, 0)}_{L^2_{x \geq c}} \ . \end{equation} Furthermore $F_\sigma:\mathbb{R} \to \mathbb{R}$ is a function that satisfies \begin{enumerate} \item[$(H)$] The map $\sigma \mapsto F_\sigma $ $\,[ \sigma \mapsto F_\sigma']$ is real analytic as a map from $\mathscr{S}^{4,N}$ to $H^1 \cap L^2_3$ $\,[L^2_4]$. 
Moreover the operators $Id \pm \mathcal{K}_{x,\sigma}: L^2_{y \geq 0} \to L^2_{y \geq 0}$ with $\mathcal{K}_{x,\sigma}$ defined as \begin{equation} \label{oper.K.app} \mathcal{K}_{x,\sigma}[f](y) := \intx{0} F_\sigma(x+y+z) \, f(z) \, dz \ \end{equation} are invertible for any $x \geq c$, and there exists a constant $C_\sigma>0$, depending locally uniformly on $\sigma \in H^4_{\zeta,\mathbb{C}}\cap L^2_N$, such that \begin{equation} \label{est.inv} \sup_{x \geq c} \norm{(Id \pm \mathcal{K}_{x,\sigma})^{-1}}_{\mathcal{L}(L^2_{y \geq 0})} \leq C_\sigma \ . \end{equation} Finally $\sigma \mapsto (Id \pm \mathcal{K}_{x,\sigma})^{-1}$ are real analytic as maps from $\mathscr{S}^{4,N}$ to $\mathcal{L}(L^2_{x\geq c} L^2_{y\geq 0})$. \end{enumerate} \begin{remark} \label{pairing} The pairing \begin{align*} \mathcal{L}(L^2_{x\geq c} L^2_{y\geq 0}) \times L^2_{x\geq c} L^2_{y\geq 0} \to L^2_{x\geq c} L^2_{y\geq 0}, \qquad (H, f) \mapsto H[f] \end{align*} is a bounded bilinear map and hence analytic. Let now $\sigma \mapsto h_\sigma$ be a real analytic map from $\mathscr{S}^{4,0}$ to $L^2_{x\geq c} L^2_{y\geq 0}$ and let $\mathcal{K}_\sigma$ as in $(H)$. Then by Lemma \ref{lem:Fx} $(iii)$ it follows that $\sigma \mapsto \left( Id + \mathcal{K}_\sigma\right)^{-1}[ h_\sigma]$ is real analytic as a map from $\mathscr{S}^{4,0}$ to $L^2_{x\geq c} L^2_{y\geq 0}$ as well. \end{remark} \begin{remark} By the Sobolev embedding theorem, assumption $(H)$ implies that $F_\sigma \in C^{0, \gamma}(\mathbb{R}, \mathbb{C})$, $\gamma < \frac{1}{2}$. 
\end{remark} By assumption $(H)$ the map $(c, \infty) \to \mathcal{L}(L^2_{y \geq 0}), $ $\, x \mapsto \mathcal{K}_{x,\sigma}$ is differentiable and its derivative is the operator \begin{equation} \label{K'.def} \mathcal{K}_{x,\sigma}'[f](y) = \intx{0} F_\sigma'(x+y+z)\, f(z) \, dz \ , \end{equation} as one verifies using that for $x > c$ and $\epsilon \neq 0$ sufficiently small \begin{equation} \label{op.K.diff} \begin{aligned} \norm{\frac{\mathcal{K}_{x+\epsilon,\sigma} - \mathcal{K}_{x,\sigma}}{\epsilon} - \mathcal{K}_{x,\sigma}' }_{\mathcal{L}(L^2_{y \geq 0})} \leq \intx{x} \mmod{\frac{F_\sigma(z+\epsilon) - F_\sigma(z)}{\epsilon} - F_\sigma'(z)} dz \\ \leq \frac{1}{|\epsilon|} \mmod{ \int_0^{\epsilon} \intx{x}\mmod{F_\sigma'(z+s) - F_\sigma'(z)} dz\, ds } \leq \sup_{ |s| \leq |\epsilon | } \intx{x}\mmod{F_\sigma'(z+s) - F_\sigma'(z)} dz \end{aligned} \end{equation} and the fact that the translations are continuous in $L^1$. Therefore the following lemma holds \begin{lemma} \label{lem:der.K} $\mathcal{K}_{x,\sigma}$ and thus $(Id + \mathcal{K}_{x,\sigma})^{-1}$ is a family of operators from $L^2_{y \geq 0}$ to $L^2_{y \geq 0}$ which depends continuously on the parameter $x$. Moreover the map $(c, \infty) \to \mathcal{L}(L^2_{y \geq 0})$, $x \mapsto \mathcal{K}_{x,\sigma}$ is differentiable and its derivative is the operator $\mathcal{K}'_{x,\sigma}$ defined in \eqref{K'.def}. \end{lemma} \begin{lemma} \label{lem:g.p} Let $F_\sigma$ satisfy assumption $(H)$, and $g_\sigma \in C^0_{x \geq c} L^2_{y \geq 0} \cap L^2_{x\geq c} L^2_{y\geq 0}$ be such that $\norm{g_\sigma}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq K_c \norm{\sigma}_{H^4_{\zeta,\mathbb{C}} \cap L^2_N}^2$ and $\mathscr{S}^{4,N} \to L^2_{x\geq c} L^2_{y\geq 0},$ $\, \sigma \mapsto g_\sigma$ be real analytic. Then $$ \mathbf{F}_\sigma (x,y) := \int\limits_0^{+\infty} F_\sigma(x+y+z) \, g_\sigma(x,z) \, dz $$ satisfies $(P)$. 
\end{lemma} \begin{proof} $(P1)$ For $\epsilon \neq 0$ sufficiently small \begin{align*} \norm{\mathbf{F}_\sigma(x+\epsilon, \cdot) - \mathbf{F}_\sigma(x,\cdot)}_{L^2_{y \geq 0}} \leq & \norm{F_\sigma(x+\epsilon+\cdot) - F_\sigma(x+\cdot)}_{L^1} \norm{g_\sigma(x+\epsilon,\cdot)}_{L^2_{y \geq 0}} \\ & \quad + \norm{F_\sigma}_{L^1}\norm{g_\sigma(x+\epsilon,\cdot) - g_\sigma(x,\cdot)}_{L^2_{y \geq 0}} \end{align*} which goes to $0$ as $\epsilon \to 0$, proving that $\mathbf{F}_\sigma \in C^0_{x \geq c} L^2_{y \geq 0}$. Furthermore, by Lemma \ref{lem:techLemma} $(A4)$, $\mathbf{F}_\sigma \in L^2_{x\geq c} L^2_{y\geq 0}$ and fulfills \begin{equation} \label{lem:g.p1} \norm{\mathbf{F}_\sigma}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq \norm{F_\sigma}_{L^1} \norm{g_\sigma}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq K_c \norm{\sigma}_{H^4_{\zeta,\mathbb{C}} \cap L^2_N}^2 \ . \end{equation} Now we show that $\mathbf{F}_\sigma \in C^0_{x \geq c, y \geq 0}$. Let $(x_n)_{n \geq 1} \subseteq [c,\infty)$ and $(y_n)_{n \geq 1}\subseteq [0,\infty)$ be two sequences such that $x_n \to x_0$, $y_n \to y_0$. Then $F_\sigma(x_n + y_n +\cdot) g_\sigma(x_n, \cdot) \to F_\sigma(x_0 + y_0 +\cdot) g_\sigma(x_0, \cdot) $ in $L^1_{z \geq 0}$ as $n \to \infty$. Indeed \begin{align*} & \norm{F_\sigma(x_n + y_n +\cdot) g_\sigma(x_n, \cdot) - F_\sigma(x_0 + y_0 +\cdot) g_\sigma(x_0, \cdot)}_{L^1_{z \geq 0}} \leq \\ & \qquad \qquad \qquad \leq \norm{F_\sigma(x_n+y_n+ \cdot) - F_\sigma(x_0 + y_0 +\cdot)}_{L^2_{z\geq 0}} \norm{g_\sigma(x_n,\cdot)}_{L^2_{y \geq 0}} \\ &\qquad \qquad \qquad \quad + \norm{F_\sigma(x_0 + y_0 +\cdot)}_{L^2_{z\geq 0}}\norm{g_\sigma(x_n,\cdot) - g_\sigma(x_0,\cdot)}_{L^2_{y \geq 0}} , \end{align*} and the r.h.s. of the inequality above goes to $0$ as $(x_n, y_n) \to (x_0,y_0)$, by the continuity of the translations in $L^2$ and the fact that $g_\sigma \in C^0_{x \geq c} L^2_{y \geq 0}$. 
Thus it follows that $\mathbf{F}_\sigma(x_n, y_n) \to \mathbf{F}_\sigma(x_0, y_0)$ as $n \to \infty$, i.e., $\mathbf{F}_\sigma \in C^0_{x \geq c, y \geq 0}$.\\ We evaluate $\mathbf{F}_\sigma$ at $y=0$, getting $$ \mathbf{F}_\sigma(x,0) = \int\limits_0^{+\infty} F_\sigma(x+z) g_\sigma(x,z) \, dz \ . $$ By Lemma \ref{lem:techLemma} $(A2)$, $\mathbf{F}_\sigma(\cdot,0) \in L^2_{x \geq c}$ and fulfills \begin{equation} \label{lem:g.p2} \norm{\mathbf{F}_\sigma (\cdot,0)}_{L^2_{x \geq c}} \leq \norm{F_\sigma}_{L^2} \norm{g_\sigma}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq K_c \norm{\sigma}_{H^4_{\zeta,\mathbb{C}} \cap L^2_N}^2 \ . \end{equation} $(P2)$ It follows from \eqref{lem:g.p1} and \eqref{lem:g.p2}. $(P3)$ It follows by Lemma \ref{lem:techLemma} $(A2)$ and the fact that $\mathbf{F}_\sigma$ and $\mathbf{F}_\sigma(\cdot,0)$ are compositions of real analytic maps. \end{proof} We now study the solution of equation \eqref{int.eq}. \begin{lemma} \label{lem:ab.eq} Assume that $h_\sigma$ satisfies $(P)$ and $F_\sigma$ satisfies $(H)$. Then equation \eqref{int.eq} has a unique solution $g_\sigma$ in $C^0_{x \geq c} L^2_{y \geq 0}\cap L^2_{x\geq c} L^2_{y\geq 0}$ which satisfies $(P)$. \end{lemma} \noindent {\sl Proof.} We first show that $g_\sigma$ exists and satisfies $(P1)$. Since $h_\sigma$ satisfies $(P)$ and $F_\sigma$ satisfies $(H)$, it follows that for any $x \geq c$, $g_\sigma(x, \cdot):= (Id + \mathcal{K}_{x,\sigma})^{-1}[h_\sigma(x, \cdot)]$ is the unique solution in $L^2_{y \geq 0} $ of the integral equation \eqref{int.eq}. Furthermore, by \eqref{est.inv}, $\norm{g_\sigma(x, \cdot)}_{L^2_{y \geq 0}} \leq C_\sigma \norm{h_\sigma(x, \cdot)}_{L^2_{y \geq 0}}$, which implies \begin{equation} \label{g.eq.10} \norm{g_\sigma}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq C_\sigma \norm{h_\sigma}_{L^2_{x\geq c} L^2_{y\geq 0}} \ . \end{equation} Since $h_\sigma \in C^0_{x \geq c} L^2_{y \geq 0}$, Lemma \ref{lem:der.K} implies that $g_\sigma \in C^0_{x \geq c} L^2_{y \geq 0}$ as well. 
Thus we have proved that $g_\sigma \in C^0_{x \geq c} L^2_{y \geq 0} \cap L^2_{x\geq c} L^2_{y\geq 0}$. Now write \begin{equation} \label{g.eq.1} g_\sigma(x, y) = h_\sigma(x, y) - \intx{0} F_\sigma(x+y+z) g_\sigma(x,z) \, dz \ . \end{equation} By Lemma \ref{lem:g.p} and the assumption that $h_\sigma$ satisfies $(P)$, it follows that the r.h.s. of formula \eqref{g.eq.1} satisfies $(P)$. \qed \\ The following lemma will be useful in the following: \begin{lemma} \label{lem:Phi} \begin{enumerate} \item[$(i)$] Let $F_\sigma$ satisfy $(H)$, and $g_\sigma \in C^0_{x \geq c} L^2_{y \geq 0} \cap L^2_{x\geq c} L^2_{y\geq 0}$ be such that $\norm{g_\sigma}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq K_c \norm{\sigma}_{H^4_{\zeta,\mathbb{C}} \cap L^2_N}^2$ and $\mathscr{S}^{4,N} \to L^2_{x\geq c} L^2_{y\geq 0},$ $\, \sigma \mapsto g_\sigma$ be real analytic. Denote \begin{equation} \label{Phi.def} \Phi_\sigma(x,y) := \int\limits_0^{+\infty} F_\sigma'(x+y+z) g_\sigma(x,z) \, dz \ . \end{equation} Then $\Phi_\sigma \in C^0_{x \geq c} L^2_{y \geq 0} \cap L^2_{x\geq c} L^2_{y\geq 0}$, the map $\mathscr{S}^{4,N} \to L^2_{x\geq c} L^2_{y\geq 0},$ $\, \sigma \mapsto \Phi_\sigma$ is real analytic and \begin{align} \label{eq:Phi.est1} \norm{\Phi_\sigma}_{L^2_{x\geq c} L^2_{y\geq 0}}\leq K_c \norm{\sigma}_{H^4_{\zeta,\mathbb{C}} \cap L^2_N}^2 \ , \end{align} where $K_c >0$ depends locally uniformly on $ \sigma \in H^4_{\zeta,\mathbb{C}}\cap L^2_N$. \item[$(ii)$] Let $g_\sigma$ as in item $(i)$, and furthermore let $g_\sigma \in C^0_{x \geq c, y \geq 0}$ and $g_\sigma(\cdot, 0) \in L^2_{x \geq c}$. Assume furthermore that $\partial_y g_\sigma$ satisfies the same assumptions as $g_\sigma$ in item $(i)$. Then $\Phi_\sigma$, defined in \eqref{Phi.def}, satisfies $(P)$. 
\item[$(iii)$] Assume that $F_\sigma$ satisfies $(H)$ and that the map $\mathscr{S}^{4,N} \to H^1_{x \geq c}$, $\, \sigma \mapsto b_\sigma$ is real analytic with $\norm{b_\sigma}_{H^1_{x \geq c}} \leq K_c \norm{\sigma}_{H^4_{\zeta,\mathbb{C}} \cap L^2_N}^2 $. Then the function $$ \phi_\sigma(x,y) : = F_\sigma(x+y) b_\sigma(x) $$ satisfies $(P)$. \end{enumerate} \end{lemma} \begin{proof} $(i)$ Clearly $\norm{\Phi_\sigma(x, \cdot)}_{L^2_{y \geq 0}} \leq \norm{F_\sigma'}_{L^1}\norm{g_\sigma(x, \cdot)}_{L^2_{y \geq 0}}$, and since $g_\sigma \in L^2_{x\geq c} L^2_{y\geq 0}$ it follows that $\Phi_\sigma \in L^2_{x\geq c} L^2_{y\geq 0}$ with $\norm{\Phi_\sigma}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq \norm{F_\sigma'}_{L^2_4} \norm{g_\sigma}_{L^2_{x\geq c} L^2_{y\geq 0}}$, which implies \eqref{eq:Phi.est1}. We show now that $\Phi_\sigma \in C^0_{x \geq c} L^2_{y \geq 0}$. For $\epsilon \neq 0$ one has \begin{align*} \norm{\Phi_\sigma(x+\epsilon, \cdot) - \Phi_\sigma(x, \cdot)}_{L^2_{y \geq 0}} \leq \norm{ F_\sigma'(\cdot + \epsilon) - F_\sigma'}_{L^1} \norm{g_\sigma(x, \cdot)}_{L^2_{y \geq 0}} + \norm{F_\sigma'}_{L^1}\norm{g_\sigma(x+\epsilon, \cdot) - g_\sigma(x, \cdot)}_{L^2_{y \geq 0}} \ . \end{align*} The continuity of the translation in $L^1$ and the assumption $g_\sigma \in C^0_{x \geq c} L^2_{y \geq 0}$ imply that $\norm{\Phi_\sigma(x+\epsilon, \cdot) - \Phi_\sigma(x, \cdot)}_{L^2_{y \geq 0}} \to 0$ as $\epsilon \to 0$, thus $\Phi_\sigma \in C^0_{x \geq c} L^2_{y \geq 0}$. The real analyticity of $\sigma \mapsto \Phi_\sigma$ follows from Lemma \ref{lem:techLemma} $(A4)$ and the fact that $\Phi_\sigma$ is a composition of real analytic maps. 
$(ii)$ Fix $x \geq c$ and use integration by parts to write \begin{equation} \label{K'.parts} \Phi_\sigma(x,y) = - F_\sigma(x+y)g_\sigma(x,0) - \int\limits_0^{+\infty} F_\sigma(x+y+z) \partial_z g_\sigma(x,z) \, dz \ , \end{equation} where we used that since $F_\sigma \in H^1$ $\, [g_\sigma(x, \cdot) \in H^1_{y \geq 0}]$, $\lim_{x \to \infty} F_\sigma(x) = 0$ $\, [\lim_{y \to \infty} g_\sigma(x,y) = 0]$. By the assumption and the proof of Lemma \ref{lem:g.p} $(P1)$, $\Phi_\sigma \in C^0_{x \geq c, y \geq 0}$. We evaluate \eqref{K'.parts} at $y=0$ to get the formula $$ \Phi_\sigma(x,0) = - F_\sigma(x)g_\sigma(x,0) - \int\limits_0^{+\infty} F_\sigma(x+z) \partial_z g_\sigma(x,z) \, dz \ . $$ Together with Lemma \ref{lem:techLemma} $(A2)$ we have the estimate \begin{equation} \label{eq:Phi.est2} \norm{\Phi_\sigma(\cdot, 0)}_{L^2_{x \geq c}} \leq \norm{F_\sigma}_{H^1} \left( \norm{g_\sigma(\cdot, 0)}_{L^2_{x \geq c}} + \norm{\partial_y g_\sigma}_{L^2_{x\geq c} L^2_{y\geq 0}} \right)\leq K_c \norm{\sigma}_{H^4_{\zeta,\mathbb{C}} \cap L^2_N}^2 \ . \end{equation} Estimate \eqref{eq:Phi.est2} together with estimate \eqref{eq:Phi.est1} implies that $\Phi_\sigma$ satisfies $(P2)$. Finally $\sigma \mapsto \Phi_\sigma(\cdot,0)$ is real analytic, being a composition of real analytic maps. $(iii)$ The proof is straightforward and we omit it. \end{proof} If the function $h_\sigma$ is more regular, one deduces better regularity properties of the corresponding solution of \eqref{int.eq}. \begin{lemma} \label{lem:ab.eq2} Consider the integral equation \eqref{int.eq} and assume that $F_\sigma$ satisfies $(H)$. Assume that $h_\sigma$, $\partial_x h_\sigma$, $\partial_y h_\sigma$ satisfy $(P)$. Then the solution $g_\sigma$ of \eqref{int.eq} satisfies $(P)$. 
Its derivatives $\partial_x g_\sigma$ and $\partial_y g_\sigma$ satisfy $(P)$ and solve the equations \begin{align} \label{x.eq} &\left( Id + \mathcal{K}_{x,\sigma} \right) [\partial_x g_\sigma] = \partial_x h_\sigma - \mathcal{K}_{x,\sigma}'[g_\sigma] \ ,\\ \label{y.eq} & \partial_y g_\sigma = \partial_y h_\sigma - \mathcal{K}_{x,\sigma}'[g_\sigma] \ . \end{align} \end{lemma} \noindent {\em Proof.} By Lemma \ref{lem:ab.eq}, $g_\sigma$ satisfies $(P)$.\\ {\em $\partial_y g_\sigma$ satisfies $(P)$.} For $\epsilon \neq 0$ sufficiently small, we have in $L^2_{y \geq 0}$ $$ \frac{g_\sigma(x, y+\epsilon) - g_\sigma(x, y)}{\epsilon} = \Psi^\epsilon_\sigma(x,y) $$ where \begin{align} \label{tras.y} & \Psi^\epsilon_\sigma(x,y) := \frac{h_\sigma(x, y+\epsilon) - h_\sigma(x,y) }{\epsilon}- \int\limits_0^{+\infty} \frac{F_\sigma(x+y+\epsilon+z) - F_\sigma(x+y+z)}{\epsilon} g_\sigma(x,z) \, dz \ . \end{align} Define $$ \Psi^0_\sigma(x,y):= \partial_y h_\sigma(x,y) - \int\limits_0^{+\infty} F_\sigma'(x+y+z) g_\sigma(x,z) \, dz \ . $$ Since $\partial_y h_\sigma$ and $g_\sigma$ satisfy $(P)$, by Lemma \ref{lem:Phi} $(i)$ it follows that $\Psi^0_\sigma \in C^0_{x \geq c} L^2_{y \geq 0} \cap L^2_{x\geq c} L^2_{y\geq 0}$, the map $\mathscr{S}^{4,N} \to L^2_{x\geq c} L^2_{y\geq 0},$ $\, \sigma \mapsto \Psi^0_\sigma$ is real analytic and $\norm{\Psi^0_\sigma}_{L^2_{x\geq c} L^2_{y\geq 0}}\leq K_c \norm{\sigma}_{H^4_{\zeta,\mathbb{C}} \cap L^2_N}^2$. Furthermore one verifies that $$ \partial_y g_\sigma(x,\cdot)= \lim_{\epsilon \to 0} \frac{g_\sigma(x, \cdot+\epsilon) - g_\sigma(x, \cdot)}{\epsilon} = \lim_{\epsilon \to 0} \Psi^\epsilon_\sigma(x, \cdot) = \Psi^0_\sigma(x, \cdot) \qquad \mbox{in } L^2_{y \geq 0} \ . $$ Thus $\partial_y g_\sigma$ fulfills \begin{equation} \label{dery.g.eq} \partial_y g_\sigma(x,y) = \partial_y h_\sigma(x,y) - \int\limits_0^{+\infty} F_\sigma'(x+y+z) g_\sigma(x,z) dz \ , \end{equation} i.e., $\partial_y g_\sigma$ satisfies equation \eqref{y.eq}. 
Since $\partial_y g_\sigma = \Psi^0_\sigma$, $g_\sigma$ satisfies the assumptions of Lemma \ref{lem:Phi} $(ii)$. Since $\partial_y h_\sigma$ satisfies $(P)$ as well, it follows that $\partial_y g_\sigma$ satisfies $(P)$.\\ {\em $\partial_x g_\sigma$ satisfies $(P)$}. For $\epsilon \neq 0$ small enough we have in $L^2_{y \geq 0}$ $$ \left( Id + \mathcal{K}_{x+\epsilon,\sigma}\right) \left[\frac{g_\sigma(x+\epsilon, \cdot) - g_\sigma(x, \cdot)}{\epsilon}\right] = \Phi^\epsilon_\sigma(x, \cdot) $$ where $$ \Phi^\epsilon_\sigma(x,y):= \frac{h_\sigma(x+\epsilon, y) - h_\sigma(x,y)}{\epsilon} - \int\limits_0^{+\infty} \frac{F_\sigma(x+y+\epsilon+z) - F_\sigma(x+y+z)}{\epsilon} g_\sigma(x,z) \, dz \ . $$ Define $$ \Phi^0_\sigma(x,y):= \partial_x h_\sigma(x,y) - \int\limits_0^{+\infty} F_\sigma'(x+y+z) g_\sigma(x,z) \, dz \ . $$ Proceeding as above, one proves that $\Phi^0_\sigma$ satisfies $(P)$, and $$ \lim_{\epsilon \to 0} \Phi^\epsilon_\sigma(x, \cdot) = \Phi^0_\sigma(x, \cdot) \qquad \mbox{in } L^2_{y \geq 0} \ . $$ Together with Lemma \ref{lem:der.K} we get for $x >c$ in $L^2_{y \geq 0}$ \begin{equation} \label{derx.g.limit} \partial_x g_\sigma(x, \cdot) = \lim_{\epsilon \to 0} \frac{g_\sigma(x+\epsilon, \cdot) - g_\sigma(x,\cdot)}{\epsilon} = \lim_{\epsilon \to 0} \left( Id + \mathcal{K}_{x+\epsilon,\sigma}\right)^{-1} \Phi^\epsilon_\sigma(x, \cdot) = \left( Id + \mathcal{K}_{x,\sigma}\right)^{-1} \Phi^0_\sigma(x, \cdot) \ . \end{equation} In particular $(Id + \mathcal{K}_\sigma) (\partial_x g_\sigma(x,\cdot)) = \Phi^0_\sigma(x,\cdot)$. Since $\Phi^0_\sigma$ satisfies $(P)$, by Lemma \ref{lem:ab.eq}, $\partial_x g_\sigma$ satisfies $(P)$. 
Formula \eqref{derx.g.limit} implies that \begin{equation} \label{derx.g.eq} \partial_x g_\sigma(x,y) + \int\limits_0^{+\infty} F_\sigma(x+y+z) \partial_x g_\sigma(x,z) \, dz = \partial_x h_\sigma(x,y) - \int\limits_0^{+\infty} F_\sigma'(x+y+z) g_\sigma(x,z) \, dz \ , \end{equation} namely $\partial_x g_\sigma$ satisfies equation \eqref{x.eq}.\\ \qed \section{Proof from Section \ref{sec:inv.scat}} \label{proof.prop.B} \subsection{Properties of $\mathcal{K}_{x,\sigma}^\pm$ and $f_{\pm, \sigma}$.} We begin by proving some properties of $\mathcal{K}_{x,\sigma}^\pm$ and $f_{\pm, \sigma}$, defined in \eqref{kxdef} and \eqref{f.def}, which will be needed later. \\ {\em \underline{Properties of $Id + \mathcal{K}^{\pm}_{x,\sigma}$}}. In order to solve the integral equations \eqref{opma1} we need the operator $Id + \mathcal{K}^{+}_{x,\sigma}$ to be invertible on $L^2_{y \geq 0}$ (respectively $Id + \mathcal{K}^{-}_{x,\sigma}$ to be invertible on $L^2_{y \leq 0}$). The following result is well known: \begin{lemma}[\cite{deift,KaCo}] Let $\sigma \in \mathscr{S}^{4, 0}$ and fix $c \in \mathbb{R}$. Then the following holds: \label{lem:Fx} \begin{enumerate}[(i)] \item For every $x\geq c$, $\mathcal{K}_{x,\sigma}^{+} : L^2_{y \geq 0} \rightarrow L^2_{y \geq 0}$ is a bounded linear operator; moreover \begin{equation} \sup_{x \geq c} \norm{\mathcal{K}_{x,\sigma}^{+}}_{\mathcal{L}(L^2_{y\geq 0})} < 1, \quad \mbox{ and } \quad \norm{\mathcal{K}_{x, \sigma}^+}_{\mathcal{L}(L^2_{y\geq 0})} \leq \int\limits_x^{+ \infty} |F_{+, \sigma}(\xi)| \; d \xi \rightarrow 0 \quad \mbox{ as } \quad x \rightarrow + \infty. \label{Kx_norm1} \end{equation} \item The map $\mathcal{K}_\sigma^{+}: L^2_{x\geq c} L^2_{y\geq 0} \to L^2_{x\geq c} L^2_{y\geq 0}$, $f \mapsto \mathcal{K}_\sigma^+[f]$, where $\mathcal{K}_\sigma^+[f](x,y) := \mathcal{K}_{x, \sigma}^+[ f](y) $, is linear and bounded. 
Moreover the operators $Id\pm \mathcal{K}_\sigma^{+}$ are invertible on $L^2_{x\geq c} L^2_{y\geq 0}$ and there exists a constant $K_c>0$, which depends locally uniformly on $\sigma \in \mathscr{S}^{4,0}$, such that \begin{equation} \norm{\left( Id \pm \mathcal{K}_\sigma^{+}\right)^{-1}}_{\mathcal{L}(L^2_{x\geq c} L^2_{y\geq 0})} \leq K_c. \label{Kx_norm} \end{equation} \item $\sigma \mapsto \left( Id \pm \mathcal{K}_\sigma^{+}\right)^{-1}$ are real analytic as maps from $\mathscr{S}^{4, 0}$ to $\mathcal{L}(L^2_{x\geq c} L^2_{y\geq 0})$. \end{enumerate} Analogous results hold also for $\mathcal{K}_{x,\sigma}^-$ replacing $L^2_{x\geq c} L^2_{y\geq 0}$ by $L^2_{x \leq c}L^2_{y \leq 0}$. \end{lemma} {\em \underline{Properties of $f_{\pm, \sigma}$}}. First note that $f_{\pm, \sigma}$, defined by \eqref{f.def}, are well defined. Indeed for any $\sigma \in \mathscr{S}^{4, 0}$, Proposition \ref{rem:dec_rel} implies that $F_{\pm, \sigma} \in H^1 \cap L^2_3 \subset L^2$. Hence for any $x \geq c$, $\, y \geq 0$ the map given by $z \mapsto F_{+, \sigma}(x+y+z)F_{+, \sigma}(x+z) $ is in $L^1_{z \geq 0}$. Similarly, for any $x \geq c$, $\, y \geq 0$, the map given by $z \mapsto F_{-, \sigma}(x+y+z) F_{-, \sigma}(x+z) $ is in $L^1_{z \leq 0}$. In the following we will use repeatedly the Hardy inequality \cite{hardy} \begin{equation} \label{hardy.ineq} \norm{\langle x \rangle^m \intx{x} g(z) dz}_{L^2_{x \geq c}} \leq K_c \norm{\langle x \rangle^{m+1} g}_{L^2_{x \geq c}}, \qquad \forall \, m \geq 0 \ . \end{equation} The inequality is well known, but for the sake of completeness we give a proof of it in Lemma \ref{lem:techLemma} $(A3)$. \\ We now analyze the maps $\sigma \mapsto f_{\pm, \sigma}$. Since the analyses of $f_{+, \sigma}$ and $f_{-, \sigma}$ are similar, we will consider $f_{+, \sigma}$ only. To shorten the notation we will suppress the subscript ``$+$'' in what follows. 
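Before turning to $f_\sigma$, we illustrate the invertibility statements of Lemma \ref{lem:Fx} with an elementary explicit computation; the rank-one kernel below is an illustrative model, not one arising from a scattering datum, and the example is not used in the sequel. \begin{remark} Let $F(s):= a e^{-s}$ with $a>0$, so that $\int_x^{+\infty} |F(\xi)| \, d\xi = a e^{-x}$. The associated operator $$ \mathcal{K}_x[f](y) = a e^{-(x+y)} \int\limits_0^{+\infty} e^{-z} f(z) \, dz $$ has rank one and satisfies $\mathcal{K}_x^2 = \lambda_x \mathcal{K}_x$ and $\norm{\mathcal{K}_x}_{\mathcal{L}(L^2_{y \geq 0})} = \lambda_x$, where $\lambda_x := \frac{a}{2} e^{-x} \leq a e^{-x}$, in accordance with \eqref{Kx_norm1}. One then checks directly that $$ \left( Id + \mathcal{K}_x \right)^{-1} = Id - \frac{1}{1+\lambda_x}\, \mathcal{K}_x \ , $$ so that $\sup_{x \geq c} \norm{(Id + \mathcal{K}_x)^{-1}}_{\mathcal{L}(L^2_{y \geq 0})} \leq 1 + \frac{a}{2} e^{-c}$, as in \eqref{Kx_norm} and \eqref{est.inv}. \end{remark}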
\begin{lemma} \label{f.prop} Fix $N \in \mathbb{Z}_{\geq 0}$ and let $\sigma \in \mathscr{S}^{4,N}$. Let $f_{\sigma}\equiv f_{+, \sigma}$ be given as in \eqref{f.def}. Then for every $j_1, j_2\in \mathbb{Z}_{\geq 0}$ with $0 \leq j_1 + j_2 \leq N+1$, the function $\partial_x^{j_1}\partial_y^{j_2}f_\sigma$ satisfies $(P)$. \end{lemma} \begin{proof} We prove at the same time $(P1), (P2)$ and $(P3)$ for any $j_1, j_2 \geq 0$ with $j_1 + j_2 = n$, for any $0 \leq n \leq N+1$. \noindent{\em Case $n=0$.} Then $j_1 = j_2 = 0$. By Proposition \ref{rem:dec_rel}, for any $N \in \mathbb{Z}_{\geq 0}$ one has $F_\sigma \equiv F_{+, \sigma} \in H^1 \cap L^2_3$. \begin{enumerate} \item[$(P1)$] We show that $f_\sigma \in C^0_{x \geq c} L^2_{y \geq 0}$. For any $x \geq c$ fixed one has $\norm{f_\sigma(x, \cdot)}_{L^2_{y \geq 0}} \leq \norm{F_\sigma}_{L^1} \norm{F_\sigma(x+\cdot)}_{L^2_{y \geq 0}}$, which shows that $f_\sigma(x, \cdot) \in L^2_{y \geq 0}$. For $\epsilon \neq 0$ sufficiently small one has \begin{align*} \norm{f_\sigma(x+\epsilon, \cdot) - f_\sigma(x,\cdot)}_{L^2_{y \geq 0}} \leq & \norm{F_\sigma}_{L^1} \norm{F_\sigma(x+\epsilon+\cdot) -F_\sigma(x+\cdot) }_{L^2_{y \geq 0}} \\ & + \norm{F_\sigma(\epsilon + \cdot) - F_\sigma}_{L^1}\norm{F_\sigma(x+\cdot)}_{L^2_{y \geq 0}} \end{align*} which goes to $0$ as $\epsilon \to 0$, due to the continuity of the translations in $L^p$ spaces, $1 \leq p < \infty$. Thus $f_\sigma \in C^0_{x \geq c} L^2_{y \geq 0}$. We now show that $f_\sigma \in L^2_{x\geq c} L^2_{y\geq 0}$. Introduce $h_\sigma(x,y):= F_{\sigma}(x+y)$. Then $h_\sigma \in L^2_{x\geq c} L^2_{y\geq 0}$, since for some $C, C' >0$ \begin{equation} \label{F(x+y).est} \norm{h_\sigma}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq C \norm{ F_\sigma}_{L^2_{1/2, x \geq c}} \leq C' \norm{\sigma}_{H^4_{\zeta,\mathbb{C}}} \end{equation} where for the first [second] inequality we used Lemma \ref{lem:techLemma} $(A0)$ [Proposition \ref{rem:dec_rel} $(i)$]. 
By Lemma \ref{lem:techLemma}$(A4)$ and using once more Proposition \ref{rem:dec_rel} $(i)$, one gets \begin{equation} \label{f_sigma.norm} \norm{f_{\sigma}}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq C'' \norm{F_{\sigma}}_{L^1_{x \geq c}} \norm{h_\sigma}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq C''' \norm{F_{\sigma}}_{L^2_1} \norm{h_\sigma}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq C'''' \norm{\sigma}_{H^4_{\zeta,\mathbb{C}}}^2 \ , \end{equation} for some $ C'', C''', C'''' >0 $. Thus $f_\sigma \in L^2_{x\geq c} L^2_{y\geq 0}$. To show that $f_\sigma \in C^0_{x \geq c, y \geq 0}$ proceed as in Lemma \ref{lem:g.p}. Finally we show that $f_\sigma(\cdot,0) \in L^2_{x \geq c}$. Evaluate \eqref{f.def} at $y=0$ to get $f_{\sigma}(x,0)=\intx{x} F_{\sigma}^2(z)\, dz$. Using the Hardy inequality \eqref{hardy.ineq}, $F_{\sigma}(x) = -\int_{x}^{+ \infty} F_{\sigma}'(s) \, ds$ and Proposition \ref{rem:dec_rel} one obtains \begin{align} \nonumber \norm{f_{\sigma}(\cdot, 0)}_{L^2_{x \geq c}} &\leq \norm{\langle x \rangle F_{\sigma}^2}_{L^2_{x \geq c}} \leq \norm{\langle x \rangle F_{\sigma}}_{L^{\infty}_{x \geq c}} \norm{F_{\sigma}}_{L^2_{x \geq c}} \leq K_c \norm{\langle x \rangle F_{\sigma}'}_{L^1_{x \geq c}} \norm{F_{\sigma}}_{L^2_{x \geq c}}\\ \label{f.0.T} &\leq K_c' \norm{\langle x \rangle^2 F_{\sigma}'}_{L^2_{x \geq c}} \norm{F_{\sigma}}_{L^2_{x \geq c}} \leq K_c'' \norm{\sigma}_{H^4_{\zeta,\mathbb{C}} \cap L^2_{N}}^2 \ , \end{align} for some constants $K_c, K_c', K_c'' >0$. Thus $f_\sigma(\cdot,0) \in L^2_{x \geq c}$. \item[$(P2)$] It follows from \eqref{f_sigma.norm} and \eqref{f.0.T}. \item[$(P3)$] By Proposition \ref{rem:dec_rel} $(i)$, $\mathscr{S}^{4,0} \to H^1_\mathbb{C} \cap L^2_3, $ $\, \sigma \mapsto F_\sigma$ is real analytic and by Lemma \ref{lem:techLemma} $(A0)$ so is $\mathscr{S}^{4,0} \to L^2_{x\geq c} L^2_{y\geq 0}, $ $\, \sigma \mapsto h_\sigma$. 
By Lemma \ref{lem:techLemma} $(A4)$ it follows that $\mathscr{S}^{4,0} \to L^2_{x\geq c} L^2_{y\geq 0}, $ $\, \sigma \mapsto f_\sigma$ is real analytic. Since the map $\sigma \mapsto f_{\sigma}(\cdot,0)$ is a composition of real analytic maps, it is real analytic as a map from $\mathscr{S}^{4,N}$ to $L^2_{x \geq c}$. \end{enumerate} \noindent{\em Case $n \geq 1$.} By Proposition \ref{rem:dec_rel}, $F_{\sigma} \in H^{N+1}$ and $\norm{F_{\sigma}}_{H^{N+1}} \leq C' \norm{\sigma}_{H^4_{\zeta,\mathbb{C}} \cap L^2_{N}}$. By Sobolev embedding theorem, it follows that $F_\sigma \in C^{N, \gamma}(\mathbb{R}, \mathbb{R})$, $\gamma < \tfrac{1}{2}$. Moreover since $\lim_{x \to +\infty} F_{\sigma}(x) = 0$, one has \begin{equation} \label{derxfR} \partial_x f_{\sigma}(x,y) = \partial_x \intx{x} F_{\sigma}(y+z) F_{\sigma}(z) \, dz = - F_{\sigma}(x+y) F_{\sigma}(x) \ . \end{equation} Consider first the case $j_1 \geq 1$. Then $j_2 \leq N$. By \eqref{derxfR} it follows that \begin{align} \label{derf(x,y).1} \partial_x^{j_1} \partial_y^{j_2} f_{\sigma}(x,y) &= -\sum_{l=0}^{j_1-1}\binom{j_1-1}{l} F_{\sigma}^{(j_2 + l)}(x+y) F_{\sigma}^{(j_1-1-l)}(x) \ , \end{align} where $F^{(l)}_{\sigma}\equiv \partial_x^l F_{\sigma}$. Thus $\partial_x^{j_1} \partial_y^{j_2} f_{\sigma}$ is a linear combination of terms of the form \eqref{bH.def0}, with $b_\sigma = F_\sigma^{(j_1-1-l)}$ satisfying the assumption of Lemma \ref{lem:P} $(i)$, thus $\partial_x^{j_1} \partial_y^{j_2} f_{\sigma}$, with $j_1 \geq 1$, satisfies $(P)$. \\ Consider now the case $j_1 =0$. Then $1 \leq j_2 \leq n \leq N+1$. Since $ \partial_y F_{\sigma}(x+y+z) = \partial_z F_{\sigma}(x+y+z) = F_{\sigma}'(x+y+z) $, by integration by parts one obtains \begin{align} \label{derf(x,y).2} \partial_y^{j_2} f_{\sigma}(x,y) &= - F_{\sigma}^{(j_2-1)}(x+y) F_{\sigma}(x) - \int\limits_0^{+\infty} F_{\sigma}^{(j_2-1)}(x+y+z) F_{\sigma}'(x+z) dz \ . 
\end{align} Then, by Lemma \ref{lem:P} $(i)$ and $(ii)$, $\partial_y^{j_2} f_{\sigma}$ is the sum of two terms which satisfy $(P)$, thus it satisfies $(P)$ as well. \end{proof} \begin{lemma} \label{lem:P} Fix $ c \in \mathbb{R}$, $N \in \mathbb{Z}_{\geq 0}$ and let $\sigma \in \mathscr{S}^{4, N}$. Let $F_\sigma$ be given as in \eqref{F.four}. Then the following holds true: \begin{enumerate}[(i)] \item Let $\sigma \mapsto b_\sigma$ be real analytic as a map from $\mathscr{S}^{4,N}$ to $H^1_{x \geq c}$, satisfying $\norm{b_\sigma}_{H^1_{x \geq c}} \leq K_c \norm{\sigma}_{H^4_{\zeta,\mathbb{C}} \cap L^2_N}$, where $K_c >0$ depends locally uniformly on $\sigma \in H^4_{\zeta,\mathbb{C}}\cap L^2_N$. Then for every integer $k$ with $0 \leq k \leq N$, the function \begin{equation} \label{bH.def0} \mathbf{H}_\sigma(x,y) := F_\sigma^{(k)}(x+y)\, b_\sigma(x) \end{equation} satisfies $(P)$. \item For every integer $0 \leq k \leq N$, the function \begin{equation} \label{bG.def0} \mathbf{G}_\sigma(x,y) = \int\limits_0^{+\infty} F_{\sigma}^{(k)}(x+y+z) F_{\sigma}'(x+z) dz \end{equation} satisfies $(P)$. \item Let $N \geq 1$ and let $G_\sigma$ be a function satisfying $(P)$. Then the function \begin{equation} \label{bF.def0} \mathbf{F}_\sigma(x,y) := \int\limits_0^{+\infty} F_\sigma'(x+y+z) G_\sigma(x,z) \, dz \end{equation} satisfies $(P)$. \end{enumerate} \end{lemma} \begin{proof} $(i)$ {\em $\mathbf{H}_\sigma$ satisfies $(P1)$.} Clearly $\mathbf{H}_\sigma(x, \cdot) \in L^2_{y \geq 0}$ and by the continuity of the translations in $L^2$ one verifies that $\norm{\mathbf{H}_\sigma(x+\epsilon, \cdot) - \mathbf{H}_\sigma(x,\cdot)}_{L^2_{y \geq 0}} \to 0 $ as $\epsilon \to 0$, thus proving that $ \mathbf{H}_\sigma \in C^0_{x \geq c} L^2_{y \geq 0}$. \\ We show now that $\mathbf{H}_\sigma \in L^2_{x\geq c} L^2_{y\geq 0}$. 
By Lemma \ref{lem:techLemma} $(A1)$, Proposition \ref{rem:dec_rel} and the assumption on $b_\sigma$, one has that \begin{equation} \label{derf.norm1} \norm{\mathbf{H}_\sigma }_{L^2_{x\geq c} L^2_{y\geq 0}} \leq C \norm{F_\sigma}_{H^{N+1}} \norm{b_\sigma}_{L^2_{x \geq c}} \leq K_c \norm{\sigma}_{H^4_{\zeta,\mathbb{C}} \cap L^2_{N}}^2 \ , \end{equation} where $K_c >0$ can be chosen locally uniformly for $\sigma \in H^4_{\zeta,\mathbb{C}} \cap L^2_{N}$. \\ For $0 \leq k \leq N$, $F_\sigma^{(k)} \in C^0(\mathbb{R}, \mathbb{R})$ by the Sobolev embedding theorem. Thus $\mathbf{H}_\sigma \in C^0_{x \geq c, y \geq 0}$. \\ Finally we show that $\mathbf{H}_\sigma(\cdot,0) \in L^2_{x \geq c}$. We evaluate the r.h.s. of formula \eqref{bH.def0} at $y=0$, getting $$ \mathbf{H}_\sigma(x,0) = F_{\sigma}^{(k)}(x) b_\sigma(x) \ .$$ It follows that there exists $C >0$ and $K_c >0$, depending locally uniformly on $\sigma \in H^4_{\zeta,\mathbb{C}} \cap L^2_{N}$, such that \begin{equation} \label{norm_derf(x,0)31} \norm{\mathbf{H}_\sigma(\cdot,0)}_{L^2_{x \geq c}} \leq C \norm{F_\sigma}_{H^{N+1}}\norm{b_\sigma}_{H^1_{x\geq c}} \leq K_c \norm{\sigma}_{H^4_{\zeta,\mathbb{C}} \cap L^2_{N}}^2 \ , \end{equation} where we used that both $F_{\sigma}^{(k)}$ and $b_\sigma$ are in $H^1_{x \geq c}$. {\em $\mathbf{H}_\sigma$ satisfies $(P2)$.} It follows from \eqref{derf.norm1} and \eqref{norm_derf(x,0)31}. {\em $\mathbf{H}_\sigma$ satisfies $(P3)$.} The real analyticity property follows from Lemma \ref{lem:techLemma} and Proposition \ref{rem:dec_rel}, since for every $0 \leq k \leq N$, $\mathbf{H}_\sigma$ is product of real analytic maps. \\ $(ii)$ {\em $\mathbf{G}_\sigma$ satisfies $(P1)$.} We show that $\mathbf{G}_\sigma \in L^2_{x\geq c} L^2_{y\geq 0}$. 
By Lemma \ref{lem:techLemma} $(A5)$ and Proposition \ref{rem:dec_rel} it follows that \begin{equation} \label{deryf.norm2} \norm{\mathbf{G}_\sigma}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq \norm{F_\sigma}_{H^{N+1}} \norm{F'_\sigma}_{L^1} \leq K_c \norm{\sigma}_{H^4_{\zeta,\mathbb{C}} \cap L^2_{N}}^2 \ , \end{equation} where $K_c >0$ depends locally uniformly on $\sigma \in H^4_{\zeta,\mathbb{C}} \cap L^2_{N}$. One verifies easily that $\mathbf{G}_\sigma \in C^0_{x \geq c} L^2_{y \geq 0}$. \\ In order to prove that $\mathbf{G}_\sigma \in C^0_{x \geq c, y \geq 0}, $ proceed as in Lemma \ref{lem:g.p}. \\ Now we show that $\mathbf{G}_\sigma(\cdot, 0) \in L^2_{x \geq c}$. We evaluate formula \eqref{bG.def0} at $y=0$ getting that $$ \mathbf{G}_\sigma(x,0) = \int_0^{\infty} F_{\sigma}^{(k)}(x+z) F_{\sigma}'(x+z) dz \ . $$ Let $h_\sigma'(x,z) := F_{\sigma}'(x+z)$. By Lemma \ref{lem:techLemma} $(A0)$ and Proposition \ref{rem:dec_rel} one has $$ \norm{h_\sigma'}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq \norm{\langle x \rangle^{1/2} F_{\sigma}'}_{L^2_{x \geq c}} \leq K_c \norm{\sigma}_{H^4_{\zeta,\mathbb{C}} \cap L^2_{N}} \ , $$ where $K_c >0$ can be chosen locally uniformly for $\sigma \in H^4_{\zeta,\mathbb{C}} \cap L^2_{N}$. Thus by Lemma \ref{lem:techLemma} $(A2)$ one gets \begin{equation} \label{dery.f.norm3} \norm{\mathbf{G}_\sigma(\cdot,0)}_{L^2_{x \geq c}} \leq K_c \norm{F_{\sigma}^{(k)}}_{L^2_{x \geq c}}\norm{h_\sigma'}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq K_c \norm{\sigma}_{H^4_{\zeta,\mathbb{C}} \cap L^2_{N}}^2 \ , \end{equation} where $K_c >0$ can be chosen locally uniformly for $\sigma \in H^4_{\zeta,\mathbb{C}} \cap L^2_{N}$. {\em $\mathbf{G}_\sigma$ satisfies $(P2)$.} It follows from \eqref{deryf.norm2} and \eqref{dery.f.norm3}. 
{\em $\mathbf{G}_\sigma$ satisfies $(P3)$.} The real analyticity property follows from Lemma \ref{lem:techLemma} and Proposition \ref{rem:dec_rel}, since for every $0 \leq k\leq N$, $\mathbf{G}_\sigma$ is a composition of real analytic maps.\\ $(iii)$ {\em $\mathbf{F}_\sigma$ satisfies $(P1)$.} By Lemma \ref{lem:Phi} $(i)$, $\mathbf{F}_\sigma \in C^0_{x \geq c} L^2_{y \geq 0} \cap L^2_{x\geq c} L^2_{y\geq 0}$ and $$\norm{\mathbf{F}_\sigma}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq \norm{F_\sigma'}_{L^2_4} \norm{G_\sigma}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq K_c \norm{\sigma}_{H^4_{\zeta,\mathbb{C}} \cap L^2_{N}}^2 \ . $$ Proceeding as in the proof of Lemma \ref{f.prop} $(P1)$ one shows that $\mathbf{F}_\sigma \in C^0_{x \geq c, y \geq 0}$. Since $F_\sigma' \in H^N$, $N \geq 1$, $F_\sigma'$ is a continuous function. Thus we can evaluate $\mathbf{F}_\sigma$ at $y=0$, obtaining $\mathbf{F}_\sigma(x,0) = \int\limits_0^{+\infty} F_\sigma'(x+z) G_\sigma(x,z) \, dz$. By Lemma \ref{lem:techLemma} $(A2)$ we have that \begin{equation*} \norm{\mathbf{F}_\sigma(\cdot,0)}_{L^2_{x \geq c}} \leq \norm{F_\sigma'}_{L^2_{x \geq c}} \norm{G_\sigma}_{L^2_{x\geq c} L^2_{y\geq 0}} \leq K_c \norm{\sigma}_{H^4_{\zeta,\mathbb{C}} \cap L^2_{N}}^2 \ . \end{equation*} The proof that $\mathbf{F}_\sigma$ satisfies $(P2)$ and $(P3)$ follows as in the previous items. We omit the details. \end{proof} \begin{lemma} \label{f_sigma.P} Let $N \geq 1$ be fixed. For every $j_1, j_2 \geq 0$ with $1 \leq j_1 + j_2 \leq N$, the function $f_\sigma^{j_1, j_2}$ defined in \eqref{f_n0} and its derivatives $\partial_y f_\sigma^{j_1, j_2}$, $\partial_x f_\sigma^{j_1, j_2}$ satisfy $(P)$. \end{lemma} \begin{proof} First note that by Lemma \ref{f.prop} the terms $\partial_x^{j_1}\partial_y^{j_2} f_\sigma$ and their derivatives $\partial_x^{j_1+1}\partial_y^{j_2} f_\sigma $, $\partial_x^{j_1}\partial_y^{j_2+1} f_\sigma$ satisfy $(P)$. 
It thus remains to show that \begin{equation} \label{mix.term} \mathbf{F}_\sigma^{k_1, k_2}(x,y) := \int\limits_0^{+\infty} \partial_x^{k_1} F_\sigma(x+y+z) \, \partial_x^{k_2}B_\sigma(x,z) \, dz \ , \qquad k_1 \geq 1, \ k_2 \geq 0, \quad k_1 + k_2 = n \leq N \end{equation} and its derivatives $\partial_y \mathbf{F}_\sigma^{k_1,k_2}, \, \partial_x \mathbf{F}_\sigma^{k_1,k_2} $ satisfy $(P)$. Remark that, by the induction assumption in the proof of Lemma \ref{prop:AnBn}, for all integers $k_1, k_2 \geq 0$ with $k_1+k_2 \leq n$, $\partial_x^{k_1} \partial_y^{k_2} B_\sigma$ satisfies $(P)$.\\ {\em $\mathbf{F}_\sigma^{k_1, k_2}$ satisfies $(P)$}. If $k_1=1$, it follows by Lemma \ref{lem:P} $(iii)$. Let $k_1 >1$. By integration by parts $k_1 -1$ times we obtain \begin{equation} \label{bF.R} \begin{aligned} \mathbf{F}_\sigma^{k_1, k_2}(x,y) = & \sum_{l=1}^{k_1-1} (-1)^l \partial_x^{k_1 - l} F_\sigma(x+y) (\partial_x^{k_2}\partial_z^{l-1} B_\sigma)(x,0) \\ & + (-1)^{k_1-1} \int\limits_0^{+\infty} F_\sigma'(x+y+z) \, \partial_x^{k_2} \partial_z^{k_1-1} B_\sigma(x,z) \, dz \ , \end{aligned} \end{equation} where we used that for $1 \leq l \leq k_1 -1$ one has $F_\sigma^{(k_1 - l)} \in H^1$ $\, [(\partial_x^{k_2}\partial_y^{l-1} B_\sigma)(x,\cdot) \in H^1_{y \geq 0}]$, thus $\lim_{x \to \infty} F_\sigma^{(k_1 - l)}(x) = 0$ $\, [\lim_{y \to \infty} (\partial_x^{k_2}\partial_y^{l-1} B_\sigma)(x,y) = 0]$. Consider the r.h.s. of \eqref{bF.R}. It is a linear combination of terms of the form \eqref{bH.def0} and \eqref{bF.def0}. By the induction assumption, these terms satisfy the hypotheses of Lemma \ref{lem:P} $(i)$ and $(iii)$. 
It follows that $\mathbf{F}_\sigma^{k_1, k_2}$ satisfies $(P)$, and in particular there exists a constant $K_c>0$, depending locally uniformly on $\sigma \in H^4_{\zeta,\mathbb{C}} \cap L^2_{N}$, such that \begin{equation} \label{F.norm} \norm{ \mathbf{F}_\sigma^{k_1, k_2}}_{L^2_{x\geq c} L^2_{y\geq 0}} + \norm{ \mathbf{F}_\sigma^{k_1, k_2}(\cdot,0)}_{L^2_{x \geq c}} \leq K_c \norm{\sigma}_{H^4_{\zeta,\mathbb{C}} \cap L^2_{N}}^2 \ . \end{equation} {\em $\partial_y \mathbf{F}_\sigma^{k_1, k_2}$ satisfies $(P)$.} For $\epsilon \neq 0$ sufficiently small, by integration by parts $k_1$ times we obtain \begin{align*} \frac{\mathbf{F}_\sigma^{k_1, k_2}(x,y + \epsilon) - \mathbf{F}_\sigma^{k_1, k_2}(x,y)}{\epsilon} = & \sum_{l=1}^{k_1}(-1)^{l} \frac{\partial_x^{k_1-l}F_\sigma(x+y + \epsilon) - \partial_x^{k_1-l} F_\sigma(x+y) }{\epsilon} (\partial_x^{k_2}\partial_z^{l-1} B_\sigma)(x,0) \\ & + (-1)^{k_1} \int\limits_0^{+\infty} \frac{ F_\sigma(x+y+\epsilon +z) - F_\sigma(x+y+z) }{\epsilon} \, \partial_x^{k_2} \partial_z^{k_1} B_\sigma(x,z) \, dz \ , \end{align*} where once again we used that for $1 \leq l \leq k_1$ one has $F_\sigma^{(k_1 - l)} \in H^1$ $\, [(\partial_x^{k_2}\partial_y^{l-1} B_\sigma)(x,\cdot) \in H^1_{y \geq 0}]$, thus $\lim_{x \to \infty} F_\sigma^{(k_1 - l)}(x) = 0$ $\, [\lim_{y \to \infty} (\partial_x^{k_2}\partial_y^{l-1} B_\sigma)(x,y) = 0]$. Define also \begin{equation} \begin{aligned} \label{dery.bF} \partial_y \mathbf{F}_\sigma^{k_1, k_2}(x,y) := & \sum_{l=1}^{k_1}(-1)^{l} \, \partial_x^{k_1-l + 1} F_\sigma(x+y) \, (\partial_x^{k_2}\partial_z^{l-1} B_\sigma)(x,0) \\ & + (-1)^{k_1} \int\limits_0^{+\infty} F_\sigma'(x+y+z) \, \partial_x^{k_2} \partial_z^{k_1} B_\sigma(x,z) \, dz \ . \end{aligned} \end{equation} Consider the r.h.s. of equation \eqref{dery.bF}. It is a linear combination of terms of the form \eqref{bH.def0} and \eqref{bF.def0}. By the induction assumption, these terms satisfy the hypotheses of Lemma \ref{lem:P} $(i)$ and $(iii)$. 
It follows that $\partial_y \mathbf{F}_\sigma^{k_1, k_2}$ satisfies $(P)$ and one has \begin{equation} \label{deryF.norm} \norm{\partial_y \mathbf{F}_\sigma^{k_1, k_2}}_{L^2_{x\geq c} L^2_{y\geq 0}} + \norm{\partial_y \mathbf{F}_\sigma^{k_1, k_2}(\cdot, 0)}_{L^2_{x \geq c}} \leq K_c' \norm{\sigma}_{H^4_{\zeta,\mathbb{C}} \cap L^2_{N}}^2 \end{equation} for some constant $K_c'>0$, depending locally uniformly on $\sigma \in H^4_{\zeta,\mathbb{C}} \cap L^2_{N}$. Furthermore one verifies that $$ \lim_{\epsilon \to 0} \frac{\mathbf{F}_\sigma^{k_1, k_2}(x,\cdot + \epsilon) - \mathbf{F}_\sigma^{k_1, k_2}(x,\cdot)}{\epsilon} = \partial_y \mathbf{F}_\sigma^{k_1, k_2}(x,\cdot) \quad \mbox{ in } L^2_{y \geq 0} \ . $$ {\em $\partial_x\mathbf{F}_\sigma^{k_1, k_2}$ satisfies $(P)$.} The proof is similar to the previous case, and the details are omitted. This concludes the proof of the inductive step. \end{proof} \section{Hilbert transform} \label{Hilbert.transf} Define $\mathcal{H} : L^2(\mathbb{R}, \mathbb{C}) \to L^2(\mathbb{R}, \mathbb{C})$ as the Fourier multiplier operator \[\widehat{(\mathcal{H} (v))}(\xi)= -i\operatorname{sign}(\xi)\; \hat{v}(\xi) \ .\] Thus $\mathcal{H}$ is an isometry on $L^2(\mathbb{R}, \mathbb{C})$. It is easy to see that $\left.\mathcal{H}\right|_{H^N_\mathbb{C}}: H^N_\mathbb{C} \to H^N_\mathbb{C}$ is an isometry for any $N\geq 1$ -- cf. \cite{duo}. In case $v\in C^1(\mathbb{R}, \mathbb{C})$ with $\|v'\|_{L^\infty},\|xv(x)\|_{L^\infty}<\infty$, one has \[\mathcal{H}(v)(k)=-\frac{1}{\pi} \lim_{\epsilon \to 0^+} \int_{|k'-k|\geq \epsilon}\frac{v(k')}{k'-k} dk' \] and one obtains the estimate $\mmod{\mathcal{H}(v)(k)} \leq C (\|v'\|_{\infty}+\|xv(x)\|_{\infty})$, where $C>0$ is a constant independent of $v$ and $k$. Let $g \in C^1(\mathbb{R}, \mathbb{R})$ with $\|g'\|_{L^\infty},\|xg(x)\|_{L^\infty}<\infty$. 
Then define for $z\in \mathbb{C}^+:= \{z\in \mathbb{C}: \operatorname{Im}(z)> 0\} $ the function \begin{align}\nonumber f(z):= \frac{1}{\pi i}\int_{-\infty}^\infty \frac{g(s)}{s-z}ds \ . \end{align} Decompose $\frac{1}{s-z}$ into its real and imaginary parts \[\frac{1}{s-z}= \frac{1}{s-a- ib}= \frac{s-a}{(s-a)^2+b^2}+i \frac{b}{(s-a)^2+b^2}\] to get the formulas for the real and imaginary parts of $f(z)$ \begin{align}\label{realpartformula} \operatorname{Re} f(z) =&\frac{1}{\pi }\int_{-\infty}^\infty \ \frac{b}{(s-a)^2+b^2} g(s) ds \ , \\ \label{imaginarypartformula} \operatorname{Im} f(z)=& \frac{-1}{\pi }\int_{-\infty}^\infty \frac{s-a}{(s-a)^2+b^2 }g(s)ds \ . \end{align} The following lemma is well known and can be found in \cite{duo}. \begin{lemma}\label{hilbertdeltausw} The function $f$ is analytic and admits a continuous extension to the real line. Furthermore it has the following properties for any $a \in \mathbb{R}$: \begin{enumerate}[(i)] \item $ \lim_{b\to 0^+}\operatorname{Im} f(a+bi)=\mathcal{H}(g)(a)$. \item $ \lim_{b\to 0^+}\operatorname{Re} f(a+bi) = g(a)$. \item There exists $C>0$ such that $|f(z)| \leq \frac{C}{1+|z|}, $ $\, \forall \, z \in \{z: \operatorname{Im} z \geq 0 \}$. \item If $\tilde f(z)$ is a continuous function on $\operatorname{Im} z \geq 0$ which is analytic on $\operatorname{Im} z > 0$ and satisfies $\operatorname{Re} \tilde f_{|\mathbb{R}}= g$ and $|\tilde f(z)| = O(\frac{1}{|z|})$ as $|z|\to \infty$, then $\tilde f = f$. \end{enumerate} \end{lemma} The next lemma follows from the commutator estimates due to Calder\'on \cite{calderon}: \begin{lemma}[\cite{calderon}] \label{lem:comm.est} Let $b:\mathbb{R} \to \mathbb{R}$ have its first-order derivative in $L^\infty$. For any $p \in (1, \infty)$ there exists $C>0$, such that $$ \norm{\left[ \mathcal{H}, b \right] \partial_x g}_{L^p} \leq C \norm{g}_{L^p} \ . 
$$ \end{lemma} We apply this lemma to prove the following result: \begin{lemma} \label{lem:hilb.zeta} Let $M \in \mathbb{Z}_{\geq 1}$ be fixed. Then $\mathcal{H}: H^{M}_{\zeta,\C} \to H^{M}_{\zeta,\C}$ is a bounded linear operator. \end{lemma} \begin{proof} Let $f \in H^{M}_{\zeta,\C}$. As the Hilbert transform commutes with the derivatives, we have that $\mathcal{H}(f) \in H^{M-1}_\mathbb{C}$. Next we show that if $\zeta \partial_k^M f \in L^2$, then $\zeta \partial_k^M \mathcal{H}(f) = \zeta \mathcal{H}(\partial_k^M f) \in L^2$. By Lemma \ref{lem:comm.est} with $p=2$, $g = \partial_k^{M-1}f$ and $b = \zeta$, we have that $$ \norm{\zeta \mathcal{H}(\partial_k^M f)}_{L^2} \leq \norm{ \mathcal{H}(\zeta \partial_k^M f)}_{L^2} + \norm{\left[ \mathcal{H}, \zeta \right] \partial_k^M f }_{L^2} \leq \norm{f}_{H^{M}_{\zeta,\C}} + C \norm{\partial_k^{M-1}f}_{L^2} < \infty \ . $$ \end{proof} {\bf Acknowledgments} We are particularly grateful to Thomas Kappeler for his continued support and the numerous discussions about the paper. The first author is supported by the Swiss National Science Foundation. This paper is part of the first author's PhD thesis. \end{document}
Using value of information methods to determine the optimal sample size for effectiveness trials of alcohol interventions for HIV-infected patients in East Africa

Lingfeng Li, Jennifer Uyei, Kimberly A. Nucifora, Jason Kessler, Elizabeth R. Stevens, Kendall Bryant & R. Scott Braithwaite

Unhealthy alcohol consumption exacerbates the HIV epidemic in East Africa. The potential benefit of new trials that test the effectiveness of alcohol interventions cannot be evaluated with traditional sample size methods. Given the competition for health care resources in East Africa, this study aims to determine the optimal sample size given the opportunity cost of potentially re-allocating trial funds towards cost-effective alcohol treatments. We used value of information methods to determine the optimal sample size by maximizing the expected net benefit of sampling for a hypothetical 2-arm intervention vs. control randomized trial, across ranges of a policymaker's willingness-to-pay for the health benefit of an intervention. Probability distributions describing the relative likelihood of alternative trial results were imputed based on prior studies. In the base case, the policymaker's willingness-to-pay was based on a simultaneously resource-constrained priority (routine HIV virological testing). Sensitivity analyses were performed for various willingness-to-pay thresholds and intervention durations. A new effectiveness trial accounting for the benefit of more precise decision-making on alcohol intervention implementation would yield East Africa an expected net benefit of $67,000, with an optimal sample size of 100 persons per arm, under the base-case willingness-to-pay threshold and an intervention duration of 20 years. At both a conservative willingness-to-pay of 1 x GDP/capita and a high willingness-to-pay of 3 x GDP/capita for an additional health gain added by an alcohol intervention, a new trial was not recommended due to limited decision uncertainty. 
When intervention duration was 10 or 5 years, there was no return on investment across suggested willingness-to-pay thresholds. Value of information methods could be used as an alternative approach to assist the efficient design of alcohol trials. If reducing unhealthy alcohol use is a long-term goal for HIV programs in East Africa, additional new trials with optimal sample sizes ranging from 100 to 250 persons per arm could save the opportunity cost of implementing less cost-effective alcohol strategies in HIV prevention. Otherwise, conducting a new trial is not recommended. Unhealthy alcohol consumption is common in East Africa [1] and multiple studies have shown that unhealthy alcohol consumption has exacerbated the HIV epidemic [2, 3]. For example, binge drinking and heavy alcohol consumption are associated with increased quantity of sexual partners, less likelihood of condom use, and increased commercial sex trade participation [4,5,6,7,8]. Furthermore, alcohol consumption also has a negative impact on adherence to antiretroviral therapies (ART) [7, 9,10,11,12,13,14,15,16,17,18,19]. Therefore, it is recommended that alcohol interventions that reduce harmful alcohol use among HIV infected patients in East Africa should be developed [3, 18], tested [19], and integrated as a part of HIV prevention and treatment programs [2, 6, 7, 20, 21], especially considering the fact that sub-Saharan African countries still account for almost 70% of new infections in the global HIV epidemic [22]. Even though a few alcohol interventions, such as brief interventions [23,24,25,26,27] and cognitive-behavioral therapy [28], have shown positive results in reducing alcohol consumption or increasing alcohol abstinence in prior studies, the prior evidence on the effectiveness of alcohol interventions may not be sufficient to eliminate decision uncertainty with regard to the implementation of such interventions among HIV-infected patients. 
Therefore, new trials to gain additional information/evidence on the effectiveness of alcohol interventions are likely necessary. Note that we define prior information or prior evidence as the effectiveness of an alcohol intervention studied in prior trials, and define additional information or additional evidence as the information that will be concluded from new trials. A new trial could benefit HIV prevention and treatment programs in East Africa by providing additional evidence on alcohol interventions and thereby reducing decision uncertainty regarding the implementation of alcohol interventions. However, traditional sampling approaches do not account for the potential benefit of such trials, since they are primarily based on determining the minimum sample size required to detect the desired intervention efficacy for given Type I and Type II error probabilities [29, 30]. Value of information (VOI) methodology was introduced as an alternative to the traditional sampling methods, based on the notion that new trial information is valuable because the benefit of generating additional evidence may exceed the opportunity cost of potentially re-allocating trial funds towards treatments with uncertain cost-effectiveness [31, 32], which is critical in resource-limited settings. Specifically, VOI estimates the optimal sample size by maximizing the expected net benefit of sampling (ENBS), which is the difference between the expected value of sample information (EVSI) from a new trial and the cost of sampling. Applying the VOI method to identify the optimal sample size is favored in the literature when trial information could yield clinically actionable inferences, and this method should be considered in the early phase of trial design [29, 30, 33, 34]. For example, VOI calculations have previously been applied to determine the optimal sample size for future trials on catheter securement devices to inform decision-making on the adoption of the devices [30]. 
Another advantage of VOI is that it could pre-determine the necessity of conducting a new trial before the sample size calculation for a new trial. VOI could provide an upper bound estimation on the potential benefit of a new trial by determining the expected value of perfect information (EVPI) [33, 35]. Therefore, conducting a new trial would be necessary only when EVPI is positive. For example, Micieli et al. used EVPI to quantify the uncertainty on the adoption of left atrial appendage occlusion devices over dabigatran or warfarin in atrial fibrillation and concluded that additional trials on the relative efficacy of stroke reduction between the two strategies would be necessary due to high EVPI [34]. Accordingly, the objective of this study was to use VOI methods to determine the necessity of a new clinical trial that aims to address the effectiveness of alcohol interventions for HIV-infected patients in East Africa and to identify the optimal sample size for such trials if the necessary condition is met. 
Assumptions regarding the hypothetical RCT

To be concordant with a prior high-evidence-level alcohol study among HIV-infected patients in East Africa [36] and four recent National Institutes of Health (NIH) funded RCTs in East Africa [37,38,39,40], we studied a hypothetical randomized controlled trial (RCT) that aimed to investigate the effect size of an alcohol intervention for HIV-infected patients in East Africa with the following assumptions: (1) Participants will be equally randomized to an intervention arm and a control arm; (2) Participant eligibility criteria are hazardous or binge drinkers (score ≥ 3 on the Alcohol Use Disorders Identification Test (AUDIT-C), a brief screen for heavy drinking and/or active alcohol abuse or dependence [36, 41,42,43]) and being ART-eligible or ART-initiated in the past 12 months; (3) The trial could be funded for 6–36 months; (4) Baseline trial costs, including program costs, marginal treatment costs, and reporting costs, were aggregated into a marginal cost per sample that was also based on the four RCTs (Table 1). Note that the primary interest of this work was to investigate the evidence necessary to reduce uncertainty regarding the effect size of an alcohol intervention. The effect size was measured as the relative risk reduction of unhealthy alcohol consumption.

Table 1 Key model inputs

Integrating prior information with new trial information: a Bayesian procedure

Prior information provided by the alcohol trials that test the effect size of an alcohol intervention is defined as the information already available before a new trial, which may not be sufficient to support optimal decision-making. We specified the certitude regarding an alcohol intervention's effect size prior to additional information (i.e., the "prior distribution") based on results of Papas and colleagues, who evaluated a cognitive behavioral therapy-based intervention for HIV-positive outpatients with unhealthy alcohol use in Kenya [28, 36]. 
The intervention was based on social cognitive theory [44] and was designed to increase alcohol abstinence by teaching skills to mitigate substance use circumstances caused by stress or other problems. The intervention had been proven effective during the treatment and follow-up phases in Papas' study, and the effect size information reported in the study was used to estimate the prior distribution below. Specifically, in the prior distribution, e0 is the intervention effect size sampled from the prior beta distribution, ē is the expected prior effect size, and n0 is the sample size of the prior study (Table 1).

Prior distribution: e0 ~ Beta(n0·ē, n0·(1 − ē))

If a new trial with sample size n is initiated to gain additional information about the intervention effect size, in order to predict the sample statistic αD for the new trial, we used a conjugate pair of the prior distribution to construct a predictive distribution, summarized below.

Predictive distribution: αD ~ Binomial(n, e0)

In order to utilize both the prior and the new information on intervention effect sizes, an updated distribution was obtained by integrating the prior distribution and the predictive distribution through Bayesian updating, where e1 is the updated effect size.

Updated distribution: e1 ~ Beta(n0·ē + αD, n0 + n − n0·ē − αD)

Given a specific sample size n for the new trial, we ran I iterations to generate statistics αD(i) (i = 1, …, I). For each αD(i), we ran J iterations to generate e1(j) | αD(i) (j = 1, …, J). We used a total of one million (I × J) iterations to output a set of updated effect sizes to ensure the robustness of our VOI results. More details about the approach can be found elsewhere [31, 32]. 
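This conjugate Beta–Binomial updating step can be sketched in a few lines of NumPy. The prior mean ē, prior sample size n0, and trial size n below are hypothetical placeholders for illustration only, not the values from Table 1 or from Papas' study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical placeholder values -- not the actual Table 1 inputs
e_bar, n0 = 0.30, 75   # assumed prior mean effect size and prior sample size
n = 200                # assumed total sample size of the hypothetical new trial
draws = 10_000

# Prior distribution: e0 ~ Beta(n0*e_bar, n0*(1 - e_bar))
e0 = rng.beta(n0 * e_bar, n0 * (1 - e_bar), size=draws)

# Predictive distribution: alpha_D | e0 ~ Binomial(n, e0)
alpha_D = rng.binomial(n, e0)

# Updated distribution: e1 | alpha_D ~ Beta(n0*e_bar + alpha_D, n0 + n - n0*e_bar - alpha_D)
e1 = rng.beta(n0 * e_bar + alpha_D, n0 + n - n0 * e_bar - alpha_D)

# By conjugacy the updated draws average back to the prior mean, but each
# draw is conditioned on one simulated trial statistic alpha_D.
print(round(float(e0.mean()), 3), round(float(e1.mean()), 3))
```

Each e1 draw plays the role of one of the I × J Monte Carlo iterations that feed the HIV model in the next step.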
Incorporating the integrated information into a decision model of HIV treatment and prevention strategies As previously discussed, alcohol interventions among HIV infected patients might reduce HIV transmission and improve health, and additional information on the effectiveness of such interventions comes with the opportunity cost of not spending those funds on the interventions themselves, which are potentially cost-effective. We used a previously published and validated HIV model to test the impact of the updated distribution of intervention effect size e1 on the decision of adopting (s = 1) versus not adopting (s = 0) the alcohol intervention [45, 46]. The model outcomes necessary for the subsequent cost-effectiveness evaluations and VOI calculations were quality-adjusted life years QALY(s, e1) and costs cost(s, e1) for the intervention scenario (s = 1) and the null scenario (s = 0) respectively. A detailed description of the HIV model is reported elsewhere [45, 46]. Briefly, the HIV model contains an HIV progression module and an HIV transmission module. The HIV progression module approximates the health outcomes by evaluating the change of CD4 cell counts and HIV-1 viral load for each HIV-infected individual. This module also simulates the effect of ART and takes the major causes of ART failure into account, such as ART non-adherence, non-adherence related genotypic resistance, and medication toxicity [46]. The individual-based HIV progression module interacts with a population-based compartmental HIV transmission module which simulates heterosexual transmission among the population in East Africa. The model compartments are differentiated based on health characteristics as well as behavioral risk characteristics. A hypothetical population switches compartments based on change to their health and behavior status. 
Probability of transmission is a function of multiple factors, including the rate of acquiring new partners, duration of partnership, frequency of sexual contact within a partnership, and likelihood of condom use. People in the compartments of unhealthy alcohol consumption were modeled with three major effects: an increased risk of condom nonuse, an increased risk of ART non-adherence, and an increased STI prevalence (Table 1). The alcohol intervention reduces the HIV infection rate by transferring people from the compartments representing unhealthy alcohol consumption to those without unhealthy alcohol consumption.

Assessing the value of information (VOI) added by an RCT

In value of information (VOI) methods, the expected value of perfect information (EVPI) represents an upper-bound estimate of the potential benefit of a new trial, under the assumption that new trial information would be perfect and would let policymakers eliminate all decision uncertainty. Therefore, EVPI is used in economic evaluation to determine the necessity of conducting a new trial. Mathematically, it is the difference between the value of the decision made based on perfect information (e.g., after an RCT without bias and with infinite sample size) and the value of the decision made based on prior information (e.g., before an RCT) [31, 32]:

EVPI = E_{e1}[ max_s NB(s, e1) ] − max_s E_{e1}[ NB(s, e1) ]

A positive EVPI is a necessary condition for conducting a new RCT. Net benefit was calculated using the QALY(s, e1) and cost(s, e1) outcomes generated by the HIV model, using the following equation [31, 32]:

NB(s, e1) = QALY(s, e1) × WTP − cost(s, e1)

WTP is the decision maker's willingness-to-pay for incremental health benefit. 
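With Monte Carlo draws of the net benefit, the EVPI expression reduces to "mean of the per-draw maximum minus maximum of the means". A minimal sketch, where a hypothetical linear net-benefit function stands in for the HIV simulation model (the break-even effect size of 0.30, the scale factor, and the prior parameters are assumptions, not model outputs):

```python
import numpy as np

rng = np.random.default_rng(1)

wtp = 2473.0                          # base-case WTP, US$/QALY
e1 = rng.beta(30, 70, size=100_000)   # assumed uncertainty on the effect size

# Toy stand-in for the HIV model's net benefit: s = 0 is the reference
# (NB = 0); s = 1 is assumed to break even at an effect size of 0.30.
nb0 = np.zeros_like(e1)
nb1 = wtp * 10.0 * (e1 - 0.30)

# EVPI = E_{e1}[max_s NB(s, e1)] - max_s E_{e1}[NB(s, e1)]
evpi = float(np.maximum(nb0, nb1).mean() - max(nb0.mean(), nb1.mean()))
print(f"EVPI per decision unit: ${evpi:,.0f}")
```

A positive value signals residual decision uncertainty worth paying to resolve; EVPI collapses to zero only when one strategy dominates for essentially every draw.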
The World Health Organization's (WHO) Choosing Interventions that are Cost-Effective (CHOICE) program recommends benchmarking willingness-to-pay (WTP) based on gross domestic product (GDP) per capita [47], in particular between 1 and 3 times the annual GDP per capita. Alternatively, and in greater accord with economic theory, WTP may be inferred from a desired program that is not fully implemented because of resource constraints (e.g., routine viral load testing). While EVPI estimates an upper-bound benefit when information is perfectly known after a new trial (i.e., the sample size of the new trial approaches infinity), the expected value of sample information (EVSI) estimates the potential benefit of a new trial with a finite sample size. We then calculated the EVSI, which compares the net monetary benefit of a decision made with the updated information (e.g., after the RCT) and the benefit of a decision made with the prior information (e.g., before the RCT):

EVSI = E_{αD}[ max_s E_{e1 | αD}[ NB(s, e1) ] ] − max_s E_{e1}[ NB(s, e1) ]

The first term is the expected net benefit of the decision made based on the updated information (e.g., after the RCT), and the second term is the expected net benefit of the decision made with the prior information (e.g., before the RCT) [31, 32]. In addition to evaluating expected trial benefit, the costs associated with conducting such a trial should be weighed against the potential benefit in trial design. 
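The EVSI calculation nests the Bayesian update inside an outer expectation over possible trial results. When the net benefit is linear in e1, the inner expectation collapses to the posterior mean, which gives a compact sketch; the prior parameters, trial size, and the linear break-even-at-0.30 net-benefit stand-in for the HIV model are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical values: prior mean, prior sample size, new-trial size, WTP
e_bar, n0, n, wtp = 0.30, 75, 200, 2473.0

def nb1(e):
    # Toy net benefit of adopting (s = 1); s = 0 is the zero reference.
    # Assumed to break even at an effect size of 0.30.
    return wtp * 10.0 * (e - 0.30)

# Outer expectation: predictive draws of the trial statistic alpha_D
e0 = rng.beta(n0 * e_bar, n0 * (1 - e_bar), size=50_000)
alpha_D = rng.binomial(n, e0)

# Inner expectation: for a net benefit linear in e1, E[NB | alpha_D]
# only needs the posterior mean of e1 given alpha_D.
post_mean = (n0 * e_bar + alpha_D) / (n0 + n)

# EVSI = E_{alpha_D}[max_s E[NB | alpha_D]] - max_s E[NB]
evsi = float(np.maximum(0.0, nb1(post_mean)).mean() - max(0.0, nb1(e0).mean()))
print(f"EVSI per decision unit: ${evsi:,.0f}")
```

For a nonlinear model like the HIV simulation, the inner expectation would itself be estimated by Monte Carlo, which is exactly the two-level I × J scheme described earlier.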
The expected net benefit of sampling (ENBS) values the expected net benefit of a new trial as the difference between the monetized expected value of sample information (EVSI) and the marginal investment in such a trial [31, 32]:

ENBS = EVSI − CS = EVSI − c̄ × n

In the ENBS equation, CS is the cost of sampling, c̄ is the marginal cost per sample (Table 1), and n is the sample size of the trial. As the sample size grows larger, the incremental certitude from information, represented by EVSI, grows smaller (i.e., diminishing returns), whereas the incremental cost of conducting the trial may not. As long as EVSI, the value of conducting the new RCT, exceeds the corresponding study cost (ENBS > 0), the return on investment is positive. The optimal sample size is the one at which ENBS is maximized. Since ENBS is greatly influenced by policymakers' willingness-to-pay (WTP), we performed a sensitivity analysis to evaluate how the optimal results change over a series of WTP benchmarks. WTP was measured using the standard metric of US$ per additional quality-adjusted life year (QALY), a measure that aggregates additional quality and quantity of life. We also performed a sensitivity analysis across three intervention duration scenarios: long, medium, and short (20 years, 10 years, and 5 years respectively).

Cost-effectiveness acceptability based on prior information

Given the QALY and cost outcomes estimated by the HIV model, we applied cost-effectiveness acceptability curves [48] to identify where the decision uncertainty was greatest and how it varied by willingness-to-pay threshold. In Fig. 
1a, when WTP threshold was greater than $3200/QALY or smaller than $900/QALY (shaded area), there would be no decision uncertainty with regard to choosing a more cost-effective strategy: prior evidence suggested that policymakers should adopt the intervention when WTP > $3200/QALY or should not adopt the intervention when WTP < $900/QALY. In Fig. 1a, decision uncertainty existed in the unshaded area, and the greatest decision uncertainty arose when WTP = $1710/QALY. In this case, additional evidence from a new trial might be necessary to increase the certitude. Similarly, if we implement a study for 10 or 5 years (Fig. 1b and c), the greatest uncertainty exists when WTP = $8000/QALY or $31,000/QALY respectively. Given the distributions of decision uncertainty in unshaded areas in Fig. 1, EVPI can be used to quantify the consequence of the decision uncertainty (or the health benefit of addressing the uncertainty) in a monetary format. Cost-effectiveness acceptability curves when strategy implementation durations were 20 (a), 10 (b), and 5 years (c) Expected value of perfect information We used EVPI (Fig. 2) to quantify the uncertainties in Fig. 1, and the EVPI varied substantially over different WTPs and different durations of intervention implementation. When implementation duration was assumed to be 20 years, EVPI was maximized ($14.8 million) at a WTP of $1710/QALY, well within the suggested range of WTPs for East Africa ($1014/QALY to $3042/QALY), suggesting that a new RCT could yield valuable information. However, at shorter durations, the minimum WTP required to produce a positive EVSI fell outside the recommended range rendering the prospect of conducting a trial too expensive and not a good return on investment. For a duration of 10-years, EVPI was positive when WTP was greater than $4500/QALY and maximized ($5.8 million) at a WTP of $8000/QALY. 
For a duration of 5 years, EVPI was positive when WTP was greater than $18,000/QALY and maximized ($3.4 million) at a WTP of $31,000/QALY. Since EVPI is the upper-bound estimate of the potential benefit of conducting a new trial, our EVPI results in Fig. 2 indicated that it might be worthwhile to conduct a new trial when the intervention implementation duration is 20 years. However, when the intervention duration was set to 10 or 5 years, EVPI was zero across the WHO recommended range of WTPs for East Africa ($1014/QALY to $3042/QALY) [47], suggesting that an RCT should not be conducted under these two scenarios.

EVPI curves when the alcohol intervention durations were 20, 10, and 5 years

Optimal sample size estimation - base case

EVSI and ENBS were calculated to estimate an optimal sample size for a new study after the necessary condition was satisfied (EVPI > 0). At the baseline intervention duration of 20 years and baseline WTP equal to $2473/QALY (the incremental cost per QALY of implementing routine viral load testing for HIV-infected patients in East Africa), the incremental cost of sampling for the trial grew larger as the sample size grew, whereas the EVSI produced by the trial showed diminishing returns (Fig. 3). Thus, the optimal sample size was found at the sample size where ENBS was maximized in Fig. 3. Specifically, the optimal sample size for the new RCT was 200 (100 per arm) and the corresponding maximum ENBS was $67 thousand US dollars (Fig. 3) for the base-case scenario.

EVSI, ENBS, and cost of sampling curves for the base case

Optimal sample size estimation - sensitivity analysis

For a scenario using a conservative WTP (1 x GDP/capita of East Africa, $1014/QALY) and assuming an implementation duration of 20 years (Table 2), conducting a new trial was not recommended since the return on investment was negative (ENBS < 0). 
Conducting a new trial was also not suggested when policymakers were willing to pay more for an additional health gain, using an upper-bound WTP of 3 X GDP/capita ($3042/QALY) and again assuming an implementation duration of 20 years (Table 2). However, setting WTP equal to the incremental cost-effectiveness ratio (ICER) of the alcohol intervention ($1710/QALY, within the lower and upper WTP bounds but distinct from our base case assumption), ENBS rose to more than $11 million and the corresponding optimal sample size reached 500 (250 per arm).

Table 2 Optimal sample sizes and maximum ENBS values for WTP benchmarks

Our results suggest that new RCTs to test interventions aimed at increasing alcohol abstinence among HIV-infected patients in East Africa are worthwhile investments provided that policymakers intend to implement the intervention for a longer duration. Under such circumstances, and assuming a WTP equal to the ICER of a competing resource-constrained priority (routine virological testing for HIV-infected patients), a new RCT with the optimal sample size of 200 (100 per arm) would yield an expected net benefit of $67 thousand for East Africa. When the alcohol intervention was assumed to be implemented for 10 or 5 years, an additional RCT would not yield information of favorable value across a plausible range of WTPs, and therefore our analyses suggest that it should not be conducted. In these scenarios, decision uncertainty was limited, and standard care was always more cost-effective than the alcohol intervention given resource constraints in East Africa. Notably, WTPs higher than the plausible range would result in RCTs having favorable value even with implementation durations of 10 years or 5 years.
It is important to be mindful of this because, even though the intervention duration is shorter, the opportunity cost of choosing a less cost-effective decision could still be high if decision makers are willing to pay a higher price for a QALY gained from the intervention. The maximum ENBS values for intervention durations of 20 years, 10 years, and 5 years were $11 million, $5 million, and $3 million respectively (Table 2), illustrating that longer implementation of the alcohol intervention could result in far greater health gains. This is because (1) the benefit of the alcohol intervention is unlikely to be fully captured at a population level if the implementation duration is short; and (2) the selection of other HIV strategies, other than the alcohol intervention, could result in more health gains in an East African population given the same healthcare research budget. This study has several limitations. First, even though our VOI analyses and HIV simulation model are robust, there is still stochastic noise that cannot be eliminated, owing to the considerable computational complexity. Second, due to the lack of evidence on the effectiveness of alcohol interventions among HIV-infected patients in East Africa, the prior information was primarily based on one study [36]; however, our VOI model structure allows it to be updated if additional evidence becomes available. Third, decision uncertainty caused by cost variables was not addressed in this study. In summary, we identified distributions of decision uncertainty regarding the adoption of alcohol interventions for HIV-infected patients in East Africa over a range of willingness-to-pay thresholds, quantified that uncertainty, and specified optimal sample sizes using a VOI approach rather than the minimum statistical power required to detect a pre-specified effect size.
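The per-person EVPI underlying the summary above has a standard Monte Carlo form: the expected net monetary benefit achievable with perfect information minus that of the best decision under current information. A minimal sketch follows; the probability distributions and all numeric values are illustrative assumptions, not outputs of the article's simulation model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed probabilistic draws of the intervention's incremental effect and
# cost versus standard care (illustrative values only).
d_qaly = rng.normal(0.02, 0.05, size=200_000)  # incremental QALYs per person
d_cost = rng.normal(35.0, 10.0, size=200_000)  # incremental cost per person ($)

def evpi_per_person(wtp):
    """EVPI = E[max over strategies of NMB] - max over strategies of E[NMB],
    where net monetary benefit NMB = wtp * effect - cost (baseline NMB = 0)."""
    nmb = wtp * d_qaly - d_cost
    with_perfect_info = np.maximum(nmb, 0.0).mean()  # pick the winner per draw
    with_current_info = max(nmb.mean(), 0.0)         # pick the winner on average
    return with_perfect_info - with_current_info
```

EVPI is nonnegative by construction and becomes negligible at WTPs where the adoption decision is nearly certain. Population-level EVPI then scales this per-person value by the affected population and the implementation duration, which is why shorter durations shrink the value of new research in the analysis above.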
In situations in which trial information is likely to yield clinically actionable inferences of health importance, the VOI approach leads to larger sample sizes than power-based estimates, equivalent to requiring p-values below 0.05. In situations in which trial information is unlikely to yield clinically actionable inferences of health importance, the approach leads to smaller sample sizes than power-based estimates, equivalent to allowing p-values above 0.05. Systematic application of this approach to trial design questions would be expected to produce greater health benefit from the resources available for conducting research. Value of information methods can be used to determine the optimal sample sizes of alcohol trials by reducing the risk of implementing less cost-effective alcohol strategies for HIV programs in East Africa, and they can serve as alternative approaches for the design of new trials when health care resources are limited. If reducing unhealthy alcohol use is a long-term goal for HIV programs in East Africa, additional new trials with optimal sample sizes ranging from 100 to 250 per arm could save the opportunity cost of implementing less cost-effective alcohol strategies. Otherwise, conducting a new trial is not recommended.

ENBS: Expected net benefit of sampling
EVPI: Expected value of perfect information
EVSI: Expected value of sample information
QALY: Quality-adjusted life year
STIs: Sexually transmitted infections
VOI: Value of information
WTP: Willingness-to-pay

Global Status Report on Alcohol and Health. [http://www.who.int/substance_abuse/publications/global_alcohol_report/en/]. Accessed 10 July 2018.
Sundararajan R, Wyatt MA, Woolf-King S, Pisarski EE, Emenyonu N, Muyindike WR, Hahn JA, Ware NC. Qualitative study of changes in alcohol use among HIV-infected adults entering care and treatment for HIV/AIDS in rural southwest Uganda. AIDS Behav. 2015;19(4):732–41.
Kiwanuka N, Ssetaala A, Nalutaaya A, Mpendo J, Wambuzi M, Nanvubya A, Sigirenda S, Kitandwe PK, Nielsen LE, Balyegisawa A, et al.
High incidence of HIV-1 infection in a general population of fishing communities around Lake Victoria, Uganda. PLoS One. 2014;9(5):e94932.
Woolf-King SE, Maisto SA. Alcohol use and high-risk sexual behavior in sub-Saharan Africa: a narrative review. Arch Sex Behav. 2011;40(1):17–42.
Kalichman SC, Simbayi LC, Kaufman M, Cain D, Jooste S. Alcohol use and sexual risks for HIV/AIDS in sub-Saharan Africa: systematic review of empirical findings. Prev Sci. 2007;8(2):141–51.
Chimoyi LA, Musenge E. Spatial analysis of factors associated with HIV infection among young people in Uganda, 2011. BMC Public Health. 2014;14:555.
Mbonye M, Rutakumwa R, Weiss H, Seeley J. Alcohol consumption and high risk sexual behaviour among female sex workers in Uganda. Afr J AIDS Res. 2014;13(2):145–51.
Chersich MF, Bosire W, King'ola N, Temmerman M, Luchters S. Effects of hazardous and harmful alcohol use on HIV incidence and sexual behaviour: a cohort study of Kenyan female sex workers. Glob Health. 2014;10:22.
Hendershot CS, Stoner SA, Pantalone DW, Simoni JM. Alcohol use and antiretroviral adherence: review and meta-analysis. J Acquir Immune Defic Syndr. 2009;52(2):180–202.
Bhat VG, Ramburuth M, Singh M, Titi O, Antony AP, Chiya L, Irusen EM, Mtyapi PP, Mofoka ME, Zibeke A, et al. Factors associated with poor adherence to anti-retroviral therapy in patients attending a rural health centre in South Africa. Eur J Clin Microbiol Infect Dis. 2010;29(8):947–53.
Do NT, Phiri K, Bussmann H, Gaolathe T, Marlink RG, Wester CW. Psychosocial factors affecting medication adherence among HIV-1 infected adults receiving combination antiretroviral therapy (cART) in Botswana. AIDS Res Hum Retrovir. 2010;26(6):685–91.
Kader R, Seedat S, Govender R, Koch JR, Parry CD. Hazardous and harmful use of alcohol and/or other drugs and health status among South African patients attending HIV clinics. AIDS Behav. 2014;18(3):525–34.
Haberer JE, Baeten JM, Campbell J, Wangisi J, Katabira E, Ronald A, Tumwesigye E, Psaros C, Safren SA, Ware NC, et al. Adherence to antiretroviral prophylaxis for HIV prevention: a substudy cohort within a clinical trial of serodiscordant couples in East Africa. PLoS Med. 2013;10(9):e1001511.
Van geertruyden JP, Woelk G, Mukumbi H, Ryder R, Colebunders R. Alcohol and antiretroviral adherence? What about Africa? J Acquir Immune Defic Syndr. 2010;54(4):e10.
Pefura-Yone EW, Soh E, Kengne AP, Balkissou AD, Kuaban C. Non-adherence to antiretroviral therapy in Yaounde: prevalence, determinants and the concordance of two screening criteria. J Infect Public Health. 2013;6(4):307–15.
Jaquet A, Ekouevi DK, Bashi J, Aboubakrine M, Messou E, Maiga M, Traore HA, Zannou MD, Guehi C, Ba-Gomis FO, et al. Alcohol use and non-adherence to antiretroviral therapy in HIV-infected patients in West Africa. Addiction. 2010;105(8):1416–21.
Naidoo P, Peltzer K, Louw J, Matseke G, McHunu G, Tutshana B. Predictors of tuberculosis (TB) and antiretroviral (ARV) medication non-adherence in public primary care patients in South Africa: a cross sectional study. BMC Public Health. 2013;13:396.
Denison JA, Koole O, Tsui S, Menten J, Torpey K, van Praag E, Mukadi YD, Colebunders R, Auld AF, Agolory S, et al. Incomplete adherence among treatment-experienced adults on antiretroviral therapy in Tanzania, Uganda and Zambia. AIDS. 2015;29(3):361–71.
Wandera B, Tumwesigye NM, Nankabirwa JI, Kambugu AD, Parkes-Ratanshi R, Mafigiri DK, Kapiga S, Sethi AK. Alcohol consumption among HIV-infected persons in a large urban HIV Clinic in Kampala Uganda: a constellation of harmful behaviors. PLoS One. 2015;10(5):e0126236.
Hahn JA, Fatch R, Wanyenze RK, Baveewo S, Kamya MR, Bangsberg DR, Coates TJ. Decreases in self-reported alcohol consumption following HIV counseling and testing at Mulago hospital, Kampala, Uganda. BMC Infect Dis. 2014;14:403.
Medley A, Seth P, Pathak S, Howard AA, DeLuca N, Matiko E, Mwinyi A, Katuta F, Sheriff M, Makyao N, et al. Alcohol use and its association with HIV risk behaviors among a cohort of patients attending HIV clinical care in Tanzania, Kenya, and Namibia. AIDS Care. 2014;26(10):1288–97.
Fact sheet 2014. [http://www.unaids.org/sites/default/files/en/media/unaids/contentassets/documents/factsheet/2014/20140716_FactSheet_en.pdf]. Accessed 10 July 2018.
WHO. A cross-national trial of brief interventions with heavy drinkers. Am J Public Health. 1996;86(7):948–55.
Babor TF, Higgins-Biddle JC. Alcohol screening and brief intervention: dissemination strategies for medical practice and public health. Addiction. 2000;95(5):677–86.
Moyer A, Finney JW, Swearingen CE, Vergun P. Brief interventions for alcohol problems: a meta-analytic review of controlled investigations in treatment-seeking and non-treatment-seeking populations. Addiction. 2002;97(3):279–92.
Kaner EF, Beyer F, Dickinson HO, Pienaar E, Campbell F, Schlesinger C, Heather N, Saunders J, Burnand B. Effectiveness of brief alcohol interventions in primary care populations. Cochrane Database Syst Rev. 2007;2:CD004148.
L'Engle KL, Mwarogo P, Kingola N, Sinkele W, Weiner DH. A randomized controlled trial of a brief intervention to reduce alcohol use among female sex workers in Mombasa, Kenya. J Acquir Immune Defic Syndr. 2014;67(4):446–53.
Papas RK, Sidle JE, Martino S, Baliddawa JB, Songole R, Omolo OE, Gakinya BN, Mwaniki MM, Adina JO, Nafula T, et al. Systematic cultural adaptation of cognitive-behavioral therapy to reduce alcohol use among HIV-infected outpatients in western Kenya. AIDS Behav. 2010;14(3):669–78.
Willan AR, Pinto EM. The value of information and optimal clinical trial design. Stat Med. 2005;24(12):1791–806.
Tuffaha HW, Reynolds H, Gordon LG, Rickard CM, Scuffham PA. Value of information analysis optimizing future trial design from a pilot study on catheter securement devices. Clin Trials.
2014;11(6):648–56.
Ades AE, Lu G, Claxton K. Expected value of sample information calculations in medical decision modeling. Med Decis Mak. 2004;24(2):207–27.
Briggs A, Claxton K, Sculpher M. Decision modelling for health economic evaluation. New York: Oxford University Press; 2006.
Claxton K, Neumann PJ, Araki S, Weinstein MC. Bayesian value-of-information analysis - an application to a policy model of Alzheimer's disease. Int J Technol Assess. 2001;17(1):38–55.
Micieli A, Bennell MC, Pham B, Krahn M, Singh SM, Wijeysundera HC. Identifying future research priorities using value of information analyses: left atrial appendage occlusion devices in atrial fibrillation. J Am Heart Assoc. 2014;3(5):e001432.
Uyei J, Li LF, Braithwaite RS. Is more research always needed? Estimating optimal sample sizes for trials of retention in care interventions for HIV-positive East Africans. BMJ Glob Health. 2017;2(2):e000195.
Papas RK, Sidle JE, Gakinya BN, Baliddawa JB, Martino S, Mwaniki MM, Songole R, Omolo OE, Kamanda AM, Ayuku DO, et al. Treatment outcomes of a stage 1 cognitive-behavioral trial to reduce alcohol use among human immunodeficiency virus-infected out-patients in western Kenya. Addiction. 2011;106(12):2156–66.
Kurth A. Computerized counseling to promote positive prevention and HIV health in Kenya. In: NIH RePORT; 2012.
Cohen M, Donenberg G, Nsanzimana S. Improving adherence among HIV+ Rwandan youth: a TI-CBT indigenous leader model. In: NIH RePORT: NIH; 2015. https://clinicaltrials.gov/ct2/show/NCT02464423.
Musoke P, Fowler MG. Using enhanced peer group strategies to support option B+ in Uganda. In: NIH RePORT: NIH; 2015. http://grantome.com/grant/NIH/R01-HD080476-01.
Linnemayr S. Improving drug adherence among adolescents in Uganda using SMS reminders (RATA): ClinicalTrials.gov: NIH; 2014. https://clinicaltrials.gov/ct2/show/NCT02128087.
Gordon AJ, Maisto SA, McNeil M, Kraemer KL, Conigliaro RL, Kelley ME, Conigliaro J. Three questions can detect hazardous drinkers.
J Fam Pract. 2001;50(4):313–20.
Saunders JB, Aasland OG, Babor TF, de la Fuente JR, Grant M. Development of the alcohol use disorders identification test (AUDIT): WHO collaborative project on early detection of persons with harmful alcohol consumption--II. Addiction. 1993;88(6):791–804.
Bush K, Kivlahan DR, McDonell MB, Fihn SD, Bradley KA, Project ACQI. The AUDIT alcohol consumption questions (AUDIT-C) - an effective brief screening test for problem drinking. Arch Intern Med. 1998;158(16):1789–95.
Bandura A. Citation classic - principles of behavior-modification. CC/Soc Behav Sci. 1979;29:10.
Braithwaite RS, Nucifora KA, Kessler J, Toohey C, Mentor SM, Uhler LM, Roberts MS, Bryant K. Impact of interventions targeting unhealthy alcohol use in Kenya on HIV transmission and AIDS-related deaths. Alcohol Clin Exp Res. 2014;38(4):1059–67.
Braithwaite RS, Nucifora KA, Yiannoutsos CT, Musick B, Kimaiyo S, Diero L, Bacon MC, Wools-Kaloustian K. Alternative antiretroviral monitoring strategies for HIV-infected patients in East Africa: opportunities to save more lives? J Int AIDS Soc. 2011;14:38.
Cost effectiveness and strategic planning (WHO-CHOICE). [http://www.who.int/choice/en/]. Accessed 10 July 2018.
van Hout BA, Al MJ, Gordon GS, Rutten FF. Costs, effects and C/E-ratios alongside a clinical trial. Health Econ. 1994;3(5):309–19.
EAC Facts & Figures Report (2015). [http://www.eala.org/documents/view/east-africancommunity-facts-and-figures-2015]. Accessed 10 July 2018.

The VOI analysis in this study was achieved by using the High Performance Computing Facility in the Center for Health Informatics and Bioinformatics at New York University Langone Medical Center. This study is funded by the National Institute on Alcohol Abuse and Alcoholism. All data generated or analyzed during this study are included in this published article [28].
Department of Population Health, New York University School of Medicine, 227 East 30th Street, Floor 6, New York, NY, 10016, USA
Lingfeng Li, Jennifer Uyei, Kimberly A. Nucifora, Jason Kessler, Elizabeth R. Stevens & R. Scott Braithwaite
National Institute on Alcohol Abuse and Alcoholism, National Institutes of Health, Bethesda, MD, USA
Kendall Bryant
LL performed value of information analysis, contributed research ideas, and drafted the manuscript. JU conducted the literature review, performed value of information analysis, and contributed research ideas. KAN provided modeling support in HIV progression and transmission. JK provided clinical insights and guidance on HIV care and treatment. ERS supported data interpretation and manuscript revision. KB provided suggestions and research insights in alcohol research. RSB contributed research ideas and manuscript writing. All authors read and approved the final manuscript.
Correspondence to Kimberly A. Nucifora.
Li, L., Uyei, J., Nucifora, K.A. et al. Using value of information methods to determine the optimal sample size for effectiveness trials of alcohol interventions for HIV-infected patients in East Africa. BMC Health Serv Res 18, 590 (2018). https://doi.org/10.1186/s12913-018-3356-7
Keywords: Optimal sample size; Alcohol intervention
WALLABY pilot survey: Public release of H i data for almost 600 galaxies from phase 1 of ASKAP pilot observations

T. Westmeier, N. Deg, K. Spekkens, T. N. Reynolds, A. X. Shen, S. Gaudet, S. Goliath, M. T. Huynh, P. Venkataraman, X. Lin, T. O'Beirne, B. Catinella, L. Cortese, H. Dénes, A. Elagali, B.-Q. For, G. I. G. Józsa, C. Howlett, J. M. van der Hulst, R. J. Jurek, P. Kamphuis, V. A. Kilborn, D. Kleiner, B. S. Koribalski, K. Lee-Waddell, C. Murugeshan, J. Rhee, P. Serra, L. Shao, L. Staveley-Smith, J. Wang, O. I. Wong, M. A. Zwaan, J. R. Allison, C. S. Anderson, Lewis Ball, D. C.-J. Bock, D. Brodrick, J.
D. Bunton, F. R. Cooray, N. Gupta, D. B. Hayman, E. K. Mahony, V. A. Moss, A. Ng, S. E. Pearce, W. Raja, D. N. Roxby, M. A. Voronkov, K. A. Warhurst, H. M. Courtois, K. Said Journal: Publications of the Astronomical Society of Australia / Volume 39 / 2022 Published online by Cambridge University Press: 15 November 2022, e058 We present WALLABY pilot data release 1, the first public release of H i pilot survey data from the Wide-field ASKAP L-band Legacy All-sky Blind Survey (WALLABY) on the Australian Square Kilometre Array Pathfinder. Phase 1 of the WALLABY pilot survey targeted three $60\,\mathrm{deg}^{2}$ regions on the sky in the direction of the Hydra and Norma galaxy clusters and the NGC 4636 galaxy group, covering the redshift range of $z \lesssim 0.08$ . The source catalogue, images and spectra of nearly 600 extragalactic H i detections and kinematic models for 109 spatially resolved galaxies are available. As the pilot survey targeted regions containing nearby group and cluster environments, the median redshift of the sample of $z \approx 0.014$ is relatively low compared to the full WALLABY survey. The median galaxy H i mass is $2.3 \times 10^{9}\,{\rm M}_{{\odot}}$ . The target noise level of $1.6\,\mathrm{mJy}$ per 30′′ beam and $18.5\,\mathrm{kHz}$ channel translates into a $5 \sigma$ H i mass sensitivity for point sources of about $5.2 \times 10^{8} \, (D_{\rm L} / \mathrm{100\,Mpc})^{2} \, {\rm M}_{{\odot}}$ across 50 spectral channels ( ${\approx} 200\,\mathrm{km \, s}^{-1}$ ) and a $5 \sigma$ H i column density sensitivity of about $8.6 \times 10^{19} \, (1 + z)^{4}\,\mathrm{cm}^{-2}$ across 5 channels ( ${\approx} 20\,\mathrm{km \, s}^{-1}$ ) for emission filling the 30′′ beam. As expected for a pilot survey, several technical issues and artefacts are still affecting the data quality. 
Most notably, there are systematic flux errors of up to several 10% caused by uncertainties about the exact size and shape of each of the primary beams as well as the presence of sidelobes due to the finite deconvolution threshold. In addition, artefacts such as residual continuum emission and bandpass ripples have affected some of the data. The pilot survey has been highly successful in uncovering such technical problems, most of which are expected to be addressed and rectified before the start of the full WALLABY survey. Dietary diversity and depression: cross-sectional and longitudinal analyses in Spanish adult population with metabolic syndrome. Findings from PREDIMED-Plus trial Naomi Cano-Ibáñez, Lluis Serra-Majem, Sandra Martín-Peláez, Miguel Ángel Martínez-González, Jordi Salas-Salvadó, Dolores Corella, Camille Lassale, Jose Alfredo Martínez, Ángel M Alonso-Gómez, Julia Wärnberg, Jesús Vioque, Dora Romaguera, José López-Miranda, Ramon Estruch, Ana María Gómez-Pérez, José Lapetra, Fernando Fernández-Aranda, Aurora Bueno-Cavanillas, Josep A Tur, Naiara Cubelos, Xavier Pintó, José Juan Gaforio, Pilar Matía-Martín, Josep Vidal, Cristina Calderón, Lidia Daimiel, Emilio Ros, Alfredo Gea, Nancy Babio, Ignacio Manuel Gimenez-Alba, María Dolores Zomeño-Fajardo, Itziar Abete, Lucas Tojal Sierra, Rita P Romero-Galisteo, Manoli García de la Hera, Marian Martín-Padillo, Antonio García-Ríos, Rosa M Casas, JC Fernández-García, José Manuel Santos-Lozano, Estefanía Toledo, Nerea Becerra-Tomas, Jose V Sorli, Helmut Schröder, María A Zulet, Carolina Sorto-Sánchez, Javier Diez-Espino, Carlos Gómez-Martínez, Montse Fitó, Almudena Sánchez-Villegas Journal: Public Health Nutrition , First View Published online by Cambridge University Press: 19 July 2022, pp. 1-13 To examine the cross-sectional and longitudinal (2-year follow-up) associations between dietary diversity (DD) and depressive symptoms. 
An energy-adjusted dietary diversity score (DDS) was assessed using a validated FFQ and was categorised into quartiles (Q). The variety in each food group was classified into four categories of diversity (C). Depressive symptoms were assessed with Beck Depression Inventory-II (Beck II) questionnaire and depression cases defined as physician-diagnosed or Beck II >= 18. Linear and logistic regression models were used. Spanish older adults with metabolic syndrome (MetS). A total of 6625 adults aged 55–75 years from the PREDIMED-Plus study with overweight or obesity and MetS. Total DDS was inversely and statistically significantly associated with depression in the cross-sectional analysis conducted; OR Q4 v. Q1 = 0·76 (95 % CI (0·64, 0·90)). This was driven by high diversity compared to low diversity (C3 v. C1) of vegetables (OR = 0·75, 95 % CI (0·57, 0·93)), cereals (OR = 0·72 (95 % CI (0·56, 0·94)) and proteins (OR = 0·27, 95 % CI (0·11, 0·62)). In the longitudinal analysis, there was no significant association between the baseline DDS and changes in depressive symptoms after 2 years of follow-up, except for DD in vegetables C4 v. C1 = (β = 0·70, 95 % CI (0·05, 1·35)). According to our results, DD is inversely associated with depressive symptoms, but eating more diverse does not seem to reduce the risk of future depression. Additional longitudinal studies (with longer follow-up) are needed to confirm these findings. Mental impact of Covid-19 among Spanish healthcare workers. A large longitudinal survey J. Alonso, G. Vilagut, I. Alayo, M. Ferrer, F. Amigo, A. Aragón-Peña, E. Aragonès, M. Campos, I. del Cura-González, I. Urreta, M. Espuga, A. González Pinto, J. M. Haro, N. López Fresneña, A. Martínez de Salázar, J. D. Molina, R. M. Ortí Lucas, M. Parellada, J. M. Pelayo-Terán, A. Pérez Zapata, J. I. Pijoan, N. Plana, M. T. Puig, C. Rius, C. Rodriguez-Blazquez, F. Sanz, C. Serra, R. C. Kessler, R. Bruffaerts, E. Vieta, V. Pérez-Solá, P. 
Mortier, MINDCOVID Working group Journal: Epidemiology and Psychiatric Sciences / Volume 31 / 2022 Published online by Cambridge University Press: 29 April 2022, e28 Longitudinal data on the mental health impact of the coronavirus disease 2019 (Covid-19) pandemic in healthcare workers are limited. We estimated prevalence, incidence and persistence of probable mental disorders in a cohort of Spanish healthcare workers (Covid-19 waves 1 and 2) and identified associated risk factors. 8996 healthcare workers evaluated on 5 May–7 September 2020 (baseline) were invited to a second web-based survey (October–December 2020). Major depressive disorder (PHQ-8 ≥ 10), generalised anxiety disorder (GAD-7 ≥ 10), panic attacks, post-traumatic stress disorder (PCL-5 ≥ 7), and alcohol use disorder (CAGE-AID ≥ 2) were assessed. Distal (pre-pandemic) and proximal (pandemic) risk factors were included. We estimated the incidence of probable mental disorders (among those without disorders at baseline) and persistence (among those with disorders at baseline). Logistic regression of individual-level [odds ratios (OR)] and population-level (population attributable risk proportions) associations were estimated, adjusting for all distal risk factors, health care centre and time of baseline interview. 4809 healthcare workers participated at the four-month follow-up (cooperation rate = 65.7%; mean = 120 days, s.d. = 22 days from baseline assessment). Follow-up prevalence of any disorder was 41.5% (v. 45.4% at baseline, p < 0.001); incidence, 19.7% (s.e. = 1.6) and persistence, 67.7% (s.e. = 2.3). Proximal factors showing significant bivariate-adjusted associations with incidence included: work-related factors [prioritising Covid-19 patients (OR = 1.62)], stress factors [personal health-related stress (OR = 1.61)], interpersonal stress (OR = 1.53) and financial factors [significant income loss (OR = 1.37)]. Risk factors associated with persistence were largely similar.
Our study indicates that the prevalence of probable mental disorders among Spanish healthcare workers during the second wave of the Covid-19 pandemic was similarly high to that after the first wave. This was in good part due to the persistence of mental disorders detected at the baseline, but with a relevant incidence of about 1 in 5 of HCWs without mental disorders during the first wave of the Covid-19 pandemic. Health-related factors, work-related factors and interpersonal stress are important risks of persistence of mental disorders and of incidence of mental disorders. Adequately addressing these factors might have prevented a considerable amount of mental health impact of the pandemic among this vulnerable population. Addressing health-related stress, work-related factors and interpersonal stress might reduce the prevalence of these disorders substantially. Study registration number: NCT04556565 Informal Mining in Colombia: Gender-Based Challenges for the Implementation of the Business and Human Rights Agenda Lina M Céspedes-Báez, Enrique Prieto-Ríos, Juan P Pontón-Serra Journal: Business and Human Rights Journal / Volume 7 / Issue 1 / February 2022 Published online by Cambridge University Press: 02 March 2022, pp. 67-83 This paper analyses whether the implementation of business and human rights (BHR) frameworks in Colombia properly responds to the challenges posed by informal mining and gender-based violence and discrimination in the context of conflict and peacebuilding. The mining sector has been considered key in Colombia to promote economic growth, but it is also characterized by significant informality. Informal mining in Colombia has been linked to gender-based violence and discrimination. We contend that while informality has been identified as a substantial hurdle to the realization of human rights, BHR frameworks still fall short in addressing this aspect of business. 
By examining the specific measures Colombia has devised to implement BHR, including two National Action Plans on BHR, we demonstrate the urgency of addressing informal economies in BHR and to continue developing particular insights to properly protect, respect and remedy the human rights wrongs women experience in the context of informal mining. Australian square kilometre array pathfinder: I. system description Australian SKA Pathfinder A. W. Hotan, J. D. Bunton, A. P. Chippendale, M. Whiting, J. Tuthill, V. A. Moss, D. McConnell, S. W. Amy, M. T. Huynh, J. R. Allison, C. S. Anderson, K. W. Bannister, E. Bastholm, R. Beresford, D. C.-J. Bock, R. Bolton, J. M. Chapman, K. Chow, J. D. Collier, F. R. Cooray, T. J. Cornwell, P. J. Diamond, P. G. Edwards, I. J. Feain, T. M. O. Franzen, D. George, N. Gupta, G. A. Hampson, L. Harvey-Smith, D. B. Hayman, I. Heywood, C. Jacka, C. A. Jackson, S. Jackson, K. Jeganathan, S. Johnston, M. Kesteven, D. Kleiner, B. S. Koribalski, K. Lee-Waddell, E. Lenc, E. S. Lensson, S. Mackay, E. K. Mahony, N. M. McClure-Griffiths, R. McConigley, P. Mirtschin, A. K. Ng, R. P. Norris, S. E. Pearce, C. Phillips, M. A. Pilawa, W. Raja, J. E. Reynolds, P. Roberts, D. N. Roxby, E. M. Sadler, M. Shields, A. E. T. Schinckel, P. Serra, R. D. Shaw, T. Sweetnam, E. R. Troup, A. Tzioumis, M. A. Voronkov, T. Westmeier Published online by Cambridge University Press: 05 March 2021, e009 In this paper, we describe the system design and capabilities of the Australian Square Kilometre Array Pathfinder (ASKAP) radio telescope at the conclusion of its construction project and commencement of science operations. ASKAP is one of the first radio telescopes to deploy phased array feed (PAF) technology on a large scale, giving it an instantaneous field of view that covers $31\,\textrm{deg}^{2}$ at $800\,\textrm{MHz}$. 
As a two-dimensional array of 36 $\times$ 12 m antennas, with baselines ranging from 22 m to 6 km, ASKAP also has excellent snapshot imaging capability and 10 arcsec resolution. This, combined with 288 MHz of instantaneous bandwidth and a unique third axis of rotation on each antenna, gives ASKAP the capability to create high dynamic range images of large sky areas very quickly. It is an excellent telescope for surveys between 700 and $1800\,\textrm{MHz}$ and is expected to facilitate great advances in our understanding of galaxy formation, cosmology, and radio transients while opening new parameter space for discovery of the unknown.

P01.18 Role of pro-inflammatory cytokines in Down's syndrome

M.G. Carta, M.C. Hardoy, P.E. Manconil, P. Serra, A. Barrancal, C.M. Caffarelli, E. Mancal, A. Ghianil Journal: European Psychiatry / Volume 15 / Issue S2 / October 2000 Published online by Cambridge University Press: 16 April 2020, p. 325s

Psychiatric Emergency Service use in Coimbra University Hospitals: Results from a 6-Month Cross-Sectional Study Sample

J. Cerejeira, H. Firmino, I. Boto, H. Rita, G. Santos, J. Teixeira, L. Vale, P. Abrantes, A. Vaz Serra Journal: European Psychiatry / Volume 24 / Issue S1 / January 2009 Published online by Cambridge University Press: 16 April 2020, p. 1 The Psychiatric Emergency Service (PES) is an important part of the mental health care system for the management of acute conditions requiring prompt intervention, and it also represents a significant part of the workload of specialists and trainees. The objective of this study was to characterize the clinical features of patients observed in the PES of Coimbra University Hospitals. During the first 6 months of 2008, demographic and clinical data were obtained for all patients observed by the first author of the study, together with a specialist in Psychiatry. The sample consisted of 159 patients, 103 females and 56 males. Mean age was 45.9 ± 18.367 years.
The majority of patients presented in the emergency room either alone (56,6%) or with a first degree relative (34,6%) by self-initiative and having a past psychiatric history (71,1%). Disturbing mood symptoms (depression, anxiety or both) were the motive of assessment in 58% of patients but several other causes were reported including behavioural symptoms, agitation, psychosis, drug or alcohol related disorders, sleep and cognitive disorders. Average Clinical Global Impression was 4,12 ± 1,177. After the psychiatric assessment, several diagnosis were made namely Major Depressive Episode (14,5%), Adaptation Disorders (13,9%), Schizophrenia and related disorders (13,8%), Anxiety Disorder Not Otherwise Specified (11,9%) and Drug or Alcohol related disorders (8,2%). Most patients were discharged without referral (50,3%). A significant percentage of patients went to the PES for conditions that could have been treated by a primary care physician or in an outpatient clinic setting. 1954 – Intervention Group In Patients With Chronic Low Back Pain: a Multidisciplinary Approach P. Lusilla, C. Castellano-Tejedor, E. Barnola-Serra, C. Ramos Rodon, T. Biedermann-Villagra, M.L. Torrent-Bertran, G. Costa-Requena, L. Camprubí-Roca, A. Palacios-González, A. Cuxart-Fina, A. Ginés-Puertas, A. Bosch-Graupera Journal: European Psychiatry / Volume 28 / Issue S1 / 2013 Non-specific chronic low back pain is one of common causes of disability and a recurrent medical complaint with high costs. From rehabilitative medicine, physiotherapy programs and general postural recommendations are offered. Although this treatment is aimed to reduce disability, severity of pain and anxiety-depressive symptoms, many patients report partial improvements and recurrence of pain. Therefore, a new approach to treat this pathology with a broaden focus on psychososocial issues that might modulate pain and its evolution is required. 
Aims and hypothesis To assess the effectiveness of two complementary interventions to physiotherapy, such as relaxation techniques (specifically, sophrology) and cognitive behavioral intervention. It is hypothesized that intervention groups will significantly improve their adherence to physiotherapy and will gain control over their pain. Ultimately, this will foster better quality of life. Longitudinal design with pre-post intervention measures and follow-up appointments (at 6 and 12 months) carried out in a sample of 66 participants. The sample will be divided into three groups: control (physiotherapy), intervention group 1 (physiotherapy & sophrology) and intervention group 2 (physiotherapy & cognitive behavioral intervention). In all groups biomedical aspects regarding type, evolution and characterization of pain as well as several psychosocial factors will be assessed. Preliminary results are expected by December 2013. If hypotheses are confirmed, we will be able to provide empirical evidences to justify a multidisciplinary care model for chronic low back pain, which will favor a significant cost reduction in terms of health care and human suffering. Motivations behind suicide attempts: A study in the ER of Maggiore hospital – Novara D. Marangon, C. Gramaglia, E. Gattoni, M. Chiarelli Serra, C. Delicato, S. Di Marco, A. Venesia, L. Castello, G.C. Avanzi, P. Zeppegno Journal: European Psychiatry / Volume 41 / Issue S1 / April 2017 Published online by Cambridge University Press: 23 March 2020, pp. S398-S399 A previous study, conducted in the province of Novara stated that, from an epidemiological and clinical point of view, being a female, being a migrant, as well as being in the warmer months of the year, or suffering from an untreated psychiatric disease are associated with suicide attempts. Literature suggests there is a positive relation between negative life events and suicidal behaviours. 
In this study, we intend to deepen knowledge, individuating motivations and meanings underlying suicidal behaviours. This appears a meaningful approach to integrate studies and initiatives in order to prevent suicide and suicidal behaviours. To examine possible correlation between socio-demographic and clinical characteristics and motivations underlying suicide attempts. Patients aged > 16 years admitted for attempted suicide in the Emergency Room of the AOU Maggiore della Carità Hospital, Novara, Italy, were studied retrospectively from the 1st January 2015 to the 31st December 2016. Each patient was assessed by an experienced psychiatrist with a clinical interview; socio-demographic and clinical features were gathered. Analysis were performed with SPSS. Data collection are still ongoing; results and implications will be discussed. We expect to find different motivations in relation to socio-demographic and clinical characteristics [1,2]. Disclosure of interest The authors have not supplied their declaration of competing interest. Early clinical predictors and correlates of long-term morbidity in bipolar disorder G. Serra, A Koukopoulos, L. De Chiara, A.E. Koukopoulos, G. Sani, L. Tondo, P. Girardi, D. Reginaldi, R.J. Baldessarini Journal: European Psychiatry / Volume 43 / June 2017 Identifying factors predictive of long-term morbidity should improve clinical planning limiting disability and mortality associated with bipolar disorder (BD). We analyzed factors associated with total, depressive and mania-related long-term morbidity and their ratio D/M, as %-time ill between a first-lifetime major affective episode and last follow-up of 207 BD subjects. Bivariate comparisons were followed by multivariable linear regression modeling. Total % of months ill during follow-up was greater in 96 BD-II (40.2%) than 111 BD-I subjects (28.4%; P = 0.001). 
Time in depression averaged 26.1% in BD-II and 14.3% in BD-I, whereas mania-related morbidity was similar in both, averaging 13.9%. Their ratio D/M was 3.7-fold greater in BD-II than BD-I (5.74 vs. 1.96; P < 0.0001). Predictive factors independently associated with total %-time ill were: [a] BD-II diagnosis, [b] longer prodrome from antecedents to first affective episode, and [c] any psychiatric comorbidity. Associated with %-time depressed were: [a] BD-II diagnosis, [b] any antecedent psychiatric syndrome, [c] psychiatric comorbidity, and [d] agitated/psychotic depressive first affective episode. Associated with %-time in mania-like illness were: [a] fewer years ill and [b] (hypo)manic first affective episode. The long-term D/M morbidity ratio was associated with: [a] anxious temperament, [b] depressive first episode, and [c] BD-II diagnosis. Long-term depressive greatly exceeded mania-like morbidity in BD patients. BD-II subjects spent 42% more time ill overall, with a 3.7-times greater D/M morbidity ratio, than BD-I. More time depressed was predicted by agitated/psychotic initial depressive episodes, psychiatric comorbidity, and BD-II diagnosis. Longer prodrome and any antecedent psychiatric syndrome were respectively associated with total and depressive morbidity. The Psycho-geriatric Patient in the Emergency Room (ER) of the Maggiore della Carità Hospital in Novara E. Di Tullio, C. Vecchi, A. Venesia, L. Girardi, C. Molino, P. Camera, M. Chiarelli serra, C. Gramaglia, A. Feggi, P. Zeppegno Due to population aging, the health system will face increasing challenges in the next years. Concerning mental disorders, they are major public health issues in late life, with mood and anxiety disorders being some of the most common mental disorder among the elderly. For this reason, increasing attention has to be paid to the evaluation of the elderly in psychiatry emergency settings. 
To evaluate the socio-demographic and clinical features of over 65 patients referred to psychiatric consultations in the ER of "Maggiore della Carità" Hospital in Novara, in a 7 years period. The analysis of the characteristics of the study sample could be potentially useful in resource planning in order to better serve this important segment of the general population. Determinants of ER visits for over 65 patients referred to psychiatric evaluation were studied retrospectively from 2008 to 2015. Elderly patients made up 14,7% (n = 458) of all psychiatric evaluation in the ER (n = 3124). About two thirds (65,9%) were females and one third were males (34,1%). The mean age of patients recruited was 75.11 years. The majority of subjects (68.6%) presented without a diagnosis of Axis I according to DSM-IV. The other most frequent diagnosis was "cognitive disorders" (11.4%) and "mood disorders" (10.9%). The large proportion of patients without a diagnosis of Axis I, could be related to the misunderstanding of the psychosocial aspects of aging. Preliminary results highlight the importance of research on this topic, considering population aging and the impact of mental disorders in late-life. The recurrent nuclear activity of Fornax A and its interaction with the cold gas F. M. Maccagni, P. Serra, M. Murgia, F. Govoni, K. Morokuma-Matsui, D. Kleiner Journal: Proceedings of the International Astronomical Union / Volume 15 / Issue S359 / March 2020 Print publication: March 2020 Sensitive (noise ∼16 μJy beam−1), high-resolution (∼10″) MeerKAT observations of show that its giant lobes have a double-shell morphology, where dense filaments are embedded in a diffuse and extended cocoon, while the central radio jets are confined within the host galaxy. The spectral radio properties of the lobes and jets of reveal that its nuclear activity is rapidly flickering. Multiple episodes of nuclear activity must have formed the radio lobes, for which the last stopped 12 Myr ago. 
More recently (∼3 Myr ago), a less powerful and short (≲1 Myr) phase of nuclear activity generated the central jets. The distribution and kinematics of the neutral and molecular gas in the centre give insights on the interaction between the recurrent nuclear activity and the surrounding interstellar medium. Active and passive surveillance for bat lyssaviruses in Italy revealed serological evidence for their circulation in three bat species S. Leopardi, P. Priori, B. Zecchin, G. Poglayen, K. Trevisiol, D. Lelli, S. Zoppi, M. T. Scicluna, N. D'Avino, E. Schiavon, H. Bourhy, J. Serra-Cobo, F. Mutinelli, D. Scaravelli, P. De Benedictis Journal: Epidemiology & Infection / Volume 147 / 2019 Published online by Cambridge University Press: 04 December 2018, e63 The wide geographical distribution and genetic diversity of bat-associated lyssaviruses (LYSVs) across Europe suggest that similar viruses may also be harboured in Italian insectivorous bats. Indeed, bats were first included within the passive national surveillance programme for rabies in wildlife in the 1980s, while active surveillance has been performed since 2008. The active surveillance strategies implemented allowed us to detect neutralizing antibodies directed towards European bat 1 lyssavirus in six out of the nine maternity colonies object of the study across the whole country. Seropositive bats were Myotis myotis, M. blythii and Tadarida teniotis. On the contrary, the virus was neither detected through passive nor active surveillance, suggesting that fatal neurological infection is rare also in seropositive colonies. Although the number of tested samples has steadily increased in recent years, submission turned out to be rather sporadic and did not include carcasses from bat species that account for the majority of LYSVs cases in Europe, such as Eptesicus serotinus, M. daubentonii, M. dasycneme and M. nattereri. 
A closer collaboration with bat handlers is therefore mandatory to improve passive surveillance and decrypt the significance of serological data obtained up to now. Genetic parameters of backfat fatty acids and carcass traits in Large White pigs R. Davoli, G. Catillo, A. Serra, M. Zappaterra, P. Zambonelli, D. Meo Zilio, R. Steri, M. Mele, L. Buttazzoni, V. Russo Journal: animal / Volume 13 / Issue 5 / May 2019 Print publication: May 2019 Subcutaneous fat thickness and fatty acid composition (FAC) play an important role on seasoning loss and organoleptic characteristics of seasoned hams. Dry-cured ham industry prefers meats with low contents of polyunsaturated fatty acids (PUFA) because these negatively affect fat firmness and ham quality, whereas consumers require higher contents in those fatty acids (FA) for their positive effect on human health. A population of 950 Italian Large White pigs from the Italian National Sib Test Selection Programme was investigated with the aim to estimate heritabilities, genetic and phenotypic correlations of backfat FAC, Semimembranosus muscle intramuscular fat (IMF) content and other carcass traits. The pigs were reared in controlled environmental condition at the same central testing station and were slaughtered at reaching 150 kg live weight. Backfat samples were collected to analyze FAC by gas chromatography. Carcass traits showed heritability levels from 0.087 for estimated carcass lean percentage to 0.361 for hot carcass weight. Heritability values of FA classes were low-to-moderate, all in the range 0.245 for n-3 PUFA to 0.264 for monounsaturated FA (MUFA). Polyunsaturated fatty acids showed a significant genetic correlation with loin thickness (0.128), backfat thickness (−0.124 for backfat measured by Fat-O-Meat'er and −0.175 for backfat measured by calibre) and IMF (−0.102). 
Obviously, C18:2(n-6) shows similar genetic correlations with the same traits (0.211 with loin thickness, −0.206 with backfat measured by Fat-O-Meat'er, −0.291 with backfat measured by calibre and −0.171 with IMF). Monounsaturated FA, except with the backfat measured by calibre (0.068; P<0.01), do not show genetic correlations with carcass characteristics, whereas a negative genetic correlation was found between MUFA and saturated FA (SFA; −0.339; P<0.001). These results suggest that MUFA/SFA ratio could be increased without interfering with carcass traits. The level of genetic correlations between FA and carcass traits should be taken into account in dealing with the development of selection schemes addressed to modify carcass composition and/or backfat FAC. The Australian Square Kilometre Array Pathfinder: Performance of the Boolardy Engineering Test Array D. McConnell, J. R. Allison, K. Bannister, M. E. Bell, H. E. Bignall, A. P. Chippendale, P. G. Edwards, L. Harvey-Smith, S. Hegarty, I. Heywood, A. W. Hotan, B. T. Indermuehle, E. Lenc, J. Marvil, A. Popping, W. Raja, J. E. Reynolds, R. J. Sault, P. Serra, M. A. Voronkov, M. Whiting, S. W. Amy, P. Axtens, L. Ball, T. J. Bateman, D. C.-J. Bock, R. Bolton, D. Brodrick, M. Brothers, A. J. Brown, J. D. Bunton, W. Cheng, T. Cornwell, D. DeBoer, I. Feain, R. Gough, N. Gupta, J. C. Guzman, G. A. Hampson, S. Hay, D. B. Hayman, S. Hoyle, B. Humphreys, C. Jacka, C. A. Jackson, S. Jackson, K. Jeganathan, J. Joseph, B. S. Koribalski, M. Leach, E. S. Lensson, A. MacLeod, S. Mackay, M. Marquarding, N. M. McClure-Griffiths, P. Mirtschin, D. Mitchell, S. Neuhold, A. Ng, R. Norris, S. Pearce, R. Y. Qiao, A. E. T. Schinckel, M. Shields, T. W. Shimwell, M. Storey, E. Troup, B. Turner, J. Tuthill, A. Tzioumis, R. M. Wark, T. Westmeier, C. Wilson, T. 
Wilson Published online by Cambridge University Press: 09 September 2016, e042 We describe the performance of the Boolardy Engineering Test Array, the prototype for the Australian Square Kilometre Array Pathfinder telescope. Boolardy Engineering Test Array is the first aperture synthesis radio telescope to use phased array feed technology, giving it the ability to electronically form up to nine dual-polarisation beams. We report the methods developed for forming and measuring the beams, and the adaptations that have been made to the traditional calibration and imaging procedures in order to allow BETA to function as a multi-beam aperture synthesis telescope. We describe the commissioning of the instrument and present details of Boolardy Engineering Test Array's performance: sensitivity, beam characteristics, polarimetric properties, and image quality. We summarise the astronomical science that it has produced and draw lessons from operating Boolardy Engineering Test Array that will be relevant to the commissioning and operation of the final Australian Square Kilometre Array Path telescope. Iodine status and thyroid function among Spanish schoolchildren aged 6–7 years: the Tirokid study L. Vila, S. Donnay, J. Arena, J. J. Arrizabalaga, J. Pineda, E. Garcia-Fuentes, C. García-Rey, J. L. Marín, M. Serra-Prat, I. Velasco, A. López-Guzmán, L. M. Luengo, A. Villar, Z. Muñoz, O. Bandrés, E. Guerrero, J. A. Muñoz, G. Moll, F. Vich, E. Menéndez, M. Riestra, Y. Torres, P. Beato-Víbora, M. Aguirre, P. Santiago, J. Aranda, C. Gutiérrez-Repiso Journal: British Journal of Nutrition / Volume 115 / Issue 9 / 14 May 2016 Published online by Cambridge University Press: 10 March 2016, pp. 1623-1631 Print publication: 14 May 2016 I deficiency is still a worldwide public health problem, with children being especially vulnerable. 
No nationwide study had been conducted to assess the I status of Spanish children, and thus an observational, multicentre and cross-sectional study was conducted in Spain to assess the I status and thyroid function in schoolchildren aged 6–7 years. The median urinary I (UI) and thyroid-stimulating hormone (TSH) levels in whole blood were used to assess the I status and thyroid function, respectively. A FFQ was used to determine the consumption of I-rich foods. A total of 1981 schoolchildren (52 % male) were included. The median UI was 173 μg/l, and 17·9 % of children showed UI<100 μg/l. The median UI was higher in males (180·8 v. 153·6 μg/l; P<0·001). Iodised salt (IS) intake at home was 69·8 %. IS consumption and intakes of ≥2 glasses of milk or 1 cup of yogurt/d were associated with significantly higher median UI. Median TSH was 0·90 mU/l and was higher in females (0·98 v. 0·83; P<0·001). In total, 0·5 % of children had known hypothyroidism (derived from the questionnaire) and 7·6 % had TSH levels above reference values. Median TSH was higher in schoolchildren with family history of hypothyroidism. I intake was adequate in Spanish schoolchildren. However, no correlation was found between TSH and median UI in any geographical area. The prevalence of TSH above reference values was high and its association with thyroid autoimmunity should be determined. Further assessment of thyroid autoimmunity in Spanish schoolchildren is desirable. Morphology of the oxyurid nematodes Trypanoxyuris (T.) cacajao n. sp. and T. (T.) ucayalii n. sp. from the red uakari monkey Cacajao calvus ucayalii in the Peruvian Amazon D.F. Conga, E.G. Giese, N.M. Serra-Freire, M. Bowler, P. 
Mayor Journal: Journal of Helminthology / Volume 90 / Issue 4 / July 2016 Cacajao calvus ucayalii (Thomas, 1928) (Primates: Pitheciidae), a subspecies endemic to the Peruvian Amazon, occurs in patchy and sometimes isolated populations in north-eastern Peru and is in a vulnerable situation, mainly due to habitat loss and hunting. This rareness and remote distribution means that, until now, parasitical studies have been limited. Based on optical and scanning electron microscopy of specimens of both sexes, we report two new species of Trypanoxyuris pinworms occurring in the large intestine of the Peruvian red uakari, namely Trypanoxyuris (Trypanoxyuris) cacajao and Trypanoxyuris (Trypanoxyuris) ucayalii. Both species showed a distinct morphology of the lips and cephalic structure. Sexual dimorphism in the lateral alae was observed in both male and the female worms, with ventral ornamentation being shown in the oesophageal teeth. The finding of these new pinworm species highlights the possibility of discovering other species. Volatiles in raw and cooked meat from lambs fed olive cake and linseed R. S. Gravador, A. Serra, G. Luciano, P. Pennisi, V. Vasta, M. Mele, M. Pauselli, A. Priolo Journal: animal / Volume 9 / Issue 4 / April 2015 Print publication: April 2015 This study was conducted to determine the effects of feeding olive cake and linseed to lambs on the volatile organic compounds (VOCs) in raw and cooked meat. Four groups of eight male Appenninica lambs each were fed: conventional cereal-based concentrates (diet C), concentrates containing 20% on a dry matter (DM) basis of rolled linseed (diet L), concentrates containing 35% DM of stoned olive cake (diet OC), or concentrates containing both rolled linseed (10% DM) and stoned olive cake (17% DM; diet OCL). The longissimus dorsi muscle of each lamb was sampled at slaughter and was subjected to VOC profiling through the use of SPME-GC-MS. 
In the raw meat, the concentration of 3-methylpentanoic acid was higher in treatment C as compared with treatments L, OC and OCL (P<0.01). Moreover the level of nonanoic acid was greater in treatments C and OC than in treatment L (P<0.05). With respect to alcohols, in raw meat the amount of 2-phenoxyethanol in treatment OCL was lower than in treatments C (P<0.01) and OC (P<0.05), while in cooked meat the amount of 1-pentanol was higher in treatment C than in treatment OC (P<0.05). Apart from these compounds, none of the lipid oxidation-derived volatiles was significantly affected by the dietary treatment. Therefore, the results suggest that the replacement of cereal concentrates with linseed and/or olive cake did not cause appreciable changes in the production of volatile organic compounds in lamb meat. On the connection between the thick disk and the galactic bar A. Spagna, A. Curir, R. Drimmel, M.G. Lattanzi, P. Re Fiorentin, A.L. Serra Journal: European Astronomical Society Publications Series / Volume 68 / 2014 Published online by Cambridge University Press: 17 July 2015, p. 405 Print publication: 2014 Although the thick disk in our Galaxy was revealed more than thirty years ago, its formation scenario is still unclear. Here, we analyze a chemo-dynamical simulation of a primordial disk population representative of the Galactic thick disk and investigate how the spatial, kinematic, and chemical properties are affected by the presence of a central bar. The use of stoned olive cake and rolled linseed in the diet of intensively reared lambs: effect on the intramuscular fatty-acid composition M. Mele, A. Serra, M. Pauselli, G. Luciano, M. Lanza, P. Pennisi, G. Conte, A. Taticchi, S. Esposto, L. 
Morbidini Journal: animal / Volume 8 / Issue 1 / January 2014 Print publication: January 2014 The aim of the present study was to evaluate the effect of the inclusion of stoned olive cake and rolled linseed in a concentrate-based diet for lambs on the fatty-acid composition of polar and non-polar intramuscular lipids of the longissimus dorsi muscle. To achieve this objective, 32 Appenninica lambs were randomly distributed into four groups of eight lambs each and were fed conventional cereal-based concentrates (diet C); concentrates containing 20% on a dry matter (DM) basis of rolled linseed (diet L); concentrates containing 35% DM of stoned olive cake (diet OC); and concentrates containing both rolled linseed (10% DM) and stoned olive cake (17% DM; diet OCL). The concentrates were administered together with grass hay at a 20:80 forage:concentrate ratio. Growing performances and carcass traits were evaluated. The fatty-acid composition was analysed in the total intramuscular lipids, as well as in the polar and neutral lipids. The average feed intake and the growth performance of lambs were not affected by the dietary treatments, as a consequence of similar nutritional characteristics of the diets. The inclusion of rolled linseed in the L and OCL diets increased the content of C18:3 n-3 in intramuscular total lipids, which was threefold higher in meat from the L lambs and more than twofold higher in meat from the OCL lambs compared with the C and OC treatments. The n-6:n-3 ratio significantly decreased in the meat from lambs in the L and OCL groups, reaching values below 3. The L treatment resulted in the highest level of trans-18:1 fatty acids in the muscle. Regardless of the dietary treatment, the t10-18:1 was the major isomer, representing 55%, 45%, 49% and 45% of total trans-18:1 for C, L, OC and OCL treatments, respectively. 
Neutral lipids from the OC-fed lambs contained the highest amount of c9-18:1 (more than 36% of total fatty acids); however, the content of c9-18:1 did not differ between the OC and C lambs, suggesting an intensive biohydrogenation of dietary c9-18:1 in the case of OC treatment. The highest content of c9,t11-18:2 was detected in the intramuscular fat from the L-fed lambs, followed by the OCL treatment. A similar trend was observed in the neutral lipid fraction and, to a lower extent, in the polar lipids.
CommonCrawl
\begin{document} \title{\textbf{Multiple Petersen subdivisions in permutation graphs}} \footnotetext[1]{Department of Mathematics and Institute for Theoretical Computer Science, University of West Bohemia, Univerzitn\'{\i}~8, 306~14~Plze\v{n}, Czech Republic. E-mail: \texttt{[email protected]}. Supported by project P202/12/G061 of the Czech Science Foundation.} \footnotetext[2]{CNRS (LIAFA, Universit\'{e} Diderot), Paris, France. E-mail: \texttt{[email protected]}. This author's work was partially supported by the French \emph{Agence Nationale de la Recherche} under reference \textsc{anr 10 jcjc 0204 01}.} \footnotetext[3]{LIAFA, Universit\'{e} Denis Diderot (Paris 7), 175 Rue du Chevaleret, 75013 Paris, France. E-mail: \texttt{[email protected]}. This author's work was partially supported by the French \emph{Agence Nationale de la Recherche} under reference \textsc{anr 10 jcjc 0204 01}.} \begin{abstract} A permutation graph is a cubic graph admitting a 1-factor $M$ whose complement consists of two chordless cycles. Extending results of Ellingham and of Goldwasser and Zhang, we prove that if $e$ is an edge of $M$ such that every 4-cycle containing an edge of $M$ contains $e$, then $e$ is contained in a subdivision of the Petersen graph of a special type. In particular, if the graph is cyclically 5-edge-connected, then every edge of $M$ is contained in such a subdivision. Our proof is based on a characterization of cographs in terms of twin vertices. We infer a linear lower bound on the number of Petersen subdivisions in a permutation graph with no 4-cycles, and give a construction showing that this lower bound is tight up to a constant factor. \end{abstract} \section{Introduction} \label{sec:introduction} A special case of Tutte's 4-flow conjecture~\cite{Tut:algebraic} states that every bridgeless cubic graph with no minor isomorphic to the Petersen graph is 3-edge-colourable. Before this special case was shown to be true by Robertson et al. 
(cf.~\cite{Tho:recent}), one of the classes of cubic graphs for which the conjecture was known to hold was the class of permutation graphs --- i.e., graphs with a 2-factor consisting of two chordless cycles. Indeed, by a result of Ellingham~\cite{Ell:petersen}, every permutation graph is either Hamiltonian --- and hence 3-edge-colourable --- or contains a subdivision of the Petersen graph. To state his theorem more precisely, we introduce some terminology. Rephrasing the above definition, a cubic graph $G$ is a \emph{permutation graph} if it contains a perfect matching $M$ such that $G-E(M)$ is the disjoint union of two cycles, none of which has a chord in $G$. A perfect matching $M$ with this property is called a \emph{distinguished matching} in $G$. For brevity, if $G$ is a permutation graph with a distinguished matching $M$, then the pair $(G,M)$ is referred to as a \emph{marked permutation graph}. We let $P_{10}$ be the Petersen graph. Given a distinguished matching $M$ in $G$, an \emph{$M$-copy of $P_{10}$} is a subgraph $G'$ of $G$ isomorphic to a subdivision of $P_{10}$ and composed of the two cycles of $G-E(M)$ together with five edges of $M$. Following Goldwasser and Zhang~\cite{GZ:permutation}, an $M$-copy of $P_{10}$ is also referred to as an $M$-$P_{10}$. Furthermore, an \emph{$M$-copy of the 4-cycle $C_4$} (or an \emph{$M$-$C_4$}) is a $4$-cycle in $G$ using two edges of $M$. The proof of Ellingham's result implies that if a marked permutation graph $(G,M)$ contains no $M$-$P_{10}$, then it contains an $M$-$C_4$ (and is therefore Hamiltonian). Goldwasser and Zhang~\cite{GZ:permutation} obtained a slight strengthening: \begin{theorem}\label{t:zhang} If $(G,M)$ is a marked permutation graph, then $G$ contains either two $M$-copies of $C_4$, or an $M$-copy of $P_{10}$. 
\end{theorem} Lai and Zhang~\cite{LZ:hamilton} studied permutation graphs satisfying a certain minimality condition and proved that in a sense, they contain `many' subdivisions of the Petersen graph. The main result of this note is the following generalization of Theorem~\ref{t:zhang}. \begin{theorem}\label{t:main} Let $(G,M)$ be a marked permutation graph on at least six vertices and let $e\in E(M)$. If $e$ is contained in every $M$-$C_4$ of $G$, then $e$ is contained in an $M$-copy of $P_{10}$. \end{theorem} Theorem~\ref{t:main} is established in Section~\ref{sec:proof}. The proof is based on a relation between $M$-copies of $P_{10}$ in permutation graphs and induced paths in a related class of graphs. Of particular interest is the corollary for cyclically 5-edge-connected graphs, that is, graphs containing no edge-cut of size at most $4$ whose removal leaves at least two non-tree components. \begin{corollary}\label{cor:5-conn} Every edge of a cyclically 5-edge-connected marked permutation graph $(G,M)$ is contained in an $M$-$P_{10}$. \end{corollary} The class of cyclically 5-edge-connected permutation graphs is richer than one might expect. Indeed, it had been conjectured~\cite{Zha:integer} that every cyclically 5-edge-connected permutation graph is 3-edge-colourable, but this conjecture has been recently disproved~\cite[Observation~4.2]{BGHM:generation}. Theorem~\ref{t:main} readily implies a lower bound on the number of $M$-copies of the Petersen graph in a marked permutation graph $(G,M)$ such that $G$ contains no $M$-$C_4$ and has $n$ vertices. We improve this lower bound in Section~\ref{sec:counting}. We also show that the bounds (which are linear in $n$) are optimal up to a constant factor. We close this section with some terminology. If $G$ is a graph and $X\subseteq V(G)$, then $G[X]$ is the induced subgraph of $G$ on $X$. The set of all neighbours of a vertex $v$ of $G$ is denoted by $N_G(v)$. 
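The Petersen graph itself is the smallest interesting instance of these definitions: drawn as the generalized Petersen graph $GP(5,2)$, its five spokes form a distinguished matching $M$ whose complement is the outer 5-cycle together with the inner pentagram, both chordless, and (since the girth is 5) it contains no $M$-$C_4$. The following short Python sketch, which is not part of the paper and uses only illustrative names, checks this concrete example directly.

```python
# Illustrative check (not from the paper): the Petersen graph GP(5,2) is a
# permutation graph, with the five spokes as a distinguished matching M.
from itertools import combinations

outer = {frozenset((i, (i + 1) % 5)) for i in range(5)}          # outer 5-cycle
inner = {frozenset((5 + i, 5 + (i + 2) % 5)) for i in range(5)}  # inner pentagram, a chordless 5-cycle
spokes = {frozenset((i, 5 + i)) for i in range(5)}               # the matching M
edges = outer | inner | spokes

def m_c4_count(edges, matching):
    """Count 4-cycles using exactly two matching edges (M-copies of C_4)."""
    count = 0
    for e, f in combinations(matching, 2):
        a, b = sorted(e)
        c, d = sorted(f)
        # the two ways the endpoints of e and f can be closed into a 4-cycle
        if {frozenset((a, c)), frozenset((b, d))} <= edges:
            count += 1
        if {frozenset((a, d)), frozenset((b, c))} <= edges:
            count += 1
    return count

# G is cubic, M is perfect, and G - E(M) splits into the two chordless
# 5-cycles above; since the Petersen graph has girth 5, no M-C_4 exists.
degree = {}
for e in edges:
    for v in e:
        degree[v] = degree.get(v, 0) + 1
assert all(d == 3 for d in degree.values())
assert m_c4_count(edges, spokes) == 0
```

By Theorem~\ref{t:zhang}, any marked permutation graph avoiding $M$-copies of $P_{10}$ must contain two $M$-copies of $C_4$; this example shows the extreme opposite case, where no $M$-$C_4$ exists at all.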
\section{Proof of Theorem~\ref{t:main}} \label{sec:proof} Let $(G,M)$ be a marked permutation graph. If $v\in V(G)$, then we write $v'$ for the neighbour of $v$ in $M$ (which we call the \emph{friend} of $v$). We extend this notation to arbitrary sets of vertices of $G$: if $X\subseteq V(G)$, then we set \begin{equation*} X' = \Set{v'}{v\in X}. \end{equation*} Let $A$ be the vertex set of one component of $G-E(M)$. Thus, $A'$ is the vertex set of the other component and both $G[A]$ and $G[A']$ are chordless cycles. In this section, we prove Theorem~\ref{t:main}. Fix an edge $e$ of the matching $M$. Let $a$ and $a'$ be its end-vertices. We also choose an orientation for each of the cycles $G[A]$ and $G[A']$. All these will be fixed throughout this section. If $X\subseteq A$, then $\match G X$ is the spanning subgraph of $G$ obtained by adding to $G-E(M)$ all the edges $vv'$, where $v\in X$. In expressions such as $\match G {\Setx{a,b}}$, we omit one pair of set brackets, and write just $\match G {a,b}$. The \emph{auxiliary graph} $H_a$ (with respect to the vertex $a$) is defined as follows. The vertex set of $H_a$ is $A-\Setx{a}$. Two vertices $x$ and $y$ of $H_a$ are adjacent in $H_a$ whenever the cyclic order of $a$, $x$ and $y$ on $G[A]$ is $axy$ and the cyclic order of their friends on $G[A']$ is $a'y'x'$. \begin{figure} \caption{(a) The standard drawing of the graph $G$. (b) The corresponding graph $H_a$.} \label{fig:cross} \end{figure} Alternatively, consider the following standard procedure, illustrated in Figure~\ref{fig:cross}. Arrange the vertices of $A$ on a horizontal line in the plane, starting on the left with $a$ and continuing along the cycle $G[A]$ according to the fixed orientation. Place the vertices of $A'$ on another horizontal line, putting $a'$ leftmost and continuing in accordance with the orientation of $G[A']$. Join each vertex $z\in A$ with its friend by a straight line segment. 
The segment $aa'$ is not crossed by any other segment, and for $x,y\in A-\Setx a$, the segments $xx'$ and $yy'$ cross each other if and only if $x$ and $y$ are adjacent in $H_a$. Thus, $H_a$ can be directly read off the resulting figure, which is called the \emph{standard drawing} of $G$. A similar construction, without fixing the vertex $a$, gives rise to a class of graphs also called `permutation graphs' (see~\cite{BLS:graph}). In this paper, we only use this term as defined in Section~\ref{sec:introduction}. The following lemma provides a link between induced paths in $H_a$ and $M$-copies of $P_{10}$ in $G$. \begin{lemma}\label{l:path} Suppose that $H_a$ contains an induced path $xyzw$ on 4 vertices. Then $\match G {a,x,y,z,w}$ is an $M$-$P_{10}$ in $G$. \end{lemma} \begin{proof} Let $P$ be the path $xyzw$ in $H_a$. Since $xy\in E(P)$, the edges $xx'$ and $yy'$ cross. By symmetry, we may assume that $x\in aCy$ and $y'\in a'C'x'$. First, note that $z\notin yCa$. Otherwise, as $zz'$ crosses $yy'$, it would follow that $zz'$ also crosses $xx'$, which contradicts the assumption that $x$ and $z$ are not adjacent in $H_a$. We now consider two cases, regarding whether or not $z\in aCx$. Case 1: $z\in aCx$. Since the edges $zz'$ and $xx'$ do not cross, $z'\in a'C'x'$; moreover, since $zz'$ and $yy'$ cross, it follows that $z'\in y'C'x'$. We assert that $w\in zCx$. Suppose that this is not the case. If $w\in aCz$, then $ww'$ cannot cross $zz'$ without crossing $yy'$, contradicting the fact that $zw\in E(P)$ and $yw\notin E(P)$. If $w\in xCy$, then $ww'$ crosses $xx'$ or $yy'$ regardless of the position of $w'$, which results in a similar contradiction. Finally, if $w\in yCa$, then $ww'$ cannot cross $zz'$ without crossing $xx'$. Thus, we have shown that $w\in zCx$, which implies that $w'\in a'C'y'$, as $ww'$ and $yy'$ do not cross. Summing up, $\match G {a,x,y,z,w}$ is precisely as in Figure~\ref{fig:case} and constitutes an $M$-copy of the Petersen graph.
\begin{figure} \caption{The graph $\match G {a,x,y,z,w}$ in Case 1 of the proof of Lemma~\ref{l:path}.} \label{fig:case} \end{figure} Case 2: $z\notin aCx$. Then, $z\in xCy$. Since $zz'$ and $xx'$ do not cross, $z'\in x'C'a'$. As $ww'$ crosses $zz'$ but none of $xx'$ and $yy'$, the only possibility is that $w\in yCa$ and $w'\in x'C'z'$, which again produces an $M$-$P_{10}$. \end{proof} By Lemma~\ref{l:path}, if there is no $M$-copy of $P_{10}$ containing $aa'$ in $G$, then $H_a$ contains no induced path on 4 vertices. Such graphs are known as \emph{cographs} or \emph{$P_4$-free} graphs. There are various equivalent ways to describe them, summarized in the survey~\cite[Theorem~11.3.3]{BLS:graph} by Brandst\"{a}dt, Le and Spinrad. We use the characterisation that involves pairs of twin vertices. (Two vertices $x$ and $y$ of a graph $H$ are \emph{twins} if $N_H(x) = N_H(y)$.) \begin{theorem}\label{t:p4free} A graph $G$ is $P_4$-free if and only if every induced subgraph of $G$ with at least two vertices contains a pair of twins. \end{theorem} To be able to use Lemma~\ref{l:path} in conjunction with Theorem~\ref{t:p4free}, we need to interpret twin pairs of $H_a$ in terms of $G$. \begin{lemma}\label{l:twin} Let $x$ and $y$ be twins in $H_a$. Let $Q'$ be the path in $G$ defined by \begin{equation*} Q' = \begin{cases} x'C'y' & \text{if $xy\notin E(H_a)$,}\\ y'C'x' & \text{otherwise.} \end{cases} \end{equation*} Then $M$ matches the vertices of the path $xCy$ to those of $Q'$ and \emph{vice versa}. \end{lemma} \begin{proof} Suppose, on the contrary, that the statement does not hold. By symmetry, we may assume that $x\in aCy$, and that $M$ contains an edge $ww'$ with $w \in V(xCy)$ and $w'\notin V(Q')$. We assert that $ww'$ crosses exactly one edge from $\Setx{xx',yy'}$. To prove this, we consider two cases according to whether or not $xx'$ and $yy'$ cross. If they do not cross, then $ww'$ crosses only $xx'$ (if $w'\in V(a'C'x')$) or only $yy'$ (if $w'\in V(y'C'a')$). 
Otherwise, $ww'$ crosses only $xx'$ (if $w'\in V(a'C'y')$) or only $yy'$ (if $w'\in V(x'C'a')$). In each case, we obtain a contradiction with the assumption that $x$ and $y$ are twins in $H_a$. \end{proof} \begin{figure} \caption{The triangular prism $G$ with the unique 1-factor $M$ such that $(G,M)$ is a marked permutation graph.} \label{fig:prism} \end{figure} We now prove Theorem~\ref{t:main}, proceeding by induction on the number of vertices of $G$. The base case is the triangular prism, the unique permutation graph on $6$ vertices (Figure~\ref{fig:prism}), for which the theorem is trivially true since $e$ cannot be contained in every $M$-$C_4$ of $G$. Therefore, we assume that $G$ has at least $8$ vertices and that every $M$-copy of $C_4$ in $G$ contains the edge $aa'$. Suppose first that $azz'a'$ is such an $M$-copy of $C_4$. Let $G_0$ be the cubic graph obtained by removing the edge $zz'$ and suppressing the resulting degree 2 vertices $z$ and $z'$. Set $M_0=M\setminus\Setx{zz'}$. All $M_0$-copies of $C_4$ created by this operation contain the edge $aa'$: any such copy uses one of the two edges arising from the suppression of $z$ and $z'$, and since $az$ and $a'z'$ are edges of $G$, each of these new edges is incident with $a$ or $a'$, so the copy must contain the matching edge $aa'$. Therefore, regardless of whether or not $G_0$ contains an $M_0$-$C_4$, the induction hypothesis implies that $aa'$ is contained in an $M_0$-copy of $P_{10}$. This yields an $M$-copy of $P_{10}$ in $G$ containing $aa'$, as required. Consequently, it may be assumed that $G$ does not contain any $M$-$C_4$. Assume that $H_a$ contains no pair of twin vertices. Theorem~\ref{t:p4free} implies that $H_a$ is not $P_4$-free. Let $X$ be a subset of $V(H_a)$ of size 4 such that $H_a[X] \simeq P_4$. By Lemma~\ref{l:path}, $\match G {X\cup\Setx{a}}$ is an $M$-copy of $P_{10}$, and the sought conclusion follows. Thus, we may assume that $H_a$ contains twin vertices $x$ and $y$. Without loss of generality, $x$ belongs to $aCy$. Let the path $Q'$ be defined as in Lemma~\ref{l:twin}. Thus, since $x$ and $y$ are twins in $H_a$, vertices of the path $xCy$ are only adjacent in $M$ to vertices of $Q'$ and \emph{vice versa}.
We transform $G$ into another cubic graph $G_1$ by removing all vertices that are not contained in $\Setx{a,a'} \cup V(xCy\cup Q')$ and adding the edges $ax$, $ay$, $a'x'$ and $a'y'$ (if they are not present yet). Let $M_1$ be the perfect matching of $G_1$ consisting of all the edges of $M$ contained in $G_1$. Note that although the transformation may create $M_1$-copies of $C_4$ not present in $G$, the edge $aa'$ is contained in every $M_1$-copy of $C_4$ in $G_1$, since each edge added by the transformation is incident with $a$ or $a'$. Furthermore, the path $yCx$ in $G$ must have some internal vertices other than $a$, since otherwise $G$ would contain an $M$-$C_4$, namely $yaa'y'$. Thus, $G_1$ has fewer vertices than $G$. The induction hypothesis implies that $aa'$ is contained in an $M_1$-copy of $P_{10}$ in $G_1$, and therefore also in $G$. \section{Counting the Petersen copies} \label{sec:counting} Turning to the quantitative side of the question studied in Section~\ref{sec:proof}, we now derive from Theorem~\ref{t:main} a lower bound on the number of $M$-copies of $P_{10}$ in a permutation graph with no $M$-$C_4$. The bound is linear in the order of the graph. We give a construction showing that this lower bound is tight up to a constant factor. Throughout this section, $(G,M)$ is a marked permutation graph with vertex set $A\cup A'$ just like in Section~\ref{sec:proof}. We will need two lemmas, the second of which we find to be of interest in its own right. The first lemma is an observation on auxiliary graphs which follows readily from the definition; its proof is omitted. \begin{lemma}\label{l:redrawing} Let $a,b \in A$. Then the following hold for each $x,y\in A-\Setx{a,b}$: \begin{enumerate}[\quad(i)] \item $ax \in H_b$ if and only if $bx \in H_a$, \item $xy \in H_b$ if and only if $\size{\Setx{bx, by, xy} \cap H_a} \in \Setx{1,3}$. \end{enumerate} \end{lemma} \begin{lemma}\label{l:replace} Let $a,b \in A$.
One of the following conditions holds: \begin{itemize} \item there is some $M$-$P_{10}$ in $G$ containing both $aa'$ and $bb'$, or \item for any $F \subset A$ with $\size F = 4$ and $\Setx{a,b}\cap F = \emptyset$, it holds that $\match G {F\cup\Setx a} \simeq P_{10}$ if and only if $\match G {F\cup\Setx b} \simeq P_{10}$. \end{itemize} \end{lemma} \begin{proof} Assume that there exists no $M$-$P_{10}$ containing both $aa'$ and $bb'$. By Lemma~\ref{l:path}, it is sufficient to show that a set $\Setx{u,w,x,y} \subseteq A \setminus \Setx{a,b}$ induces a path on four vertices in $H_a$ if and only if it induces a path on four vertices in $H_b$. Let $U_1 = N_{H_a}(b) = N_{H_b}(a)$ and $U_2 = A \setminus (U_1\cup\Setx{a,b})$. Lemma~\ref{l:path} implies that in the auxiliary graph $H_a$, there is no induced path on four vertices containing the vertex $b$. Therefore, \begin{enumerate}[\quad(i)] \item if $x,y \in U_1$, $z \in U_2$, and $xy \notin H_a$, then $xz \in H_a$ if and only if $yz \in H_a$, and \item if $x,y \in U_2$, $z \in U_1$, and $xy \in H_a$, then $xz \in H_a$ if and only if $yz \in H_a$. \end{enumerate} Hence, if $uwxy$ is an induced path in $H_a$, then $\Setx{u,w,x,y} \cap U_1 \in \Setx{\Setx{u,w,x,y},\Setx{w,x}, \emptyset}$. By Lemma~\ref{l:redrawing}, it follows that $\Setx{u,w,x,y}$ induces a path on four vertices in $H_b$ as well. More precisely, this path is $uwxy$ if $\Setx{u,w,x,y} \cap U_1 \in \Setx{\Setx{u,w,x,y},\emptyset}$, and $uxwy$ if $\Setx{u,w,x,y} \cap U_1 = \Setx{w,x}$. The conclusion follows by symmetry of the roles played by $a$ and $b$. \end{proof} We can now prove the aforementioned lower bound. \begin{proposition}\label{p:lower} If $(G,M)$ is a marked permutation graph with $n \geq 40$ vertices and no $M$-$C_4$, then $(G,M)$ contains at least $n/2-4$ $M$-copies of the Petersen graph.
\end{proposition} \begin{proof} If each edge of $M$ is contained in at least 5 $M$-copies of $P_{10}$, then, since each $M$-$P_{10}$ contains exactly five edges of $M$, the total number of copies is at least $(5\cdot n/2)/5 = n/2$. Hence, we may assume that there exists $x\in\{1,2,3,4\}$ and an edge $e \in E(M)$ that is contained in only $x$ $M$-copies of $P_{10}$; note that $x\geq 1$ by Theorem~\ref{t:main}, whose hypothesis is trivially satisfied since $G$ contains no $M$-$C_4$. Let $\mathcal C$ be the set of these copies. Since each copy in $\mathcal C$ contains only four edges of $M$ other than $e$, at least $n/2-4x-1$ edges of $M$ are not contained in any $M$-$P_{10}$ containing $e$. By Lemma~\ref{l:replace}, if we replace $e$ by any such edge in any $M$-$P_{10}$ from $\mathcal C$, we obtain an $M$-$P_{10}$ again. These replacements yield $x(n/2-4x-1)$ distinct $M$-copies of $P_{10}$. Thus, together with the $x$ copies in $\mathcal C$, $(G,M)$ contains at least $x(n/2-4x-1)+x = x(n/2-4x)$ distinct $M$-copies of $P_{10}$. Minimizing this expression over $x\in\{1,2,3,4\}$ and using the assumption that $n\geq 40$, we deduce that the number of copies is at least $n/2-4$, as asserted. \end{proof} We now construct a family of marked permutation graphs $(G_k,M_k)$ showing that the linear estimate in Proposition~\ref{p:lower} is tight up to a constant factor. The graph $G_k$ has $6k+14$ vertices, contains no $M_k$-$C_4$, and the number of $M_k$-copies of the Petersen graph in $G_k$ is only $6k+6$. (We note that graphs with a somewhat similar structure are constructed in~\cite[Section~3]{GZ:permutation}.) \begin{figure} \caption{The marked permutation graph $(G_4,M_4)$. Labels are given only for the circled vertices.} \label{fig:linear} \end{figure} For $k=4$, the graph $(G_k,M_k)$ is shown in Figure~\ref{fig:linear}. We now give a formal definition and determine the number of $M_k$-copies of $P_{10}$. Let $A = \Setx{1,2,\dots,3k+7}$ and $A' = \Setx{\overline 1,\overline 2,\dots,\overline{3k+7}}$. The vertex set of $G_k$ is $A\cup A'$. On each of $A$ and $A'$, we consider the standard linear order (in particular, $\overline 1 < \overline 2 < \dots < \overline{3k+7}$). As in Section~\ref{sec:proof}, we write $i'$ for the neighbour in $M_k$ of a vertex $i\in A$.
Thus, $i' = \overline j$ for a suitable $j$. Let \begin{align*} E_1 &= \Set{(2i-1)\overline i}{1\leq i \leq k} \cup \Set{(2k+i+3)\overline{(k+2i+3)}}{1\leq i \leq k},\\ E_2 &= \Set{(2i)\overline{(3k+4-2i)}}{1\leq i \leq k-1},\\ E_3 &= \{(2k)\overline{(k+2)},(2k+1)\overline{(k+4)},(2k+2)\overline{(k+1)},(2k+3)\overline{(k+3)},\\ &\qquad (3k+4)\overline{(3k+5)},(3k+5)\overline{(3k+7)},(3k+6)\overline{(3k+4)},(3k+7)\overline{(3k+6)}\}. \end{align*} Edges in $E_1$, $E_2$ and $E_3$ will be called \emph{vertical}, \emph{skew} and \emph{special}, respectively. Moreover, the first four and the last four edges in $E_3$ form the two \emph{groups of special edges}. \begin{proposition}\label{prop:last} The marked permutation graph $(G_k,M_k)$ contains exactly $6k+6$ $M_k$-copies of the Petersen graph. \end{proposition} \begin{proof} Each of the groups of special edges forms an $M_k$-$P_{10}$ with each of the remaining $3k+3$ edges of $M_k$. We prove that besides these $6k+6$ copies, there are no other $M_k$-copies of $P_{10}$ in $G_k$. For $X\subseteq M_k$, we let $G_X$ be the graph obtained from $G_k-M_k$ by adding the edges in $X$ and suppressing the degree $2$ vertices. Let $X$ be a subset of $M_k$ that contains no group of special edges. Suppose that $\match{G_k} X$ is an $M_k$-copy of $P_{10}$. To obtain a contradiction, we show that $G_X$ contains an $X$-$C_4$. First of all, if $X$ contains a special edge, then it contains no other special edge from the same group. Indeed, a quick case analysis shows that if $Y$ consists of any two or three special edges in the same group, then $G_Y$ contains a $Y$-$C_4$. Thus, for the purposes of our argument, special edges behave just like vertical ones. We assert next that $X$ contains at most one skew edge. Suppose to the contrary that $X$ contains at least two skew edges, and let $j_1j'_1$ and $j_2j'_2$ be skew edges with $j_1 < j_2$ and $j_1+j_2$ maximum among the skew edges in $X$. Observe that $X$ contains no vertical edge $ii'$ with $i > j_2$ and $i' < j'_2$.
Indeed, if there is only one such edge, then it forms an $X$-$C_4$ in $G_X$ together with $j_2j'_2$, while if there are at least two such edges, then an $X$-$C_4$ is obtained from a consecutive pair among them. By a similar argument, $X$ contains neither any vertical edge $ii'$ with $j_1 < i < j_2$, nor any vertical edge $ii'$ with $j'_2 < i' < j'_1$. It follows that $j_1j'_1$ and $j_2j'_2$ are contained in an $X$-$C_4$ in $G_X$, a contradiction which proves that there is at most one skew edge in $X$. Consequently, $X$ contains a set $Y$ of at least four edges that are vertical or special, as $\size X = 5$. Further, $\size Y \neq 5$, for otherwise $G_X = G_Y$ would itself contain an $X$-$C_4$ formed by a consecutive pair of these edges. Hence $\size Y = 4$, and there are exactly four $Y$-copies of $C_4$ in $G_Y$. At most two of these are affected by the addition of the fifth edge of $X$. Thus, an $X$-$C_4$ persists in $G_X$, a contradiction. The proof is complete. \end{proof} While the graphs constructed in the proof of Proposition~\ref{prop:last} are $C_4$-free, they are not cyclically $5$-edge-connected. A slight modification of the construction ensures this stronger property, but makes the discussion somewhat more complicated. For this reason, we have only described the simpler version. \end{document}
\begin{document} \title[Two weight $L^{p}$ inequalities]{Two weight $L^{p}$ inequalities for smooth Calder\'{o}n-Zygmund operators and doubling measures} \author[E. T. Sawyer]{Eric T. Sawyer$^\dagger$} \address{Eric T. Sawyer, Department of Mathematics and Statistics\\ McMaster University\\ 1280 Main Street West\\ Hamilton, Ontario L8S 4K1 Canada} \thanks{$\dagger $ Research supported in part by a grant from the National Science and Engineering Research Council of Canada.} \email{[email protected]} \author[B. D. Wick]{Brett D. Wick$^\ddagger$} \address{Brett D. Wick, Department of Mathematics \& Statistics, Washington University -- St. Louis, One Brookings Drive, St. Louis, MO USA 63130-4899.} \email{[email protected]} \thanks{$\ddagger $ B. D. Wick's research is supported in part by National Science Foundation Grants DMS \# 1800057, \# 2054863, and \# 20000510 and Australian Research Council -- DP 220100285.} \date{\today } \begin{abstract} If $T^{\lambda }$ is a smooth Stein elliptic $\lambda $-fractional singular integral on $\mathbb{R}^{n}$, $1<p<\infty $, and $\left( \sigma ,\omega \right) $ is a pair of doubling measures, then the two weight $L^{p}$ norm inequality, \begin{equation*} \int_{\mathbb{R}^{n}}\left\vert T^{\lambda }\left( f\sigma \right) \right\vert ^{p}d\omega \leq \mathfrak{N}_{T^{\lambda },p}^{p}\int_{\mathbb{R }^{n}}\left\vert f\right\vert ^{p}d\sigma ,\ \ \ \ \ f\in L^{p}\left( \sigma \right) \end{equation*} holds \emph{if and only if} the following quadratic triple testing conditions of Hyt\"{o}nen and Vuorinen hold, \begin{eqnarray*} \int_{\mathbb{R}^{n}}\left( \sum_{j=1}^{\infty }\left( a_{j}\mathbf{1} _{3I_{j}}T^{\lambda }\left( \mathbf{1}_{I_{j}}\sigma \right) \right) ^{2}\right) ^{\frac{p}{2}}d\omega &\leq &\left( \mathfrak{T}_{T^{\lambda },p}^{\ell ^{2},\limfunc{triple}}\right) ^{p}\int_{\mathbb{R}^{n}}\left( \sum_{j=1}^{\infty }\left( a_{j}\mathbf{1}_{I_{j}}\right) ^{2}\right) ^{ \frac{p}{2}}d\sigma , \\ \int_{\mathbb{R}^{n}}\left( 
\sum_{j=1}^{\infty }\left( a_{j}\mathbf{1} _{3I_{j}}T^{\lambda }\left( \mathbf{1}_{I_{j}}\omega \right) \right) ^{2}\right) ^{\frac{p^{\prime }}{2}}d\sigma &\leq &\left( \mathfrak{T} _{T^{\lambda ,\ast },p^{\prime }}^{\ell ^{2},\limfunc{triple}}\right) ^{p^{\prime }}\int_{\mathbb{R}^{n}}\left( \sum_{j=1}^{\infty }\left( a_{j} \mathbf{1}_{I_{j}}\right) ^{2}\right) ^{\frac{p^{\prime }}{2}}d\omega , \end{eqnarray*} where the inequalities are taken over all sequences $\left\{ I_{j}\right\} _{j=1}^{\infty }$ and $\left\{ a_{j}\right\} _{j=1}^{\infty }$ of cubes and real numbers respectively. We also show that these quadratic triple testing conditions can be relaxed to scalar testing conditions, quadratic offset Muckenhoupt conditions, and a quadratic weak boundedness property. \end{abstract} \maketitle \tableofcontents \section{Introduction} The Nazarov-Treil-Volberg $T1$ conjecture on the boundedness of the Hilbert transform from one weighted space $L^{2}\left( \sigma \right) $ to another $L^{2}\left( \omega \right) $ was settled affirmatively in the two-part paper \cite{LaSaShUr3},\cite{Lac} when the measures have no common point masses, and this restriction was removed by Hyt\"{o}nen in \cite{Hyt}. Since then there have been a number of generalizations of boundedness of Calder\'{o}n-Zygmund operators from one weighted $L^{2}$ space to another, including \begin{itemize} \item to higher dimensional Euclidean spaces (see e.g. \cite{SaShUr7}, \cite{LaWi} and \cite{LaSaShUrWi}), \item to spaces of homogeneous type (see e.g. \cite{DuLiSaVeWiYa}), and \item to the case when both measures are doubling (see \cite{AlSaUr}).
\end{itemize} It has been known for some time, from work of Neugebauer \cite{Neu} and of Coifman and Fefferman \cite{CoFe}, that in the case of $A_{\infty }$ weights, the two weight norm inequality for a Calder\'{o}n-Zygmund operator was implied by the classical two weight $A_{p}$ condition; see \cite{AlSaUr2} for the elementary proof when $p=2$, and \cite{HyLa} for a sharp estimate on the characteristics. In addition there have been some generalizations to Sobolev spaces in place of\thinspace $L^{2}$ spaces in \cite{SaWi}, and also in the setting of a single weight (see e.g. \cite{DilWiWi} and \cite{KaLiPeWa}). The purpose of this paper is to prove a \emph{two weight} $T1$ theorem for general Calder\'{o}n-Zygmund operators on weighted $L^{p}\left( \mathbb{R}^{n}\right) $ spaces with $1<p<\infty $, in the special case when the measures are both doubling. In view of the $L^{2}$ result in \cite{LaSaShUr3},\cite{Lac} one might conjecture that the Hilbert transform $H$ is bounded from $L^{p}\left( \sigma \right) $ to $L^{p}\left( \omega \right) $ with general locally finite positive Borel measures $\sigma $ and $\omega $ if and only if the local testing conditions for $H$, \begin{equation*} \int_{I}\left\vert H\mathbf{1}_{I}\sigma \right\vert ^{p}d\omega \lesssim \left\vert I\right\vert _{\sigma }\text{ and }\int_{I}\left\vert H\mathbf{1} _{I}\omega \right\vert ^{p}d\sigma \lesssim \left\vert I\right\vert _{\omega }, \end{equation*} both hold, along with the tailed Muckenhoupt $\mathcal{A}_{p}$ conditions, \begin{equation*} \left( \int_{I}\frac{\left\vert I\right\vert }{\left[ \left\vert I\right\vert +\limfunc{dist}\left( x,I\right) \right] ^{p}}d\omega \right) ^{ \frac{1}{p}}\left( \frac{\left\vert I\right\vert _{\sigma }}{\left\vert I\right\vert }\right) ^{\frac{1}{p^{\prime }}}\lesssim 1\text{ and }\left( \frac{\left\vert I\right\vert _{\omega }}{\left\vert I\right\vert }\right) ^{ \frac{1}{p}}\left( \int_{I}\frac{\left\vert I\right\vert }{\left[ \left\vert
I\right\vert +\limfunc{dist}\left( x,I\right) \right] ^{p^{\prime }}}d\sigma \right) ^{\frac{1}{p^{\prime }}}\lesssim 1. \end{equation*} In fact this conjecture was already made in \cite[see Conjecture 1.8]{LaSaUr1}, where the case of maximal singular integrals was treated when one of the measures was doubling, but with more complicated testing conditions. While we do not know if this conjecture is true, another more likely conjecture, but difficult nonetheless, has been put forward by Hyt\"{o}nen and Vuorinen \cite[pages 16--18]{HyVu}; see also \cite{Vuo} and \cite{Vuo2}. Namely, they conjecture that $H$ is bounded from $L^{p}\left( \sigma \right) $ to $L^{p}\left( \omega \right) $ if and only if certain \emph{quadratic} interval testing conditions for $H$ hold, along with corresponding \emph{quadratic} Muckenhoupt conditions and a \emph{quadratic} weak boundedness property. Here `quadratic' refers to $\ell ^{2}$-valued extensions of the familiar scalar conditions. More generally, these quadratic conditions can be formulated for fractional singular integrals $T^{\lambda }$ in higher dimensions in a straightforward way. We emphasize that our doubling assumptions are in part offset by the fact that we characterize boundedness for \textbf{all} (Stein-elliptic) Calder\'{o}n-Zygmund operators, and in part by the fact that we have obtained a two weight $T1$ theorem for $p\neq 2$ (for the first time).
If one considers a matrix of Calder\'{o}n-Zygmund operators and weight pairs such as, \begin{center} \frame{$ \begin{array}{cccccc} & T=Hilbert & T=Cauchy & T=Beurling & T=Riesz & T=General \\ \sigma ,\omega \in A_{p} & \ast & \ast & \ast & \ast & \ast \\ \sigma ,\omega \in A_{\infty } & \ast & \ast & \ast & \ast & \ast \\ \sigma ,\omega \in doubling & \ast & \ast & \ast & \ast & \limfunc{known}\ \text{for }1<p<\infty \\ \sigma ,\omega \in Borel & \limfunc{known}\ \text{for }p=2 & & & & \end{array} $}, \end{center} two features stand out, \begin{enumerate} \item for general (locally finite positive) Borel measures, a two weight $T1$ characterization for $1<p<\infty $ has been found in this matrix \textbf{ only }for the Hilbert transform when $p=2$, \item for general (Stein-elliptic) Calder\'{o}n-Zygmund operators, a two weight $T1$ characterization for $1<p<\infty $ has been found in this matrix \textbf{only} for pairs of doubling measures. \end{enumerate} The starred entries in the matrix correspond to $T1$ characterizations that hold by virtue of the $\limfunc{known}$ results, and the blank entries remain unknown at this time. Of course there are other geometric restrictions on the measures that give rise to a $T1$ theorem, and these can be found in the references at the end of this paper. On the other hand, it appears quite challenging to find a natural class of measures $\mathcal{M}$, more general than doubling measures, for which a $T1$ theorem can be obtained for all $1<p<\infty $, all (Stein-elliptic) Calder\'{o}n-Zygmund operators, and all measure pairs in $\mathcal{M}\times \mathcal{M}$. 
\subsection{Quadratic conditions of Hyt\"{o}nen and Vuorinen} For a $\lambda $-fractional singular integral operator $T^{\lambda }$ on $ \mathbb{R}^{n}$, and locally finite positive Borel measures $\sigma $ and $ \omega $, let $T_{\sigma }^{\lambda }f=T^{\lambda }\left( fd\sigma \right) $ and $T_{\omega }^{\lambda ,\ast }g=T^{\lambda ,\ast }\left( gd\omega \right) $ (see below for definitions). The \emph{quadratic} cube testing conditions of Hyt\"{o}nen and Vuorinen are \begin{eqnarray} \left\Vert \left( \sum_{i=1}^{\infty }\left\vert a_{i}\mathbf{1} _{I_{i}}T_{\sigma }^{\lambda }\mathbf{1}_{I_{i}}\right\vert ^{2}\right) ^{ \frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) } &\leq &\mathfrak{T} _{T^{\lambda },p}^{\limfunc{quad}}\left( \sigma ,\omega \right) \left\Vert \left( \sum_{i=1}^{\infty }\left\vert a_{i}\mathbf{1}_{I_{i}}\right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) }, \label{quad HV} \\ \left\Vert \left( \sum_{i=1}^{\infty }\left\vert a_{i}\mathbf{1} _{I_{i}}T_{\omega }^{\lambda ,\ast }\mathbf{1}_{I_{i}}\right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p^{\prime }}\left( \sigma \right) } &\leq &\mathfrak{T}_{T^{\lambda ,\ast },p^{\prime }}^{\limfunc{quad} }\left( \omega ,\sigma \right) \left\Vert \left( \sum_{i=1}^{\infty }\left\vert a_{i}\mathbf{1}_{I_{i}}\right\vert ^{2}\right) ^{\frac{1}{2} }\right\Vert _{L^{p^{\prime }}\left( \omega \right) }, \notag \end{eqnarray} taken over all sequences $\left\{ I_{i}\right\} _{i=1}^{\infty }$ and $ \left\{ a_{i}\right\} _{i=1}^{\infty }$ of cubes and numbers respectively. 
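Although not spelled out at this point in the text, it may help to record how the quadratic testing conditions relate to their scalar counterparts: taking a single nonzero coefficient in (\ref{quad HV}) collapses the quadratic condition to a local testing inequality. A sketch of the computation:

```latex
% Specialize (quad HV) to a_1 = 1, I_1 = I, and a_i = 0 for i >= 2:
\left\Vert \mathbf{1}_{I}T_{\sigma }^{\lambda }\mathbf{1}_{I}\right\Vert
_{L^{p}\left( \omega \right) }
\leq \mathfrak{T}_{T^{\lambda },p}^{\limfunc{quad}}\left( \sigma ,\omega \right)
\left\Vert \mathbf{1}_{I}\right\Vert _{L^{p}\left( \sigma \right) }
=\mathfrak{T}_{T^{\lambda },p}^{\limfunc{quad}}\left( \sigma ,\omega \right)
\left\vert I\right\vert _{\sigma }^{\frac{1}{p}}.
```

Thus the scalar local testing constants introduced later in this section are dominated by the quadratic ones, and the dual case, with $p^{\prime }$ and the pair $\left( \omega ,\sigma \right) $, is identical.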
The corresponding quadratic \emph{global} cube testing constants $\mathfrak{T }_{T^{\lambda },p}^{\limfunc{quad},\func{global}}\left( \sigma ,\omega \right) $ and $\mathfrak{T}_{T^{\lambda ,\ast },p^{\prime }}^{\limfunc{quad}, \func{global}}\left( \omega ,\sigma \right) $ are defined as in (\ref{quad HV}), but \emph{without} the indicator $\mathbf{1}_{I_{i}}$ outside the operator, namely with $\mathbf{1}_{I_{i}}T_{\sigma }^{\lambda }\mathbf{1} _{I_{i}}$ replaced by $T_{\sigma }^{\lambda }\mathbf{1}_{I_{i}}$. The \emph{ quadratic} Muckenhoupt conditions of Hyt\"{o}nen and Vuorinen are \begin{eqnarray} \left\Vert \left( \sum_{i=1}^{\infty }\left\vert \int_{\mathbb{R} ^{n}\setminus I_{i}}\frac{f_{i}\left( y\right) }{\left\vert y-c_{i}\right\vert ^{n-\lambda }}d\sigma \left( y\right) \right\vert ^{2} \mathbf{1}_{I_{i}}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) } &\leq &\mathcal{A}_{p}^{\lambda ,\limfunc{quad}}\left( \sigma ,\omega \right) \left\Vert \left( \sum_{i=1}^{\infty }\left\vert f_{i}\right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) }, \label{quad Muck} \\ \left\Vert \left( \sum_{i=1}^{\infty }\left\vert \int_{\mathbb{R} ^{n}\setminus I_{i}}\frac{f_{i}\left( y\right) }{\left\vert y-c_{i}\right\vert ^{n-\lambda }}d\omega \left( y\right) \right\vert ^{2} \mathbf{1}_{I_{i}}\right) ^{\frac{1}{2}}\right\Vert _{L^{p^{\prime }}\left( \sigma \right) } &\leq &\mathcal{A}_{p^{\prime }}^{\lambda ,\limfunc{quad} }\left( \omega ,\sigma \right) \left\Vert \left( \sum_{i=1}^{\infty }\left\vert f_{i}\right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p^{\prime }}\left( \omega \right) }, \notag \end{eqnarray} taken over all sequences $\left\{ I_{i}\right\} _{i=1}^{\infty }$ and $ \left\{ f_{i}\right\} _{i=1}^{\infty }$ of cubes and functions respectively. 
Note that $\mathcal{A}_{p}^{\lambda ,\limfunc{quad}}\left( \sigma ,\omega \right) $ is homogeneous of degree $1$ in the measure pair $\left( \sigma ,\omega \right) $, as opposed to the usual formulation with degree $2$. Finally, the \emph{quadratic} weak boundedness property of Hyt\"{o}nen and Vuorinen (not so named in \cite{HyVu}) is \begin{eqnarray} &&\sum_{i=1}^{\infty }\left\vert \int_{\mathbb{R}^{n}}a_{i}T_{\sigma }^{\lambda }\mathbf{1}_{I_{i}}\left( x\right) b_{i}\mathbf{1}_{J\left( I_{i}\right) }\left( x\right) d\omega \left( x\right) \right\vert \label{WBP HV} \\ &\leq &\mathcal{WBP}_{T^{\lambda },p}^{\limfunc{quad}}\left( \sigma ,\omega \right) \left\Vert \left( \sum_{i=1}^{\infty }\left\vert a_{i}\mathbf{1} _{I_{i}}\right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert \left( \sum_{i=1}^{\infty }\left\vert b_{i}\mathbf{1}_{J\left( I_{i}\right) }\right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p^{\prime }}\left( \omega \right) }, \notag \end{eqnarray} taken over all sequences $\left\{ I_{i}\right\} _{i=1}^{\infty }$, $\left\{ J\left( I_{i}\right) \right\} _{i=1}^{\infty }$, $\left\{ a_{i}\right\} _{i=1}^{\infty }$ and $\left\{ b_{i}\right\} _{i=1}^{\infty }$ of cubes and numbers respectively, where $J\left( I_{i}\right) $ denotes any cube adjacent to $I_{i}$ with the same side length.
If the Calder\'{o}n-Zygmund operator $T^{\lambda }$ is bounded from $L^{p}\left( \sigma \right) $ to $L^{p}\left( \omega \right) $, then the Hilbert space valued extension $\left( T^{\lambda }\right) ^{\ell ^{2}}$ is bounded from $L^{p}\left( \sigma ;\ell ^{2}\right) $ to $L^{p}\left( \omega ;\ell ^{2}\right) $, and it is now not hard to see that \begin{eqnarray*} &&\mathfrak{T}_{T^{\lambda },p}^{\limfunc{quad},\func{global}}\left( \sigma ,\omega \right) +\mathfrak{T}_{T^{\lambda ,\ast },p^{\prime }}^{\limfunc{quad},\func{global}}\left( \omega ,\sigma \right) +\mathcal{A}_{p}^{\lambda , \limfunc{quad}}\left( \sigma ,\omega \right) +\mathcal{A}_{p^{\prime }}^{\lambda ,\limfunc{quad}}\left( \omega ,\sigma \right) \\ &&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\mathcal{WBP}_{T^{\lambda },p}^{ \limfunc{quad}}\left( \sigma ,\omega \right) \lesssim \mathfrak{N} _{T^{\lambda },p}\left( \sigma ,\omega \right) , \end{eqnarray*} where $\mathfrak{N}_{T^{\lambda },p}\left( \sigma ,\omega \right) $ denotes the operator norm of $T^{\lambda }$ from $L^{p}\left( \sigma \right) $ to $L^{p}\left( \omega \right) $; see (\ref{two weight'}) below for the general definition. For now, the conjecture of Hyt\"{o}nen and Vuorinen for the Hilbert transform remains open, but we settle here in the affirmative the boundedness question for $H$, and more generally for smooth Stein elliptic $\lambda $-fractional Calder\'{o}n-Zygmund operators $T^{\lambda }$ on $\mathbb{R}^{n}$, in the case that the measures $\sigma $ and $\omega $ are both \emph{doubling}. Moreover, we use certain `logically weaker' quadratic conditions which we now describe.
\subsection{Weaker quadratic conditions for doubling measures} First we will use local scalar testing conditions, \begin{eqnarray} \left\Vert \mathbf{1}_{I}T_{\sigma }^{\lambda }\mathbf{1}_{I}\right\Vert _{L^{p}\left( \omega \right) } &\leq &\mathfrak{T}_{T^{\lambda },p}\left( \sigma ,\omega \right) \left\vert I\right\vert _{\sigma }^{\frac{1}{p}}, \label{test} \\ \left\Vert \mathbf{1}_{I}T_{\omega }^{\lambda ,\ast }\mathbf{1} _{I}\right\Vert _{L^{p^{\prime }}\left( \sigma \right) } &\leq &\mathfrak{T} _{T^{\lambda ,\ast },p^{\prime }}\left( \omega ,\sigma \right) \left\vert I\right\vert _{\omega }^{\frac{1}{p^{\prime }}}, \notag \end{eqnarray} which do not involve any vector-valued extensions. Second, we use quadratic \emph{offset} Muckenhoupt conditions given by \begin{eqnarray} \left\Vert \left( \sum_{i=1}^{\infty }\left\vert a_{i}\frac{ \min_{I_{i}^{\ast }}\left\vert I_{i}^{\ast }\right\vert _{\sigma }}{ \left\vert I_{i}\right\vert ^{1-\frac{\lambda }{n}}}\right\vert ^{2}\mathbf{1 }_{I_{i}}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) } &\leq &A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert \left( \sum_{i=1}^{\infty }\left\vert a_{i}\right\vert ^{2}\mathbf{1}_{I_{i}}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) }, \label{quad A2 tailless} \\ \left\Vert \left( \sum_{i=1}^{\infty }\left\vert a_{i}\frac{ \min_{I_{i}^{\ast }}\left\vert I_{i}^{\ast }\right\vert _{\omega }}{ \left\vert I_{i}\right\vert ^{1-\frac{\lambda }{n}}}\right\vert ^{2}\mathbf{1 }_{I_{i}}\right) ^{\frac{1}{2}}\right\Vert _{L^{p^{\prime }}\left( \sigma \right) } &\leq &A_{p^{\prime }}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \omega ,\sigma \right) \left\Vert \left( \sum_{i=1}^{\infty }\left\vert a_{i}\right\vert ^{2}\mathbf{1}_{I_{i}}\right) ^{\frac{1}{2}}\right\Vert _{L^{p^{\prime }}\left( \omega \right) }, \notag \end{eqnarray} where for each $i$, the minimums are taken over the finitely many dyadic cubes $I_{i}^{\ast }$
such that $\ell \left( I_{i}^{\ast }\right) =\ell \left( I_{i}\right) $ and $\limfunc{dist}\left( I_{i}^{\ast },I_{i}\right) \leq C_{0}\ell \left( I_{i}\right) $ for some positive constant $C_{0}$ \footnote{ In applications one takes $C_{0}$ sufficiently large depending on the Stein elliptic constant for the operator $T^{\lambda }$. But if $\sigma $ is doubling the condition doesn't depend on $C_{0}$.}. Of course, when the measures are doubling, we may take $I_{i}^{\ast }=I_{i}$ so that (\ref{quad A2 tailless}) is equivalent to the following condition of Vuorinen \cite {Vuo2} that was introduced in the context of dyadic shifts, \begin{eqnarray} \left\Vert \left( \sum_{i=1}^{\infty }\left\vert a_{i}\frac{\left\vert I_{i}\right\vert _{\sigma }}{\left\vert I_{i}\right\vert ^{1-\frac{\lambda }{ n}}}\right\vert ^{2}\mathbf{1}_{I_{i}}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) } &\lesssim &A_{p}^{\lambda ,\ell ^{2},\limfunc{ offset}}\left( \sigma ,\omega \right) \left\Vert \left( \sum_{i=1}^{\infty }\left\vert a_{i}\right\vert ^{2}\mathbf{1}_{I_{i}}\right) ^{\frac{1}{2} }\right\Vert _{L^{p}\left( \sigma \right) }, \label{quad A2 tailless'} \\ \left\Vert \left( \sum_{i=1}^{\infty }\left\vert a_{i}\frac{\left\vert I_{i}\right\vert _{\omega }}{\left\vert I_{i}\right\vert ^{1-\frac{\lambda }{ n}}}\right\vert ^{2}\mathbf{1}_{I_{i}}\right) ^{\frac{1}{2}}\right\Vert _{L^{p^{\prime }}\left( \sigma \right) } &\lesssim &A_{p^{\prime }}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \omega ,\sigma \right) \left\Vert \left( \sum_{i=1}^{\infty }\left\vert a_{i}\right\vert ^{2}\mathbf{1} _{I_{i}}\right) ^{\frac{1}{2}}\right\Vert _{L^{p^{\prime }}\left( \omega \right) }. 
\notag \end{eqnarray} We prove below that the offset constants $A_{p}^{\lambda ,\ell ^{2},\limfunc{ offset}}\left( \sigma ,\omega \right) $ in (\ref{quad A2 tailless}) are necessary for the norm inequality $\left\Vert T_{\sigma }^{\lambda }f\right\Vert _{L^{p}\left( \omega \right) }\leq \mathfrak{N}_{T^{\lambda }}\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }$ when $\sigma $ and $\omega $ are doubling. Here we simply note that using the Fefferman-Stein vector-valued inequality for the maximal function $M_{\sigma }$ on a space of homogeneous type $\left( \mathbb{R} ^{n},\left\vert \cdot \right\vert ,\sigma \right) $ \cite{GrLiYa}, we see that $A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) $ is smaller than $\mathcal{A}_{p}^{\lambda ,\ell ^{2},\limfunc{quad} }\left( \sigma ,\omega \right) $ for doubling measures because \begin{equation*} \frac{\left\vert I_{i}^{\ast }\right\vert _{\sigma }}{\left\vert I_{i}\right\vert ^{1-\frac{\lambda }{n}}}\lesssim \int_{\mathbb{R} ^{n}\setminus I_{i}}\frac{M_{\sigma }\mathbf{1}_{I_{i}^{\ast }}\left( y\right) }{\left\vert y-c_{i}\right\vert ^{n-\lambda }}d\sigma \left( y\right) ,\ \ \ \ \ \text{when }I_{i}^{\ast }\cap I_{i}=\emptyset . \end{equation*} Such use of the Fefferman-Stein vector-valued inequality occurs frequently in the sequel. Note again that $A_{p}^{\lambda ,\ell ^{2},\limfunc{offset} }\left( \sigma ,\omega \right) $ is homogeneous of degree $1$ in the measure pair $\left( \sigma ,\omega \right) $. 
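To verify the displayed pointwise bound, here is a short computation, in which the implicit constant depends only on $n$, $\lambda $ and $C_{0}$ (the case $\left\vert I_{i}^{\ast }\right\vert _{\sigma }=0$ being trivial). Since $I_{i}^{\ast }\cap I_{i}=\emptyset $, we have $I_{i}^{\ast }\subset \mathbb{R}^{n}\setminus I_{i}$; moreover $M_{\sigma }\mathbf{1}_{I_{i}^{\ast }}\geq 1$ on $I_{i}^{\ast }$, and $\left\vert y-c_{i}\right\vert \leq \left( C_{0}+2\sqrt{n}\right) \ell \left( I_{i}\right) $ for $y\in I_{i}^{\ast }$ by the distance and side length restrictions on $I_{i}^{\ast }$. Hence
\begin{equation*}
\int_{\mathbb{R}^{n}\setminus I_{i}}\frac{M_{\sigma }\mathbf{1}_{I_{i}^{\ast }}\left( y\right) }{\left\vert y-c_{i}\right\vert ^{n-\lambda }}d\sigma \left( y\right) \geq \int_{I_{i}^{\ast }}\frac{d\sigma \left( y\right) }{\left\vert y-c_{i}\right\vert ^{n-\lambda }}\gtrsim \frac{\left\vert I_{i}^{\ast }\right\vert _{\sigma }}{\ell \left( I_{i}\right) ^{n-\lambda }}=\frac{\left\vert I_{i}^{\ast }\right\vert _{\sigma }}{\left\vert I_{i}\right\vert ^{1-\frac{\lambda }{n}}}\ .
\end{equation*}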
Third, we use an extension of the weak boundedness property (\ref{WBP HV}) of Hyt\"{o}nen and Vuorinen given by \begin{eqnarray} &&\sum_{i=1}^{\infty }\sum_{I_{i}^{\ast }\in \func{Adj}\left( I_{i}\right) }\left\vert \int_{\mathbb{R}^{n}}a_{i}T_{\sigma }^{\lambda }\mathbf{1} _{I_{i}}\left( x\right) b_{i}^{\ast }\mathbf{1}_{I_{i}^{\ast }}\left( x\right) d\omega \left( x\right) \right\vert \label{WBP} \\ &\leq &\mathcal{WBP}_{T^{\lambda },p}^{\ell ^{2}}\left( \sigma ,\omega \right) \left\Vert \left( \sum_{i=1}^{\infty }\left\vert a_{i}\mathbf{1} _{I_{i}}\right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert \left( \sum_{i=1}^{\infty }\sum_{I_{i}^{\ast }\in \func{Adj}\left( I_{i}\right) }\left\vert b_{i}^{\ast }\mathbf{1} _{I_{i}^{\ast }}\right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p^{\prime }}\left( \omega \right) }, \notag \end{eqnarray} where for $I\in \mathcal{D}$, its \emph{adjacent} cubes are defined by \begin{equation*} \func{Adj}\left( I\right) \equiv \left\{ I^{\ast }\in \mathcal{D}:I^{\ast }\cap I\neq \emptyset \text{ and }\ell \left( I^{\ast }\right) =\ell \left( I\right) \right\} . \end{equation*} Condition (\ref{WBP HV}) differs from (\ref{WBP}) only in that $I^{\ast }=I$ is excluded. 
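We note in passing that $\func{Adj}\left( I\right) $ is a bounded collection: a dyadic cube $I^{\ast }$ with $\ell \left( I^{\ast }\right) =\ell \left( I\right) $ and $\overline{I^{\ast }}\cap \overline{I}\neq \emptyset $ is necessarily one of the $3^{n}$ translates
\begin{equation*}
I^{\ast }=I+\ell \left( I\right) \varepsilon ,\ \ \ \ \ \varepsilon \in \left\{ -1,0,1\right\} ^{n},
\end{equation*}
since the dyadic cubes of a fixed side length tile $\mathbb{R}^{n}$. Thus $\#\func{Adj}\left( I\right) \leq 3^{n}$, and the inner sum in (\ref{WBP}) contains a bounded number of terms for each $i$.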
Finally, we also define the stronger quadratic \emph{triple} testing constants by \begin{eqnarray} \left\Vert \left( \sum_{i=1}^{\infty }\left( a_{i}\mathbf{1} _{3I_{i}}T_{\sigma }^{\lambda }\mathbf{1}_{I_{i}}\right) ^{2}\right) ^{\frac{ 1}{2}}\right\Vert _{L^{p}\left( \omega \right) } &\leq &\mathfrak{T} _{T^{\lambda },p}^{\ell ^{2},\limfunc{triple}}\left( \sigma ,\omega \right) \left\Vert \left( \sum_{i=1}^{\infty }\left( a_{i}\mathbf{1}_{I_{i}}\right) ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) }, \label{quad triple test} \\ \left\Vert \left( \sum_{i=1}^{\infty }\left( a_{i}\mathbf{1} _{3I_{i}}T_{\omega }^{\lambda ,\ast }\mathbf{1}_{I_{i}}\right) ^{2}\right) ^{ \frac{1}{2}}\right\Vert _{L^{p^{\prime }}\left( \sigma \right) } &\leq & \mathfrak{T}_{T^{\lambda ,\ast },p^{\prime }}^{\ell ^{2},\limfunc{triple} }\left( \omega ,\sigma \right) \left\Vert \left( \sum_{i=1}^{\infty }\left( a_{i}\mathbf{1}_{I_{i}}\right) ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p^{\prime }}\left( \omega \right) }. \notag \end{eqnarray} \subsection{Statement of the main theorem} Denote by $\Omega _{\limfunc{dyad}}$ the collection of all dyadic grids in $ \mathbb{R}^{n}$, and let $\mathcal{Q}^{n}$ denote the collection of all cubes in $\mathbb{R}^{n}$ having sides parallel to the coordinate axes. A positive locally finite Borel measure $\mu $ on $\mathbb{R}^{n}$ is said to be doubling if there is a constant $C_{\limfunc{doub}}$, called the doubling constant, such that \begin{equation*} \left\vert 2Q\right\vert _{\mu }\leq C_{\limfunc{doub}}\left\vert Q\right\vert _{\mu }\ ,\ \ \ \ \ \text{for all cubes }Q\in \mathcal{Q}^{n}. 
\end{equation*} For $0\leq \lambda <n$ we define a smooth $\lambda $-fractional Calder\'{o} n-Zygmund kernel $K^{\lambda }(x,y)$ to be a function $K^{\lambda }:\mathbb{R }^{n}\times \mathbb{R}^{n}\rightarrow \mathbb{R}$ satisfying the following fractional size and smoothness conditions \begin{equation} \left\vert \nabla _{x}^{j}K^{\lambda }\left( x,y\right) \right\vert +\left\vert \nabla _{y}^{j}K^{\lambda }\left( x,y\right) \right\vert \leq C_{\lambda ,j}\left\vert x-y\right\vert ^{\lambda -j-n},\ \ \ \ \ 0\leq j<\infty , \label{sizeandsmoothness'} \end{equation} and we denote by $T^{\lambda }$ the associated $\lambda $-fractional singular integral on $\mathbb{R}^{n}$. \subsubsection{Defining the norm inequality\label{Subsubsection norm}} As in \cite[see page 314]{SaShUr9}, we introduce a family $\left\{ \eta _{\delta ,R}^{\lambda }\right\} _{0<\delta <R<\infty }$ of smooth nonnegative functions on $\left[ 0,\infty \right) $ so that the truncated kernels $K_{\delta ,R}^{\lambda }\left( x,y\right) =\eta _{\delta ,R}^{\lambda }\left( \left\vert x-y\right\vert \right) K^{\lambda }\left( x,y\right) $ are bounded with compact support for fixed $x$ or $y$, and uniformly satisfy (\ref{sizeandsmoothness'}). Then the truncated operators \begin{equation*} T_{\sigma ,\delta ,R}^{\lambda }f\left( x\right) \equiv \int_{\mathbb{R} ^{n}}K_{\delta ,R}^{\lambda }\left( x,y\right) f\left( y\right) d\sigma \left( y\right) ,\ \ \ \ \ x\in \mathbb{R}^{n}, \end{equation*} are pointwise well-defined when $f$ is bounded with compact support, and we will refer to the pair $\left( K^{\lambda },\left\{ \eta _{\delta ,R}^{\lambda }\right\} _{0<\delta <R<\infty }\right) $ as a $\lambda $ -fractional singular integral operator, which we typically denote by $ T^{\lambda }$, suppressing the dependence on the truncations. 
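For concreteness, one admissible choice of such a family (by no means the only one) is
\begin{equation*}
\eta _{\delta ,R}^{\lambda }\left( t\right) \equiv \varphi \left( \frac{t}{\delta }\right) \left( 1-\varphi \left( \frac{t}{R}\right) \right) ,\ \ \ \ \ 0<\delta <R<\infty ,
\end{equation*}
where $\varphi \in C^{\infty }\left( [0,\infty )\right) $ satisfies $0\leq \varphi \leq 1$, $\varphi =0$ on $\left[ 0,1\right] $ and $\varphi =1$ on $[2,\infty )$. Then $K_{\delta ,R}^{\lambda }$ vanishes off the annulus $\delta \leq \left\vert x-y\right\vert \leq 2R$, and since each derivative falling on $\eta _{\delta ,R}^{\lambda }\left( \left\vert x-y\right\vert \right) $ produces a factor $O\left( \left\vert x-y\right\vert ^{-1}\right) $ on its support, the truncated kernels satisfy (\ref{sizeandsmoothness'}) uniformly in $\delta $ and $R$, with constants depending also on $\varphi $.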
For $ 1<p<\infty $, we say that a $\lambda $-fractional singular integral operator $T^{\lambda }=\left( K^{\lambda },\left\{ \eta _{\delta ,R}^{\lambda }\right\} _{0<\delta <R<\infty }\right) $ satisfies the norm inequality \begin{equation} \left\Vert T_{\sigma }^{\lambda }f\right\Vert _{L^{p}\left( \omega \right) }\leq \mathfrak{N}_{T^{\lambda }}\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) },\ \ \ \ \ f\in L^{p}\left( \sigma \right) , \label{two weight'} \end{equation} where $\mathfrak{N}_{T^{\lambda }}\left( \sigma ,\omega \right) $ denotes the best constant in (\ref{two weight'}), provided \begin{equation*} \left\Vert T_{\sigma ,\delta ,R}^{\lambda }f\right\Vert _{L^{p}\left( \omega \right) }\leq \mathfrak{N}_{T^{\lambda }}\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) },\ \ \ \ \ f\in L^{p}\left( \sigma \right) ,0<\delta <R<\infty . \end{equation*} In the presence of the classical Muckenhoupt condition $A_{p}^{\alpha }$, it can be easily shown that the norm inequality is independent of the choice of truncations used - see e.g. \cite{LaSaShUr3} where rough operators are treated in the case $p=2$, but the proofs can be modified. We can now state our main theorem. \begin{theorem} \label{main}Suppose that $1<p<\infty $, that $\sigma $ and $\omega $ are locally finite positive Borel measures on $\mathbb{R}^{n}$, and that $ T^{\lambda }$ is a smooth $\lambda $-fractional singular integral operator on $\mathbb{R}^{n}$. Denote by $\mathfrak{N}_{T^{\lambda },p}\left( \sigma ,\omega \right) $ the smallest constant $C$ in the two weight norm inequality \begin{equation} \left\Vert T_{\sigma }^{\lambda }f\right\Vert _{L^{p}\left( \omega \right) }\leq C\left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }. 
\label{2 wt norm} \end{equation} \begin{enumerate} \item Then \begin{equation*} \mathfrak{T}_{T^{\lambda },p}\left( \sigma ,\omega \right) +\mathfrak{T} _{T^{\lambda ,\ast },p^{\prime }}\left( \omega ,\sigma \right) +\mathcal{WBP} _{T^{\lambda },p}^{\ell ^{2}}\left( \sigma ,\omega \right) \leq \mathfrak{T} _{T^{\lambda },p}^{\ell ^{2},\limfunc{triple}}\left( \sigma ,\omega \right) + \mathfrak{T}_{T^{\lambda ,\ast },p^{\prime }}^{\ell ^{2},\limfunc{triple} }\left( \omega ,\sigma \right) \leq \mathfrak{N}_{T^{\lambda },p}\left( \sigma ,\omega \right) , \end{equation*} and when $T^{\lambda }$ is Stein elliptic, we also have \begin{equation*} A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) +A_{p^{\prime }}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \omega ,\sigma \right) \lesssim \mathfrak{T}_{T^{\lambda },p}^{\ell ^{2},\limfunc{triple} }\left( \sigma ,\omega \right) +\mathfrak{T}_{T^{\lambda ,\ast },p^{\prime }}^{\ell ^{2},\limfunc{triple}}\left( \omega ,\sigma \right) . \end{equation*} \item Suppose in addition that $\sigma $ and $\omega $ are doubling measures on $\mathbb{R}^{n}$. Then the two weight norm inequality (\ref{2 wt norm}) holds provided the quadratic weak boundedness property (\ref{WBP}) holds, and the scalar testing conditions (\ref{test}) hold, and the quadratic offset Muckenhoupt conditions (\ref{quad A2 tailless}) hold; and moreover in this case we have \begin{eqnarray} \mathfrak{N}_{T^{\lambda },p}\left( \sigma ,\omega \right) &\lesssim & \mathfrak{T}_{T^{\lambda },p}\left( \sigma ,\omega \right) +\mathfrak{T} _{T^{\lambda ,\ast },p^{\prime }}\left( \omega ,\sigma \right) +\mathcal{WBP} _{T^{\lambda },p}^{\ell ^{2}}\left( \sigma ,\omega \right) \label{main inequ} \\ &&+A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) +A_{p^{\prime }}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \omega ,\sigma \right) . 
\notag
\end{eqnarray}

\item Suppose in addition that $\sigma $ and $\omega $ are doubling measures on $\mathbb{R}^{n}$ and $T^{\lambda }$ is Stein elliptic. Then the two weight norm inequality (\ref{2 wt norm}) holds \emph{if and only if} the quadratic \emph{triple} testing conditions (\ref{quad triple test}) hold, and moreover,
\begin{equation*}
\mathfrak{N}_{T^{\lambda },p}\left( \sigma ,\omega \right) \approx \mathfrak{T}_{T^{\lambda },p}^{\ell ^{2},\limfunc{triple}}\left( \sigma ,\omega \right) +\mathfrak{T}_{T^{\lambda ,\ast },p^{\prime }}^{\ell ^{2},\limfunc{triple}}\left( \omega ,\sigma \right) .
\end{equation*}
\end{enumerate}
\end{theorem}

The constants on the right hand side of (\ref{main inequ}) represent the most `elementary' constants we were able to find that characterize the norm of the two weight inequality for doubling measures when $p\neq 2$.

\begin{remark}
In the case of equal measures $\sigma =\omega $, the quadratic $A_{p}^{\lambda ,\ell ^{2}}$ and $A_{p}^{\lambda ,\ell ^{2},\func{offset}}$ conditions trivially reduce to the scalar $A_{p}^{\lambda }$ and $A_{p}^{\lambda ,\func{offset}}$ conditions respectively. We show in the appendix that $A_{p}^{\lambda ,\ell ^{2},\func{offset}}\left( \sigma ,\omega \right) +A_{p^{\prime }}^{\lambda ,\ell ^{2},\func{offset}}\left( \omega ,\sigma \right) $ is \emph{not} controlled by $A_{p}^{\lambda }\left( \sigma ,\omega \right) $ in general, but the case of doubling measures remains open. We also note that our weak boundedness property (\ref{WBP}) includes the case $I_{i}^{\ast }=I_{i}$, which cannot be removed by surgery, and in any event surgery is difficult to implement in the quadratic setting.
\end{remark}

Part (3) is an easy corollary of parts (1) and (2).
Indeed, it is trivial that $\mathfrak{T}_{T^{\lambda },p}^{\ell ^{2},\limfunc{triple}}\left( \sigma ,\omega \right) +\mathfrak{T}_{T^{\lambda ,\ast },p^{\prime }}^{\ell ^{2},\limfunc{triple}}\left( \omega ,\sigma \right) \lesssim \mathfrak{N}_{T^{\lambda },p}\left( \sigma ,\omega \right) $, and it is a simple exercise to see that for general measures,
\begin{eqnarray*}
&&\mathfrak{T}_{T^{\lambda },p}\left( \sigma ,\omega \right) +\mathfrak{T}_{T^{\lambda ,\ast },p^{\prime }}\left( \omega ,\sigma \right) +\mathcal{WBP}_{T^{\lambda },p}^{\ell ^{2}}\left( \sigma ,\omega \right) \\
&&+A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) +A_{p^{\prime }}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \omega ,\sigma \right) \lesssim \mathfrak{T}_{T^{\lambda },p}^{\ell ^{2},\limfunc{triple}}\left( \sigma ,\omega \right) +\mathfrak{T}_{T^{\lambda ,\ast },p^{\prime }}^{\ell ^{2},\limfunc{triple}}\left( \omega ,\sigma \right) .
\end{eqnarray*}

\begin{notation}
In the interest of reducing notational clutter we will sometimes omit specifying the measure pair, and simply write $\mathfrak{T}_{T^{\lambda },p}$ and $A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}$ in place of $\mathfrak{T}_{T^{\lambda },p}\left( \sigma ,\omega \right) $ and $A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) $ etc., especially when these constants occur in line.
\end{notation}

\section{Organization of the proof}

We follow the overall outline of the proof of the case $p=2$ given in \cite{AlSaUr}, but with a number of adaptations to the use of square functions.
The proof of Theorem \ref{main} is achieved by proving the bilinear form bound, \begin{equation*} \frac{\left\vert \left\langle T_{\sigma }^{\lambda }f,g\right\rangle _{\omega }\right\vert }{\left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }}\lesssim \mathfrak{T}_{T^{\lambda },p}+\mathfrak{T}_{T^{\lambda ,\ast },p^{\prime }}+ \mathcal{WBP}_{T^{\lambda },p}^{\ell ^{2}}+A_{p}^{\lambda ,\ell ^{2}, \limfunc{offset}}+A_{p^{\prime }}^{\lambda ,\ell ^{2},\limfunc{offset}}, \end{equation*} for $\func{good}$ functions $f$ and $g$ in the sense of Nazarov, Treil and Volberg, see \cite{NTV} for the treatment we use here\footnote{ See also \cite[Subsection 3.1]{SaShUr10} for a treatment using finite collections of grids, in which case the conditional probability arguments are elementary.}. Following the weighted Haar expansions as given by Nazarov, Treil and Volberg in \cite{NTV4}, we write $f$ and $g$ in weighted Alpert wavelet expansions, \begin{equation} \left\langle T_{\sigma }^{\lambda }f,g\right\rangle _{\omega }=\left\langle T_{\sigma }^{\lambda }\left( \sum_{I\in \mathcal{D}}\bigtriangleup _{I;\kappa }^{\sigma }f\right) ,\left( \sum_{J\in \mathcal{D}}\bigtriangleup _{J;\kappa }^{\omega }g\right) \right\rangle _{\omega }=\sum_{I\in \mathcal{D }\ \text{and }J\in \mathcal{D}}\left\langle T_{\sigma }^{\lambda }\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) ,\left( \bigtriangleup _{J;\kappa }^{\omega }g\right) \right\rangle _{\omega }\ . \label{expand} \end{equation} The sum is further decomposed, as depicted in the brief schematic diagram below, by first \emph{Cube Size Splitting}, then using the \emph{Shifted Corona Decomposition}, according to the \emph{Canonical Splitting}. All of these `descriptive' expressions will be defined as the proof proceeds. 
Here is the brief schematic diagram as in \cite{AlSaUr}, summarizing the shifted corona decompositions as used in \cite{AlSaUr} and \cite{SaShUr7} for Alpert and Haar wavelet expansions of $f$ and $g$. The parameter $\rho $ is defined below. \begin{equation*} \fbox{$ \begin{array}{ccccccccc} \left\langle T_{\sigma }^{\lambda }f,g\right\rangle _{\omega } & & & & & & & & \\ \downarrow & & & & & & & & \\ \mathsf{B}_{\Subset _{\rho }}\left( f,g\right) & + & \mathsf{B}_{_{\rho }\Supset }\left( f,g\right) & + & \mathsf{B}_{\cap }\left( f,g\right) & + & \mathsf{B}_{\diagup }\left( f,g\right) & + & \mathsf{B}_{\func{Adj},\rho }\left( f,g\right) \\ \downarrow & & \left[ \limfunc{duality}\right] & & \left[ A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\right] & & \left[ A_{p}^{\lambda ,\ell ^{2}, \limfunc{offset}}\right] & & \left[ \mathcal{WBP}_{T^{\lambda },p}^{\ell ^{2}}\right] \\ \mathsf{T}_{\limfunc{diagonal}}\left( f,g\right) & + & \mathsf{T}_{\limfunc{ far}\limfunc{below}}\left( f,g\right) & + & \mathsf{T}_{\limfunc{far} \limfunc{above}}\left( f,g\right) & + & \mathsf{T}_{\limfunc{disjoint} }\left( f,g\right) & & \\ \downarrow & & \downarrow & & \left[ =0\right] & & \left[ =0\right] & & \\ \mathsf{B}_{\Subset _{\mathbf{\rho }}}^{F}\left( f,g\right) & & \mathsf{T}_{ \limfunc{far}\limfunc{below}}^{1}\left( f,g\right) & + & \mathsf{T}_{ \limfunc{far}\limfunc{below}}^{2}\left( f,g\right) & & & & \\ \downarrow & & \left[ A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\right] & & \left[ A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\right] & & & & \\ \mathsf{B}_{\func{stop}}^{F}\left( f,g\right) & + & \mathsf{B}_{\func{ paraproduct}}^{F}\left( f,g\right) & + & \mathsf{B}_{\func{neighbour} }^{F}\left( f,g\right) & + & \mathsf{B}_{\limfunc{commutator}}^{F}\left( f,g\right) & & \\ \left[ A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\right] & & \left[ \mathfrak{T}_{T^{\lambda },p}\right] & & \left[ A_{p}^{\lambda ,\ell ^{2}, \limfunc{offset}}\right] & & \left[ A_{p}^{\lambda ,\ell 
^{2},\limfunc{ offset}}\right] & & \end{array} $} \end{equation*} The condition that is used to control the indicated form is given in square brackets directly underneath. Note that all forms are controlled solely by the quadratic offset Muckenhoupt condition, save for the adjacent form which uses only the weak boundedness property, and the paraproduct form which uses only the scalar testing condition. There is however a notable exception in our treatment here as compared to that in \cite{AlSaUr}. We make no use of a $\kappa $-pivotal condition as was done in \cite{AlSaUr}, and in particular we construct our corona decomposition using only the classical Calder\'{o}n-Zygmund stopping times. Instead, we use the key fact that for a doubling measure $\mu $ and $\kappa \in \mathbb{N}$ chosen sufficiently large depending on the doubling constant of $\mu $, the $\kappa $-Poisson averages reduce to ordinary averages. Indeed, the $\kappa $-Poisson kernel of order $\lambda $ is given by \begin{equation} \mathrm{P}_{\kappa }^{\lambda }\left( Q,\mu \right) \equiv \int_{\mathbb{R} ^{n}}\frac{\ell \left( Q\right) ^{\kappa }}{\left( \ell \left( Q\right) +\left\vert y-c_{Q}\right\vert \right) ^{n+\kappa -\lambda }}d\mu \left( y\right) ,\ \ \ \ \ \kappa \geq 1, \label{def kappa Poisson} \end{equation} and a doubling measure $\mu $ has a `doubling exponent' $\theta >0$ and a positive constant $c$ that satisfy the condition, \begin{equation*} \left\vert 2^{-j}Q\right\vert _{\mu }\geq c2^{-j\theta }\left\vert Q\right\vert _{\mu }\ ,\ \ \ \ \ \text{for all }j\in \mathbb{N}. 
\end{equation*} Thus if $\mu $ has doubling exponent $\theta $ and $\kappa >\theta +\lambda -n$, we have \begin{eqnarray} &&\mathrm{P}_{\kappa }^{\lambda }\left( Q,\mu \right) =\int_{\mathbb{R}^{n}} \frac{\ell \left( Q\right) ^{\kappa }}{\left( \ell \left( Q\right) +\left\vert x-c_{Q}\right\vert \right) ^{n+\kappa -\lambda }}d\mu \left( x\right) \label{kappa large} \\ &=&\ell \left( Q\right) ^{\lambda -n}\left\{ \int_{Q}+\sum_{j=1}^{\infty }\int_{2^{j}Q\setminus 2^{j-1}Q}\right\} \frac{1}{\left( 1+\frac{\left\vert x-c_{Q}\right\vert }{\ell \left( Q\right) }\right) ^{n+\kappa -\lambda }} d\mu \left( x\right) \notag \\ &\approx &\left\vert Q\right\vert ^{\frac{\lambda }{n}-1}\sum_{j=0}^{\infty }2^{-j\left( n+\kappa -\lambda \right) }\left\vert 2^{j}Q\right\vert _{\mu }\approx \left\vert Q\right\vert ^{\frac{\lambda }{n}-1}\sum_{j=0}^{\infty }2^{-j\left( n+\kappa -\lambda \right) }\frac{1}{c2^{-j\theta }}\left\vert Q\right\vert _{\mu }\approx C_{n,\kappa ,\lambda ,\theta }\left\vert Q\right\vert ^{\frac{\lambda }{n}-1}\left\vert Q\right\vert _{\mu }\ . \notag \end{eqnarray} \begin{remark} The doubling exponents for $\sigma $ and $\omega $ need not be the same, and so we could introduce separate constants $\kappa _{1}$ and $\kappa _{2}$ corresponding to $\sigma $ and $\omega $ respectively. The interested reader can easily modify the results and proofs in this paper to accommodate such constants. In this paper we take $\kappa =\max \left\{ \kappa _{1},\kappa _{2}\right\} $. \end{remark} We now turn to defining the decompositions of the bilinear form $ \left\langle T_{\sigma }^{\lambda }f,g\right\rangle _{\omega }$ used in the schematic diagram above. For this we first need some preliminaries. We introduce parameters $r,\varepsilon ,\rho ,\tau $ as in \cite{AlSaUr} and \cite{SaShUr7}. 
We will choose $\varepsilon >0$ sufficiently small later in the argument, and then $r$ must be chosen sufficiently large depending on $ \varepsilon $ in order to reduce matters to $\left( r,\varepsilon \right) - \func{good}$ functions by the Nazarov, Treil and Volberg argument - see either \cite{NTV4}, \cite{SaShUr7} or \cite[Section 3.1]{SaWi} for details. \begin{definition} \label{def parameters}The parameters $\tau $ and $\rho $ are fixed to satisfy \begin{equation*} \tau >r\text{ and }\rho >r+\tau , \end{equation*} where $r$ is the goodness parameter already fixed. \end{definition} Let $\mu $ be a positive locally finite Borel measure on $\mathbb{R}^{n}$ that is doubling, let $\mathcal{D}$ be a dyadic grid on $\mathbb{R}^{n}$, let $\kappa \in \mathbb{N}$ and let $\left\{ \bigtriangleup _{Q;\kappa }^{\mu }\right\} _{Q\in \mathcal{D}}$ be the set of weighted Alpert projections on $L^{2}\left( \mu \right) $ and $\left\{ \mathbb{E}_{Q}^{\mu ,\kappa }\right\} _{Q\in \mathcal{D}}$ the associated set of projections (see \cite{RaSaWi} for definitions). When $\kappa =1$, these are the familiar weighted Haar projections $\bigtriangleup _{Q}^{\mu }=\bigtriangleup _{Q;1}^{\mu }$. Recall also the following bound for the Alpert projections $\mathbb{E}_{I;\kappa }^{\mu }$ (\cite[see (4.7) on page 14]{Saw6}): \begin{equation} \left\Vert \mathbb{E}_{I;\kappa }^{\mu }f\right\Vert _{L_{I}^{\infty }\left( \mu \right) }\lesssim E_{I}^{\mu }\left\vert f\right\vert \leq \sqrt{\frac{1 }{\left\vert I\right\vert _{\mu }}\int_{I}\left\vert f\right\vert ^{2}d\mu } ,\ \ \ \ \ \text{for all }f\in L_{\limfunc{loc}}^{2}\left( \mu \right) . 
\label{analogue} \end{equation} In terms of the Alpert coefficient vectors $\widehat{f}\left( I\right) \equiv \left\{ \left\langle f,h_{I;\kappa }^{\mu ,a}\right\rangle \right\} _{a\in \Gamma _{I,n,\kappa }}$ for an orthonormal basis $\left\{ h_{I;\kappa }^{\mu ,a}\right\} _{a\in \Gamma _{I,n,\kappa }}$ of $L_{I;\kappa }^{2}\left( \mu \right) $ where $\Gamma _{I,n,\kappa }$ is a convenient finite index set of size $d_{Q;\kappa }$, we thus have \begin{equation} \left\vert \widehat{f}\left( I\right) \right\vert =\left\Vert \bigtriangleup _{I;\kappa }^{\mu }f\right\Vert _{L^{2}\left( \mu \right) }\leq \left\Vert \bigtriangleup _{I;\kappa }^{\mu }f\right\Vert _{L^{\infty }\left( \mu \right) }\sqrt{\left\vert I\right\vert _{\mu }}\lesssim \left\Vert \bigtriangleup _{I;\kappa }^{\mu }f\right\Vert _{L^{2}\left( \mu \right) }=\left\vert \widehat{f}\left( I\right) \right\vert . \label{analogue'} \end{equation} \begin{notation} We will write the equal quantities $\left\vert \widehat{f}\left( I\right) \right\vert $ and $\left\Vert \bigtriangleup _{I;\kappa }^{\mu }f\right\Vert _{L^{2}\left( \mu \right) }$ interchangeably throughout the paper, depending on context. \end{notation} \subsection{The cube size decomposition} Now we can define the cube size decomposition in the second row of the diagram as given in \cite{AlSaUr} and \cite{SaWi}. For a sufficiently large positive integer $\rho \in \mathbb{N}$, we let \begin{equation} \func{Adj}_{\rho }\left( I\right) \equiv \left\{ J\in \mathcal{D}:2^{-\rho }\leq \frac{\ell \left( J\right) }{\ell \left( I\right) }\leq 2^{\rho }\text{ and }\overline{J}\cap \overline{I}\neq \emptyset \right\} ,\ \ \ \ \ I\in \mathcal{D}, \label{def Adj} \end{equation} be the finite collection of dyadic cubes of side length between $2^{-\rho }\ell \left( I\right) $ and $2^{\rho }\ell \left( I\right) $, and whose closures have nonempty intersection. 
We write $J\Subset _{\rho ,\varepsilon }I$ to mean that $J\subset I$, $\ell \left( J\right) \leq 2^{-\rho }\ell \left( I\right) $ and $\limfunc{dist}\left( J,\partial I\right) >2\sqrt{n} \ell \left( J\right) ^{\varepsilon }\ell \left( I\right) ^{1-\varepsilon }$. Then we write \begin{eqnarray*} \left\langle T_{\sigma }^{\lambda }f,g\right\rangle _{\omega } &=&\dsum\limits_{I,J\in \mathcal{D}}\left\langle T_{\sigma }^{\lambda }\bigtriangleup _{I;\kappa }^{\sigma }f,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega } \\ &=&\dsum\limits_{\substack{ I,J\in \mathcal{D} \\ J\Subset _{\rho ,\varepsilon }I}}\left\langle T_{\sigma }^{\lambda }\bigtriangleup _{I;\kappa }^{\sigma }f,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega }+\dsum\limits_{\substack{ I,J\in \mathcal{D} \\ J_{\rho ,\varepsilon }\Supset I}}\left\langle T_{\sigma }^{\lambda }\bigtriangleup _{I;\kappa }^{\sigma }f,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega } \\ &&+\dsum\limits_{I,J\in \mathcal{D}:\ J\cap I=\emptyset ,\frac{\ell \left( J\right) }{\ell \left( I\right) }<2^{-\rho }\text{ or }\frac{\ell \left( J\right) }{\ell \left( I\right) }>2^{\rho }}\left\langle T_{\sigma }^{\lambda }\bigtriangleup _{I;\kappa }^{\sigma }f,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega } \\ &&+\dsum\limits_{\substack{ I,J\in \mathcal{D} \\ 2^{-\rho }\leq \frac{\ell \left( J\right) }{\ell \left( I\right) }\leq 2^{\rho }\text{ and }\overline{J }\cap \overline{I}=\emptyset }}\left\langle T_{\sigma }^{\lambda }\bigtriangleup _{I;\kappa }^{\sigma }f,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega }+\dsum\limits_{I\in \mathcal{D},\ J\in \func{Adj} _{\rho }\left( I\right) }\left\langle T_{\sigma }^{\lambda }\bigtriangleup _{I;\kappa }^{\sigma }f,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega } \\ &\equiv &\mathsf{B}_{\Subset _{\rho ,\varepsilon }}\left( f,g\right) + \mathsf{B}_{_{\rho ,\varepsilon }\Supset }\left( f,g\right) +\mathsf{B} _{\cap 
}\left( f,g\right) +\mathsf{B}_{\diagup }\left( f,g\right) +\mathsf{B}_{\func{Adj},\rho }\left( f,g\right) .
\end{eqnarray*}
The disjoint and comparable forms $\mathsf{B}_{\cap }\left( f,g\right) $ and $\mathsf{B}_{\diagup }\left( f,g\right) $ are controlled using only the quadratic offset Muckenhoupt condition, while the adjacent form $\mathsf{B}_{\func{Adj},\rho }\left( f,g\right) $ is controlled by the Alpert weak boundedness property. The above form $\mathsf{B}_{_{\rho ,\varepsilon }\Supset }\left( f,g\right) $ is handled exactly as is the below form $\mathsf{B}_{\Subset _{\rho ,\varepsilon }}\left( f,g\right) $, but with the measures $\sigma $ and $\omega $ interchanged, the exponents $p$ and $p^{\prime }$ interchanged, and the duals of the scalar testing and quadratic Muckenhoupt testing conditions used in their place. So it remains only to treat the below form $\mathsf{B}_{\Subset _{\rho ,\varepsilon }}\left( f,g\right) $, to which we now turn.

In order to describe the ensuing decompositions of $\mathsf{B}_{\Subset _{\rho ,\varepsilon }}\left( f,g\right) $, we first need to introduce the corona and shifted corona decompositions of $f$ and $g$ respectively. We construct the \emph{Calder\'{o}n-Zygmund} corona decomposition for a function $f$ in $L^{p}\left( \mu \right) $ (where $\mu =\sigma $ here, and $\mu =\omega $ when treating $\mathsf{B}_{_{\rho ,\varepsilon }\Supset }\left( f,g\right) $) that is supported in a dyadic cube $F_{1}^{0}$. Fix $\Gamma >1$ and define $\mathcal{G}_{0}=\left\{ F_{1}^{0}\right\} $ to consist of the single cube $F_{1}^{0}$, and define the first generation $\mathcal{G}_{1}=\left\{ F_{k}^{1}\right\} _{k}$ of \emph{CZ stopping children} of $F_{1}^{0}$ to be the \emph{maximal} dyadic subcubes $I$ of $F_{1}^{0}$ satisfying
\begin{equation*}
E_{I}^{\mu }\left\vert f\right\vert \geq \Gamma E_{F_{1}^{0}}^{\mu }\left\vert f\right\vert .
\end{equation*}
Then define the second generation $\mathcal{G}_{2}=\left\{ F_{k}^{2}\right\} _{k}$ of CZ stopping children of $F_{1}^{0}$ to be the \emph{maximal} dyadic subcubes $I$ of some $F_{k}^{1}\in \mathcal{G}_{1}$ satisfying
\begin{equation*}
E_{I}^{\mu }\left\vert f\right\vert \geq \Gamma E_{F_{k}^{1}}^{\mu }\left\vert f\right\vert .
\end{equation*}
Continue by recursion to define $\mathcal{G}_{n}$ for all $n\geq 0$, and then set
\begin{equation*}
\mathcal{F}\equiv \dbigcup\limits_{n=0}^{\infty }\mathcal{G}_{n}=\left\{ F_{k}^{n}:n\geq 0,k\geq 1\right\}
\end{equation*}
to be the set of all CZ stopping cubes in $F_{1}^{0}$ obtained in this way. The $\mu $-Carleson condition for $\mathcal{F}$ follows as usual by iterating the first step,
\begin{equation*}
\sum_{F^{\prime }\in \mathfrak{C}_{\mathcal{F}}\left( F\right) }\left\vert F^{\prime }\right\vert _{\mu }\leq \frac{1}{\Gamma }\sum_{F^{\prime }\in \mathfrak{C}_{\mathcal{F}}\left( F\right) }\frac{1}{E_{F}^{\mu }\left\vert f\right\vert }\int_{F^{\prime }}\left\vert f\right\vert d\mu \leq \frac{1}{\Gamma }\left\vert F\right\vert _{\mu }.
\end{equation*}
Moreover, if we define
\begin{equation}
\alpha _{\mathcal{F}}\left( F\right) \equiv \sup_{F^{\prime }\in \mathcal{F}:\ F\subset F^{\prime }}E_{F^{\prime }}^{\mu }\left\vert f\right\vert ,  \label{def alpha}
\end{equation}
then in each corona
\begin{equation*}
\mathcal{C}_{F}\equiv \left\{ I\in \mathcal{D}:I\subset F\text{ and }I\not\subset F^{\prime }\text{ for any }F^{\prime }\in \mathcal{F}\text{ with }F^{\prime }\varsubsetneqq F\right\} ,
\end{equation*}
we have, from the definition of the stopping times, the following average control
\begin{equation}
E_{I}^{\mu }\left\vert f\right\vert <\Gamma \alpha _{\mathcal{F}}\left( F\right) ,\ \ \ \ \ I\in \mathcal{C}_{F}\text{ and }F\in \mathcal{F}.
\label{average control} \end{equation} Finally, as in \cite{NTV4}, \cite{LaSaShUr3} and \cite{SaShUr7}, we obtain the Carleson condition and quasiorthogonality inequality, \begin{equation} \sum_{F^{\prime }\preceq F}\left\vert F^{\prime }\right\vert _{\mu }\leq C_{0}\left\vert F\right\vert _{\mu }\text{ for all }F\in \mathcal{F};\text{\ and }\sum_{F\in \mathcal{F}}\alpha _{\mathcal{F}}\left( F\right) ^{2}\left\vert F\right\vert _{\mu }\mathbf{\leq }C_{0}^{2}\left\Vert f\right\Vert _{L^{2}\left( \mu \right) }^{2}, \label{Car and quasi} \end{equation} where $\preceq $ denotes the tree relation $F^{\prime }\subset F$ for $ F^{\prime },F\in \mathcal{F}$. Moreover, there is the following useful consequence of (\ref{Car and quasi}) that says the sequence $\left\{ \alpha _{\mathcal{F}}\left( F\right) \mathbf{1}_{F}\right\} _{F\in \mathcal{F}}$ has an additional \emph{quasiorthogonal} property relative to $f$ with a constant $C_{0}^{\prime }$ depending only on $C_{0}$: \begin{equation} \left\Vert \sum_{F\in \mathcal{F}}\alpha _{\mathcal{F}}\left( F\right) \mathbf{1}_{F}\right\Vert _{L^{2}\left( \mu \right) }^{2}\leq C_{0}^{\prime }\left\Vert f\right\Vert _{L^{2}\left( \mu \right) }^{2}. \label{q orth} \end{equation} Indeed, this is an easy consequence of a geometric decay in levels of the tree $\mathcal{F}$, that follows in turn from the Carleson condition in the first inequality of (\ref{Car and quasi}). This geometric decay asserts that there are positive constants $C_{1}$ and $ \varepsilon $, depending on $C_{0}$, such that if $\mathfrak{C}_{\mathcal{F} }^{\left( n\right) }\left( F\right) $ denotes the set of $n^{th}$ generation children of $F$ in $\mathcal{F}$, \begin{equation} \sum_{F^{\prime }\in \mathfrak{C}_{\mathcal{F}}^{\left( n\right) }\left( F\right) }\left\vert F^{\prime }\right\vert _{\mu }\leq \left( C_{1}2^{-\varepsilon n}\right) ^{2}\left\vert F\right\vert _{\mu },\ \ \ \ \ \text{for all }n\geq 0\text{ and }F\in \mathcal{F}. 
\label{geom decay}
\end{equation}
To see this, let $\beta _{k}\left( F\right) \equiv \sum_{F^{\prime }\in \mathfrak{C}_{\mathcal{F}}^{\left( k\right) }\left( F\right) }\left\vert F^{\prime }\right\vert _{\mu }$, and note that $\beta _{k+1}\left( F\right) \leq \beta _{k}\left( F\right) $, since the cubes in $\mathfrak{C}_{\mathcal{F}}^{\left( k+1\right) }\left( F\right) $ are pairwise disjoint and each is contained in some cube of $\mathfrak{C}_{\mathcal{F}}^{\left( k\right) }\left( F\right) $. Together with the Carleson condition in (\ref{Car and quasi}), this implies that for every integer $N\geq 0$ we have
\begin{equation*}
\left( N+1\right) \beta _{N}\left( F\right) \leq \sum_{k=0}^{N}\beta _{k}\left( F\right) \leq C_{0}\left\vert F\right\vert _{\mu }\ ,
\end{equation*}
and hence
\begin{equation*}
\beta _{N}\left( F\right) \leq \frac{C_{0}}{N+1}\left\vert F\right\vert _{\mu }<\frac{1}{2}\left\vert F\right\vert _{\mu }\ ,\ \ \ \ \ \text{for }F\in \mathcal{F}\text{ and }N=\left[ 2C_{0}\right] .
\end{equation*}
It follows that
\begin{equation*}
\beta _{\ell N}\left( F\right) \leq \frac{1}{2}\beta _{\left( \ell -1\right) N}\left( F\right) \leq ...\leq \frac{1}{2^{\ell }}\beta _{0}\left( F\right) =\frac{1}{2^{\ell }}\left\vert F\right\vert _{\mu },\ \ \ \ \ \ell =0,1,2,...
\end{equation*}
and so given $n\in \mathbb{N}$, choose $\ell $ such that $\ell N\leq n<\left( \ell +1\right) N$, and note that since $2^{-\ell }\leq 2\cdot 2^{-\frac{n}{N}}$, we have
\begin{equation*}
\sum_{F^{\prime }\in \mathfrak{C}_{\mathcal{F}}^{\left( n\right) }\left( F\right) }\left\vert F^{\prime }\right\vert _{\mu }=\beta _{n}\left( F\right) \leq \beta _{\ell N}\left( F\right) \leq 2^{-\ell }\left\vert F\right\vert _{\mu }\leq \left( C_{1}2^{-\varepsilon n}\right) ^{2}\left\vert F\right\vert _{\mu }\ ,\ \ \ \ \ \text{with }C_{1}=\sqrt{2}\text{ and }\varepsilon =\frac{1}{2N},
\end{equation*}
which proves the geometric decay (\ref{geom decay}).
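For the reader's convenience, here is a sketch of how the geometric decay (\ref{geom decay}) yields the quasiorthogonal property (\ref{q orth}). Since the cubes in $\mathcal{F}$ are nested or disjoint, expanding the square gives
\begin{equation*}
\left\Vert \sum_{F\in \mathcal{F}}\alpha _{\mathcal{F}}\left( F\right) \mathbf{1}_{F}\right\Vert _{L^{2}\left( \mu \right) }^{2}\leq 2\sum_{F\in \mathcal{F}}\alpha _{\mathcal{F}}\left( F\right) \sum_{n=0}^{\infty }\sum_{F^{\prime }\in \mathfrak{C}_{\mathcal{F}}^{\left( n\right) }\left( F\right) }\alpha _{\mathcal{F}}\left( F^{\prime }\right) \left\vert F^{\prime }\right\vert _{\mu }\ ,
\end{equation*}
and two applications of the Cauchy-Schwarz inequality, first in $F^{\prime }$ using (\ref{geom decay}), and then in $F$ using the fact that each $F^{\prime }\in \mathcal{F}$ has exactly one $n^{th}$ generation ancestor, bound the $n^{th}$ summand by $C_{1}2^{-\varepsilon n}\sum_{F\in \mathcal{F}}\alpha _{\mathcal{F}}\left( F\right) ^{2}\left\vert F\right\vert _{\mu }$. Summing the geometric series in $n$, and invoking the quasiorthogonality inequality in (\ref{Car and quasi}), then gives (\ref{q orth}).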
Now let $\sigma $ and $\omega $ be doubling measures and define the two corona projections \begin{equation*} \mathsf{P}_{\mathcal{C}_{F}}^{\sigma }\equiv \sum_{I\in \mathcal{C} _{F}}\bigtriangleup _{I;\kappa }^{\sigma }\text{ and }\mathsf{P}_{\mathcal{C} _{F}^{\tau -\limfunc{shift}}}^{\omega }\equiv \sum_{J\in \mathcal{C} _{F}^{\tau -\limfunc{shift}}}\bigtriangleup _{J;\kappa }^{\omega }\ , \end{equation*} where \begin{align} \mathcal{C}_{F}^{\tau -\limfunc{shift}}& \equiv \left[ \mathcal{C} _{F}\setminus \mathcal{N}_{\mathcal{D}}^{\tau }\left( F\right) \right] \cup \dbigcup\limits_{F^{\prime }\in \mathfrak{C}_{\mathcal{F}}\left( F\right) } \left[ \mathcal{N}_{\mathcal{D}}^{\tau }\left( F^{\prime }\right) \setminus \mathcal{N}_{\mathcal{D}}^{\tau }\left( F\right) \right] ; \label{def shift} \\ \text{where }\mathcal{N}_{\mathcal{D}}^{\tau }\left( F\right) & \equiv \left\{ J\in \mathcal{D}:J\subset F\text{ and }\ell \left( J\right) >2^{-\tau }\ell \left( F\right) \right\} , \notag \end{align} and note that $f=\sum_{F\in \mathcal{F}}\mathsf{P}_{\mathcal{C}_{F}}^{\sigma }f$. Thus the corona $\mathcal{C}_{F}^{\tau -\limfunc{shift}}$ has the top $ \tau $ levels from $\mathcal{C}_{F}$ removed, and includes the first $\tau $ levels from each of its $\mathcal{F}$-children, except if they have already been removed. \subsection{The canonical splitting} We can now continue with the definitions of decompositions in the schematic diagram above. 
To bound the below form $\mathsf{B}_{\Subset _{\rho ,\varepsilon }}\left( f,g\right) $, we proceed with the \emph{Canonical Splitting} of \begin{equation*} \mathsf{B}_{\Subset _{\mathbf{\rho },\varepsilon }}\left( f,g\right) =\dsum\limits_{\substack{ I,J\in \mathcal{D} \\ J\Subset _{\rho ,\varepsilon }I}}\left\langle T_{\sigma }^{\lambda }\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) ,\left( \bigtriangleup _{J;\kappa }^{\omega }g\right) \right\rangle _{\omega } \end{equation*} as in \cite{SaShUr7} and \cite{AlSaUr}, \begin{align*} \mathsf{B}_{\Subset _{\mathbf{\rho },\varepsilon }}\left( f,g\right) & =\sum_{F\in \mathcal{F}}\left\langle T_{\sigma }^{\lambda }\mathsf{P}_{ \mathcal{C}_{F}}^{\sigma }f,\mathsf{P}_{\mathcal{C}_{F}^{\tau -\limfunc{shift }}}^{\omega }g\right\rangle _{\omega }^{\Subset _{\rho }}+\sum_{\substack{ F,G\in \mathcal{F} \\ G\subsetneqq F}}\left\langle T_{\sigma }^{\lambda } \mathsf{P}_{\mathcal{C}_{F}}^{\sigma }f,\mathsf{P}_{\mathcal{C}_{G}^{\tau - \limfunc{shift}}}^{\omega }g\right\rangle _{\omega }^{\Subset _{\rho }} \\ & +\sum_{\substack{ F,G\in \mathcal{F} \\ G\supsetneqq F}}\left\langle T_{\sigma }^{\lambda }\mathsf{P}_{\mathcal{C}_{F}}^{\sigma }f,\mathsf{P}_{ \mathcal{C}_{G}^{\tau -\limfunc{shift}}}^{\omega }g\right\rangle _{\omega }^{\Subset _{\rho }}+\sum_{\substack{ F,G\in \mathcal{F} \\ F\cap G=\emptyset }}\left\langle T_{\sigma }^{\lambda }\mathsf{P}_{\mathcal{C} _{F}}^{\sigma }f,\mathsf{P}_{\mathcal{C}_{G}^{\tau -\limfunc{shift} }}^{\omega }g\right\rangle _{\omega }^{\Subset _{\rho }} \\ & \equiv \mathsf{T}_{\limfunc{diagonal}}\left( f,g\right) +\mathsf{T}_{ \limfunc{far}\limfunc{below}}\left( f,g\right) +\mathsf{T}_{\limfunc{far} \limfunc{above}}\left( f,g\right) +\mathsf{T}_{\limfunc{disjoint}}\left( f,g\right) , \end{align*} where for $F\in \mathcal{F}$ we use the shorthand \begin{equation*} \left\langle T_{\sigma }^{\lambda }\left( \mathsf{P}_{\mathcal{C} _{F}}^{\sigma }f\right) ,\mathsf{P}_{\mathcal{C}_{F}^{\tau 
-\limfunc{shift} }}^{\omega }g\right\rangle _{\omega }^{\Subset _{\rho }}\equiv \dsum\limits _{\substack{ I\in \mathcal{C}_{F},\ J\in \mathcal{C}_{F}^{\tau -\limfunc{ shift}} \\ J\Subset _{\rho ,\varepsilon }I}}\left\langle T_{\sigma }^{\lambda }\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) ,\left( \bigtriangleup _{J;\kappa }^{\omega }g\right) \right\rangle _{\omega }. \end{equation*} The final two forms $\mathsf{T}_{\limfunc{far}\limfunc{above}}\left( f,g\right) $ and $\mathsf{T}_{\limfunc{disjoint}}\left( f,g\right) $ each vanish just as in \cite{SaShUr7} and \cite{AlSaUr}, since there are no pairs $\left( I,J\right) \in \mathcal{C}_{F}\times \mathcal{C}_{G}^{\tau -\limfunc{ shift}}$ with both (\textbf{i}) $J\Subset _{\rho ,\varepsilon }I$ and ( \textbf{ii}) either $F\subsetneqq G$ or $G\cap F=\emptyset $. The far below form $\mathsf{T}_{\limfunc{far}\limfunc{below}}\left( f,g\right) $ is then further split into two forms $\mathsf{T}_{\limfunc{far}\limfunc{below} }^{1}\left( f,g\right) $ and $\mathsf{T}_{\limfunc{far}\limfunc{below} }^{2}\left( f,g\right) $ as in \cite{SaShUr7} and \cite{AlSaUr}, \begin{align} \mathsf{T}_{\limfunc{far}\limfunc{below}}\left( f,g\right) & =\sum_{G\in \mathcal{F}}\sum_{F\in \mathcal{F}:\ G\subsetneqq F}\sum_{\substack{ I\in \mathcal{C}_{F}\text{ and }J\in \mathcal{C}_{G}^{\tau -\limfunc{shift}} \\ J\Subset _{\rho ,\varepsilon }I}}\left\langle T_{\sigma }^{\lambda }\bigtriangleup _{I;\kappa }^{\sigma }f,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega } \label{far below decomp} \\ & =\sum_{G\in \mathcal{F}}\sum_{F\in \mathcal{F}:\ G\subsetneqq F}\sum_{J\in \mathcal{C}_{G}^{\tau -\limfunc{shift}}}\sum_{I\in \mathcal{C}_{F}\text{ and }J\subset I}\left\langle T_{\sigma }^{\lambda }\bigtriangleup _{I;\kappa }^{\sigma }f,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega } \notag \\ & -\sum_{F\in \mathcal{F}}\sum_{G\in \mathcal{F}:\ G\subsetneqq F}\sum_{J\in \mathcal{C}_{G}^{\tau -\limfunc{shift}}}\sum_{I\in 
\mathcal{C}_{F}\text{ and }J\subset I\text{ but }J\not\Subset _{\rho ,\varepsilon }I}\left\langle T_{\sigma }^{\lambda }\bigtriangleup _{I;\kappa }^{\sigma }f,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega } \notag \\ & \equiv \mathsf{T}_{\limfunc{far}\limfunc{below}}^{1}\left( f,g\right) - \mathsf{T}_{\limfunc{far}\limfunc{below}}^{2}\left( f,g\right) . \notag \end{align} The second far below form $\mathsf{T}_{\limfunc{far}\limfunc{below} }^{2}\left( f,g\right) $ satisfies \begin{equation} \left\vert \mathsf{T}_{\limfunc{far}\limfunc{below}}^{2}\left( f,g\right) \right\vert \lesssim A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }, \label{second far below} \end{equation} which follows in an easy way from (\ref{routine'}) and (\ref{routine''}) and their porisms - see below. To control the first and main far below form $ \mathsf{T}_{\limfunc{far}\limfunc{below}}^{1}\left( f,g\right) $, we will use some new quadratic arguments exploiting Carleson measure conditions to establish \begin{equation} \left\vert \mathsf{T}_{\limfunc{far}\limfunc{below}}^{1}\left( f,g\right) \right\vert \lesssim A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }. 
\label{first far below} \end{equation} To handle the diagonal term $\mathsf{T}_{\limfunc{diagonal}}\left( f,g\right) $, we further decompose according to the stopping times $\mathcal{ F}$, \begin{equation} \mathsf{T}_{\limfunc{diagonal}}\left( f,g\right) =\sum_{F\in \mathcal{F}} \mathsf{B}_{\Subset _{\rho ,\varepsilon }}^{F}\left( f,g\right) ,\text{ where }\mathsf{B}_{\Subset _{\rho ,\varepsilon }}^{F}\left( f,g\right) \equiv \left\langle T_{\sigma }^{\lambda }\left( \mathsf{P}_{\mathcal{C} _{F}}^{\sigma }f\right) ,\mathsf{P}_{\mathcal{C}_{F}^{\tau -\limfunc{shift} }}^{\omega }g\right\rangle _{\omega }^{\Subset _{\rho }}, \label{def block} \end{equation} where we recall that in \cite{AlSaUr} for $p=2$, the following estimate was obtained, \begin{equation} \left\vert \mathsf{B}_{\Subset _{\rho }}^{F}\left( f,g\right) \right\vert \lesssim \left( \mathfrak{T}_{T^{\lambda }}+\sqrt{A_{2}^{\lambda }}\right) \ \left( \left\Vert \mathbb{E}_{F;\kappa }^{\sigma }f\right\Vert _{\infty } \sqrt{\left\vert F\right\vert _{\sigma }}+\left\Vert \mathsf{P}_{\mathcal{C} _{F}}^{\sigma }f\right\Vert _{L^{2}\left( \sigma \right) }\right) \ \left\Vert \mathsf{P}_{\mathcal{C}_{F}^{\tau -\limfunc{shift}}}^{\omega }g\right\Vert _{L^{2}\left( \omega \right) }. \label{below form bound'} \end{equation} This was achieved by adapting the classical \emph{reach} of Nazarov, Treil and Volberg using Haar wavelet projections $\bigtriangleup _{I}^{\sigma }$, where by `reach' we mean the ingenious `thinking outside the box' idea of the paraproduct / stopping / neighbour decomposition of Nazarov, Treil and Volberg \cite{NTV4}. 
Since we are using weighted Alpert wavelet projections $\bigtriangleup _{I;\kappa }^{\sigma }$, the projection $\mathbb{E}_{I^{\prime };\kappa }^{\sigma }\bigtriangleup _{I;\kappa }^{\sigma }f$ onto the child $I^{\prime }\in \mathfrak{C}_{\mathcal{D}}\left( I\right) $ equals $M_{I^{\prime };\kappa }\mathbf{1}_{I^{\prime }}$ where $M=M_{I^{\prime };\kappa }$ is a polynomial of degree less than $\kappa $ restricted to $I^{\prime }$, as opposed to a constant in the Haar case, and hence no longer commutes in general with the operator $T_{\sigma }^{\lambda }$. As in \cite{AlSaUr}, this results in a new commutator form to be bounded, and complicates bounding the remaining forms as well.

\subsection{The Nazarov, Treil and Volberg reach}

Here is the Nazarov, Treil and Volberg decomposition, or reach, adapted to Alpert wavelets as in \cite{AlSaUr}. We have that $\mathsf{B}_{\Subset _{\rho ,\varepsilon };\kappa }^{F}\left( f,g\right) $ equals
\begin{align*}
& \sum_{\substack{ I\in \mathcal{C}_{F}\text{ and }J\in \mathcal{C}_{F}^{\tau -\limfunc{shift}} \\ J\Subset _{\rho ,\varepsilon }I}}\left\langle T_{\sigma }^{\lambda }\left( \mathbf{1}_{I_{J}}\bigtriangleup _{I;\kappa }^{\sigma }f\right) ,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega }+\sum_{\substack{ I\in \mathcal{C}_{F}\text{ and }J\in \mathcal{C}_{F}^{\tau -\limfunc{shift}} \\ J\Subset _{\rho ,\varepsilon }I}}\sum_{\theta \left( I_{J}\right) \in \mathfrak{C}_{\mathcal{D}}\left( I\right) \setminus \left\{ I_{J}\right\} }\left\langle T_{\sigma }^{\lambda }\left( \mathbf{1}_{\theta \left( I_{J}\right) }\bigtriangleup _{I;\kappa }^{\sigma }f\right) ,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega } \\
& \equiv \mathsf{B}_{\func{home};\kappa }^{F}\left( f,g\right) +\mathsf{B}_{\limfunc{neighbour};\kappa }^{F}\left( f,g\right) ,
\end{align*}
and we further decompose the home form using
\begin{equation}
M_{I^{\prime }}=M_{I^{\prime };\kappa }\equiv \mathbf{1}_{I^{\prime }}\bigtriangleup
_{I;\kappa }^{\sigma }f=\mathbb{E}_{I^{\prime };\kappa }^{\sigma }\bigtriangleup _{I;\kappa }^{\sigma }f, \label{def M} \end{equation} to obtain \begin{align*} & \mathsf{B}_{\func{home};\kappa }^{F}\left( f,g\right) =\sum_{\substack{ I\in \mathcal{C}_{F}\text{ and }J\in \mathcal{C}_{F}^{\tau -\limfunc{shift}} \\ J\Subset _{\rho ,\varepsilon }I}}\left\langle M_{I_{J}}T_{\sigma }^{\lambda }\mathbf{1}_{F},\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega }-\sum_{\substack{ I\in \mathcal{C}_{F}\text{ and } J\in \mathcal{C}_{F}^{\tau -\limfunc{shift}} \\ J\Subset _{\rho ,\varepsilon }I}}\left\langle M_{I_{J}}T_{\sigma }^{\lambda }\mathbf{1} _{F\setminus I_{J}},\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega } \\ & +\sum_{\substack{ I\in \mathcal{C}_{F}\text{ and }J\in \mathcal{C} _{F}^{\tau -\limfunc{shift}} \\ J\Subset _{\rho ,\varepsilon }I}} \left\langle \left[ T_{\sigma }^{\lambda },M_{I_{J}}\right] \mathbf{1} _{I_{J}},\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega }\equiv \mathsf{B}_{\limfunc{paraproduct};\kappa }^{F}\left( f,g\right) + \mathsf{B}_{\limfunc{stop};\kappa }^{F}\left( f,g\right) +\mathsf{B}_{ \limfunc{commutator};\kappa }^{F}\left( f,g\right) . \end{align*} Altogether then we have the weighted Alpert version of the Nazarov, Treil and Volberg paraproduct decomposition, \begin{equation*} \mathsf{B}_{\Subset _{\rho ,\varepsilon };\kappa }^{F}\left( f,g\right) = \mathsf{B}_{\limfunc{paraproduct};\kappa }^{F}\left( f,g\right) +\mathsf{B}_{ \limfunc{stop};\kappa }^{F}\left( f,g\right) +\mathsf{B}_{\limfunc{commutator };\kappa }^{F}\left( f,g\right) +\mathsf{B}_{\limfunc{neighbour};\kappa }^{F}\left( f,g\right) . \end{equation*} Several points of departure can now be identified in the following description of the remainder of the paper. While we use here terminology yet to be defined, the reader is nevertheless encouraged to keep these seven points in mind while reading. 
\begin{enumerate}
\item In order to obtain an estimate such as (\ref{below form bound'}) for $p\neq 2$, we will need to use square functions and vector-valued inequalities as motivated by \cite{HyVu}, which in turn will require the quadratic Muckenhoupt condition in place of the classical one, and we turn to these issues in the next section.

\item A guiding principle will be to apply the pointwise $\ell ^{2}$ Cauchy-Schwarz inequality early in the proof, and then manipulate the resulting vector-valued inequalities into a form where application of the hypotheses reduces matters to the Fefferman-Stein inequalities for the vector maximal function, and square function estimates.

\item After that we will prove necessity of quadratic testing and Muckenhoupt conditions in Section 4. We also introduce a quadratic \emph{Alpert} weak boundedness property that helps clarify the role of weak boundedness, and show that it is controlled by quadratic weak boundedness and quadratic offset Muckenhoupt.

\item The first forms we choose to control in Section 5 are the comparable form, which uses only the quadratic Alpert testing conditions, and the paraproduct form, called the `difficult' form in \cite{NTV4}, which uses only the scalar cube testing condition, and constitutes one of the difficult new arguments in the paper.

\item Following that we consider in Section 6 the disjoint, stopping, far below and neighbour forms, all of which require what we call a `Pivotal Lemma' that originated in \cite{NTV4}, as well as the quadratic Muckenhoupt conditions.

\item Next we consider the commutator form in Section 7, which requires a new pigeon-holing of the tower of dyadic cubes lying above a fixed point in space, as well as Taylor expansions and quadratic offset Muckenhoupt conditions, thus constituting another of the difficult new arguments in the paper. The proof of the main theorem is wrapped up here as well.
\item Finally, the Appendix in Section 8 contains an example for $p\neq 2$ of radially decreasing weights on the real line for which $A_{p}<\infty $ but $A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}=\infty $.
\end{enumerate}

\section{Square functions and vector-valued inequalities}

Recall that the Haar square function
\begin{equation*}
\mathcal{S}_{\limfunc{Haar}}f\left( x\right) \equiv \left( \sum_{I\in \mathcal{D}}\left\vert \bigtriangleup _{I}^{\mu }f\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}
\end{equation*}
is bounded on $L^{p}\left( \mu \right) $ for any $1<p<\infty $ and any locally finite positive Borel measure $\mu $ - simply because $\mathcal{S}_{\limfunc{Haar}}$ is the martingale difference square function of an $L^{p}$ bounded martingale. We now extend this result to more complicated square functions. Fix a $\mathcal{D}$-dyadic cube $F_{0}$, let $\mu $\ be a locally finite positive Borel measure on $F_{0}$, and suppose that $\mathcal{F}$ is a subset of $\mathcal{D}_{F_{0}}\equiv \left\{ I\in \mathcal{D}:I\subset F_{0}\right\} $. We say that $F^{\prime }\in \mathcal{F}$ is an $\mathcal{F}$-child of $F$ if $F^{\prime }\subsetneqq F$, and is maximal in $\mathcal{F}$ with respect to this inclusion.
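Before continuing, let us make the opening remark of this section explicit. Under the convention that $\mathcal{D}_{k}$ denotes the dyadic cubes of side length $2^{-k}$, and with $\mathsf{E}_{k}^{\mu }$ denoting conditional expectation with respect to the $\sigma $-algebra generated by $\mathcal{D}_{k}$, one has
\begin{equation*}
\sum_{I\in \mathcal{D}_{k}}\bigtriangleup _{I}^{\mu }f=\mathsf{E}_{k+1}^{\mu }f-\mathsf{E}_{k}^{\mu }f,
\end{equation*}
so that at each point $x$, only the cube $I\in \mathcal{D}_{k}$ containing $x$ contributes, and hence $\mathcal{S}_{\limfunc{Haar}}f=\left( \sum_{k}\left\vert \mathsf{E}_{k+1}^{\mu }f-\mathsf{E}_{k}^{\mu }f\right\vert ^{2}\right) ^{\frac{1}{2}}$ is indeed the martingale difference square function of the martingale $\left\{ \mathsf{E}_{k}^{\mu }f\right\} _{k}$.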
The collection $\left\{ \mathcal{C}_{F}\right\} _{F\in \mathcal{F}}$ of subsets $\mathcal{C}_{F}\subset \mathcal{D}_{F_{0}}$ is defined by
\begin{equation*}
\mathcal{C}_{F}\equiv \left\{ I\in \mathcal{D}:I\subset F\text{ and }I\not\subset F^{\prime }\text{ for any }\mathcal{F}\text{-child }F^{\prime }\text{ of }F\right\} ,\ \ \ \ \ F\in \mathcal{F},
\end{equation*}
and these subsets satisfy the properties
\begin{eqnarray*}
&&\mathcal{C}_{F}\text{ is connected for each }F\in \mathcal{F}, \\
&&F\in \mathcal{C}_{F}\text{ and }I\in \mathcal{C}_{F}\Longrightarrow I\subset F\text{ for each }F\in \mathcal{F}, \\
&&\mathcal{C}_{F}\cap \mathcal{C}_{F^{\prime }}=\emptyset \text{ for all distinct }F,F^{\prime }\in \mathcal{F}, \\
&&\mathcal{D}_{F_{0}}=\bigcup_{F\in \mathcal{F}}\mathcal{C}_{F}\ .
\end{eqnarray*}
The subset $\mathcal{C}_{F}$ of $\mathcal{D}$ is referred to as the $\mathcal{F}$-corona with top $F$. Define the Haar corona projections $\mathsf{P}_{\mathcal{C}_{F}}^{\mu }\equiv \sum_{I\in \mathcal{C}_{F}}\bigtriangleup _{I}^{\mu }$ and group them together according to their depth in the tree $\mathcal{F}$ into the projections
\begin{equation*}
\mathsf{P}_{k}^{\mu }\equiv \sum_{F\in \mathfrak{C}_{\mathcal{F}}^{\left( k\right) }\left( F_{0}\right) }\mathsf{P}_{\mathcal{C}_{F}}^{\mu }\ .
\end{equation*}
Note that the $k^{th}$ grandchildren $F\in \mathfrak{C}_{\mathcal{F}}^{\left( k\right) }\left( F_{0}\right) $ are pairwise disjoint and hence so are the supports of the functions $\mathsf{P}_{\mathcal{C}_{F}}^{\mu }f$ for $F\in \mathfrak{C}_{\mathcal{F}}^{\left( k\right) }\left( F_{0}\right) $.
Define the $\mathcal{F}$-square function $\mathcal{S}_{\mathcal{F}}f$ by
\begin{equation*}
\mathcal{S}_{\mathcal{F}}f\left( x\right) =\left( \sum_{k=0}^{\infty }\left\vert \mathsf{P}_{k}^{\mu }f\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}=\left( \sum_{F\in \mathcal{F}}\left\vert \mathsf{P}_{\mathcal{C}_{F}}^{\mu }f\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}=\left( \sum_{F\in \mathcal{F}}\left\vert \sum_{I\in \mathcal{C}_{F}}\bigtriangleup _{I}^{\mu }f\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}.
\end{equation*}
Now note that the sequence $\left\{ \mathsf{P}_{k}^{\mu }f\left( x\right) \right\} _{k=0}^{\infty }$ of functions is the \emph{martingale difference sequence} of the $L^{p}$ bounded martingale $\left\{ \mathsf{E}_{k}^{\mu }f\left( x\right) \right\} _{k=0}^{\infty }$ with respect to the increasing sequence $\left\{ \mathcal{E}_{k}\right\} _{k=0}^{\infty }$ of $\sigma $-algebras, where $\mathcal{E}_{k}$ is the $\sigma $-algebra generated by the `atoms' $F\in \mathfrak{C}_{\mathcal{F}}^{\left( k\right) }\left( F_{0}\right) $, i.e.
\begin{equation*}
\mathcal{E}_{k}\equiv \left\{ E\text{ Borel }\subset F_{0}:E\cap F\in \left\{ \emptyset ,F\right\} \text{ for all }F\in \mathfrak{C}_{\mathcal{F}}^{\left( k\right) }\left( F_{0}\right) \right\} ,
\end{equation*}
and where
\begin{eqnarray*}
\mathsf{E}_{k}^{\mu }f\left( x\right) &\equiv &\left\{
\begin{array}{ccc}
E_{F}^{\mu }f & \text{ if } & x\in F\text{ for some }F\in \mathfrak{C}_{\mathcal{F}}^{\left( k\right) }\left( F_{0}\right) \\
f\left( x\right) & \text{ if } & x\in F_{0}\setminus \bigcup \mathfrak{C}_{\mathcal{F}}^{\left( k\right) }\left( F_{0}\right)
\end{array}
\right. ; \\
&&\text{where }\bigcup \mathfrak{C}_{\mathcal{F}}^{\left( k\right) }\left( F_{0}\right) \equiv \bigcup_{F\in \mathfrak{C}_{\mathcal{F}}^{\left( k\right) }\left( F_{0}\right) }F.
\end{eqnarray*}
Indeed, if $E\in \mathcal{E}_{k-1}$, then
\begin{eqnarray*}
&&\int_{E}\mathsf{E}_{k}^{\mu }f\left( x\right) d\mu \left( x\right) =\int_{E\setminus \bigcup \mathfrak{C}_{\mathcal{F}}^{\left( k\right) }\left( F_{0}\right) }\mathsf{E}_{k}^{\mu }f\left( x\right) d\mu \left( x\right) +\sum_{F\in \mathfrak{C}_{\mathcal{F}}^{\left( k\right) }\left( F_{0}\right) :\ F\subset E}\int_{F}\mathsf{E}_{k}^{\mu }f\left( x\right) d\mu \left( x\right) \\
&=&\int_{E\setminus \bigcup \mathfrak{C}_{\mathcal{F}}^{\left( k-1\right) }\left( F_{0}\right) }f\left( x\right) d\mu \left( x\right) +\sum_{F\in \mathfrak{C}_{\mathcal{F}}^{\left( k-1\right) }\left( F_{0}\right) :\ F\subset E}\int_{F\setminus \bigcup \mathfrak{C}_{\mathcal{F}}^{\left( k\right) }\left( F_{0}\right) }f\left( x\right) d\mu \left( x\right) +\sum_{F^{\prime }\in \mathfrak{C}_{\mathcal{F}}^{\left( k\right) }\left( F_{0}\right) :\ F^{\prime }\subset E}\int_{F^{\prime }}f\left( x\right) d\mu \left( x\right) \\
&=&\int_{E\setminus \bigcup \mathfrak{C}_{\mathcal{F}}^{\left( k-1\right) }\left( F_{0}\right) }\mathsf{E}_{k-1}^{\mu }f\left( x\right) d\mu \left( x\right) +\sum_{F\in \mathfrak{C}_{\mathcal{F}}^{\left( k-1\right) }\left( F_{0}\right) :\ F\subset E}\int_{F}f\left( x\right) d\mu \left( x\right) \\
&=&\int_{E\setminus \bigcup \mathfrak{C}_{\mathcal{F}}^{\left( k-1\right) }\left( F_{0}\right) }\mathsf{E}_{k-1}^{\mu }f\left( x\right) d\mu \left( x\right) +\sum_{F\in \mathfrak{C}_{\mathcal{F}}^{\left( k-1\right) }\left( F_{0}\right) :\ F\subset E}\int_{F}\mathsf{E}_{k-1}^{\mu }f\left( x\right) d\mu \left( x\right) =\int_{E}\mathsf{E}_{k-1}^{\mu }f\left( x\right) d\mu \left( x\right) ,
\end{eqnarray*}
shows that $\left\{ \mathsf{E}_{k}^{\mu }f\left( x\right) \right\} _{k=0}^{\infty }$ is a martingale.
Finally, it is easy to check that the Haar support of the function $\mathsf{P}_{k}^{\mu }f=\mathsf{E}_{k}^{\mu }f-\mathsf{E}_{k-1}^{\mu }f$ is precisely $\bigcup_{F\in \mathfrak{C}_{\mathcal{F}}^{\left( k\right) }\left( F_{0}\right) }\mathcal{C}_{F}$, the union of the coronas associated to the $k$-grandchildren of $F_{0}$. From Burkholder's martingale transform theorem, for a nice treatment see Hyt\"{o}nen \cite{Hyt2}, we obtain the inequality
\begin{equation*}
\left\Vert \sum_{k=0}^{\infty }v_{k}\mathsf{P}_{k}^{\mu }f\right\Vert _{L^{p}\left( \mu \right) }\leq C_{p}\left( \sup_{0\leq k<\infty }\left\vert v_{k}\right\vert \right) \left\Vert f\right\Vert _{L^{p}\left( \mu \right) },
\end{equation*}
for all sequences $v_{k}$ of predictable functions. Now we take $v_{k}=\pm 1$ randomly on $\bigtriangleup _{F}^{\mu }f\equiv \mathbf{1}_{F}\mathsf{P}_{k}^{\mu }f$ for $F\in \mathfrak{C}_{\mathcal{F}}^{\left( k\right) }\left( F_{0}\right) $, and then an application of Khintchine's inequality, for which see \cite[Lemma 5.5 page 114]{MuSc} and \cite[Proposition 4.5 page 28]{Wol}, allows us to conclude that the square function satisfies the following $L^{p}\left( \mu \right) $ bound,
\begin{equation*}
\left\Vert \mathcal{S}_{\mathcal{F}}f\right\Vert _{L^{p}\left( \mu \right) }\leq C_{p}\left\Vert f\right\Vert _{L^{p}\left( \mu \right) },\ \ \ \ \ \text{for all }1<p<\infty .
\end{equation*}
We now note that from this result, we can obtain the square function bounds we need for the nearby and paraproduct forms treated below, which include both of the square functions $\mathcal{S}_{\mathcal{F}}$ and
\begin{equation*}
\mathcal{S}_{\mathcal{F}^{\tau -\func{shift}}}f\left( x\right) \equiv \left( \sum_{F\in \mathcal{F}}\left\vert \mathsf{P}_{\mathcal{C}_{F}^{\mu ,\tau -\func{shift}}}^{\mu }f\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}.
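In more detail, the passage from the martingale transform inequality to the square function bound is the usual averaging argument: with $\mathbb{E}_{\varepsilon }$ denoting expectation over independent random signs, Khintchine's inequality applied pointwise in $x$, followed by Fubini's theorem and Burkholder's bound for each fixed choice of signs, gives
\begin{equation*}
\left\Vert \mathcal{S}_{\mathcal{F}}f\right\Vert _{L^{p}\left( \mu \right) }^{p}=\int_{F_{0}}\Big( \sum_{k=0}^{\infty }\left\vert \mathsf{P}_{k}^{\mu }f\left( x\right) \right\vert ^{2}\Big) ^{\frac{p}{2}}d\mu \left( x\right) \approx _{p}\int_{F_{0}}\mathbb{E}_{\varepsilon }\Big\vert \sum_{k=0}^{\infty }\varepsilon _{k}\mathsf{P}_{k}^{\mu }f\left( x\right) \Big\vert ^{p}d\mu \left( x\right) \leq C_{p}^{p}\left\Vert f\right\Vert _{L^{p}\left( \mu \right) }^{p},
\end{equation*}
where for simplicity we have displayed signs $\varepsilon _{k}$ depending only on $k$; the argument with signs chosen independently for each $F\in \mathfrak{C}_{\mathcal{F}}^{\left( k\right) }\left( F_{0}\right) $, as above, is identical.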
\end{equation*}
Indeed, we first note that if we take $\mathcal{F}=\mathcal{D}_{F_{0}}$, then we obtain the bound
\begin{eqnarray*}
\left\Vert \mathcal{S}_{\limfunc{Haar}}f\right\Vert _{L^{p}\left( \mu \right) } &\leq &C_{p}\left\Vert f\right\Vert _{L^{p}\left( \mu \right) },\ \ \ \ \ \text{for all }1<p<\infty ; \\
\mathcal{S}_{\limfunc{Haar}}f\left( x\right) &\equiv &\left( \sum_{I\in \mathcal{D}_{F_{0}}}\left\vert \bigtriangleup _{I}^{\mu }f\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}.
\end{eqnarray*}
Then using
\begin{equation*}
\mathcal{C}_{F}\setminus \mathcal{C}_{F}^{\mu ,\tau -\func{shift}}\subset \mathcal{N}_{F}\text{ and }\mathcal{C}_{F}^{\mu ,\tau -\func{shift}}\setminus \mathcal{C}_{F}\subset \bigcup_{F^{\prime }\in \mathfrak{C}_{\mathcal{F}}\left( F\right) }\mathcal{N}_{F^{\prime }}\ ,
\end{equation*}
we conclude that the symmetric difference of $\mathcal{C}_{F}$ and $\mathcal{C}_{F}^{\mu ,\tau -\func{shift}}$ is contained in $\mathcal{N}_{F}\cup \bigcup_{F^{\prime }\in \mathfrak{C}_{\mathcal{F}}\left( F\right) }\mathcal{N}_{F^{\prime }}$, where $\mathcal{N}_{F}$ denotes the set of cubes $I$ near $F$ in the corona $\mathcal{C}_{F}$, i.e. those with $\ell \left( I\right) \geq 2^{-\tau }\ell \left( F\right) $. But since the children $F^{\prime }\in \mathfrak{C}_{\mathcal{F}}\left( F\right) $ are pairwise disjoint, and the cardinalities of the nearby sets $\mathcal{N}_{F}$ and $\mathcal{N}_{F^{\prime }}$ are each at most $C_{n}2^{n\tau }$, we see that
\begin{equation*}
\left\Vert \mathcal{S}_{\mathcal{F}^{\tau -\func{shift}}}f\right\Vert _{L^{p}\left( \mu \right) }\leq \left\Vert \mathcal{S}_{\mathcal{F}}f\right\Vert _{L^{p}\left( \mu \right) }+C_{\tau ,n}\left\Vert \mathcal{S}_{\limfunc{Haar}}f\right\Vert _{L^{p}\left( \mu \right) },
\end{equation*}
since each of the square functions $\mathcal{S}_{\mathcal{F}}$ and $\mathcal{S}_{\limfunc{Haar}}$ has already been shown to be bounded on $L^{p}\left( \mu \right) $. We have thus proved the following theorem.
\begin{theorem} \label{square thm}Suppose $\mu $ is a locally finite positive Borel measure on $\mathbb{R}^{n}$. Then for $1<p<\infty $, \begin{equation*} \left\Vert \mathcal{S}_{\mathcal{F}^{\tau -\func{shift}}}f\right\Vert _{L^{p}\left( \mu \right) }\leq C_{p,\tau }\left\Vert f\right\Vert _{L^{p}\left( \mu \right) }. \end{equation*} \end{theorem} Another square function that will arise in the nearby and related forms is \begin{eqnarray*} \mathcal{S}_{\rho ,\delta }f\left( x\right) &\equiv &\left( \sum_{I\in \mathcal{D}\ :x\in I}\left\vert \mathsf{P}_{I}^{\rho ,\delta }f\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}, \\ \text{where }\mathsf{P}_{I}^{\rho ,\delta }f\left( x\right) &\equiv &\sum_{J\in \mathcal{D}:\ 2^{-\rho }\ell \left( I\right) \leq \ell \left( J\right) \leq 2^{\rho }\ell \left( I\right) }2^{-\delta \limfunc{dist}\left( J,I\right) }\bigtriangleup _{J}^{\mu }f\left( x\right) . \end{eqnarray*} \begin{theorem} \label{square thm nearby}Suppose $\mu $ is a locally finite positive Borel measure on $\mathbb{R}^{n}$, and let $0<\rho ,\delta <1$. Then for $ 1<p<\infty $, \begin{equation*} \left\Vert \mathcal{S}_{\rho ,\delta }f\right\Vert _{L^{p}\left( \mu \right) }\leq C_{p,\rho ,\delta }\left\Vert f\right\Vert _{L^{p}\left( \mu \right) }. \end{equation*} \end{theorem} \begin{proof} It is easy to see that $\mathcal{S}_{\rho ,\delta }f\left( x\right) \leq C_{\rho ,\delta }\mathcal{S}_{\limfunc{Haar}}f\left( x\right) $, and the boundedness of $\mathcal{S}_{\rho ,\delta }$ now follows from the boundedness of the Haar square function $\mathcal{S}_{\limfunc{Haar}}$. \end{proof} \subsection{Alpert square functions} Now we extend the Haar square function inequalities to Alpert square functions that use weighted Alpert wavelets in place of Haar wavelets. 
Recall from \cite{RaSaWi} that if $\mathbb{E}_{I;\kappa }^{\mu }$ denotes orthogonal projection in\thinspace $L^{2}\left( \mu \right) $ onto the finite dimensional space of restrictions to $I$ of polynomials of degree less than $\kappa $, then the weighted Alpert projection $\bigtriangleup _{I;\kappa }^{\mu }$ is given by
\begin{equation*}
\bigtriangleup _{I;\kappa }^{\mu }=\left( \sum_{I^{\prime }\in \mathfrak{C}_{\mathcal{D}}\left( I\right) }\mathbb{E}_{I^{\prime };\kappa }^{\mu }\right) -\mathbb{E}_{I;\kappa }^{\mu }.
\end{equation*}
These weighted Alpert projections $\left\{ \bigtriangleup _{I;\kappa }^{\mu }\right\} _{I\in \mathcal{D}}$ are orthogonal and span $L^{2}\left( \mu \right) $ for measures $\mu $ that are infinite on all dyadic tops, and in particular for doubling measures, see \cite{RaSaWi} and \cite{AlSaUr2} for terminology and proofs. We begin by showing that the Alpert square function
\begin{equation*}
\mathcal{S}_{\limfunc{Alpert};\kappa }f\left( x\right) \equiv \left( \sum_{I\in \mathcal{D}}\left\vert \bigtriangleup _{I;\kappa }^{\mu }f\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}
\end{equation*}
is bounded on $L^{p}\left( \mu \right) $ for any $1<p<\infty $ and any locally finite positive Borel measure $\mu $. Indeed, it is enough to show that $\mathcal{S}_{\limfunc{Alpert};\kappa }$ is a martingale difference square function of an $L^{p}\,$bounded martingale in order to use Burkholder's theorem. Recall that $\mathcal{D}_{k}\equiv \left\{ Q\in \mathcal{D}:\ell \left( Q\right) =2^{-k}\right\} $ is the tiling of $\mathbb{R}^{n}$ with dyadic cubes of side length $2^{-k}$. For each $k\in \mathbb{Z}$ define the projections
\begin{equation*}
\mathsf{P}_{k;\kappa }^{\mu }f\left( x\right) \equiv \sum_{Q\in \mathcal{D}_{k}}\mathbb{E}_{Q;\kappa }^{\mu }f
\end{equation*}
of $f$ onto the linear space of functions whose restrictions to cubes in $\mathcal{D}_{k}$ are polynomials of degree less than $\kappa $.
It is easily checked that the sequence $\left\{ \mathbb{E}_{k;\kappa }^{\mu }f\left( x\right) \right\} _{k\in \mathbb{Z}}$, where $\mathbb{E}_{k;\kappa }^{\mu }f\equiv \mathsf{P}_{k;\kappa }^{\mu }f$, is an $L^{p}$ bounded martingale with respect to the increasing sequence $\left\{ \mathcal{E}_{k}\right\} _{k\in \mathbb{Z}}$ of $\sigma $-algebras, whose \emph{martingale difference sequence} is $\left\{ \mathsf{P}_{k+1;\kappa }^{\mu }f-\mathsf{P}_{k;\kappa }^{\mu }f\right\} _{k\in \mathbb{Z}}=\left\{ \sum_{P\in \mathcal{D}_{k}}\bigtriangleup _{P;\kappa }^{\mu }f\right\} _{k\in \mathbb{Z}}$, and where $\mathcal{E}_{k}$ is the $\sigma $-algebra generated by the dyadic cubes in $\mathcal{D}_{k}$, i.e.
\begin{equation*}
\mathcal{E}_{k}\equiv \left\{ E\text{ Borel }\subset \mathbb{R}^{n}:E\cap Q\in \left\{ \emptyset ,Q\right\} \text{ for all }Q\in \mathcal{D}_{k}\right\} .
\end{equation*}
Indeed, for the martingale property it is enough to show that the functions $\mathsf{P}_{k;\kappa }^{\mu }$ and $\mathsf{P}_{k+1;\kappa }^{\mu }$ have the same integral over all $P\in \mathcal{D}_{k}$, and this holds because $\bigtriangleup _{P;\kappa }^{\mu }f$ has vanishing mean on $P$:
\begin{eqnarray*}
&&\int_{P}\mathsf{P}_{k+1;\kappa }^{\mu }f\left( x\right) d\mu \left( x\right) -\int_{P}\mathsf{P}_{k;\kappa }^{\mu }f\left( x\right) d\mu \left( x\right) =\int_{P}\left( \mathsf{P}_{k+1;\kappa }^{\mu }f\left( x\right) -\mathsf{P}_{k;\kappa }^{\mu }f\left( x\right) \right) d\mu \left( x\right) \\
&=&\int_{P}\left( \sum_{Q\in \mathcal{D}_{k+1}}\mathbb{E}_{Q;\kappa }^{\mu }f-\sum_{Q\in \mathcal{D}_{k}}\mathbb{E}_{Q;\kappa }^{\mu }f\right) d\mu \left( x\right) =\int_{P}\sum_{Q\in \mathcal{D}_{k+1}:Q\subset P}\left( \mathbb{E}_{Q;\kappa }^{\mu }f-\mathbb{E}_{P;\kappa }^{\mu }f\right) d\mu \left( x\right) \\
&=&\int_{P}\left( \bigtriangleup _{P;\kappa }^{\mu }f\left( x\right) \right) d\mu \left( x\right) =0.
\end{eqnarray*} The $L^{p}$ boundedness $\left\Vert \mathbb{E}_{k;\kappa }^{\mu }f\right\Vert _{L^{p}\left( \mu \right) }\leq C$ follows easily from the estimate $\left\Vert \mathbb{E}_{Q;\kappa }^{\mu }f\right\Vert _{\infty }\lesssim E_{Q}^{\mu }\left\vert f\right\vert $ in (\ref{analogue}) for $ Q\in \mathcal{D}_{k}$. Thus Burkholder's theorem and Khintchine's inequality imply that the Alpert $ \mathcal{F}$-square function $\mathcal{S}_{\mathcal{F};\kappa }f$ defined by \begin{equation*} \mathcal{S}_{\mathcal{F};\kappa }f\left( x\right) =\left( \sum_{k=0}^{\infty }\left\vert \mathsf{P}_{k;\kappa }^{\mu }f\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}=\left( \sum_{F\in \mathcal{F}}\left\vert \mathsf{P }_{\mathcal{C}_{F};\kappa }^{\mu }f\left( x\right) \right\vert ^{2}\right) ^{ \frac{1}{2}}=\left( \sum_{F\in \mathcal{F}}\left\vert \sum_{I\in \mathcal{C} _{F}}\bigtriangleup _{I;\kappa }^{\mu }f\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}, \end{equation*} is bounded on $L^{p}\left( \mu \right) $ for all $1<p<\infty $, and just as in the case of Haar wavelets, so are the square functions \begin{equation*} \mathcal{S}_{\mathcal{F}^{\tau -\func{shift}};\kappa }f\left( x\right) \equiv \left( \sum_{F\in \mathcal{F}}\left\vert \mathsf{P}_{\mathcal{C} _{F}^{\mu ,\tau -\func{shift}};\kappa }^{\mu }f\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}, \end{equation*} and \begin{eqnarray*} \mathcal{S}_{\rho ,\delta ;\kappa }f\left( x\right) &\equiv &\left( \sum_{I\in \mathcal{D}\ :x\in I}\left\vert \mathsf{P}_{I;\kappa }^{\rho ,\delta }f\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}, \\ \text{where }\mathsf{P}_{I;\kappa }^{\rho ,\delta }f\left( x\right) &\equiv &\sum_{J\in \mathcal{D}:\ 2^{-\rho }\ell \left( I\right) \leq \ell \left( J\right) \leq 2^{\rho }\ell \left( I\right) }2^{-\delta \limfunc{dist}\left( J,I\right) }\bigtriangleup _{J;\kappa }^{\mu }f\left( x\right) . \end{eqnarray*} Altogether we obtain the following theorem. 
\begin{theorem} \label{Alpert square thm}Suppose $\mu $ is a locally finite positive Borel measure on $\mathbb{R}^{n}$. Then for $\kappa \in \mathbb{N}$, $1<p<\infty $ and $0<\rho ,\delta <1$, we have \begin{eqnarray*} &&\left\Vert \mathcal{S}_{\limfunc{Alpert};\kappa }f\right\Vert _{L^{p}\left( \mu \right) }+\left\Vert \mathcal{S}_{\mathcal{F};\kappa }f\right\Vert _{L^{p}\left( \mu \right) }+\left\Vert \mathcal{S}_{\mathcal{F} ^{\tau -\func{shift}};\kappa }f\right\Vert _{L^{p}\left( \mu \right) }\leq C_{p,n,\kappa ,\tau }\left\Vert f\right\Vert _{L^{p}\left( \mu \right) }, \\ &&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left\Vert \mathcal{S}_{\rho ,\delta ;\kappa }f\right\Vert _{L^{p}\left( \mu \right) }\leq C_{p,\rho ,\delta ,n,\kappa }\left\Vert f\right\Vert _{L^{p}\left( \mu \right) }. \end{eqnarray*} \end{theorem} \subsection{Vector-valued inequalities} We begin by reviewing the well-known $\ell ^{2}$-extension of a bounded linear operator. We include the simple proof here as it sheds light on the nature of the quadratic Muckenhoupt condition, in particular on its necessity for the norm inequality: one must test the norm inequality over \emph{all} functions $\mathbf{f}_{\mathbf{u}}$ defined below. Let $M\in \mathbb{N}$ be a large positive integer that we will send to $ \infty $ later on.
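Before giving the general argument, we note that at the Hilbert space exponent $p=2$ the vector-valued extension follows by simply summing the scalar bounds $\Vert Tf_{j}\Vert _{L^{2}\left( \omega \right) }^{2}\leq \Vert T\Vert ^{2}\Vert f_{j}\Vert _{L^{2}\left( \sigma \right) }^{2}$ over $j$. The following minimal numerical sketch illustrates this case; it is our own illustration, with $\mathbb{R}^{n}$ replaced by a finite set carrying counting measure and $T$ by a generic matrix, and all names below are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

n, M = 40, 8                      # discretized space of size n, M component functions
T = rng.standard_normal((n, n))   # a generic bounded linear operator, here a matrix
F = rng.standard_normal((M, n))   # the components f_1, ..., f_M as rows

op_norm = np.linalg.norm(T, 2)    # ||T||_{2->2}, the largest singular value of T

TF = F @ T.T                      # rows are T f_1, ..., T f_M

# L^2 norm (in x) of the pointwise l^2 norm x -> |Tf(x)|_{l^2}, and the same for f
lhs = np.sqrt((TF ** 2).sum())
rhs = op_norm * np.sqrt((F ** 2).sum())

assert lhs <= rhs + 1e-12         # the l^2-valued extension at p = 2
```

For $p\neq 2$ this naive summation is no longer available, which is precisely why the averaging over the sphere $\mathbb{S}^{M-1}$ carried out below is needed.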
Suppose $T$ is bounded from $L^{p}\left( \sigma \right) $ to $L^{p}\left( \omega \right) $, $0<p<\infty $, and for $\mathbf{f}=\left\{ f_{j}\right\} _{j=1}^{M}$, define \begin{equation*} T\mathbf{f}\equiv \left\{ Tf_{j}\right\} _{j=1}^{M}. \end{equation*} For any unit vector $\mathbf{u}=\left( u_{j}\right) _{j=1}^{M}$ in $ \mathbb{C}^{M}$ define \begin{equation*} \mathbf{f}_{\mathbf{u}}\equiv \left\langle \mathbf{f},\mathbf{u} \right\rangle \text{ and }T_{\mathbf{u}}\mathbf{f}\equiv \left\langle T \mathbf{f},\mathbf{u}\right\rangle =T\left\langle \mathbf{f},\mathbf{u} \right\rangle =T\mathbf{f}_{\mathbf{u}}, \end{equation*} where the final equalities follow since $T$ is linear. We have \begin{equation*} \int_{\mathbb{R}^{n}}\left\vert T_{\mathbf{u}}\mathbf{f}\left( x\right) \right\vert ^{p}d\omega \left( x\right) =\int_{\mathbb{R}^{n}}\left\vert T \mathbf{f}_{\mathbf{u}}\left( x\right) \right\vert ^{p}d\omega \left( x\right) \leq \left\Vert T\right\Vert _{L^{p}\left( \sigma \right) \rightarrow L^{p}\left( \omega \right) }^{p}\int_{\mathbb{R}^{n}}\left\vert \mathbf{f}_{\mathbf{u}}\left( x\right) \right\vert ^{p}d\sigma \left( x\right) , \end{equation*} where \begin{equation*} T_{\mathbf{u}}\mathbf{f}\left( x\right) =\left\langle T\mathbf{f}\left( x\right) ,\mathbf{u}\right\rangle =\left\vert T\mathbf{f}\left( x\right) \right\vert _{\ell ^{2}}\left\langle \frac{T\mathbf{f}\left( x\right) }{ \left\vert T\mathbf{f}\left( x\right) \right\vert _{\ell ^{2}}},\mathbf{u} \right\rangle =\left\vert T\mathbf{f}\left( x\right) \right\vert _{\ell ^{2}}\ \cos \theta , \end{equation*} if $\theta $ is the angle between $\frac{T\mathbf{f}\left( x\right) }{ \left\vert T\mathbf{f}\left( x\right) \right\vert _{\ell ^{2}}}$ and $\mathbf{u}$ in $ \mathbb{C}^{M}$.
Then using the fact that, by rotational invariance of the measure $d\mathbf{u}$ on the sphere, \begin{equation*} \int_{\mathbb{S}^{M-1}}\left\vert \left\langle \mathbf{u},\mathbf{v} \right\rangle \right\vert ^{p}d\mathbf{u}=\gamma _{p}\text{ for }\left\Vert \mathbf{v}\right\Vert =1, \end{equation*} where the constant $\gamma _{p}=\gamma _{p,M}$ is finite, positive, and independent of the unit vector $\mathbf{v}$, we have \begin{eqnarray*} &&\int_{\mathbb{S}^{M-1}}\left\{ \int_{\mathbb{R}^{n}}\left\vert T_{\mathbf{u }}\mathbf{f}\left( x\right) \right\vert ^{p}d\omega \left( x\right) \right\} d\mathbf{u}=\int_{\mathbb{R}^{n}}\left\{ \int_{\mathbb{S}^{M-1}}\left\vert T_{\mathbf{u}}\mathbf{f}\left( x\right) \right\vert ^{p}d\mathbf{u}\right\} d\omega \left( x\right) \\ &=&\int_{\mathbb{R}^{n}}\left\vert T\mathbf{f}\left( x\right) \right\vert _{\ell ^{2}}^{p}\left\{ \int_{\mathbb{S}^{M-1}}\left\vert \cos \theta \right\vert ^{p}d\mathbf{u}\right\} d\omega \left( x\right) =\gamma _{p}\int_{\mathbb{R}^{n}}\left\vert T\mathbf{f}\left( x\right) \right\vert _{\ell ^{2}}^{p}d\omega \left( x\right) , \end{eqnarray*} and similarly, \begin{equation*} \int_{\mathbb{S}^{M-1}}\left\{ \int_{\mathbb{R}^{n}}\left\vert \mathbf{f}_{ \mathbf{u}}\left( x\right) \right\vert ^{p}d\sigma \left( x\right) \right\} d \mathbf{u}=\gamma _{p}\int_{\mathbb{R}^{n}}\left\vert \mathbf{f}\left( x\right) \right\vert _{\ell ^{2}}^{p}d\sigma \left( x\right) .
\end{equation*} Altogether then, \begin{eqnarray*} &&\gamma _{p}\int_{\mathbb{R}^{n}}\left\vert T\mathbf{f}\left( x\right) \right\vert _{\ell ^{2}}^{p}d\omega \left( x\right) =\int_{\mathbb{S} ^{M-1}}\left\{ \int_{\mathbb{R}^{n}}\left\vert T_{\mathbf{u}}\mathbf{f} \left( x\right) \right\vert ^{p}d\omega \left( x\right) \right\} d\mathbf{u} =\int_{\mathbb{S}^{M-1}}\left\{ \int_{\mathbb{R}^{n}}\left\vert T\mathbf{f}_{ \mathbf{u}}\left( x\right) \right\vert ^{p}d\omega \left( x\right) \right\} d \mathbf{u} \\ &\leq &\int_{\mathbb{S}^{M-1}}\left\{ \left\Vert T\right\Vert _{L^{p}\left( \sigma \right) \rightarrow L^{p}\left( \omega \right) }^{p}\int_{\mathbb{R} ^{n}}\left\vert \mathbf{f}_{\mathbf{u}}\left( x\right) \right\vert ^{p}d\sigma \left( x\right) \right\} d\mathbf{u}=\gamma _{p}\left\Vert T\right\Vert _{L^{p}\left( \sigma \right) \rightarrow L^{p}\left( \omega \right) }^{p}\int_{\mathbb{R}^{n}}\left\vert \mathbf{f}\left( x\right) \right\vert _{\ell ^{2}}^{p}d\sigma \left( x\right) , \end{eqnarray*} and upon dividing both sides by $\gamma _{p}$ we conclude that \begin{equation*} \int_{\mathbb{R}^{n}}\left\vert T\mathbf{f}\left( x\right) \right\vert _{\ell ^{2}}^{p}d\omega \left( x\right) \leq \left\Vert T\right\Vert _{L^{p}\left( \sigma \right) \rightarrow L^{p}\left( \omega \right) }^{p}\int_{\mathbb{R}^{n}}\left\vert \mathbf{f}\left( x\right) \right\vert _{\ell ^{2}}^{p}d\sigma \left( x\right) . \end{equation*} Finally we can let $M\nearrow \infty $ to obtain the desired vector-valued extension, \begin{equation} \left( \int_{\mathbb{R}^{n}}\left( \sqrt{\sum_{j=1}^{\infty }\left\vert Tf_{j}\left( x\right) \right\vert ^{2}}\right) ^{p}d\omega \left( x\right) \right) ^{\frac{1}{p}}\leq \left\Vert T\right\Vert _{L^{p}\left( \sigma \right) \rightarrow L^{p}\left( \omega \right) }\left( \int_{\mathbb{R} ^{n}}\left( \sqrt{\sum_{j=1}^{\infty }\left\vert f_{j}\left( x\right) \right\vert ^{2}}\right) ^{p}d\sigma \left( x\right) \right) ^{\frac{1}{p}}. 
\label{in full} \end{equation} \section{Necessity of quadratic testing and $A_{p}$ conditions} We can use the vector-valued inequality (\ref{in full}) to obtain the necessity of the quadratic testing inequality, namely \begin{equation} \left\Vert \left( \sum_{i=1}^{\infty }\left( a_{i}\mathbf{1} _{I_{i}}T_{\sigma }^{\lambda }\mathbf{1}_{I_{i}}\right) ^{2}\right) ^{\frac{1 }{2}}\right\Vert _{L^{p}\left( \omega \right) }\leq \mathfrak{T}_{T^{\lambda }}^{\ell ^{2}}\left( \sigma ,\omega \right) \left\Vert \left( \sum_{i=1}^{\infty }\left( a_{i}\mathbf{1}_{I_{i}}\right) ^{2}\right) ^{ \frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) }, \label{quad cube testing} \end{equation} for the boundedness of $T^{\lambda }$ from $L^{p}\left( \sigma \right) $ to $ L^{p}\left( \omega \right) $, i.e. $\mathfrak{T}_{T^{\lambda }}^{\ell ^{2}}\left( \sigma ,\omega \right) \lesssim \left\Vert T^{\lambda }\right\Vert _{L^{p}\left( \sigma \right) \rightarrow L^{p}\left( \omega \right) }$. Indeed, we simply set $f_{i}\equiv a_{i}\mathbf{1}_{I_{i}}$ in ( \ref{in full}) to obtain the \emph{global} quadratic testing inequality, \begin{equation} \left\Vert \left( \sum_{i=1}^{\infty }\left( a_{i}T_{\sigma }^{\lambda } \mathbf{1}_{I_{i}}\right) ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) }\leq \mathfrak{T}_{T^{\lambda },p}^{\ell ^{2}, \func{global}}\left( \sigma ,\omega \right) \left\Vert \left( \sum_{i=1}^{\infty }\left( a_{i}\mathbf{1}_{I_{i}}\right) ^{2}\right) ^{ \frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) }, \label{global cube testing} \end{equation} and then we simply note the pointwise inequality \begin{equation*} \sum_{i=1}^{\infty }\left( a_{i}\mathbf{1}_{I_{i}}T_{\sigma }^{\lambda } \mathbf{1}_{I_{i}}\right) \left( x\right) ^{2}=\sum_{i=1}^{\infty }\left\vert a_{i}\right\vert ^{2}\left\vert T_{\sigma }^{\lambda }\mathbf{1} _{I_{i}}\left( x\right) \right\vert ^{2}\mathbf{1}_{I_{i}}\left( x\right) \leq \sum_{i=1}^{\infty }\left\vert a_{i}\right\vert 
^{2}\left\vert T_{\sigma }^{\lambda }\mathbf{1}_{I_{i}}\left( x\right) \right\vert ^{2}, \end{equation*} to obtain the local version (\ref{quad cube testing}). Now we turn to the necessity of the quadratic offset $A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}$ condition, namely \begin{equation*} \left\Vert \left( \sum_{i=1}^{\infty }\left( a_{i}\mathbf{1}_{I_{i}^{\ast }} \frac{\left\vert I_{i}\right\vert _{\sigma }}{\left\vert I_{i}\right\vert ^{1-\frac{\lambda }{n}}}\right) ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) }\leq A_{p}^{\lambda ,\ell ^{2},\limfunc{offset} }\left( \sigma ,\omega \right) \left\Vert \left( \sum_{i=1}^{\infty }\left\vert a_{i}\mathbf{1}_{I_{i}}\right\vert ^{2}\right) ^{\frac{1}{2} }\right\Vert _{L^{p}\left( \sigma \right) }. \end{equation*} Suppose that $T^{\lambda }$ is Stein elliptic, and fix appropriate sequences $\left\{ I_{i}\right\} _{i=1}^{\infty }$ and $\left\{ a_{i}\right\} _{i=1}^{\infty }$ of cubes and numbers respectively. Then there is a positive constant $c$ and a choice of appropriate cubes $I_{i}^{\ast }$ such that \begin{equation*} \left\vert T_{\sigma }^{\lambda }\mathbf{1}_{I_{i}}\left( x\right) \right\vert \geq c\frac{\left\vert I_{i}\right\vert _{\sigma }}{\left\vert I_{i}\right\vert ^{1-\frac{\lambda }{n}}}\text{ for }x\in I_{i}^{\ast },\ \ \ \ \ 1\leq i<\infty . \end{equation*} Now we simply apply (\ref{global cube testing}) to obtain \begin{equation*} A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \lesssim \mathfrak{T}_{T^{\lambda },p}^{\ell ^{2},\func{global}}\left( \sigma ,\omega \right) \leq \mathfrak{N}_{T^{\lambda },p}\left( \sigma ,\omega \right) . \end{equation*} It should be noticed that while the necessity of the quadratic Muckenhoupt condition $\mathcal{A}_{p}^{\lambda ,\ell ^{2}}\left( \sigma ,\omega \right) $ itself is easily shown for the Hilbert transform, the necessity for even nice operators in higher dimensions is much more difficult.
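For the reader's convenience, we sketch the chain of inequalities behind this last application; it is only an unwinding of the displays above. Pointwise, the ellipticity lower bound gives

```latex
\begin{equation*}
\sum_{i=1}^{\infty }\left( a_{i}\mathbf{1}_{I_{i}^{\ast }}\left( x\right)
\frac{\left\vert I_{i}\right\vert _{\sigma }}{\left\vert I_{i}\right\vert
^{1-\frac{\lambda }{n}}}\right) ^{2}\leq c^{-2}\sum_{i=1}^{\infty
}\left\vert a_{i}\right\vert ^{2}\left\vert T_{\sigma }^{\lambda }\mathbf{1}
_{I_{i}}\left( x\right) \right\vert ^{2},
\end{equation*}
so that taking $L^{p}\left( \omega \right) $ norms and applying
(\ref{global cube testing}),
\begin{equation*}
\left\Vert \left( \sum_{i=1}^{\infty }\left( a_{i}\mathbf{1}_{I_{i}^{\ast }}
\frac{\left\vert I_{i}\right\vert _{\sigma }}{\left\vert I_{i}\right\vert
^{1-\frac{\lambda }{n}}}\right) ^{2}\right) ^{\frac{1}{2}}\right\Vert
_{L^{p}\left( \omega \right) }\leq c^{-1}\mathfrak{T}_{T^{\lambda },p}^{\ell
^{2},\func{global}}\left( \sigma ,\omega \right) \left\Vert \left(
\sum_{i=1}^{\infty }\left( a_{i}\mathbf{1}_{I_{i}}\right) ^{2}\right) ^{
\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) }.
\end{equation*}
```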
\subsection{Quadratic Alpert weak boundedness property} It is convenient in our proof to introduce the quadratic \emph{Alpert weak boundedness property} constant $\mathcal{AWBP}_{T^{\lambda },p}^{\ell ^{2},\kappa ,\rho }\left( \sigma ,\omega \right) $ as the least constant in the inequality, \begin{equation} \left\Vert \left( \sum_{I\in \mathcal{D}}\sum_{J\in \func{Adj}_{\rho }\left( I\right) }\left\vert \bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\bigtriangleup _{I;\kappa }^{\sigma }f\right\vert ^{2}\right) ^{ \frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) }\leq \mathcal{AWBP} _{T^{\lambda },p}^{\ell ^{2},\kappa ,\rho }\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }. \label{quad Haar test} \end{equation} There is only one quadratic Alpert inequality in the weak boundedness condition (\ref{quad Haar test}), since we show below in Proposition \ref {diag form} that (\ref{quad Haar test}) is \emph{equivalent} to the bilinear inequality \begin{equation} \left\vert \sum_{I\in \mathcal{D}}\sum_{J\in \func{Adj}_{\rho }\left( I\right) }\left\langle T_{\sigma }^{\lambda }\bigtriangleup _{I;\kappa }^{\sigma }f,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega }\right\vert \leq C\left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) },\ \ \ \ \ f\in L^{p}\left( \sigma \right) ,g\in L^{p^{\prime }}\left( \omega \right) , \label{bil in} \end{equation} which is then also equivalent to the inequality dual to that appearing in ( \ref{quad Haar test}). In fact, this bilinear inequality is a `quadratic analogue' of a scalar weak boundedness property, which points to the relative `weakness' of (\ref{quad Haar test}). 
Of course, one can use the $ L^{\infty }$ estimates (\ref{analogue'}) from \cite{Saw6} on Alpert wavelets, together with the vector-valued maximal theorem of Fefferman and Stein in a space of homogeneous type \cite[Theorem 2.1]{GrLiYa}, to show that for doubling measures, we actually have $\mathcal{AWBP}_{T^{\lambda },p}^{\ell ^{2},\kappa ,\rho }\lesssim \mathfrak{T}_{T^{\lambda },p}^{\ell ^{2},\func{global}}$, but we will instead show a stronger result in Lemma \ref{stronger} below. The property (\ref{quad Haar test}) appears at first glance to be much stronger than the corresponding scalar testing and weak boundedness conditions, mainly because the standard proof of necessity of these conditions involves testing the scalar norm inequality over a dense set of functions $\sum_{i=1}^{\infty }u_{i}a_{i}\mathbf{1}_{I_{i}}$ with $ \sum_{i=1}^{\infty }u_{i}^{2}=1$, see the subsection on vector-valued inequalities above. However, the minimal nature of the role of the quadratic Alpert weak boundedness property (\ref{quad Haar test}) is demonstrated by considering the adjacent diagonal bilinear form $\mathsf{B}_{\func{Adj},\rho }\left( f,g\right) $ associated with the form \begin{equation*} \left\langle T_{\sigma }^{\lambda }f,g\right\rangle _{\omega }=\left\langle T_{\sigma }^{\lambda }\left( \sum_{I\in \mathcal{D}}\bigtriangleup _{I;\kappa }^{\sigma }f\right) ,\sum_{J\in \mathcal{D}}\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega }=\sum_{I,J\in \mathcal{D} }\left\langle T_{\sigma }^{\lambda }\bigtriangleup _{I;\kappa }^{\sigma }f,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega }\ , \end{equation*} where $f=\sum_{I\in \mathcal{D}}\bigtriangleup _{I;\kappa }^{\sigma }f$ and $ g=\sum_{J\in \mathcal{D}}\bigtriangleup _{J;\kappa }^{\omega }g$ are the weighted Alpert expansions of $f$ and $g$ respectively. 
Here the adjacent diagonal form $\mathsf{B}_{\func{Adj},\rho }\left( f,g\right) $ is given by \begin{equation*} \mathsf{B}_{\func{Adj},\rho }\left( f,g\right) \equiv \sum_{I\in \mathcal{D} }\sum_{J\in \func{Adj}_{\rho }\left( I\right) }\left\langle T_{\sigma }^{\lambda }\bigtriangleup _{I;\kappa }^{\sigma }f,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega }, \end{equation*} where $\func{Adj}_{\rho }\left( I\right) $ is defined in (\ref{def Adj}). We now demonstrate that the norm of $\mathsf{B}_{\func{Adj},\rho }\left( f,g\right) $ as a bilinear form is comparable to the quadratic Alpert weak boundedness constant $\mathcal{AWBP}_{T^{\lambda },p}^{\ell ^{2},\kappa ,\rho }$. \begin{proposition} \label{diag form}Suppose $1<p<\infty $, $0\leq \rho <\infty $, and $\sigma $ and $\omega $ are positive locally finite Borel measures on $\mathbb{R}^{n}$ . If $\mathfrak{N}_{L^{p}\left( \sigma \right) \times L^{p^{\prime }}\left( \omega \right) }$ denotes the smallest constant $C$ in the bilinear inequality \begin{equation*} \left\vert \mathsf{B}_{\func{Adj},\rho }\left( f,g\right) \right\vert \leq C\left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }, \end{equation*} then \begin{equation*} \mathfrak{N}_{L^{p}\left( \sigma \right) \times L^{p^{\prime }}\left( \omega \right) }\approx \mathcal{AWBP}_{T^{\lambda },p}^{\ell ^{2},\kappa ,\rho }\left( \sigma ,\omega \right) .
\end{equation*} \end{proposition} \begin{proof} We have \begin{eqnarray*} &&\mathfrak{N}_{L^{p}\left( \sigma \right) \times L^{p^{\prime }}\left( \omega \right) }=\sup_{\left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }=\left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }=1}\left\vert \mathsf{B}_{\func{Adj},\rho }\left( f,g\right) \right\vert \\ &=&\sup_{\left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }=\left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }=1}\left\vert \int_{ \mathbb{R}^{n}}\left[ \sum_{I\in \mathcal{D}}\sum_{J\in \func{Adj}_{\rho }\left( I\right) }\bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\bigtriangleup _{I;\kappa }^{\sigma }f\left( x\right) \right] \ g\left( x\right) \ d\omega \left( x\right) \right\vert \\ &=&\sup_{\left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }=1}\left( \int_{\mathbb{R}^{n}}\left\vert \sum_{I\in \mathcal{D}}\sum_{J\in \func{Adj} _{\rho }\left( I\right) }\bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\bigtriangleup _{I;\kappa }^{\sigma }f\left( x\right) \right\vert ^{p}d\omega \left( x\right) \right) ^{\frac{1}{p}}. \end{eqnarray*} Now we use the fact that Alpert multipliers are bounded on $L^{p}\left( \sigma \right) $ to obtain \begin{equation*} \left\Vert \sum_{I\in \mathcal{D}}\pm \bigtriangleup _{I;\kappa }^{\sigma }f\right\Vert _{L^{p}\left( \sigma \right) }\approx \left\Vert \sum_{I\in \mathcal{D}}\bigtriangleup _{I;\kappa }^{\sigma }f\right\Vert _{L^{p}\left( \sigma \right) }=\left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\ .
\end{equation*} Hence by the equivalence $\left\Vert \mathcal{S}_{\limfunc{Alpert} }f\right\Vert _{L^{p}\left( \sigma \right) }\approx \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }$, we have \begin{eqnarray*} &&\mathfrak{N}_{L^{p}\left( \sigma \right) \times L^{p^{\prime }}\left( \omega \right) }\approx \sup_{\left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }=1}\mathbb{E}_{\pm }\left( \int_{\mathbb{R}^{n}}\left\vert \sum_{I\in \mathcal{D}}\sum_{J\in \func{Adj}_{\rho }\left( I\right) }\bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\left( \pm \bigtriangleup _{I;\kappa }^{\sigma }f\right) \left( x\right) \right\vert ^{p}d\omega \left( x\right) \right) ^{\frac{1}{p}} \\ &\approx &\sup_{\left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }=1}\left( \int_{\mathbb{R}^{n}}\left( \sum_{I\in \mathcal{D}}\sum_{J\in \func{Adj}_{\rho }\left( I\right) }\left\vert \bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) \left( x\right) \right\vert ^{2}\right) ^{\frac{p}{2}}d\omega \left( x\right) \right) ^{\frac{1}{p}}=\mathcal{AWBP}_{T^{\lambda },p}^{\ell ^{2},\kappa ,\rho }\left( \sigma ,\omega \right) . \end{eqnarray*} \end{proof} Note that when $\rho =0$, we have $\func{Adj}_{\rho }\left( I\right) =\func{ Adj}_{0}\left( I\right) =\func{Adj}\left( I\right) $ as defined in the introduction. \begin{lemma} \label{stronger}Suppose that $\sigma $ and $\omega $ are doubling measures, $ 1<p<\infty $, and that $T^{\lambda }$ is a smooth $\lambda $-fractional Calder\'{o}n-Zygmund operator. Then for $0<\varepsilon <1$, \begin{equation*} \mathcal{AWBP}_{T^{\lambda },p}^{\ell ^{2},\kappa ,\rho }\left( \sigma ,\omega \right) \leq C_{\varepsilon }\left[ \mathcal{WBP}_{T^{\lambda },p}^{\ell ^{2}}\left( \sigma ,\omega \right) +A_{p}^{\lambda ,\ell ^{2}, \limfunc{offset}}\left( \sigma ,\omega \right) \right] +\varepsilon \mathfrak{N}_{T^{\lambda },p}\left( \sigma ,\omega \right) .
\end{equation*} \end{lemma} \begin{proof} We use an idea from the proof in \cite[Theorem 9 on page 23]{AlSaUr}. Fix a dyadic cube $I$. If $P$ is an $I$-normalized polynomial of degree less than $ \kappa $ on the cube $I$, i.e. $\left\Vert P\right\Vert _{L^{\infty }}\approx 1$, then we can approximate $P$ by a step function \begin{equation*} S\equiv \sum_{I^{\prime }\in \mathfrak{C}_{\mathcal{D}}^{\left( m\right) }\left( I\right) }a_{I^{\prime }}\mathbf{1}_{I^{\prime }}, \end{equation*} satisfying \begin{equation*} \left\Vert S-\mathbf{1}_{I}P\right\Vert _{L^{\infty }\left( \sigma \right) }<\varepsilon \ , \end{equation*} provided we take $m\geq 1$ sufficiently large depending on $n$ and $\kappa $, but independent of the cube $I$. Using this we can write \begin{eqnarray*} \bigtriangleup _{I;\kappa }^{\sigma }f &=&\sum_{I^{\prime }\in \mathfrak{C}_{ \mathcal{D}}\left( I\right) }\sum_{I^{\prime \prime }\in \mathfrak{C}_{ \mathcal{D}}^{\left( m\right) }\left( I^{\prime }\right) }a_{I^{\prime \prime }}\mathbf{1}_{I^{\prime \prime }}+\func{Error}_{I}\ , \\ \bigtriangleup _{J;\kappa }^{\omega }g &=&\sum_{J^{\prime }\in \mathfrak{C}_{ \mathcal{D}}\left( J\right) }\sum_{J^{\prime \prime }\in \mathfrak{C}_{ \mathcal{D}}^{\left( m\right) }\left( J^{\prime }\right) }b_{J^{\prime \prime }}\mathbf{1}_{J^{\prime \prime }}+\func{Error}_{J}\ , \end{eqnarray*} where $\left\Vert \func{Error}_{I}\right\Vert _{\infty },\left\Vert \func{ Error}_{J}\right\Vert _{\infty }<\frac{\varepsilon }{2}$ and the constants $a_{I^{\prime \prime }}$ and $b_{J^{\prime \prime }}$ are controlled by $\left\Vert \mathbb{E}_{I^{\prime }}\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) \right\Vert _{\infty }$ and $\left\Vert \mathbb{E}_{J^{\prime }}\left( \bigtriangleup _{J;\kappa }^{\omega }g\right) \right\Vert _{\infty }$ respectively.
Thus we have \begin{eqnarray*} &&\mathsf{B}_{\func{Adj},\rho }\left( f,g\right) \equiv \sum_{I\in \mathcal{D }}\sum_{J\in \func{Adj}_{\rho }\left( I\right) }\left\langle T_{\sigma }^{\lambda }\bigtriangleup _{I;\kappa }^{\sigma }f,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega } \\ &=&\left\{ \sum_{I\in \mathcal{D}}\sum_{I^{\prime }\in \mathfrak{C}_{ \mathcal{D}}\left( I\right) }\sum_{I^{\prime \prime }\in \mathfrak{C}_{ \mathcal{D}}^{\left( m\right) }\left( I^{\prime }\right) }\right\} \left\{ \sum_{J\in \func{Adj}_{\rho }\left( I\right) }\sum_{J^{\prime }\in \mathfrak{ C}_{\mathcal{D}}\left( J\right) }\sum_{J^{\prime \prime }\in \mathfrak{C}_{ \mathcal{D}}^{\left( m\right) }\left( J^{\prime }\right) }\right\} a_{I^{\prime \prime }}b_{J^{\prime \prime }}\left\langle T_{\sigma }^{\lambda }\mathbf{1}_{I^{\prime \prime }},\mathbf{1}_{J^{\prime \prime }}\right\rangle _{\omega }+\func{Error} \\ &=&\left\{ \sum_{\overline{J^{\prime \prime }}\cap \overline{I^{\prime \prime }}=\emptyset }+\sum_{\overline{J^{\prime \prime }}\cap \overline{ I^{\prime \prime }}\not=\emptyset }\right\} a_{I^{\prime \prime }}b_{J^{\prime \prime }}\left\langle T_{\sigma }^{\lambda }\mathbf{1} _{I^{\prime \prime }},\mathbf{1}_{J^{\prime \prime }}\right\rangle _{\omega }+\func{Error}\equiv T_{\func{sep}}+T_{\func{touch}}+\func{Error}, \end{eqnarray*} where we have suppressed many of the conditions governing the dyadic cubes $ I^{\prime \prime }$ and $J^{\prime \prime }$, including the fact that $\ell \left( J^{\prime \prime }\right) =2^{-m-1}\ell \left( J\right) =2^{-m-1}\ell \left( I\right) =\ell \left( I^{\prime \prime }\right) $.
Thus the cubes $J^{\prime \prime }$ and $I^{\prime \prime }$ arising in term $T_{\func{sep}}$ are separated, and it is then an easy matter to see that \begin{equation*} \left\vert T_{\func{sep}}\right\vert \leq C_{m}A_{p}^{\lambda ,\ell ^{2}, \limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }. \end{equation*} As for the term $T_{\func{touch}}$, it is easily controlled by the weak boundedness constant, \begin{equation*} \left\vert T_{\func{touch}}\right\vert \leq C_{m}\mathcal{WBP}_{T^{\lambda },p}^{\ell ^{2}}\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }. \end{equation*} Finally, the error term $\func{Error}$ is immediately controlled by a small multiple of the operator norm, \begin{equation*} \left\vert \func{Error}\right\vert \leq C\varepsilon \mathfrak{N} _{T^{\lambda },p}\left( \sigma ,\omega \right) . \end{equation*} \end{proof} \section{Forms requiring testing conditions} The three forms requiring conditions other than those of Muckenhoupt type are the Alpert adjacent diagonal form, which uses only the quadratic Alpert weak boundedness constant $\mathcal{AWBP}_{T^{\lambda },p}^{\ell ^{2},\kappa ,\rho }$, and the two dual paraproduct forms, which each use only the appropriate scalar testing condition $\mathfrak{T}_{T^{\lambda },p}$ or $ \mathfrak{T}_{T^{\lambda ,\ast },p^{\prime }}$.
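Schematically, and suppressing the precise corona decomposition from which these forms arise, the reduction may be summarized as follows; the display below is meant only as a guide, not as a precise identity.

```latex
\begin{equation*}
\left\langle T_{\sigma }^{\lambda }f,g\right\rangle _{\omega }=\mathsf{B}_{
\func{Adj},\rho }\left( f,g\right) +\mathsf{B}_{\limfunc{paraproduct}
}\left( f,g\right) +\mathsf{B}_{\limfunc{paraproduct}}^{\ast }\left(
f,g\right) +\left( \text{forms controlled by quadratic Muckenhoupt
conditions}\right) ,
\end{equation*}
```

where $\mathsf{B}_{\limfunc{paraproduct}}^{\ast }$ denotes the dual paraproduct form.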
\subsection{Adjacent diagonal form} Here we control the quadratic adjacent form by \begin{eqnarray*} \left\vert \mathsf{B}_{\func{Adj},\rho }\left( f,g\right) \right\vert &=&\left\vert \dsum\limits_{I\in \mathcal{D},\ J\in \func{Adj}_{\rho }\left( I\right) }\left\langle T_{\sigma }^{\lambda }\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) ,\left( \bigtriangleup _{J;\kappa }^{\omega }g\right) \right\rangle _{\omega }\right\vert \\ &=&\left\vert \int_{\mathbb{R}^{n}}\dsum\limits_{I\in \mathcal{D},\ J\in \func{Adj}_{\rho }\left( I\right) }\bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) \left( x\right) \ \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \ d\omega \left( x\right) \right\vert \\ &\leq &\int_{\mathbb{R}^{n}}\left( \dsum\limits_{I\in \mathcal{D},\ J\in \func{Adj}_{\rho }\left( I\right) }\left\vert \bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) \left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\ \left( \dsum\limits_{I\in \mathcal{D},\ J\in \func{Adj}_{\rho }\left( I\right) }\left\vert \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\ d\omega \left( x\right) \\ &\lesssim &\left\Vert \left( \dsum\limits_{I\in \mathcal{D},\ J\in \func{Adj} _{\rho }\left( I\right) }\left\vert \bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) \left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) }\ \left\Vert \mathcal{S}_{\limfunc{Alpert} ,\kappa }g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }. 
\end{eqnarray*} We have $\left\Vert \mathcal{S}_{\limfunc{Alpert},\kappa }g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }\approx \left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }$ by a square function estimate, and using the quadratic Alpert weak boundedness property, we obtain \begin{equation*} \left( \int_{\mathbb{R}^{n}}\left( \dsum\limits_{I\in \mathcal{D},\ J\in \func{Adj}_{\rho }\left( I\right) }\left\vert \bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) \left( x\right) \right\vert ^{2}\right) ^{\frac{p}{2}}d\omega \left( x\right) \right) ^{\frac{1}{p}}\lesssim \mathcal{AWBP}_{T^{\lambda },p}^{\ell ^{2},\kappa ,\rho }\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\ , \end{equation*} and so altogether that \begin{equation*} \left\vert \mathsf{B}_{\func{Adj},\rho }\left( f,g\right) \right\vert \lesssim \mathcal{AWBP}_{T^{\lambda },p}^{\ell ^{2},\kappa ,\rho }\left( \sigma ,\omega \right) \ \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }. \end{equation*} Recall that in Lemma \ref{stronger}, we have controlled the Alpert weak boundedness property constant $\mathcal{AWBP}_{T^{\lambda },p}^{\ell ^{2},\kappa ,\rho }$ by the adjacent weak boundedness property constant $\mathcal{WBP}_{T^{\lambda },p}^{\ell ^{2}}$ and the offset Muckenhoupt constant $A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}$, plus a small multiple of the operator norm. This will be used at the end of the proof to eliminate the use of $\mathcal{AWBP}_{T^{\lambda },p}^{\ell ^{2},\kappa ,\rho }$.
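Concatenating the last display with Lemma \ref{stronger}, we record in particular the bound that results for doubling measures and $0<\varepsilon <1$; this is simply the composition of the two estimates, with no new input.

```latex
\begin{equation*}
\left\vert \mathsf{B}_{\func{Adj},\rho }\left( f,g\right) \right\vert
\lesssim \left( C_{\varepsilon }\left[ \mathcal{WBP}_{T^{\lambda },p}^{\ell
^{2}}\left( \sigma ,\omega \right) +A_{p}^{\lambda ,\ell ^{2},\limfunc{
offset}}\left( \sigma ,\omega \right) \right] +\varepsilon \mathfrak{N}
_{T^{\lambda },p}\left( \sigma ,\omega \right) \right) \left\Vert
f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert
_{L^{p^{\prime }}\left( \omega \right) }.
\end{equation*}
```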
\subsection{Paraproduct form} Here we must bound the paraproduct form, \begin{equation*} \mathsf{B}_{\limfunc{paraproduct}}\left( f,g\right) =\sum_{F\in \mathcal{F}} \mathsf{B}_{\limfunc{paraproduct}}^{F}\left( f,g\right) =\sum_{F\in \mathcal{ F}}\sum_{J\in \mathcal{C}_{F}^{\tau -\limfunc{shift}}}\left\langle \mathbf{1} _{J}\left( E_{J}^{\sigma }f\right) T_{\sigma }^{\lambda }\mathbf{1} _{F},\bigtriangleup _{J}^{\omega }g\right\rangle _{\omega }, \end{equation*} in Theorem \ref{main}. Define $\widetilde{g}=\sum_{F\in \mathcal{F}}\sum_{J\in \mathcal{C}_{F}^{\tau -\limfunc{shift}}}\frac{ E_{J}^{\sigma }f}{E_{F}^{\sigma }f}\bigtriangleup _{J}^{\omega }g$ and note that $\left\vert E_{J}^{\sigma }f\right\vert \lesssim \left\vert E_{F}^{\sigma }f\right\vert $ for $J\in \mathcal{C}_{F}^{\tau -\limfunc{shift}}$ to obtain \begin{eqnarray*} &&\left\vert \mathsf{B}_{\limfunc{paraproduct}}\left( f,g\right) \right\vert =\left\vert \sum_{F\in \mathcal{F}}\mathsf{B}_{\limfunc{paraproduct} }^{F}\left( f,g\right) \right\vert =\left\vert \sum_{F\in \mathcal{F} }\sum_{J\in \mathcal{C}_{F}^{\tau -\limfunc{shift}}}\left\langle \left( E_{J}^{\sigma }f\right) T_{\sigma }^{\lambda }\mathbf{1}_{F},\bigtriangleup _{J}^{\omega }g\right\rangle _{\omega }\right\vert \\ &=&\left\vert \sum_{F\in \mathcal{F}}E_{F}^{\sigma }f\sum_{J\in \mathcal{C} _{F}^{\tau -\limfunc{shift}}}\left\langle \bigtriangleup _{J}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F},\frac{E_{J}^{\sigma }f}{E_{F}^{\sigma }f}\bigtriangleup _{J}^{\omega }g\right\rangle _{\omega }\right\vert =\left\vert \sum_{F\in \mathcal{F}}E_{F}^{\sigma }f\sum_{J\in \mathcal{C} _{F}^{\tau -\limfunc{shift}}}\left\langle \bigtriangleup _{J}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F},\bigtriangleup _{J}^{\omega } \widetilde{g}\right\rangle _{\omega }\right\vert \\ &=&\left\vert \sum_{F\in \mathcal{F}}E_{F}^{\sigma }f\left\langle \mathsf{P} _{\mathcal{C}_{F}^{\tau -\limfunc{shift}}}^{\omega }T_{\sigma }^{\lambda } \mathbf{1}_{F},\mathsf{P}_{\mathcal{C}_{F}^{\tau -\limfunc{shift}}}^{\omega } \widetilde{g}\right\rangle _{\omega }\right\vert
=\left\vert \int_{\mathbb{R} ^{n}}\sum_{F\in \mathcal{F}}\ \mathsf{P}_{\mathcal{C}_{F}^{\tau -\limfunc{ shift}}}^{\omega }T_{\sigma }^{\lambda }\left( \mathbf{1}_{F}E_{F}^{\sigma }f\right) \left( x\right) \ \mathsf{P}_{\mathcal{C}_{F}^{\tau -\limfunc{shift }}}^{\omega }\widetilde{g}\left( x\right) \ d\omega \left( x\right) \right\vert \\ &\leq &\int_{\mathbb{R}^{n}}\left( \sum_{F\in \mathcal{F}}\left\vert \mathsf{ P}_{\mathcal{C}_{F}^{\tau -\limfunc{shift}}}^{\omega }T_{\sigma }^{\lambda }\left( \mathbf{1}_{F}E_{F}^{\sigma }f\right) \left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\ \left( \sum_{F\in \mathcal{F}}\left\vert \mathsf{ P}_{\mathcal{C}_{F}^{\tau -\limfunc{shift}}}^{\omega }\widetilde{g}\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\ d\omega \left( x\right) , \end{eqnarray*} and now with the $\mathcal{F}$-square functions \begin{equation*} \mathcal{S}_{\mathcal{F}}^{\mu }h\left( x\right) \equiv \left( \sum_{F\in \mathcal{F}}\left\vert \mathsf{P}_{\mathcal{C}_{F}^{\tau -\limfunc{shift} }}^{\mu }h\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}, \end{equation*} we can write \begin{eqnarray*} \left\vert \mathsf{B}_{\limfunc{paraproduct}}\left( f,g\right) \right\vert &\leq &\int_{\mathbb{R}^{n}}\mathcal{S}_{\mathcal{F}}^{\omega }T_{\sigma }^{\lambda }\left[ \alpha _{\mathcal{F}}\left( F\right) \mathbf{1}_{F}\right] \left( x\right) \ \mathcal{S}_{\mathcal{F}}^{\omega }\widetilde{g}\left( x\right) \ d\omega \left( x\right) \\ &\leq &\left( \int_{\mathbb{R}^{n}}\mathcal{S}_{\mathcal{F}}^{\omega }T_{\sigma }^{\lambda }\left[ \left( E_{F}^{\sigma }f\right) \mathbf{1}_{F} \right] \left( x\right) ^{p}d\omega \left( x\right) \right) ^{\frac{1}{p} }\left( \int_{\mathbb{R}^{n}}\mathcal{S}_{\mathcal{F}}^{\omega }\widetilde{g} \left( x\right) ^{p^{\prime }}d\omega \left( x\right) \right) ^{\frac{1}{ p^{\prime }}}.
\end{eqnarray*} Now we consider the space $\ell ^{2}\left( \mathcal{F}\right) $, and define the vector-valued operator $\mathcal{T}_{\mathcal{F}}:L^{p}\left( \ell ^{2}\left( \mathcal{F}\right) ;\sigma \right) \rightarrow L^{p}\left( \ell ^{2}\left( \mathcal{F}\right) ;\omega \right) $ by \begin{eqnarray*} \mathcal{T}_{\mathcal{F}}\left( \left\{ f_{F}\right\} _{F\in \mathcal{F} }\right) \left( x\right) &\equiv &\left\{ \mathsf{P}_{\mathcal{C}_{F}^{\tau -\limfunc{shift}}}^{\omega }T_{\sigma }^{\lambda }f_{F}\left( x\right) \right\} _{F\in \mathcal{F} }=\left\{ \mathcal{T}_{F}f_{F}\left( x\right) \right\} _{F\in \mathcal{F}}\ , \\ \text{where }\mathcal{T}_{F}h &\equiv &\mathsf{P}_{\mathcal{C}_{F}^{\tau - \limfunc{shift}}}^{\omega }T_{\sigma }^{\lambda }h. \end{eqnarray*} \begin{theorem} \label{para thm}If $1<p<\infty $ and $\omega $ and $\sigma $ are doubling measures, and if $\mathcal{F}$ is $\sigma $-Carleson, then \begin{eqnarray} &&\int_{\mathbb{R}^{n}}\left\vert \mathcal{T}_{\mathcal{F}}\left( \left\{ \alpha _{\mathcal{F}}\left( F\right) \mathbf{1}_{F}\right\} _{F\in \mathcal{F }}\right) \left( x\right) \right\vert _{\ell ^{2}\left( \mathcal{F}\right) }^{p}d\omega \left( x\right) =\int_{\mathbb{R}^{n}}\left( \sum_{F\in \mathcal{F}}\alpha _{\mathcal{F}}\left( F\right) ^{2}\mathbf{1} _{F}\left\vert \mathsf{P}_{\mathcal{C}_{F}^{\tau -\func{shift}}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F}\left( x\right) \right\vert ^{2}\right) ^{\frac{p}{2}}d\omega \left( x\right)  \label{bounded in Lp} \\ &&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \leq C\left( p,\tau ,C_{\limfunc{doub}}^{\omega }\right) \mathfrak{T}_{T^{\lambda }}\left( \sigma ,\omega \right) ^{p}\int_{\mathbb{R}^{n}}\left\vert f\left( y\right) \right\vert ^{p}d\sigma \left( y\right) .
\notag
\end{eqnarray}
\end{theorem}

\begin{proof}
The estimate (\ref{bounded in Lp}) clearly holds for $p=2$ since then the left hand side of (\ref{bounded in Lp}) is
\begin{equation*}
\sum_{F\in \mathcal{F}}\alpha _{\mathcal{F}}\left( F\right) ^{2}\int_{\mathbb{R}^{n}}\left\vert \mathsf{P}_{\mathcal{C}_{F}^{\tau -\func{shift}}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F}\left( x\right) \right\vert ^{2}d\omega \left( x\right) \leq \sum_{F\in \mathcal{F}}\alpha _{\mathcal{F}}\left( F\right) ^{2}\mathfrak{T}_{T^{\lambda }}\left( \sigma ,\omega \right) ^{2}\left\vert F\right\vert _{\sigma }\lesssim \mathfrak{T}_{T^{\lambda }}\left( \sigma ,\omega \right) ^{2}\int_{\mathbb{R}^{n}}\left\vert f\left( y\right) \right\vert ^{2}d\sigma \left( y\right) ,
\end{equation*}
by quasiorthogonality (\ref{Car and quasi}). We next claim that for $1<p<\infty $ we have
\begin{equation}
\int_{\mathbb{R}^{n}}\left( \sum_{F\in \mathcal{F}}\left\vert \alpha _{\mathcal{F}}\left( F\right) \mathsf{P}_{\mathcal{C}_{F}^{\tau -\func{shift}}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F}\left( x\right) \right\vert ^{2}\right) ^{\frac{p}{2}}d\omega \left( x\right) \lesssim \sum_{F\in \mathcal{F}}\alpha _{\mathcal{F}}\left( F\right) ^{p}\left\vert F\right\vert _{\sigma }.
\label{first claim}
\end{equation}
Indeed, for $1<p\leq 2$ (and even for $0<p\leq 2$), the inequality $\lesssim $ follows from the trivial inequality $\left\Vert \cdot \right\Vert _{\ell ^{q}}\leq \left\Vert \cdot \right\Vert _{\ell ^{1}}$ for $q\geq 1$, applied here with $q=\frac{2}{p}$ to the sequence $\left\{ \left\vert \alpha _{\mathcal{F}}\left( F\right) \mathsf{P}_{\mathcal{C}_{F}^{\tau -\func{shift}}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F}\left( x\right) \right\vert ^{p}\right\} _{F\in \mathcal{F}}$, and testing for the operator $T_{\sigma }^{\lambda }$,
\begin{eqnarray*}
&&\int_{\mathbb{R}^{n}}\left( \sum_{F\in \mathcal{F}}\left\vert \alpha _{\mathcal{F}}\left( F\right) \mathsf{P}_{\mathcal{C}_{F}^{\tau -\func{shift}}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F}\left( x\right) \right\vert ^{2}\right) ^{\frac{p}{2}}d\omega \left( x\right) \leq \int_{\mathbb{R}^{n}}\sum_{F\in \mathcal{F}}\left\vert \alpha _{\mathcal{F}}\left( F\right) \mathsf{P}_{\mathcal{C}_{F}^{\tau -\func{shift}}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F}\left( x\right) \right\vert ^{p}d\omega \left( x\right) \\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\sum_{F\in \mathcal{F}}\alpha _{\mathcal{F}}\left( F\right) ^{p}\int_{\mathbb{R}^{n}}\left\vert \mathsf{P}_{\mathcal{C}_{F}^{\tau -\func{shift}}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F}\right\vert ^{p}d\omega \leq \mathfrak{T}_{T^{\lambda }}\left( \sigma ,\omega \right) ^{p}\sum_{F\in \mathcal{F}}\alpha _{\mathcal{F}}\left( F\right) ^{p}\left\vert F\right\vert _{\sigma }.
\end{eqnarray*}
For convenience in this proof we will write
\begin{equation*}
\mathsf{P}_{F}^{\omega }=\mathsf{P}_{\mathcal{C}_{F}^{\tau -\func{shift}}}^{\omega }\ .
\end{equation*}
When $2m\leq p<2\left( m+1\right) $ for $m\in \mathbb{N}$, we set
\begin{equation*}
\beta \left( x\right) \equiv \left( \sum_{F\in \mathcal{F}}\left\vert \alpha _{\mathcal{F}}\left( F\right) \mathsf{P}_{F}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F}\left( x\right) \right\vert ^{2}\right) ^{\frac{p-2m}{2}}\leq \sum_{F\in \mathcal{F}}\left\vert \alpha _{\mathcal{F}}\left( F\right) \mathsf{P}_{F}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F}\left( x\right) \right\vert ^{p-2m},
\end{equation*}
so that
\begin{eqnarray*}
&&\int_{\mathbb{R}^{n}}\left( \sum_{F\in \mathcal{F}}\left\vert \alpha _{\mathcal{F}}\left( F\right) \mathsf{P}_{F}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F}\left( x\right) \right\vert ^{2}\right) ^{\frac{p}{2}}d\omega \left( x\right) =\int_{\mathbb{R}^{n}}\left( \sum_{F\in \mathcal{F}}\left\vert \alpha _{\mathcal{F}}\left( F\right) \mathsf{P}_{F}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F}\left( x\right) \right\vert ^{2}\right) ^{m}\beta \left( x\right) d\omega \left( x\right) \\
&=&\int_{\mathbb{R}^{n}}\sum_{\left( F_{1},...,F_{m}\right) \in \mathcal{F}^{m}}\alpha _{\mathcal{F}}\left( F_{1}\right) ^{2}...\alpha _{\mathcal{F}}\left( F_{m}\right) ^{2}\ \left[ \mathsf{P}_{F_{1}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{1}}\right] ^{2}...\left[ \mathsf{P}_{F_{m}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{m}}\right] ^{2}\ \beta \left( x\right) d\omega \left( x\right) \\
&=&C_{m}\int_{\mathbb{R}^{n}}\sum_{\left( F_{1},...,F_{m}\right) \in \mathcal{F}_{\ast }^{m}}\alpha _{\mathcal{F}}\left( F_{1}\right) ^{2}...\alpha _{\mathcal{F}}\left( F_{m}\right) ^{2}\ \left[ \mathsf{P}_{F_{1}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{1}}\right] ^{2}...
\left[ \mathsf{P}_{F_{m}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{m}}\right] ^{2}\ \beta \left( x\right) d\omega \left( x\right) \\
&\equiv &C_{m}\func{Int}_{T^{\lambda }}\left( p\right) ,
\end{eqnarray*}
where
\begin{equation*}
\mathcal{F}_{\ast }^{m}\equiv \left\{ \left( F_{1},...,F_{m}\right) \in \mathcal{F}^{m}:F_{i}\subset F_{j}\text{ for }1\leq i\leq j\leq m\right\} ,
\end{equation*}
and
\begin{eqnarray}
\func{Int}_{T^{\lambda }}\left( p\right) &=&\sum_{\left( F_{1},...,F_{m}\right) \in \mathcal{F}_{\ast }^{m}}\alpha _{\mathcal{F}}\left( F_{1}\right) ^{2}...\alpha _{\mathcal{F}}\left( F_{m}\right) ^{2}\ \int_{\mathbb{R}^{n}}\left[ \mathsf{P}_{F_{1}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{1}}\right] ^{2}...\left[ \mathsf{P}_{F_{m}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{m}}\right] ^{2}\beta \left( x\right) d\omega \left( x\right)  \label{Int''} \\
&\leq &\sum_{\left( F_{1},...,F_{m}\right) \in \mathcal{F}_{\ast }^{m}}\sum_{F_{m+1}\in \mathcal{F}}\alpha _{\mathcal{F}}\left( F_{1}\right) ^{2}...\alpha _{\mathcal{F}}\left( F_{m}\right) ^{2}\ \left\vert \alpha _{\mathcal{F}}\left( F_{m+1}\right) \right\vert ^{p-2m}  \notag \\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times \int_{\mathbb{R}^{n}}\left[ \mathsf{P}_{F_{1}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{1}}\right] ^{2}...\left[ \mathsf{P}_{F_{m}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{m}}\right] ^{2}\left\vert \mathsf{P}_{F_{m+1}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{m+1}}\left( x\right) \right\vert ^{p-2m}d\omega \left( x\right) .  \notag
\end{eqnarray}
Note that $F_{1}\subset F_{m+1}$, but there is no further relation between $F_{m+1}$ and the sequence $\left( F_{1},...,F_{m}\right) \in \mathcal{F}_{\ast }^{m}$. So we first suppose that
\begin{equation}
F_{m}\subset F_{m+1}.
\label{first assumption}
\end{equation}
We then pigeonhole the relative sizes of the cubes in the increasing sequence $\mathbf{F}\equiv \left( F_{1},F_{2},...,F_{m},F_{m+1}\right) $:
\begin{eqnarray*}
\func{Int}_{T^{\lambda }}\left( p\right) &\leq &\sum_{n_{1}=0}^{\infty }\sum_{F_{1}\in \mathfrak{C}_{\mathcal{F}}^{\left( n_{1}\right) }\left( F_{2}\right) }\alpha _{\mathcal{F}}\left( F_{1}\right) ^{2}\sum_{n_{2}=0}^{\infty }\sum_{F_{2}\in \mathfrak{C}_{\mathcal{F}}^{\left( n_{2}\right) }\left( F_{3}\right) }\alpha _{\mathcal{F}}\left( F_{2}\right) ^{2}\times ... \\
&&\ \ \ \ \ \times \sum_{n_{m}=0}^{\infty }\sum_{F_{m}\in \mathfrak{C}_{\mathcal{F}}^{\left( n_{m}\right) }\left( F_{m+1}\right) }\alpha _{\mathcal{F}}\left( F_{m}\right) ^{2}\sum_{F_{m+1}\in \mathcal{F}}\left\vert \alpha _{\mathcal{F}}\left( F_{m+1}\right) \right\vert ^{p-2m}\ \mathcal{I}\left( \mathbf{F}\right) \\
\text{where }\mathcal{I}\left( \mathbf{F}\right) &\equiv &\int_{F_{1}}\left[ \mathsf{P}_{F_{1}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{1}}\right] ^{2}\left[ \mathsf{P}_{F_{2}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{2}}\right] ^{2}...\left[ \mathsf{P}_{F_{m}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{m}}\right] ^{2}\left\vert \mathsf{P}_{F_{m+1}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{m+1}}\left( x\right) \right\vert ^{p-2m}d\omega \left( x\right) .
\end{eqnarray*}
Let $\mathfrak{F}^{\left[ 0\right] }$ be the collection of increasing sequences of cubes $\mathbf{F}=\left( F_{1},F_{2},...,F_{m},F_{m+1}\right) \in \mathcal{F}^{m}\times \mathcal{F}$ in which the cubes $F_{j}$ are close together in the sense that $\ell \left( F_{j}\right) \geq 2^{-\tau }\ell \left( F_{j+1}\right) $ for all $1\leq j<m+1$.
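For later use we record the elementary exponent bookkeeping behind the estimates that follow: the H\"{o}lder exponents below satisfy $m\cdot \frac{2}{p}+\frac{p-2m}{p}=1$, and the corresponding weighted arithmetic-geometric mean inequality reads
\begin{equation*}
\prod_{j=1}^{m}a_{j}^{\frac{2}{p}}\cdot b^{\frac{p-2m}{p}}\leq \frac{2}{p}\sum_{j=1}^{m}a_{j}+\frac{p-2m}{p}b,\qquad a_{1},...,a_{m},b\geq 0,
\end{equation*}
which we will apply with $a_{j}=\alpha _{\mathcal{F}}\left( F_{j}\right) ^{p}\left\vert F_{j}\right\vert _{\sigma }$ and $b=\alpha _{\mathcal{F}}\left( F_{m+1}\right) ^{p}\left\vert F_{m+1}\right\vert _{\sigma }$.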
Note that the collection of such sequences is finite, depending only on $\tau $ and $m$, and so in this case we simply bound each summand uniformly. Then define
\begin{equation*}
\func{Int}_{T^{\lambda }}^{\left[ 0\right] }\left( p\right) \equiv \sum_{\mathbf{F}\in \mathfrak{F}^{\left[ 0\right] }}\left( \prod_{j=1}^{m}\alpha _{\mathcal{F}}\left( F_{j}\right) ^{2}\right) \left\vert \alpha _{\mathcal{F}}\left( F_{m+1}\right) \right\vert ^{p-2m}\mathcal{I}\left( \mathbf{F}\right) ,
\end{equation*}
and for $F\in \mathcal{F}$, define
\begin{equation*}
\mathfrak{F}^{\left[ 0\right] }\left( F\right) \equiv \bigcup_{j=1}^{m}\left\{ \left( F_{1},...F_{j-1},F_{j+1},...F_{m},F_{m+1}\right) \in \mathcal{F}_{\ast }^{m-1}:\left( F_{1},...F_{j-1},F,F_{j+1},...F_{m},F_{m+1}\right) \in \mathfrak{F}^{\left[ 0\right] }\right\} .
\end{equation*}
Then from H\"{o}lder's inequality with exponents $\left( \overset{m\ times}{\overbrace{\frac{p}{2},...,\frac{p}{2}}},\frac{p}{p-2m}\right) $,
\begin{eqnarray*}
\mathcal{I}\left( \mathbf{F}\right) &\leq &\left[ \prod_{j=1}^{m}\left( \int_{F_{1}}\left[ \mathsf{P}_{F_{j}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{j}}\right] ^{p}d\omega \right) ^{\frac{2}{p}}\right] \left( \int_{F_{1}}\left[ \mathsf{P}_{F_{m+1}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{m+1}}\right] ^{p}d\omega \right) ^{\frac{p-2m}{p}} \\
&\lesssim &\left[ \prod_{j=1}^{m}\left( \int_{F_{j}}\left[ T_{\sigma }^{\lambda }\mathbf{1}_{F_{j}}\right] ^{p}d\omega \right) ^{\frac{2}{p}}\right] \left( \int_{F_{1}}\left[ T_{\sigma }^{\lambda }\mathbf{1}_{F_{m+1}}\right] ^{p}d\omega \right) ^{\frac{p-2m}{p}}\leq \mathfrak{T}_{T^{\lambda },p}^{p}\left( \prod_{j=1}^{m}\left\vert F_{j}\right\vert _{\sigma }^{\frac{2}{p}}\right) \left\vert F_{m+1}\right\vert _{\sigma }^{\frac{p-2m}{p}},
\end{eqnarray*}
and the geometric/arithmetic mean inequality,
\begin{equation*}
\left( \prod_{j=1}^{m}\alpha _{\mathcal{F}}\left( F_{j}\right) ^{2}\left\vert F_{j}\right\vert _{\sigma
}^{\frac{2}{p}}\right) \alpha _{\mathcal{F}}\left( F_{m+1}\right) ^{p-2m}\left\vert F_{m+1}\right\vert _{\sigma }^{\frac{p-2m}{p}}\leq \frac{2}{p}\sum_{j=1}^{m}\alpha _{\mathcal{F}}\left( F_{j}\right) ^{p}\left\vert F_{j}\right\vert _{\sigma }+\frac{p-2m}{p}\alpha _{\mathcal{F}}\left( F_{m+1}\right) ^{p}\left\vert F_{m+1}\right\vert _{\sigma },
\end{equation*}
we obtain
\begin{eqnarray*}
&&\func{Int}_{T^{\lambda }}^{\left[ 0\right] }\left( p\right) \lesssim \mathfrak{T}_{T^{\lambda },p}^{p}\sum_{\mathbf{F}\in \mathfrak{F}^{\left[ 0\right] }}\ \prod_{j=1}^{m}\alpha _{\mathcal{F}}\left( F_{j}\right) ^{2}\left\vert F_{j}\right\vert _{\sigma }^{\frac{2}{p}}\left\vert \alpha _{\mathcal{F}}\left( F_{m+1}\right) \right\vert ^{p-2m}\left\vert F_{m+1}\right\vert _{\sigma }^{\frac{p-2m}{p}} \\
&\leq &\mathfrak{T}_{T^{\lambda },p}^{p}\sum_{\mathbf{F}\in \mathfrak{F}^{\left[ 0\right] }}\ \left( \frac{2}{p}\sum_{j=1}^{m}\alpha _{\mathcal{F}}\left( F_{j}\right) ^{p}\left\vert F_{j}\right\vert _{\sigma }+\frac{p-2m}{p}\alpha _{\mathcal{F}}\left( F_{m+1}\right) ^{p}\left\vert F_{m+1}\right\vert _{\sigma }\right) \\
&=&\mathfrak{T}_{T^{\lambda },p}^{p}\frac{2}{p}\sum_{j=1}^{m}\left( \sum_{F_{j}\in \mathcal{F}}\alpha _{\mathcal{F}}\left( F_{j}\right) ^{p}\left\vert F_{j}\right\vert _{\sigma }\right) \#\left( \mathfrak{F}^{\left[ 0\right] }\left( F_{j}\right) \right) +\frac{p-2m}{p}\sum_{F_{m+1}\in \mathcal{F}}\alpha _{\mathcal{F}}\left( F_{m+1}\right) ^{p}\left\vert F_{m+1}\right\vert _{\sigma }\#\left( \mathfrak{F}^{\left[ 0\right] }\left( F_{m+1}\right) \right) \\
&\leq &C_{\tau ,p}\mathfrak{T}_{T^{\lambda },p}^{p}\left( \sum_{F\in \mathcal{F}}\alpha _{\mathcal{F}}\left( F\right) ^{p}\left\vert F\right\vert _{\sigma }\right) .
\end{eqnarray*}
Still keeping our assumption (\ref{first assumption}), we define, for $1\leq k<m$,
\begin{equation}
\mathfrak{F}^{\left[ k\right] }\equiv \left\{ \mathbf{F}\in \mathcal{F}_{\ast }^{m}\times \mathcal{F}:\ell \left( F_{j}\right) \geq 2^{-\tau }\ell \left( F_{j+1}\right) \text{ for }1\leq j<k\text{, and }\ell \left( F_{k}\right) <2^{-\tau }\ell \left( F_{k+1}\right) \right\} ,  \label{Fk}
\end{equation}
to consist of those sequences $\mathbf{F}$ in $\mathcal{F}_{\ast }^{m}\times \mathcal{F}$ for which $k$ is the first time $\ell \left( F_{k}\right) <2^{-\tau }\ell \left( F_{k+1}\right) $. We also define for $\mathbf{n}=\left( n_{k},...,n_{m}\right) $, the collection $\mathfrak{F}_{\mathbf{n}}^{\left[ k\right] }$ to consist of those $\mathbf{F}\in \mathfrak{F}^{\left[ k\right] }$ for which $F_{j}\in \mathfrak{C}_{\mathcal{F}}^{\left( n_{j}\right) }\left( F_{j+1}\right) $ for $k\leq j\leq m$. A key point in our analysis arises now. If $\ell \left( F_{k}\right) <2^{-\tau }\ell \left( F_{k+1}\right) $, then $\mathcal{C}_{F_{k}}^{\tau -\func{shift}}\cap \mathcal{C}_{F_{j}}^{\tau -\func{shift}}=\emptyset $ for $k<j\leq m+1$, and so each function $\mathsf{P}_{F_{j}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{j}}$ is constant on $F_{k}$ for $k<j\leq m+1$.
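In the displays below, $E_{Q}^{\omega }$ denotes, as earlier in the paper, the $\omega $-average over a cube $Q$,
\begin{equation*}
E_{Q}^{\omega }h\equiv \frac{1}{\left\vert Q\right\vert _{\omega }}\int_{Q}h\ d\omega ,
\end{equation*}
so that a function constant on $F_{k}$ coincides there with its $\omega $-average over $F_{k}$.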
Thus for $\mathbf{F}\in \mathfrak{F}^{\left[ k\right] }$ we have
\begin{eqnarray*}
&&E_{F_{k}}^{\omega }\left( \left[ \prod_{j=k+1}^{m}\mathsf{P}_{F_{j}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{j}}\right] \left\vert \mathsf{P}_{F_{m+1}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{m+1}}\right\vert ^{p-2m}\right) =\left( \prod_{j=k+1}^{m}E_{F_{k}}^{\omega }\left[ \mathsf{P}_{F_{j}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{j}}\right] \right) E_{F_{k}}^{\omega }\left\vert \mathsf{P}_{F_{m+1}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{m+1}}\right\vert ^{p-2m} \\
&&\text{and }\mathcal{I}\left( \mathbf{F}\right) =\left( \int_{\mathbb{R}^{n}}\prod_{j=1}^{k}\left[ \mathsf{P}_{F_{j}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{j}}\right] ^{2}d\omega \right) \left( \prod_{j=k+1}^{m}\left( E_{F_{j}^{\prime }}^{\omega }\left[ \mathsf{P}_{F_{j}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{j}}\right] \right) ^{2}\right) E_{F_{m+1}^{\prime }}^{\omega }\left\vert \mathsf{P}_{F_{m+1}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{m+1}}\right\vert ^{p-2m},
\end{eqnarray*}
where we denote by $F_{j}^{\prime }\in \mathfrak{C}_{\mathcal{D}}\left( F_{j}\right) $ the unique child of $F_{j}$ containing $F_{k}$, equivalently containing $F_{1}$.
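The geometric gain below comes from the ratios $\left\vert F_{1}\right\vert _{\omega }/\left\vert F_{j}^{\prime }\right\vert _{\omega }$. Recall the standard fact that a doubling measure $\omega $ on $\mathbb{R}^{n}$ satisfies a reverse doubling condition: there are constants $C>0$ and $\varepsilon >0$, depending only on the doubling constant of $\omega $, such that
\begin{equation*}
\frac{\left\vert Q^{\prime }\right\vert _{\omega }}{\left\vert Q\right\vert _{\omega }}\leq C\left( \frac{\ell \left( Q^{\prime }\right) }{\ell \left( Q\right) }\right) ^{\varepsilon }\qquad \text{for all cubes }Q^{\prime }\subset Q.
\end{equation*}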
Using H\"{o}lder's inequality we obtain \begin{eqnarray*} \mathcal{I}\left( \mathbf{F}\right) &\leq &\prod_{j=1}^{k}\left( \int_{F_{1}}\left\vert \mathsf{P}_{F_{j}}^{\omega }T_{\sigma }^{\lambda } \mathbf{1}_{F_{j}}\right\vert ^{p}d\omega \right) ^{\frac{2}{p}}\left\vert F_{1}\right\vert _{\omega }^{\frac{p-2k}{p}}\times \\ &&\ \ \ \ \ \times \left( \prod_{j=k+1}^{m}\frac{1}{\left\vert F_{j}^{\prime }\right\vert _{\omega }}\int_{F_{j}^{\prime }}\left\vert \mathsf{P} _{F_{j}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{j}}\right\vert d\omega \right) ^{2}\left( \frac{1}{\left\vert F_{m+1}^{\prime }\right\vert _{\omega }}\int_{F_{m+1}^{\prime }}\left\vert \mathsf{P}_{F_{2m+1}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F_{m+1}}\right\vert d\omega \right) ^{p-2m} \\ &\leq &\mathfrak{T}_{T^{\lambda },p}^{p}\left( \prod_{j=1}^{k}\left\vert F_{j}\right\vert _{\sigma }^{\frac{2}{p}}\right) \left\vert F_{1}\right\vert _{\omega }^{\frac{p-2k}{p}}\left( \prod_{j=k+1}^{m}\left( \frac{1}{ \left\vert F_{j}^{\prime }\right\vert _{\omega }}\int_{F_{j}^{\prime }}\left\vert \mathsf{P}_{F_{j}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1} _{F_{j}}\right\vert ^{p}d\omega \right) ^{\frac{2}{p}}\left( \frac{1}{ \left\vert F_{m+1}^{\prime }\right\vert _{\omega }}\int_{F_{m+1}^{\prime }}\left\vert \mathsf{P}_{F_{m+1}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1} _{F_{m+1}}\right\vert ^{p}d\omega \right) ^{\frac{p-2m}{p}}\right) \\ &\leq &\mathfrak{T}_{T^{\lambda },p}^{p}\left( \prod_{j=1}^{k}\left\vert F_{j}\right\vert _{\sigma }^{\frac{2}{p}}\right) \left\vert F_{1}\right\vert _{\omega }^{\frac{p-2k}{p}}\left( \prod_{j=k+1}^{m}\frac{\left\vert F_{j}\right\vert _{\sigma }^{\frac{2}{p}}}{\left\vert F_{j}^{\prime }\right\vert _{\omega }^{\frac{2}{p}}}\right) \left( \frac{\left\vert F_{m+1}\right\vert _{\sigma }^{\frac{p-2m}{p}}}{\left\vert F_{m+1}^{\prime }\right\vert _{\omega }^{\frac{p-2m}{p}}}\right) \\ &=&\mathfrak{T}_{T^{\lambda },p}^{p}\left( \prod_{j=1}^{k}\left\vert F_{j}\right\vert 
_{\sigma }^{\frac{2}{p}}\right) \left( \prod_{j=k+1}^{m}\left\vert F_{j}\right\vert _{\sigma }^{\frac{2}{p} }\left\vert F_{m+1}\right\vert _{\sigma }^{\frac{p-2m}{p}}\right) \prod_{j=k+1}^{m}\left( \frac{\left\vert F_{1}\right\vert _{\omega }}{ \left\vert F_{j}^{\prime }\right\vert _{\omega }}\right) ^{\frac{2}{p} }\left( \frac{\left\vert F_{1}\right\vert _{\omega }}{\left\vert F_{m+1}^{\prime }\right\vert _{\omega }}\right) ^{\frac{p-2m}{p}}. \end{eqnarray*} Thus the key point in our analysis above produces the factors $\frac{ \left\vert F_{1}\right\vert _{\omega }}{\left\vert F_{j}^{\prime }\right\vert _{\omega }}$, which\ are small because $\omega $ is doubling, namely $\frac{\left\vert F_{1}\right\vert _{\omega }}{\left\vert F_{j}^{\prime }\right\vert _{\omega }}\leq C_{\limfunc{doub}}2^{-\varepsilon \left( n_{j}+...+n_{m}\right) }$, where $\varepsilon $ is the reverse doubling constant for $\omega $. Thus we have the estimate \begin{eqnarray*} \mathcal{I}\left( \mathbf{F}\right) &\leq &C_{\limfunc{doub}}^{p-k}\mathfrak{ T}_{T^{\lambda },p}^{p}\left( \left( \prod_{j=1}^{m}\left\vert F_{j}\right\vert _{\sigma }^{\frac{2}{p}}\right) \left\vert F_{m+1}\right\vert _{\sigma }^{\frac{p-2m}{p}}\right) \prod_{j=k+1}^{m}2^{-\varepsilon \frac{2}{p}\left( n_{j}+...+n_{m}\right) }2^{-\varepsilon \frac{p-k-1}{p}n_{m+1}} \\ &=&C_{\limfunc{doub}}^{p-k}\mathfrak{T}_{T^{\lambda },p}^{p}\left( \left( \prod_{j=1}^{m}\left\vert F_{j}\right\vert _{\sigma }^{\frac{2}{p}}\right) \left\vert F_{m+1}\right\vert _{\sigma }^{\frac{p-2m}{p}}\right) 2^{-2\varepsilon \left( \frac{1}{p}n_{k+1}+...+\frac{p-k}{p}n_{m}+\frac{p-2m }{p}\frac{p-k-1}{p}n_{m+1}\right) } \\ &\leq &C_{\limfunc{doub}}^{p-k}\mathfrak{T}_{T^{\lambda },p}^{p}\left( \left( \prod_{j=1}^{m}\left\vert F_{j}\right\vert _{\sigma }^{\frac{2}{p} }\right) \left\vert F_{m+1}\right\vert _{\sigma }^{\frac{p-2m}{p}}\right) 2^{-\eta \left( n_{k+1}+...+n_{m}+n_{m+1}\right) }. 
\end{eqnarray*}
Now we set
\begin{equation*}
\func{Int}_{T^{\lambda }}^{\left[ k\right] }\left( p\right) \equiv \sum_{\mathbf{F}\in \mathfrak{F}^{\left[ k\right] }}\left( \prod_{j=1}^{m}\alpha _{\mathcal{F}}\left( F_{j}\right) ^{2}\right) \left\vert \alpha _{\mathcal{F}}\left( F_{m+1}\right) \right\vert ^{p-2m}\mathcal{I}\left( \mathbf{F}\right) ,
\end{equation*}
and for $F\in \mathcal{F}$, and $\mathbf{n}=\left( n_{k},...,n_{m}\right) $,
\begin{equation*}
\mathfrak{F}_{\mathbf{n}}^{\left[ k\right] }\left( F\right) \equiv \bigcup_{j=1}^{m}\left\{ \left( F_{1},...F_{j-1},F_{j+1},...F_{m}\right) \in \mathcal{F}_{\ast }^{m-1}:\left( F_{1},...F_{j-1},F,F_{j+1},...F_{m}\right) \in \mathfrak{F}_{\mathbf{n}}^{\left[ k\right] }\right\} .
\end{equation*}
Arguing as in the case of $\func{Int}_{T^{\lambda }}^{\left[ 0\right] }\left( p\right) $, and with $\sum_{\mathbf{n}}\equiv \sum_{n_{k}=0}^{\infty }...\sum_{n_{m}=0}^{\infty }$, we obtain the estimate
\begin{eqnarray*}
&&\func{Int}_{T^{\lambda }}^{\left[ k\right] }\left( p\right) \lesssim C_{\limfunc{doub}}^{p-k}\mathfrak{T}_{T^{\lambda },p}^{p}\sum_{\mathbf{n}}\sum_{\mathbf{F}\in \mathfrak{F}_{\mathbf{n}}^{\left[ k\right] }}2^{-\eta \left( n_{k+1}+...+n_{m}\right) }\ \left( \prod_{j=1}^{m}\alpha _{\mathcal{F}}\left( F_{j}\right) ^{2}\left\vert F_{j}\right\vert _{\sigma }^{\frac{2}{p}}\right) \left( \alpha _{\mathcal{F}}\left( F_{m+1}\right) \left\vert F_{m+1}\right\vert _{\sigma }^{\frac{1}{p}}\right) ^{p-2m} \\
&\leq &C_{\limfunc{doub}}^{p-k}\mathfrak{T}_{T^{\lambda },p}^{p}\sum_{\mathbf{n}}2^{-\eta \left( n_{k+1}+...+n_{m}\right) }\sum_{\mathbf{F}\in \mathfrak{F}_{\mathbf{n}}^{\left[ k\right] }}\ \left( \frac{2}{p}\sum_{j=1}^{m}\alpha _{\mathcal{F}}\left( F_{j}\right) ^{p}\left\vert F_{j}\right\vert _{\sigma }+\frac{p-2m}{p}\alpha _{\mathcal{F}}\left( F_{m+1}\right) ^{p}\left\vert F_{m+1}\right\vert _{\sigma }\right) \\
&=&C_{\limfunc{doub}}^{p-k}\mathfrak{T}_{T^{\lambda },p}^{p}\sum_{\mathbf{n}}2^{-\eta \left( n_{k+1}+...+n_{m}\right)
}\sum_{j=1}^{m}\frac{2}{p}\sum_{F_{j}\in \mathcal{F}}\alpha _{\mathcal{F}}\left( F_{j}\right) ^{p}\left\vert F_{j}\right\vert _{\sigma }\left( \#\mathfrak{F}_{\mathbf{n}}^{\left[ k\right] }\left( F_{j}\right) \right) \\
&&+C_{\limfunc{doub}}^{p-k}\mathfrak{T}_{T^{\lambda },p}^{p}\sum_{\mathbf{n}}2^{-\eta \left( n_{k+1}+...+n_{m}\right) }\frac{p-2m}{p}\sum_{F_{m+1}\in \mathcal{F}}\alpha _{\mathcal{F}}\left( F_{m+1}\right) ^{p}\left\vert F_{m+1}\right\vert _{\sigma }\left( \#\mathfrak{F}_{\mathbf{n}}^{\left[ k\right] }\left( F_{m+1}\right) \right) \\
&\leq &C_{\limfunc{doub}}^{p-k}\mathfrak{T}_{T^{\lambda },p}^{p}C_{\tau ,p}\left( \sum_{\mathbf{n}}2^{-\eta \left( n_{k+1}+...+n_{m}\right) }\right) \left( \sum_{F\in \mathcal{F}}\alpha _{\mathcal{F}}\left( F\right) ^{p}\left\vert F\right\vert _{\sigma }\right) .
\end{eqnarray*}
This finishes our argument in the case (\ref{first assumption}) where $F_{m+1}$ is the largest of all the $F_{j}$. Now we assume that for some $1\leq s\leq m$, we have
\begin{equation*}
\ell \left( F_{s}\right) \leq \ell \left( F_{m+1}\right) <\ell \left( F_{s+1}\right) ,
\end{equation*}
and we continue to let $k$ be defined as in (\ref{Fk}). Then in the case that $s>k$, we can proceed exactly as above, while in the case $s\leq k$, we must include $F_{m+1}$ along with the cubes $F_{1},...F_{k}$, and the only change in this case is the choice of exponents when applying H\"{o}lder's inequality.
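The sums in $\mathbf{n}$ appearing above converge geometrically; indeed
\begin{equation*}
\sum_{n_{k}=0}^{\infty }...\sum_{n_{m}=0}^{\infty }2^{-\eta \left( n_{k}+...+n_{m}\right) }=\prod_{j=k}^{m}\left( \sum_{n_{j}=0}^{\infty }2^{-\eta n_{j}}\right) =\left( \frac{1}{1-2^{-\eta }}\right) ^{m-k+1}<\infty ,
\end{equation*}
which is the source of finiteness of the constant $C\left( p,\tau ,C_{\limfunc{doub}}^{\omega }\right) $ below.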
Thus we have proved (\ref{first claim}), namely
\begin{eqnarray*}
\int_{\mathbb{R}^{n}}\left( \sum_{F\in \mathcal{F}}\left\vert \alpha _{\mathcal{F}}\left( F\right) \mathsf{P}_{\mathcal{C}_{F}^{\tau -\func{shift}}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F}\left( x\right) \right\vert ^{2}\right) ^{\frac{p}{2}}d\omega \left( x\right) &=&C_{p}\func{Int}_{T^{\lambda }}\left( p\right) =C_{p}\sum_{k=0}^{m}\func{Int}_{T^{\lambda }}^{\left[ k\right] }\left( p\right) \\
&\leq &C\left( p,\tau ,C_{\limfunc{doub}}^{\omega }\right) \mathfrak{T}_{T^{\lambda },p}^{p}\left( \sum_{F\in \mathcal{F}}\alpha _{\mathcal{F}}\left( F\right) ^{p}\left\vert F\right\vert _{\sigma }\right) ,
\end{eqnarray*}
where the constant
\begin{equation*}
C\left( p,\tau ,C_{\limfunc{doub}}^{\omega }\right) =C_{\tau ,p}\sum_{k=0}^{m}C_{\limfunc{doub}}^{p-k}\left( \sum_{n_{k}=0}^{\infty }...\sum_{n_{m}=0}^{\infty }2^{-\eta \left( n_{k}+...+n_{m}\right) }\right)
\end{equation*}
depends on $p$, $\tau $ and the doubling constant $C_{\limfunc{doub}}$ of $\omega $. Finally we use quasiorthogonality (\ref{Car and quasi}) in the measure $\sigma $ to obtain $\sum_{F\in \mathcal{F}}\alpha _{\mathcal{F}}\left( F\right) ^{p}\left\vert F\right\vert _{\sigma }\lesssim \int_{\mathbb{R}^{n}}\left\vert f\right\vert ^{p}d\sigma $, and hence that
\begin{equation*}
\int_{\mathbb{R}^{n}}\left\vert \left\{ \alpha _{\mathcal{F}}\left( F\right) \mathbf{1}_{F}\left( x\right) \mathsf{P}_{\mathcal{C}_{F}^{\tau -\func{shift}}}^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F}\left( x\right) \right\} _{F\in \mathcal{F}}\right\vert _{\ell ^{2}}^{p}d\omega \left( x\right) \lesssim \mathfrak{T}_{T^{\lambda },p}\left( \sigma ,\omega \right) ^{p}\int_{\mathbb{R}^{n}}\left\vert f\left( y\right) \right\vert ^{p}d\sigma \left( y\right) \ .
\end{equation*} \end{proof} \section{Forms requiring quadratic offset Muckenhoupt conditions} To bound the disjoint $\mathsf{B}_{\cap }\left( f,g\right) $, comparable $ \mathsf{B}_{/}\left( f,g\right) $, stopping $\mathsf{B}_{\limfunc{stop} }\left( f,g\right) $, far below $\mathsf{B}_{\func{far}\func{below}}\left( f,g\right) $, and neighbour $\mathsf{B}_{\limfunc{neighbour}}\left( f,g\right) $ forms, we will need the quadratic offset Muckenhoupt conditions, as well as a Pivotal Lemma, which originated in \cite{NTV4}. For $0\leq \lambda <n$ and $t\in \mathbb{R}_{+}$, recall the $t^{th}$-order fractional Poisson integral \begin{equation*} \mathrm{P}_{t}^{\lambda }\left( J,\mu \right) \equiv \int_{\mathbb{R}^{n}} \frac{\ell \left( J\right) ^{t}}{\left( \ell \left( J\right) +\left\vert y-c_{J}\right\vert \right) ^{t+n-\lambda }}d\mu \left( y\right) , \end{equation*} where $\mathrm{P}_{1}^{\lambda }\left( J,\mu \right) =\mathrm{P}^{\lambda }\left( J,\mu \right) $ is the standard Poisson integral of order $\lambda $ . The following Poisson estimate from \cite[Lemma 33]{Saw6} is a straightforward extension of the case $\kappa =1$ due to Nazarov, Treil and Volberg in \cite{NTV4}, which provided the vehicle through which geometric gain was derived from their groundbreaking notion of goodness. \begin{lemma} \label{Poisson inequality}Fix $\kappa \geq 1$. Suppose that $J\subset I\subset K$ and that $\limfunc{dist}\left( J,\partial I\right) >2\sqrt{n} \ell \left( J\right) ^{\varepsilon }\ell \left( I\right) ^{1-\varepsilon }$. Then \begin{equation} \mathrm{P}_{\kappa }^{\lambda }(J,\sigma \mathbf{1}_{K\setminus I})\lesssim \left( \frac{\ell \left( J\right) }{\ell \left( I\right) }\right) ^{\kappa -\varepsilon \left( n+\kappa -\lambda \right) }\mathrm{P}_{\kappa }^{\lambda }(I,\sigma \mathbf{1}_{K\setminus I}). \label{e.Jsimeq} \end{equation} \end{lemma} The next Pivotal Lemma is adapted from \cite{AlSaUr}, which has its roots in \cite{LaWi}, \cite{SaShUr7} and \cite{NTV4}. 
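We will also use the elementary monotonicity of these Poisson integrals in the order parameter: since $\frac{\ell \left( J\right) }{\ell \left( J\right) +\left\vert y-c_{J}\right\vert }\leq 1$, we have
\begin{equation*}
\mathrm{P}_{t^{\prime }}^{\lambda }\left( J,\mu \right) \leq \mathrm{P}_{t}^{\lambda }\left( J,\mu \right) ,\qquad 0<t\leq t^{\prime },
\end{equation*}
and in particular $\mathrm{P}_{\kappa +1}^{\lambda }\left( J,\mu \right) \leq \mathrm{P}_{\kappa }^{\lambda }\left( J,\mu \right) $, which is invoked at the end of the proof of the Pivotal Lemma below.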
Recall that for a subset $\mathcal{J}\subset \mathcal{D}$, and for a cube $J\in \mathcal{D}$, there are projections $\mathsf{P}_{\mathcal{J}}^{\omega }\equiv \sum_{J^{\prime }\in \mathcal{J}}\bigtriangleup _{J^{\prime };\kappa }^{\omega }$ and $\mathsf{P}_{J}^{\omega }\equiv \sum_{J^{\prime }\in \mathcal{D}:\ J^{\prime }\subset J}\bigtriangleup _{J^{\prime };\kappa }^{\omega }$.

\begin{lemma}[\textbf{Pivotal Lemma}]
\label{ener}Fix $\kappa \geq 1$ and a locally finite positive Borel measure $\omega $. Let $J\ $be a cube in $\mathcal{D}$. Let $\Psi _{J}\in L^{p^{\prime }}\left( \omega \right) $ be supported in $J$ with vanishing $\omega $-means up to order less than $\kappa $, and with Alpert support in $\mathcal{J}$. Let $R\left( x\right) $ be a polynomial of degree less than $\kappa $ satisfying $\sup_{x\in J}\left\vert R\left( x\right) \right\vert \leq 1$. Let $\nu $ be a positive measure supported in $\mathbb{R}^{n}\setminus 2J$. Let $T^{\lambda }$ be a smooth $\lambda $-fractional singular integral operator with $0\leq \lambda <n$. Then we have the `pivotal' bound
\begin{eqnarray}
\left\vert \mathsf{P}_{\mathcal{J}}^{\omega }\left[ RT^{\lambda }\left( \varphi \nu \right) \right] \left( x\right) \right\vert &\lesssim &\mathrm{P}_{\kappa }^{\lambda }\left( J,\nu \right) \mathbf{1}_{J}\left( x\right) ,  \label{piv lemma} \\
\text{hence }\left\vert \left\langle RT^{\lambda }\left( \varphi \nu \right) ,\Psi _{J}\right\rangle _{\omega }\right\vert &\lesssim &\mathrm{P}_{\kappa }^{\lambda }\left( J,\nu \right) \int_{J}\left\vert \Psi _{J}\left( x\right) \right\vert d\omega \left( x\right) \ ,  \notag
\end{eqnarray}
for any function $\varphi $ with $\left\vert \varphi \right\vert \leq 1$.
\end{lemma}

\begin{proof}
The proof is an adaptation of the one-dimensional proof in \cite{RaSaWi}, which was in turn adapted from the proofs in \cite{LaWi} and \cite{SaShUr7}, but using a $\kappa ^{th}$ order Taylor expansion instead of a first order expansion on the kernel $\left( K_{y}^{\lambda }\right) \left( x\right) =K^{\lambda }\left( x,y\right) $. Due to the importance of this lemma, we repeat the short argument. A standard argument shows that it is enough to consider the case $\mathcal{J}=\left\{ J\right\} $ and $\Psi _{J}=h_{J;\kappa }^{\omega ,a}$ where $\left\{ h_{J;\kappa }^{\omega ,a}\right\} _{a\in \Gamma _{J,n,\kappa }}$ is an orthonormal basis of $L_{J;\kappa }^{2}\left( \omega \right) $ consisting of Alpert functions. First we set $\widetilde{K}^{\lambda }\left( x,y\right) =R\left( x\right) K^{\lambda }\left( x,y\right) $, which satisfies the same estimates as $K^{\lambda }\left( x,y\right) $ does, and we write $\mu \equiv \varphi \nu $ for the measure being integrated. Now we use the Calder\'{o}n-Zygmund smoothness estimate (\ref{sizeandsmoothness'}), together with Taylor's formula applied to $\widetilde{K}^{\lambda }\left( x,y\right) $ to obtain
\begin{eqnarray*}
\widetilde{K}_{y}^{\lambda }\left( x\right) &=&\limfunc{Tay}\left( \widetilde{K}_{y}^{\lambda }\right) \left( x,c\right) +\frac{1}{\kappa !}\sum_{\left\vert \beta \right\vert =\kappa }\left( \widetilde{K}_{y}^{\lambda }\right) ^{\left( \beta \right) }\left( \theta \left( x,c\right) \right) \left( x-c\right) ^{\beta }; \\
\limfunc{Tay}\left( \widetilde{K}_{y}^{\lambda }\right) \left( x,c\right) &\equiv &\widetilde{K}_{y}^{\lambda }\left( c\right) +\left[ \left( x-c\right) \cdot \nabla \right] \widetilde{K}_{y}^{\lambda }\left( c\right) +...+\frac{1}{\left( \kappa -1\right) !}\left[ \left( x-c\right) \cdot \nabla \right] ^{\kappa -1}\widetilde{K_{y}}^{\lambda }\left( c\right) ,
\end{eqnarray*}
and the vanishing means of the Alpert functions $h_{J;\kappa }^{\omega ,a}$ for $a\in \Gamma _{J,n,\kappa }$, to obtain
\begin{equation*}
\mathsf{P}_{\mathcal{J}}^{\omega }\left[ RT^{\lambda }\left( \varphi \nu \right) \right] \left( x\right) =\left\langle RT^{\lambda }\mu ,h_{J;\kappa }^{\omega ,a}\right\rangle _{L^{2}\left( \omega \right) }h_{J;\kappa }^{\omega ,a}\left( x\right) \end{equation*} where \begin{eqnarray*} &&\left\langle RT^{\lambda }\mu ,h_{J;\kappa }^{\omega ,a}\right\rangle _{L^{2}\left( \omega \right) }=\int_{\mathbb{R}^{n}}\left\{ \int_{\mathbb{R} ^{n}}\widetilde{K}^{\lambda }\left( x,y\right) h_{J;\kappa }^{\omega ,a}\left( x\right) d\omega \left( x\right) \right\} d\mu \left( y\right) =\int_{\mathbb{R}^{n}}\left\langle \widetilde{K}_{y}^{\lambda },h_{J;\kappa }^{\omega ,a}\right\rangle _{L^{2}\left( \omega \right) }d\mu \left( y\right) \\ &=&\int_{\mathbb{R}^{n}}\left\langle \widetilde{K}_{y}^{\lambda }\left( x\right) -\limfunc{Tay}\left( \widetilde{K}_{y}^{\lambda }\right) \left( x,m_{J}^{\kappa }\right) ,h_{J;\kappa }^{\omega ,a}\left( x\right) \right\rangle _{L^{2}\left( \omega \right) }d\mu \left( y\right) \\ &=&\int_{\mathbb{R}^{n}}\left\langle \frac{1}{\kappa !}\sum_{\left\vert \beta \right\vert =\kappa }\left( \widetilde{K}_{y}^{\lambda }\right) ^{\left( \beta \right) }\left( \theta \left( x,m_{J}^{\kappa }\right) \right) \left( x-m_{J}^{\kappa }\right) ^{\beta },h_{J;\kappa }^{\omega ,a}\left( x\right) \right\rangle _{L^{2}\left( \omega \right) }d\mu \left( y\right) \ \ \ \ \ \text{(some }\theta \left( x,m_{J}^{\kappa }\right) \in J \text{) } \\ &=&\sum_{\left\vert \beta \right\vert =\kappa }\left\langle \left[ \int_{ \mathbb{R}^{n}}\frac{1}{\kappa !}\left( \widetilde{K}_{y}^{\lambda }\right) ^{\left( \beta \right) }\left( m_{J}^{\kappa }\right) d\mu \left( y\right) \right] \left( x-m_{J}^{\kappa }\right) ^{\beta },h_{J;\kappa }^{\omega ,a}\right\rangle _{L^{2}\left( \omega \right) } \\ &&+\sum_{\left\vert \beta \right\vert =\kappa }\left\langle \left[ \int_{ \mathbb{R}^{n}}\frac{1}{\kappa !}\left[ \left( \widetilde{K}_{y}^{\lambda }\right) ^{\left( \beta \right) }\left( 
\theta \left( x,m_{J}^{\kappa }\right) \right) -\sum_{\left\vert \beta \right\vert =\kappa }\left( \widetilde{K}_{y}^{\lambda }\right) ^{\left( \beta \right) }\left( m_{J}^{\kappa }\right) \right] d\mu \left( y\right) \right] \left( x-m_{J}^{\kappa }\right) ^{\beta },h_{J;\kappa }^{\omega ,a}\right\rangle _{L^{2}\left( \omega \right) }\ . \end{eqnarray*} Then using that $\int_{\mathbb{R}^{n}}\left( \widetilde{K}_{y}^{\lambda }\right) ^{\left( \beta \right) }\left( m_{J}^{\kappa }\right) d\mu \left( y\right) $ is independent of $x\in J$, and that $\left\langle \left( x-m_{J}^{\kappa }\right) ^{\beta },\mathbf{h}_{J;\kappa }^{\omega }\right\rangle _{L^{2}\left( \omega \right) }=\left\langle x^{\beta }, \mathbf{h}_{J;\kappa }^{\omega }\right\rangle _{L^{2}\left( \omega \right) }$ by moment vanishing of the Alpert wavelets, we can continue with \begin{eqnarray*} &&\left\langle RT^{\lambda }\mu ,h_{J;\kappa }^{\omega ,a}\right\rangle _{\omega }=\sum_{\left\vert \beta \right\vert =\kappa }\left[ \int_{\mathbb{R }^{n}}\frac{1}{\kappa !}\left( \widetilde{K}_{y}^{\lambda }\right) ^{\left( \beta \right) }\left( m_{J}^{\kappa }\right) d\mu \left( y\right) \right] \cdot \left\langle x^{\beta },h_{J;\kappa }^{\omega ,a}\right\rangle _{L^{2}\left( \omega \right) } \\ &&\ \ \ \ \ +\frac{1}{\kappa !}\sum_{\left\vert \beta \right\vert =\kappa }\left\langle \left[ \int_{\mathbb{R}^{n}}\left[ \left( \widetilde{K} _{y}^{\lambda }\right) ^{\left( \beta \right) }\left( \theta \left( x,m_{J}^{\kappa }\right) \right) -\left( \widetilde{K}_{y}^{\lambda }\right) ^{\left( \beta \right) }\left( m_{J}^{\kappa }\right) \right] d\mu \left( y\right) \right] \left( x-m_{J}^{\kappa }\right) ^{\beta },h_{J;\kappa }^{\omega ,a}\right\rangle _{L^{2}\left( \omega \right) }\ . 
\end{eqnarray*} Hence \begin{eqnarray*} &&\left\vert \left\langle RT^{\lambda }\mu ,h_{J;\kappa }^{\omega ,a}\right\rangle _{\omega }-\sum_{\left\vert \beta \right\vert =\kappa }\left[ \int_{\mathbb{R}^{n}}\frac{1}{\kappa !}\left( \widetilde{K}_{y}^{\lambda }\right) ^{\left( \beta \right) }\left( m_{J}^{\kappa }\right) d\mu \left( y\right) \right] \cdot \left\langle x^{\beta },h_{J;\kappa }^{\omega ,a}\right\rangle _{L^{2}\left( \omega \right) }\right\vert \\ &\leq &\frac{1}{\kappa !}\sum_{\left\vert \beta \right\vert =\kappa }\left\vert \left\langle \left[ \int_{\mathbb{R}^{n}}\sup_{\theta \in J}\left\vert \left( \widetilde{K}_{y}^{\lambda }\right) ^{\left( \beta \right) }\left( \theta \right) -\left( \widetilde{K}_{y}^{\lambda }\right) ^{\left( \beta \right) }\left( m_{J}^{\kappa }\right) \right\vert d\left\vert \mu \right\vert \left( y\right) \right] \left\vert x-m_{J}^{\kappa }\right\vert ^{\kappa },\left\vert h_{J;\kappa }^{\omega ,a}\right\vert \right\rangle _{L^{2}\left( \omega \right) }\right\vert \\ &\lesssim &C_{CZ}\frac{\mathrm{P}_{\kappa +1}^{\lambda }\left( J,\left\vert \mu \right\vert \right) }{\left\vert J\right\vert ^{\frac{\kappa }{n}}}\int_{J}\left\vert x-m_{J}^{\kappa }\right\vert ^{\kappa }\left\vert h_{J;\kappa }^{\omega ,a}\left( x\right) \right\vert d\omega \left( x\right) \\ &\lesssim &C_{CZ}\mathrm{P}_{\kappa +1}^{\lambda }\left( J,\left\vert \mu \right\vert \right) \int_{J}\left\vert h_{J;\kappa }^{\omega ,a}\left( x\right) \right\vert d\omega \left( x\right) , \end{eqnarray*} where in the last line we have used \begin{eqnarray*} &&\int_{\mathbb{R}^{n}}\sup_{\theta \in J}\left\vert \left( \widetilde{K}_{y}^{\lambda }\right) ^{\left( \beta \right) }\left( \theta \right) -\left( \widetilde{K}_{y}^{\lambda }\right) ^{\left( \beta \right) }\left( m_{J}^{\kappa }\right) \right\vert d\left\vert \mu \right\vert \left( y\right) \\ &\lesssim &C_{CZ}\int_{\mathbb{R}^{n}}\left\vert J\right\vert ^{\frac{1}{n}}\frac{d\left\vert \mu \right\vert \left( y\right) }{\left\vert y-c_{J}\right\vert ^{\kappa +1+n-\lambda }}=C_{CZ}\frac{\mathrm{P}_{\kappa +1}^{\lambda }\left( J,\left\vert \mu \right\vert \right) }{\left\vert J\right\vert ^{\frac{\kappa }{n}}}. \end{eqnarray*} Thus, using a similar estimate for the first sum $\sum_{\left\vert \beta \right\vert =\kappa }\left[ \int_{\mathbb{R}^{n}}\frac{1}{\kappa !}\left( \widetilde{K}_{y}^{\lambda }\right) ^{\left( \beta \right) }\left( m_{J}^{\kappa }\right) d\mu \left( y\right) \right] \cdot \left\langle x^{\beta },h_{J;\kappa }^{\omega ,a}\right\rangle _{L^{2}\left( \omega \right) }$, we conclude that \begin{eqnarray*} \left\vert \mathsf{P}_{\mathcal{J}}^{\omega }\left[ RT^{\lambda }\left( \varphi \nu \right) \right] \left( x\right) \right\vert &=&\left\vert \left\langle RT^{\lambda }\mu ,h_{J;\kappa }^{\omega ,a}\right\rangle _{L^{2}\left( \omega \right) }\right\vert \left\vert h_{J;\kappa }^{\omega ,a}\left( x\right) \right\vert \\ &\lesssim &C_{CZ}\mathrm{P}_{\kappa }^{\lambda }\left( J,\left\vert \mu \right\vert \right) \int_{J}\left\vert h_{J;\kappa }^{\omega ,a}\right\vert d\omega \frac{1}{\sqrt{\left\vert J\right\vert _{\omega }}}\mathbf{1}_{J}\left( x\right) \lesssim C_{CZ}\mathrm{P}_{\kappa }^{\lambda }\left( J,\left\vert \mu \right\vert \right) \mathbf{1}_{J}\left( x\right) , \end{eqnarray*} since $\mathrm{P}_{\kappa +1}^{\lambda }\left( J,\left\vert \mu \right\vert \right) \leq \mathrm{P}_{\kappa }^{\lambda }\left( J,\left\vert \mu \right\vert \right) $.
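The final display also uses, assuming the usual normalization $\left\Vert h_{J;\kappa }^{\omega ,a}\right\Vert _{L^{2}\left( \omega \right) }=1$ of the Alpert wavelets, the Cauchy-Schwarz inequality
\begin{equation*}
\int_{J}\left\vert h_{J;\kappa }^{\omega ,a}\right\vert d\omega \leq \left( \int_{J}\left\vert h_{J;\kappa }^{\omega ,a}\right\vert ^{2}d\omega \right) ^{\frac{1}{2}}\sqrt{\left\vert J\right\vert _{\omega }}=\sqrt{\left\vert J\right\vert _{\omega }},
\end{equation*}
which cancels the factor $\frac{1}{\sqrt{\left\vert J\right\vert _{\omega }}}$.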
\end{proof} \subsection{Disjoint form} We decompose the disjoint form into two pieces, \begin{eqnarray*} \mathsf{B}_{\cap }\left( f,g\right) &=&\dsum\limits_{I,J\in \mathcal{D}\ :J\cap I=\emptyset \text{ and }\frac{\ell \left( J\right) }{\ell \left( I\right) }\notin \left[ 2^{-\rho },2^{\rho }\right] }\left\langle T_{\sigma }^{\lambda }\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) ,\left( \bigtriangleup _{J;\kappa }^{\omega }g\right) \right\rangle _{\omega } \\ &=&\left\{ \dsum\limits_{\substack{ I,J\in \mathcal{D}\ :J\cap I=\emptyset \\ \ell \left( J\right) <2^{-\rho }\ell \left( I\right) }}+\dsum\limits _{\substack{ I,J\in \mathcal{D}\ :J\cap I=\emptyset \\ \ell \left( J\right) >2^{\rho }\ell \left( I\right) }}\right\} \left\langle T_{\sigma }^{\lambda }\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) ,\left( \bigtriangleup _{J;\kappa }^{\omega }g\right) \right\rangle _{\omega } \\ &\equiv &\mathsf{B}_{\cap }^{\limfunc{down}}\left( f,g\right) +\mathsf{B} _{\cap }^{\limfunc{up}}\left( f,g\right) . \end{eqnarray*} Since the up form is dual to the down form, we consider only $\mathsf{B} _{\cap }^{\limfunc{down}}\left( f,g\right) $, and we will prove the following estimate: \begin{equation} \left\vert \mathsf{B}_{\cap }^{\limfunc{down}}\left( f,g\right) \right\vert \lesssim A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }. \label{routine'} \end{equation} \begin{description} \item[Porism] It is important to note that from the proof given, we may replace the sum $\dsum\limits_{\substack{ I,J\in \mathcal{D}\ :J\cap I=\emptyset \\ \ell \left( J\right) <2^{-\rho }\ell \left( I\right) }}$ in the left hand side of (\ref{routine'}) with a sum over any \emph{subset} of the pairs $I,J\,$arising in $\mathsf{B}_{\cap }^{\limfunc{down}}\left( f,g\right) $. 
A similar remark of course applies to $\mathsf{B}_{\cap }^{\limfunc{up}}\left( f,g\right) $. \end{description} \begin{proof}[Proof of (\protect\ref{routine'})] Denote by $\limfunc{dist}$ the $\ell ^{\infty }$ distance in $\mathbb{R}^{n}$: $\limfunc{dist}\left( x,y\right) =\max_{1\leq j\leq n}\left\vert x_{j}-y_{j}\right\vert $. We estimate $\mathsf{B}_{\cap }^{\limfunc{down}}\left( f,g\right) $ separately in the long-range case, where $\limfunc{dist}\left( J,I\right) \geq \ell \left( I\right) $, and in the mid-range case, where this inequality fails, decomposing $\mathsf{B}_{\cap }^{\limfunc{down}}\left( f,g\right) $ accordingly: \begin{equation*} \mathsf{B}_{\cap }^{\limfunc{down}}\left( f,g\right) =\mathcal{A}^{\limfunc{long}}\left( f,g\right) +\mathcal{A}^{\limfunc{mid}}\left( f,g\right) . \end{equation*} \textbf{The long-range case}: We begin with the case where $\limfunc{dist}\left( J,I\right) $ is at least $\ell \left( I\right) $, i.e. $J\cap 3I=\emptyset $. With $A\left( f,g\right) =\mathcal{A}^{\limfunc{long}}\left( f,g\right) $ we have \begin{equation*} A\left( f,g\right) =\sum_{\substack{ I,J\in \mathcal{D}:\ \limfunc{dist}\left( J,I\right) \geq \ell \left( I\right) \\ \ell \left( J\right) \leq 2^{-\rho }\ell \left( I\right) }}\left\langle T_{\sigma }^{\lambda }\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) ,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega }=\sum_{s=\rho }^{\infty }\sum_{m=1}^{\infty }A_{s,m}\left( f,g\right) , \end{equation*} where \begin{eqnarray*} A_{s,m}\left( f,g\right) &=&\sum_{\substack{ I,J\in \mathcal{D}:\ \limfunc{dist}\left( J,I\right) \approx 2^{m}\ell \left( I\right) \\ \ell \left( J\right) =2^{-s}\ell \left( I\right) }}\left\langle T_{\sigma }^{\lambda }\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) ,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega } \\ &=&\sum_{J\in \mathcal{D}}\sum_{I\in \mathcal{F}_{s,m}\left( J\right) }\left\langle T_{\sigma }^{\lambda }\left( \bigtriangleup _{I;\kappa }^{\sigma
}f\right) ,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega }=\sum_{J\in \mathcal{D}}\left\langle T_{\sigma }^{\lambda }\left( \mathsf{Q}_{J,s,m}^{\sigma }f\right) ,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega }, \end{eqnarray*} with \begin{equation*} \mathcal{F}_{s,m}\left( J\right) \equiv \left\{ I\in \mathcal{D}:\ \limfunc{dist}\left( J,I\right) \approx 2^{m}\ell \left( I\right) \text{, }\ell \left( I\right) =2^{s}\ell \left( J\right) \right\} \text{ and }\mathsf{Q}_{J,s,m}^{\sigma }\equiv \sum_{I\in \mathcal{F}_{s,m}\left( J\right) }\bigtriangleup _{I;\kappa }^{\sigma }. \end{equation*} Then from the Pivotal Lemma \ref{ener} we have \begin{equation*} \left\vert \left\langle T_{\sigma }^{\lambda }\left( \mathsf{Q}_{J,s,m}^{\sigma }f\right) ,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega }\right\vert \lesssim \mathrm{P}_{\kappa }^{\lambda }\left( J,\left\vert \mathsf{Q}_{J,s,m}^{\sigma }f\right\vert \sigma \right) \int_{J}\left\vert \bigtriangleup _{J;\kappa }^{\omega }g\right\vert d\omega , \end{equation*} where \begin{eqnarray*} \mathrm{P}_{\kappa }^{\lambda }\left( J,\left\vert \mathsf{Q}_{J,s,m}^{\sigma }f\right\vert \sigma \right) &=&\int_{\mathbb{R}^{n}}\frac{\ell \left( J\right) ^{\kappa }}{\left\vert \ell \left( J\right) +\limfunc{dist}\left( y,J\right) \right\vert ^{n+\kappa -\lambda }}\left\vert \mathsf{Q}_{J,s,m}^{\sigma }f\left( y\right) \right\vert d\sigma \left( y\right) \\ &\lesssim &2^{-\kappa \left( s+m\right) }\int_{\mathbb{R}^{n}\setminus 3J}\frac{1}{\left\vert \ell \left( J\right) +\limfunc{dist}\left( y,J\right) \right\vert ^{n-\lambda }}\left\vert \mathsf{Q}_{J,s,m}^{\sigma }f\left( y\right) \right\vert d\sigma \left( y\right) , \end{eqnarray*} by the definition of $\mathsf{Q}_{J,s,m}^{\sigma }$ since \begin{equation} \ell \left( J\right) =2^{-s}\ell \left( I\right) \approx 2^{-s-m}\limfunc{dist}\left( y,J\right) .
\label{pigeon s} \end{equation} Thus we have \begin{eqnarray*} &&\left\vert A_{s,m}\left( f,g\right) \right\vert \lesssim 2^{-\kappa \left( s+m\right) }\int_{\mathbb{R}^{n}}\sum_{J\in \mathcal{D}}\left( \int_{\mathbb{ R}^{n}\setminus 3J}\frac{1}{\left\vert c_{J}-y\right\vert ^{n-\lambda }} \left\vert \mathsf{Q}_{J,s,m}^{\sigma }f\left( y\right) \right\vert d\sigma \left( y\right) \right) \mathbf{1}_{J}\left( x\right) \left\vert \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \right\vert d\omega \left( x\right) \\ &\lesssim &2^{-\kappa \left( s+m\right) }\int_{\mathbb{R}^{n}}\left( \sum_{J\in \mathcal{D}}\left( \int_{\mathbb{R}^{n}\setminus 3J}\frac{1}{ \left\vert c_{J}-y\right\vert ^{n-\lambda }}\left\vert \mathsf{Q} _{J,s,m}^{\sigma }f\left( y\right) \right\vert d\sigma \left( y\right) \mathbf{1}_{J}\left( x\right) \right) ^{2}\right) ^{\frac{1}{2}}\left( \sum_{J\in \mathcal{D}}\left\vert \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}d\omega \left( x\right) \\ &\leq &2^{-\kappa \left( s+m\right) }\left( \int_{\mathbb{R}^{n}}\left( \sum_{J\in \mathcal{D}}\left( \int_{\mathbb{R}^{n}\setminus 3J}\frac{1}{ \left\vert c_{J}-y\right\vert ^{n-\lambda }}\left\vert \mathsf{Q} _{J,s,m}^{\sigma }f\left( y\right) \right\vert d\sigma \left( y\right) \mathbf{1}_{J}\left( x\right) \right) ^{2}\right) ^{\frac{p}{2}}d\omega \left( x\right) \right) ^{\frac{1}{p}}\left\Vert \mathcal{S}_{\limfunc{Alpert }}^{\omega }g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }. 
\end{eqnarray*} Now $\mathcal{S}_{\limfunc{Alpert}}^{\omega }$ is bounded on $L^{p^{\prime }}\left( \omega \right) $, and so by the geometric decay in $s$ and $m$, it remains to show that for each $s,m\in \mathbb{N}$, \begin{equation} \left( \int_{\mathbb{R}^{n}}\left( \sum_{J\in \mathcal{D}}\left( \int_{\mathbb{R}^{n}\setminus 3J}\frac{1}{\left\vert c_{J}-y\right\vert ^{n-\lambda }}\left\vert \mathsf{Q}_{J,s,m}^{\sigma }f\left( y\right) \right\vert d\sigma \left( y\right) \right) ^{2}\mathbf{1}_{J}\left( x\right) \right) ^{\frac{p}{2}}d\omega \left( x\right) \right) ^{\frac{1}{p}}\lesssim A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }. \label{RTS} \end{equation} For this we use (\ref{pigeon s}) to write \begin{equation*} \int_{\mathbb{R}^{n}\setminus 3J}\frac{1}{\left\vert c_{J}-y\right\vert ^{n-\lambda }}\left\vert \mathsf{Q}_{J,s,m}^{\sigma }f\left( y\right) \right\vert d\sigma \left( y\right) \approx \frac{1}{\left( 2^{s+m}\ell \left( J\right) \right) ^{n-\lambda }}\int_{\mathbb{R}^{n}\setminus 3J}\left\vert \mathsf{Q}_{J,s,m}^{\sigma }f\left( y\right) \right\vert d\sigma \left( y\right) , \end{equation*} and then obtain with $K_{s,m}\left( J\right) $ equal to the support of $\mathsf{Q}_{J,s,m}^{\sigma }$, that \begin{eqnarray*} &&\int_{\mathbb{R}^{n}}\left( \sum_{J\in \mathcal{D}}\left( \int_{\mathbb{R}^{n}\setminus 3J}\frac{1}{\left\vert c_{J}-y\right\vert ^{n-\lambda }}\left\vert \mathsf{Q}_{J,s,m}^{\sigma }f\left( y\right) \right\vert d\sigma \left( y\right) \right) ^{2}\mathbf{1}_{J}\left( x\right) \right) ^{\frac{p}{2}}d\omega \left( x\right) \\ &\approx &\int_{\mathbb{R}^{n}}\left( \sum_{J\in \mathcal{D}}\left( \frac{1}{\left( 2^{s+m}\ell \left( J\right) \right) ^{n-\lambda }}\int_{K_{s,m}\left( J\right) }\left\vert \mathsf{Q}_{J,s,m}^{\sigma }f\left( y\right) \right\vert d\sigma \left( y\right) \right) ^{2}\mathbf{1}_{J}\left( x\right) \right) ^{\frac{p}{2}}d\omega \left(
x\right) \\ &=&\int_{\mathbb{R}^{n}}\left( \sum_{K\in \mathcal{D}}\sum_{J\in \mathcal{D}:\ J\subset K\text{ and }K_{s,m}\left( J\right) =K}\mathbf{1}_{J}\left( x\right) \left( \frac{1}{\left( 2^{s+m}\ell \left( J\right) \right) ^{n-\lambda }}\int_{K_{s,m}\left( J\right) }\left\vert \mathsf{Q}_{J,s,m}^{\sigma }f\left( y\right) \right\vert d\sigma \left( y\right) \right) ^{2}\right) ^{\frac{p}{2}}d\omega \left( x\right) \\ &\approx &\int_{\mathbb{R}^{n}}\left( \sum_{K\in \mathcal{D}}\mathbf{1}_{K}\left( x\right) \left( \frac{\left\vert K\right\vert _{\sigma }}{\ell \left( K\right) ^{n-\lambda }}\frac{1}{\left\vert K\right\vert _{\sigma }}\int_{K}\left\vert \mathsf{Q}_{K}^{\sigma }f\left( y\right) \right\vert d\sigma \left( y\right) \right) ^{2}\right) ^{\frac{p}{2}}d\omega \left( x\right) , \end{eqnarray*} where $\mathsf{Q}_{K}^{\sigma }\equiv \sum_{K_{s,m}\left( J\right) =K}\mathsf{Q}_{J,s,m}^{\sigma }$. Now we use first the quadratic offset condition $A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) $, and then the Fefferman-Stein vector-valued inequality for the maximal function, to obtain the following vector-valued inequality for each fixed $s,m\in \mathbb{N}$, \begin{eqnarray*} &&\int_{\mathbb{R}^{n}}\left( \sum_{J\in \mathcal{D}}\left( \frac{1}{\left( 2^{s+m}\ell \left( J\right) \right) ^{n-\lambda }}\int_{K_{s,m}\left( J\right) }\left\vert \mathsf{Q}_{J,s,m}^{\sigma }f\left( y\right) \right\vert d\sigma \left( y\right) \right) ^{2}\mathbf{1}_{J}\left( x\right) \right) ^{\frac{p}{2}}d\omega \left( x\right) \\ &\lesssim &A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) ^{p}\int_{\mathbb{R}^{n}}\left( \sum_{K\in \mathcal{D}}\mathbf{1}_{K}\left( x\right) \left( \frac{1}{\left\vert K\right\vert _{\sigma }}\int_{K}\left\vert \mathsf{Q}_{K}^{\sigma }f\left( y\right) \right\vert d\sigma \left( y\right) \right) ^{2}\right) ^{\frac{p}{2}}d\sigma \left( x\right) \\ &\lesssim &A_{p}^{\lambda
,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) ^{p}\int_{\mathbb{R}^{n}}\left( \sum_{K\in \mathcal{D}}\left\vert \mathsf{Q}_{K}^{\sigma }f\left( x\right) \right\vert ^{2}\right) ^{\frac{p}{2}}d\sigma \left( x\right) \lesssim A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) ^{p}\left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }^{p}. \end{eqnarray*} As mentioned above, this completes the proof of the long-range case by the geometric decay in $s$ and $m$. \textbf{The mid-range case}: Let \begin{equation*} \mathcal{P}\equiv \left\{ \left( I,J\right) \in \mathcal{D}\times \mathcal{D}:J\text{ is good},\ \ell \left( J\right) \leq 2^{-\rho }\ell \left( I\right) ,\text{ }J\subset 3I\setminus I\right\} . \end{equation*} Now we pigeonhole the lengths of $I$ and $J$ and the distance between them by defining \begin{equation*} \mathcal{P}_{d}^{t}\equiv \left\{ \left( I,J\right) \in \mathcal{D}\times \mathcal{D}:J\text{ is good},\ \ell \left( I\right) =2^{t}\ell \left( J\right) ,\text{ }J\subset 3I\setminus I,\ 2^{d-1}\ell \left( J\right) \leq \limfunc{dist}\left( I,J\right) \leq 2^{d}\ell \left( J\right) \right\} . \end{equation*} Note that the closest a good cube $J$ can come to $I$ is determined by the goodness inequality, which gives this bound: \begin{eqnarray} &&2^{d}\ell \left( J\right) \geq \limfunc{dist}\left( I,J\right) \geq \frac{1}{2}\ell \left( I\right) ^{1-\varepsilon }\ell \left( J\right) ^{\varepsilon }=\frac{1}{2}2^{t\left( 1-\varepsilon \right) }\ell \left( J\right) ; \label{d below} \\ &&\text{which implies }d\geq t\left( 1-\varepsilon \right) -1.
\notag \end{eqnarray} We write \begin{equation*} \dsum\limits_{\left( I,J\right) \in \mathcal{P}}\left\langle T_{\sigma }^{\lambda }\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) ,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega }=\dsum\limits_{t=\rho }^{\infty }\ \sum_{d=t\left( 1-\varepsilon \right) -1}^{t+1}\sum_{\left( I,J\right) \in \mathcal{P}_{d}^{t}}\left\langle T_{\sigma }^{\lambda }\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) ,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega }, \end{equation*} and for fixed $t$ and $d$, we estimate \begin{eqnarray*} &&\left\vert \dsum\limits_{\left( I,J\right) \in \mathcal{P}_{d}^{t}}\left\langle T_{\sigma }^{\lambda }\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) ,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega }\right\vert =\left\vert \int_{\mathbb{R}^{n}}\dsum\limits_{\left( I,J\right) \in \mathcal{P}_{d}^{t}}T_{\sigma }^{\lambda }\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) \left( x\right) \ \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \ d\omega \left( x\right) \right\vert \\ &=&\left\vert \int_{\mathbb{R}^{n}}\dsum\limits_{J\in \mathcal{D}}\bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\left( \dsum\limits_{I\in \mathcal{D}:\ \left( I,J\right) \in \mathcal{P}_{d}^{t}}\bigtriangleup _{I;\kappa }^{\sigma }f\right) \left( x\right) \ \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \ d\omega \left( x\right) \right\vert \\ &\leq &\int_{\mathbb{R}^{n}}\left( \dsum\limits_{J\in \mathcal{D}}\left\vert \bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\left( \dsum\limits_{I\in \mathcal{D}:\ \left( I,J\right) \in \mathcal{P}_{d}^{t}}\bigtriangleup _{I;\kappa }^{\sigma }f\right) \left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\left( \dsum\limits_{J\in \mathcal{D}}\left\vert \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}d\omega \left( x\right) \\ &\lesssim &\left\{ \int_{\mathbb{R}^{n}}\left(
\dsum\limits_{J\in \mathcal{D}}\left\vert \bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\left( \dsum\limits_{I\in \mathcal{D}:\ \left( I,J\right) \in \mathcal{P}_{d}^{t}}\bigtriangleup _{I;\kappa }^{\sigma }f\right) \left( x\right) \right\vert ^{2}\right) ^{\frac{p}{2}}d\omega \left( x\right) \right\} ^{\frac{1}{p}} \\ &&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times \left\{ \int_{\mathbb{R}^{n}}\left( \dsum\limits_{J\in \mathcal{D}}\left\vert \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \right\vert ^{2}\right) ^{\frac{p^{\prime }}{2}}d\omega \left( x\right) \right\} ^{\frac{1}{p^{\prime }}}. \end{eqnarray*} Now we use the fact that for a fixed $J$, there are only boundedly many $I\in \mathcal{D}$ with $\left( I,J\right) \in \mathcal{P}_{d}^{t}$, which without loss of generality we may suppose to consist of a single cube $I\left[ J\right] $, together with (\ref{d below}) to obtain the estimate \begin{eqnarray*} \left\vert \bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) \left( x\right) \right\vert &\lesssim &\mathrm{P}_{\kappa }^{\lambda }\left( J,\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\vert \sigma \right) \mathbf{1}_{J}\left( x\right) =\int_{I}\frac{\ell \left( J\right) ^{\kappa }}{\left( \ell \left( J\right) +\left\vert y-c_{J}\right\vert \right) ^{n+\kappa -\lambda }}\left\vert \bigtriangleup _{I\left[ J\right] ;\kappa }^{\sigma }f\left( y\right) \right\vert d\sigma \left( y\right) \mathbf{1}_{J}\left( x\right) \\ &\lesssim &\frac{\ell \left( J\right) ^{\kappa }}{\left( 2^{d}\ell \left( J\right) \right) ^{n+\kappa -\lambda }}\sum_{I^{\prime }\in \mathfrak{C}_{\mathcal{D}}\left( I\left[ J\right] \right) }E_{I^{\prime }}^{\sigma }\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\vert \left\vert I^{\prime }\right\vert _{\sigma }\mathbf{1}_{J}\left( x\right) \\ &\lesssim &\frac{2^{-t\left[ \kappa -\varepsilon \left( n+\kappa -\lambda \right) \right]
}}{\ell \left( I\right) ^{n-\lambda }}\sum_{I^{\prime }\in \mathfrak{C}_{\mathcal{D}}\left( I\left[ J\right] \right) }E_{I^{\prime }}^{\sigma }\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\vert \left\vert I^{\prime }\right\vert _{\sigma }\mathbf{1}_{J}\left( x\right) , \end{eqnarray*} since \begin{equation*} \frac{\ell \left( J\right) ^{\kappa }}{\left( 2^{d}\ell \left( J\right) \right) ^{n+\kappa -\lambda }}=\frac{2^{-t\kappa }2^{\left( t-d\right) \left( n+\kappa -\lambda \right) }}{\ell \left( I\left[ J\right] \right) ^{n-\lambda }}\leq \frac{2^{-t\kappa }2^{\left( t\varepsilon +1\right) \left( n+\kappa -\lambda \right) }}{\ell \left( I\left[ J\right] \right) ^{n-\lambda }}=2^{n+\kappa -\lambda }\frac{2^{-t\left[ \kappa -\varepsilon \left( n+\kappa -\lambda \right) \right] }}{\ell \left( I\left[ J\right] \right) ^{n-\lambda }}. \end{equation*} Thus we have \begin{eqnarray*} &&\left\{ \int_{\mathbb{R}^{n}}\left( \dsum\limits_{J\in \mathcal{D}}\left\vert \bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\left( \dsum\limits_{I\in \mathcal{D}:\ \left( I,J\right) \in \mathcal{P}_{d}^{t}}\bigtriangleup _{I;\kappa }^{\sigma }f\right) \left( x\right) \right\vert ^{2}\right) ^{\frac{p}{2}}d\omega \left( x\right) \right\} ^{\frac{1}{p}} \\ &\lesssim &2^{-t\left[ \kappa -\varepsilon \left( n+\kappa -\lambda \right) \right] }\left\{ \int_{\mathbb{R}^{n}}\left( \dsum\limits_{J\in \mathcal{D}}\left\vert \sum_{I^{\prime }\in \mathfrak{C}_{\mathcal{D}}\left( I\left[ J\right] \right) }E_{I^{\prime }}^{\sigma }\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\vert \frac{\left\vert I^{\prime }\right\vert _{\sigma }}{\ell \left( I\right) ^{n-\lambda }}\mathbf{1}_{J}\left( x\right) \right\vert ^{2}\right) ^{\frac{p}{2}}d\omega \left( x\right) \right\} ^{\frac{1}{p}} \\ &\lesssim &2^{-t\left[ \kappa -\varepsilon \left( n+\kappa -\lambda \right) \right] }\left\{ \int_{\mathbb{R}^{n}}\left( \dsum\limits_{I\in \mathcal{D}}\left\vert \sum_{I^{\prime }\in
\mathfrak{C}_{\mathcal{D}}\left( I\left[ J \right] \right) }E_{I^{\prime }}^{\sigma }\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\vert \frac{\left\vert I^{\prime }\right\vert _{\sigma }}{\left\vert I\right\vert ^{1-\frac{\lambda }{n}}}\right\vert ^{2} \mathbf{1}_{I^{\prime }}\left( x\right) \right) ^{\frac{p}{2}}d\omega \left( x\right) \right\} ^{\frac{1}{p}} \\ &\lesssim &2^{-t\left[ \kappa -\varepsilon \left( n+\kappa -\lambda \right) \right] }A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\{ \int_{\mathbb{R}^{n}}\left( \dsum\limits_{I\in \mathcal{D} }\sum_{I^{\prime }\in \mathfrak{C}_{\mathcal{D}}\left( I\left[ J\right] \right) }\left( E_{I^{\prime }}^{\sigma }\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\vert \right) ^{2}\mathbf{1}_{I^{\prime }}\left( x\right) \right) ^{\frac{p}{2}}d\sigma \left( x\right) \right\} ^{ \frac{1}{p}} \\ &\lesssim &2^{-t\left[ \kappa -\varepsilon \left( n+\kappa -\lambda \right) \right] }A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }, \end{eqnarray*} and provided $0<\varepsilon <\frac{\kappa }{n+\kappa -\lambda }$, we can sum in $t$ to complete the proof of (\ref{routine'}). 
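Indeed, writing $\delta \equiv \kappa -\varepsilon \left( n+\kappa -\lambda \right) >0$ (this is just the exponent above, recorded here for convenience), the sum in $t$ is geometric,
\begin{equation*}
\sum_{t=\rho }^{\infty }2^{-t\left[ \kappa -\varepsilon \left( n+\kappa -\lambda \right) \right] }=\sum_{t=\rho }^{\infty }2^{-t\delta }=\frac{2^{-\rho \delta }}{1-2^{-\delta }}<\infty ,
\end{equation*}
and for each $t$ the roughly $\varepsilon t+2$ admissible values of $d$ are absorbed by this geometric decay.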
\end{proof} \subsection{Comparable form} We decompose \begin{eqnarray*} \mathsf{B}_{\diagup }\left( f,g\right) &=&\mathsf{B}_{\diagup }^{\func{below }}\left( f,g\right) +\mathsf{B}_{\diagup }^{\func{above}}\left( f,g\right) ; \\ \text{where }\mathsf{B}_{\diagup }^{\func{below}}\left( f,g\right) &\equiv &\dsum\limits_{I,J\in \mathcal{D}:\ 2^{-\rho }\leq \frac{\ell \left( J\right) }{\ell \left( I\right) }\leq 1\text{ and }\overline{J}\cap \overline{I}=\emptyset }\left\langle T_{\sigma }^{\lambda }\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) ,\left( \bigtriangleup _{J;\kappa }^{\omega }g\right) \right\rangle _{\omega } \\ &=&\dsum\limits_{I,J\in \mathcal{D}:\ 2^{-\rho }\leq \frac{\ell \left( J\right) }{\ell \left( I\right) }\leq 1\text{ and }\overline{J}\cap \overline{I}=\emptyset \text{ and }J\subset 3I}\left\langle T_{\sigma }^{\lambda }\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) ,\left( \bigtriangleup _{J;\kappa }^{\omega }g\right) \right\rangle _{\omega } \\ &&+\dsum\limits_{I,J\in \mathcal{D}:\ 2^{-\rho }\leq \frac{\ell \left( J\right) }{\ell \left( I\right) }\leq 1\text{ and }\overline{J}\cap \overline{I}=\emptyset \text{ and }J\cap 3I=\emptyset }\left\langle T_{\sigma }^{\lambda }\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) ,\left( \bigtriangleup _{J;\kappa }^{\omega }g\right) \right\rangle _{\omega } \\ &\equiv &\mathsf{B}_{\diagup }^{\func{below}\func{near}}\left( f,g\right) + \mathsf{B}_{\diagup }^{\func{below}\func{far}}\left( f,g\right) \ . 
\end{eqnarray*} The second form $\mathsf{B}_{\diagup }^{\func{below}\func{far}}\left( f,g\right) $ is handled in the same way as the long-range case $\mathcal{A}^{\limfunc{long}}\left( f,g\right) $ of the disjoint form in the previous subsection, and for the first form $\mathsf{B}_{\diagup }^{\func{below}\func{near}}\left( f,g\right) $, we write \begin{eqnarray*} &&\left\vert \mathsf{B}_{\diagup }^{\func{below}\func{near}}\left( f,g\right) \right\vert =\left\vert \int_{\mathbb{R}^{n}}\dsum\limits_{I,J\in \mathcal{D}:\ 2^{-\rho }\leq \frac{\ell \left( J\right) }{\ell \left( I\right) }\leq 1\text{ and }\overline{J}\cap \overline{I}=\emptyset \text{ and }J\subset 3I}T_{\sigma }^{\lambda }\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) \left( x\right) \ \left( \bigtriangleup _{J;\kappa }^{\omega }g\right) \left( x\right) \ d\omega \left( x\right) \right\vert \\ &\lesssim &\int_{\mathbb{R}^{n}}\dsum\limits_{I,J\in \mathcal{D}:\ 2^{-\rho }\leq \frac{\ell \left( J\right) }{\ell \left( I\right) }\leq 1\text{ and }\overline{J}\cap \overline{I}=\emptyset \text{ and }J\subset 3I}\left( \int_{I}\frac{\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\left( y\right) \right\vert }{\left\vert y-x\right\vert ^{n-\lambda }}d\sigma \left( y\right) \right) \ \left\vert \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \right\vert \ d\omega \left( x\right) \\ &\lesssim &\int_{\mathbb{R}^{n}}\dsum\limits_{I,J\in \mathcal{D}:\ 2^{-\rho }\leq \frac{\ell \left( J\right) }{\ell \left( I\right) }\leq 1\text{ and }\overline{J}\cap \overline{I}=\emptyset \text{ and }J\subset 3I}\left( \frac{1}{\left\vert I\right\vert _{\sigma }}\int_{I}\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\vert d\sigma \right) \frac{\left\vert I\right\vert _{\sigma }}{\left\vert I\right\vert ^{1-\frac{\lambda }{n}}}\mathbf{1}_{3I}\left( x\right) \ \left\vert \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \right\vert \ d\omega \left( x\right) , \end{eqnarray*} and so by the Cauchy-Schwarz inequality, we
have \begin{eqnarray*} \left\vert \mathsf{B}_{\diagup }^{\func{below}\func{near}}\left( f,g\right) \right\vert &\lesssim &\int_{\mathbb{R}^{n}}\left( \dsum\limits_{\substack{ I,J\in \mathcal{D}:\ 2^{-\rho }\leq \frac{\ell \left( J\right) }{\ell \left( I\right) }\leq 1 \\ \overline{J}\cap \overline{I}=\emptyset ,\ J\subset 3I}} \left\vert \left( \frac{1}{\left\vert I\right\vert _{\sigma }} \int_{I}\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\vert d\sigma \right) \frac{\left\vert I\right\vert _{\sigma }}{\left\vert I\right\vert ^{1-\frac{\lambda }{n}}}\right\vert ^{2}\mathbf{1}_{3I}\left( x\right) \right) ^{\frac{1}{2}} \\ &&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times \left( \dsum\limits_{\substack{ I,J\in \mathcal{D}:\ 2^{-\rho }\leq \frac{\ell \left( J\right) }{\ell \left( I\right) }\leq 1 \\ \overline{J}\cap \overline{ I}=\emptyset ,\ J\subset 3I}}\left\vert \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\ \ d\omega \left( x\right) \\ &\leq &\left\Vert \left( \dsum\limits_{\substack{ I,J\in \mathcal{D}:\ 2^{-\rho }\leq \frac{\ell \left( J\right) }{\ell \left( I\right) }\leq 1 \\ \overline{J}\cap \overline{I}=\emptyset ,\ J\subset 3I}}\left\vert \left( \frac{1}{\left\vert I\right\vert _{\sigma }}\int_{I}\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\vert d\sigma \right) \frac{ \left\vert I\right\vert _{\sigma }}{\left\vert I\right\vert ^{1-\frac{ \lambda }{n}}}\right\vert ^{2}\mathbf{1}_{3I}\left( x\right) \right) ^{\frac{ 1}{2}}\right\Vert _{L^{p}\left( \omega \right) }\left\Vert \mathcal{S}_{ \limfunc{Alpert},\kappa }g\right\Vert _{L^{p^{\prime }}\left( \omega \right) } \end{eqnarray*} and \begin{eqnarray*} &&\left\Vert \left( \dsum\limits_{\substack{ I,J\in \mathcal{D}:\ 2^{-\rho }\leq \frac{\ell \left( J\right) }{\ell \left( I\right) }\leq 1 \\ \overline{ J}\cap \overline{I}=\emptyset ,\ J\subset 3I}}\left\vert \left( \frac{1}{ \left\vert I\right\vert _{\sigma 
}}\int_{I}\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\vert d\sigma \right) \frac{\left\vert I\right\vert _{\sigma }}{\left\vert I\right\vert ^{1-\frac{\lambda }{n}}}\right\vert ^{2}\mathbf{1}_{3I}\left( x\right) \right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) } \\ &\lesssim &A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert \left( \dsum\limits_{\substack{ I,J\in \mathcal{D}:\ 2^{-\rho }\leq \frac{\ell \left( J\right) }{\ell \left( I\right) }\leq 1 \\ \overline{J}\cap \overline{I}=\emptyset ,\ J\subset 3I}}\left( \frac{1}{\left\vert I\right\vert _{\sigma }}\int_{I}\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\vert d\sigma \right) ^{2}\mathbf{1}_{3I}\left( x\right) \right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) }, \end{eqnarray*} and since $\bigtriangleup _{I;\kappa }^{\sigma }f$ is supported in $I$ while $\sigma $ is doubling, so that $\frac{1}{\left\vert I\right\vert _{\sigma }}\int_{I}\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\vert d\sigma \approx \frac{1}{\left\vert 3I\right\vert _{\sigma }}\int_{3I}\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\vert d\sigma $, we have by the Fefferman-Stein maximal inequality in the space of homogeneous type $\left( \mathbb{R}^{n},\sigma \right) $ (\cite{GrLiYa}) \begin{eqnarray*} &&\left\Vert \left( \dsum\limits_{\substack{ I,J\in \mathcal{D}:\ 2^{-\rho }\leq \frac{\ell \left( J\right) }{\ell \left( I\right) }\leq 1 \\ \overline{J}\cap \overline{I}=\emptyset ,\ J\subset 3I}}\left( \frac{1}{\left\vert 3I\right\vert _{\sigma }}\int_{3I}\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\vert d\sigma \right) ^{2}\mathbf{1}_{3I}\left( x\right) \right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) }\lesssim \left\Vert \left( \dsum\limits_{\substack{ I,J\in \mathcal{D}:\ 2^{-\rho }\leq \frac{\ell \left( J\right) }{\ell \left( I\right) }\leq 1 \\ \overline{J}\cap \overline{I}=\emptyset ,\ J\subset 3I}}\left[ \mathcal{M}_{\sigma }\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\vert \left( x\right) \right] ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) } \\ &&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \lesssim \left\Vert \left( \dsum\limits_{\substack{ I,J\in
\mathcal{D}:\ 2^{-\rho }\leq \frac{\ell \left( J\right) }{\ell \left( I\right) }\leq 1 \\ \overline{J}\cap \overline{I}=\emptyset ,\ J\subset 3I}} \left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) }\lesssim \left\Vert \mathcal{S}_{\limfunc{Alpert},\kappa }f\right\Vert _{L^{p}\left( \sigma \right) }. \end{eqnarray*} Altogether, since both $\left\Vert \mathcal{S}_{\limfunc{Alpert},\kappa }f\right\Vert _{L^{p}\left( \sigma \right) }\approx \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }$ and $\left\Vert \mathcal{S}_{\limfunc{Alpert} ,\kappa }g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }\approx \left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }$ by square function estimates, we have controlled the norms of the below forms $\mathsf{ B}_{\diagup }^{\func{below}\func{near}}\left( f,g\right) $ and $\mathsf{B} _{\diagup }^{\func{below}\func{far}}\left( f,g\right) $ by the quadratic offset Muckenhoupt constant $A_{p}^{\lambda ,\ell ^{2},\limfunc{offset} }\left( \sigma ,\omega \right) $, hence \begin{equation} \left\vert \mathsf{B}_{\diagup }^{\func{below}}\left( f,g\right) \right\vert \lesssim A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }. \label{routine''} \end{equation} Finally, the form $\mathsf{B}_{\diagup }^{\func{above}}\left( f,g\right) $ is handled in dual fashion to $\mathsf{B}_{\diagup }^{\func{below}}\left( f,g\right) $. 
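Here duality is used in the usual schematic sense: writing $\left( T^{\lambda }\right) _{\omega }^{\ast }$ for the formal adjoint (a notational assumption, as this operator is not named elsewhere in the argument), one has
\begin{equation*}
\left\langle T_{\sigma }^{\lambda }\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) ,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega }=\left\langle \bigtriangleup _{I;\kappa }^{\sigma }f,\left( T^{\lambda }\right) _{\omega }^{\ast }\left( \bigtriangleup _{J;\kappa }^{\omega }g\right) \right\rangle _{\sigma },
\end{equation*}
so that the roles of $\left( f,\sigma ,p\right) $ and $\left( g,\omega ,p^{\prime }\right) $, and of the side lengths $\ell \left( I\right) $ and $\ell \left( J\right) $, are interchanged.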
\begin{description} \item[Porism] It is important to note that from the proof given, we may replace the sum \begin{equation*} \dsum\limits_{I,J\in \mathcal{D}:\ 2^{-\rho }\leq \frac{\ell \left( J\right) }{\ell \left( I\right) }\leq 1\text{ and }\overline{J}\cap \overline{I} =\emptyset \text{ and }J\subset 3I} \end{equation*} in the left hand side of (\ref{routine''}) with a sum over any \emph{subset} of the pairs $I,J\,$arising in $\mathsf{B}_{\diagup }^{\func{below}}\left( f,g\right) $. A similar remark of course applies to $\mathsf{B}_{\diagup }^{ \func{above}}\left( f,g\right) $. \end{description} \subsection{Stopping form} We assume that $\sigma $ and $\omega $ are doubling measures. We will use a variant of the Haar stopping form argument due to Nazarov, Treil and Volberg \cite{NTV4} to bound the stopping form. Recall that \begin{equation*} \left\vert \widehat{f}\left( I\right) \right\vert Q_{I^{\prime };\kappa }= \mathbb{E}_{I^{\prime };\kappa }^{\sigma }f-\mathbf{1}_{I^{\prime }}\mathbb{E }_{I;\kappa }^{\sigma }f. 
\end{equation*}
that is, $\left\vert \widehat{f}\left( I\right) \right\vert Q_{I^{\prime
};\kappa }=M_{I^{\prime };\kappa }=\mathbf{1}_{I^{\prime }}\bigtriangleup
_{I;\kappa }^{\sigma }f$. We start the proof by pigeonholing the ratio of
side lengths of $I$ and $J$ in the local stopping forms:
\begin{align*}
& \mathsf{B}_{\limfunc{stop};\kappa }^{F}\left( f,g\right) \equiv \sum_{I\in
\mathcal{C}_{F}}\sum_{I^{\prime }\in \mathfrak{C}_{\mathcal{D}}\left(
I\right) }\sum_{\substack{ J\in \mathcal{C}_{F}^{\tau -\limfunc{shift}} \\
J\subset I^{\prime }\text{ and }J\Subset _{\rho ,\varepsilon }I}}\left\vert
\widehat{f}\left( I\right) \right\vert \left\langle Q_{I^{\prime };\kappa
}T_{\sigma }^{\lambda }\mathbf{1}_{F\setminus I^{\prime }},\bigtriangleup
_{J;\kappa }^{\omega }g\right\rangle _{\omega } \\
& =\sum_{I\in \mathcal{C}_{F}}\sum_{I^{\prime }\in \mathfrak{C}_{\mathcal{D}
}\left( I\right) }\sum_{\substack{ J\in \mathcal{C}_{F}^{\tau -\limfunc{
shift}} \\ J\subset I^{\prime }\text{ and }J\Subset _{\rho ,\varepsilon }I}}
\left\langle \bigtriangleup _{I;\kappa }^{\sigma }f\ T_{\sigma }^{\lambda }
\mathbf{1}_{F\setminus I^{\prime }},\bigtriangleup _{J;\kappa }^{\omega
}g\right\rangle _{\omega } \\
& =\sum_{s=0}^{\infty }\sum_{I\in \mathcal{C}_{F}}\sum_{I^{\prime }\in
\mathfrak{C}_{\mathcal{D}}\left( I\right) }\sum_{\substack{ J\in \mathcal{C}
_{F}^{\tau -\limfunc{shift}}\text{ and }\ell \left( J\right) =2^{-s}\ell
\left( I\right) \\ J\subset I^{\prime }\text{ and }J\Subset _{\rho
,\varepsilon }I}}\left\langle \bigtriangleup _{J;\kappa }^{\omega }\left[
\left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) \ T_{\sigma }^{\lambda }
\mathbf{1}_{F\setminus I^{\prime }}\right] ,\bigtriangleup _{J;\kappa
}^{\omega }g\right\rangle _{\omega }\ .
\end{align*} Now we write $J\prec I^{\prime }$ when $\pi _{\mathcal{D}}I^{\prime }\in \mathcal{C}_{F}^{\tau -\limfunc{shift}}$ and \begin{equation*} J\in \mathcal{C}_{F}^{\tau -\limfunc{shift}}\text{, }\ell \left( J\right) =2^{-s}\ell \left( I\right) \text{, }J\subset I^{\prime }\text{ and } J\Subset _{\rho ,\varepsilon }I, \end{equation*} so that we have \begin{eqnarray*} &&\mathsf{B}_{\limfunc{stop};\kappa }\left( f,g\right) =\sum_{F\in \mathcal{F }}\mathsf{B}_{\limfunc{stop};\kappa }^{F}\left( f,g\right) \\ &=&\sum_{s=0}^{\infty }\sum_{F\in \mathcal{F}}\sum_{I\in \mathcal{C} _{F}}\sum_{I^{\prime }\in \mathfrak{C}_{\mathcal{D}}\left( I\right) }\sum _{\substack{ J\in \mathcal{C}_{F}^{\tau -\limfunc{shift}}\text{and }\ell \left( J\right) =2^{-s}\ell \left( I\right) \\ J\subset I^{\prime }\text{ and }J\Subset _{\rho ,\varepsilon }I}}\left\langle \bigtriangleup _{J;\kappa }^{\omega }\left[ \left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) \ T_{\sigma }^{\lambda }\mathbf{1}_{F\setminus I^{\prime }}\right] ,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega } \\ &=&\sum_{s=0}^{\infty }\sum_{J\in \mathcal{D}}\left\langle \sum_{F\in \mathcal{F}}\sum_{I\in \mathcal{C}_{F}}\sum_{I^{\prime }\in \mathfrak{C}_{ \mathcal{D}}\left( I\right) :\ J\prec I^{\prime }}\bigtriangleup _{J;\kappa }^{\omega }\left[ \left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) \ T_{\sigma }^{\lambda }\mathbf{1}_{F\setminus I^{\prime }}\right] ,\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega } \\ &=&\sum_{s=0}^{\infty }\int_{\mathbb{R}^{n}}\sum_{J\in \mathcal{D}}\left( \sum_{F\in \mathcal{F}}\sum_{I\in \mathcal{C}_{F}}\sum_{I^{\prime }\in \mathfrak{C}_{\mathcal{D}}\left( I\right) :\ J\prec I^{\prime }}\bigtriangleup _{J;\kappa }^{\omega }\left[ \left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) \ T_{\sigma }^{\lambda }\mathbf{1} _{F\setminus I^{\prime }}\right] \left( x\right) \right) \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \ d\omega \left( x\right) \\ 
&\equiv &\sum_{s=0}^{\infty }\mathsf{B}_{\limfunc{stop};\kappa ,s}\left(
f,g\right) .
\end{eqnarray*}
But now we observe that if $J\subset I^{\prime }$ then $\bigtriangleup
_{I;\kappa }^{\sigma }f$ is a polynomial of degree less than $\kappa $ on $J$
and so (\ref{analogue}) and (\ref{analogue'}) yield
\begin{equation*}
\left\vert \bigtriangleup _{J;\kappa }^{\omega }\left[ \left( \bigtriangleup
_{I;\kappa }^{\sigma }f\right) \ T_{\sigma }^{\lambda }\mathbf{1}
_{F\setminus I^{\prime }}\right] \left( x\right) \right\vert \lesssim
\left\Vert \mathbb{E}_{I^{\prime }}^{\sigma }\bigtriangleup _{I;\kappa
}^{\sigma }f\right\Vert _{\infty }\ \mathrm{P}_{\kappa }^{\lambda }\left( J,
\mathbf{1}_{F\setminus I^{\prime }}\sigma \right) \ \mathbf{1}_{J}\left(
x\right) \lesssim E_{I^{\prime }}^{\sigma }\left\vert \bigtriangleup
_{I;\kappa }^{\sigma }f\right\vert \ \mathrm{P}_{\kappa }^{\lambda }\left( J,
\mathbf{1}_{F\setminus I^{\prime }}\sigma \right) \ \mathbf{1}_{J}\left(
x\right) .
\end{equation*}
Now we can obtain geometric decay in $s$.
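To see where the decay comes from, note that the Poisson inequality (\ref{e.Jsimeq}) below, applied with $K=F$ and $I=I^{\prime }$, shows that for $\ell \left( J\right) =2^{-s}\ell \left( I\right) $ and $J\subset I^{\prime }\in \mathfrak{C}_{\mathcal{D}}\left( I\right) $,
\begin{equation*}
\mathrm{P}_{\kappa }^{\lambda }\left( J,\mathbf{1}_{F\setminus I^{\prime
}}\sigma \right) \lesssim \left( \frac{\ell \left( J\right) }{\ell \left(
I^{\prime }\right) }\right) ^{\kappa -\varepsilon \left( n+\kappa -\lambda
\right) }\mathrm{P}_{\kappa }^{\lambda }\left( I^{\prime },\mathbf{1}
_{F\setminus I^{\prime }}\sigma \right) \approx 2^{-s\left[ \kappa
-\varepsilon \left( n+\kappa -\lambda \right) \right] }\mathrm{P}_{\kappa
}^{\lambda }\left( I^{\prime },\mathbf{1}_{F\setminus I^{\prime }}\sigma
\right) ,
\end{equation*}
and the exponent $\kappa -\varepsilon \left( n+\kappa -\lambda \right) $ is positive for $\varepsilon $ sufficiently small, which is the source of the summable factor in $s$.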
Indeed, applying Cauchy-Schwarz we obtain \begin{eqnarray*} &&\mathsf{B}_{\limfunc{stop};\kappa ,s}\left( f,g\right) =\int_{\mathbb{R} ^{n}}\sum_{J\in \mathcal{D}}\left( \sum_{F\in \mathcal{F}}\sum_{I\in \mathcal{C}_{F}}\sum_{I^{\prime }\in \mathfrak{C}_{\mathcal{D}}\left( I\right) :\ J\prec I^{\prime }}\bigtriangleup _{J;\kappa }^{\omega }\bigtriangleup _{I;\kappa }^{\sigma }f\left( x\right) \bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\mathbf{1}_{F\setminus I^{\prime }}\left( x\right) \right) \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \ d\omega \left( x\right) \\ &\leq &\int_{\mathbb{R}^{n}}\left( \sum_{J\in \mathcal{D}}\left( \sum_{F\in \mathcal{F}}\sum_{I\in \mathcal{C}_{F}}\sum_{I^{\prime }\in \mathfrak{C}_{ \mathcal{D}}\left( I\right) :\ J\prec I^{\prime }}E_{I^{\prime }}^{\sigma }\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\vert \ \mathrm{P} _{\kappa }^{\lambda }\left( J,\mathbf{1}_{F\setminus I^{\prime }}\sigma \right) \ \mathbf{1}_{J}\left( x\right) \right) ^{2}\right) ^{\frac{1}{2} }\left( \sum_{J\in \mathcal{D}}\left\vert \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}d\omega \left( x\right) \\ &\leq &\left\Vert S\left( x\right) \right\Vert _{L^{p}\left( \omega \right) }\left\Vert \left( \sum_{J\in \mathcal{D}}\left\vert \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2} }\right\Vert _{L^{p^{\prime }}\left( \omega \right) }; \\ &&\text{where }S\left( x\right) ^{2}\equiv \sum_{J\in \mathcal{D}}\left( \sum_{F\in \mathcal{F}}\sum_{I\in \mathcal{C}_{F}}\sum_{I^{\prime }\in \mathfrak{C}_{\mathcal{D}}\left( I\right) :\ J\prec I^{\prime }}E_{I^{\prime }}^{\sigma }\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\vert \ \mathrm{P}_{\kappa }^{\lambda }\left( J,\mathbf{1}_{F\setminus I^{\prime }}\sigma \right) \ \mathbf{1}_{J}\left( x\right) \right) ^{2}. 
\end{eqnarray*}
For fixed $x\in J$, the pigeonholing above yields $I=\pi _{\mathcal{D}
}^{\left( s\right) }J$, and thus we obtain
\begin{eqnarray*}
S\left( x\right) ^{2} &\equiv &\sum_{J\in \mathcal{D}}\left( \sum_{F\in
\mathcal{F}}\sum_{I\in \mathcal{C}_{F}}\sum_{I^{\prime }\in \mathfrak{C}_{
\mathcal{D}}\left( I\right) :\ J\prec I^{\prime }}E_{I^{\prime }}^{\sigma
}\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\vert \ \mathrm{P}
_{\kappa }^{\lambda }\left( J,\mathbf{1}_{F\setminus I^{\prime }}\sigma
\right) \ \mathbf{1}_{J}\left( x\right) \right) ^{2} \\
&\lesssim &\sum_{J\in \mathcal{D}}\sum_{F\in \mathcal{F}}\sum_{I\in \mathcal{
C}_{F}}\sum_{I^{\prime }\in \mathfrak{C}_{\mathcal{D}}\left( I\right) :\
J\prec I^{\prime }}\left( E_{J}^{\sigma }\left\vert \bigtriangleup _{\pi _{
\mathcal{D}}^{\left( s\right) }J;\kappa }^{\sigma }f\right\vert \right) ^{2}
\mathrm{P}_{\kappa }^{\lambda }\left( J,\mathbf{1}_{F\setminus \pi _{
\mathcal{D}}^{\left( s-1\right) }J}\sigma \right) ^{2}\mathbf{1}_{J}\left(
x\right) ,
\end{eqnarray*}
and now using the Poisson inequality with
\begin{equation*}
\eta \equiv \kappa -\varepsilon \left( n+\kappa -\lambda \right) >0,
\end{equation*}
we obtain
\begin{eqnarray*}
S\left( x\right) ^{2} &\lesssim &2^{-2\eta s}\sum_{J\in \mathcal{D}}\left(
E_{J}^{\sigma }\left\vert \bigtriangleup _{\pi _{\mathcal{D}}^{\left(
s\right) }J;\kappa }^{\sigma }f\right\vert \right) ^{2}\mathrm{P}_{\kappa
}^{\lambda }\left( \pi _{\mathcal{D}}^{\left( s-1\right) }J,\mathbf{1}_{F
\left[ J\right] \setminus \pi _{\mathcal{D}}^{\left( s-1\right) }J}\sigma
\right) ^{2}\mathbf{1}_{J}\left( x\right) \\
&=&2^{-2\eta s}\sum_{I\in \mathcal{D}}\sum_{I^{\prime }\in \mathfrak{C}_{
\mathcal{D}}\left( I\right) }\sum_{J\in \mathcal{D}:\ J\subset I^{\prime
}}\left( E_{J}^{\sigma }\left\vert \bigtriangleup _{I;\kappa }^{\sigma
}f\right\vert \right) ^{2}\mathrm{P}_{\kappa }^{\lambda }\left( I^{\prime },
\mathbf{1}_{F\left[ J\right] \setminus I^{\prime }}\sigma \right) ^{2}
\mathbf{1}_{J}\left( x\right) \\
&=&2^{-2\eta s}\sum_{I\in \mathcal{D}}\sum_{I^{\prime }\in \mathfrak{C}_{
\mathcal{D}}\left( I\right) }\left( E_{J}^{\sigma }\left\vert \bigtriangleup
_{I;\kappa }^{\sigma }f\right\vert \right) ^{2}\mathrm{P}_{\kappa }^{\lambda
}\left( I^{\prime },\mathbf{1}_{F\left[ I\right] \setminus I^{\prime
}}\sigma \right) ^{2}\mathbf{1}_{I^{\prime }}\left( x\right) .
\end{eqnarray*}
Since $\sigma $ is doubling, and for $\kappa $ chosen sufficiently large, we
have $\mathrm{P}_{\kappa }^{\lambda }\left( I^{\prime },\mathbf{1}
_{I^{\prime }}\sigma \right) \approx \frac{\left\vert I^{\prime }\right\vert
_{\sigma }}{\left\vert I^{\prime }\right\vert ^{1-\frac{\lambda }{n}}}$ by (
\ref{kappa large}), and since $E_{J}^{\sigma }\left\vert \bigtriangleup
_{I;\kappa }^{\sigma }f\right\vert \leq \left\Vert \mathbf{1}_{J}\left\vert
\bigtriangleup _{I;\kappa }^{\sigma }f\right\vert \right\Vert _{\infty }\leq
\left\Vert \mathbf{1}_{I^{\prime }}\left\vert \bigtriangleup _{I;\kappa
}^{\sigma }f\right\vert \right\Vert _{\infty }\lesssim E_{I^{\prime
}}^{\sigma }\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\vert $
by (\ref{analogue}) and (\ref{analogue'}),
\begin{equation*}
S\left( x\right) ^{2}\lesssim 2^{-2\eta s}\sum_{I\in \mathcal{D}
}\sum_{I^{\prime }\in \mathfrak{C}_{\mathcal{D}}\left( I\right) }\left(
E_{I^{\prime }}^{\sigma }\left\vert \bigtriangleup _{I;\kappa }^{\sigma
}f\right\vert \right) ^{2}\left( \frac{\left\vert I^{\prime }\right\vert
_{\sigma }}{\left\vert I^{\prime }\right\vert ^{1-\frac{\lambda }{n}}}
\right) ^{2}\mathbf{1}_{I^{\prime }}\left( x\right) .
\end{equation*}
Now using the quadratic Muckenhoupt constant $A_{p}^{\lambda ,\ell ^{2},
\limfunc{offset}}\left( \sigma ,\omega \right) $, together with the
Fefferman-Stein vector-valued inequality for $M_{\sigma }$ \cite{GrLiYa}, we
get
\begin{eqnarray*}
&&\left\Vert S\left( x\right) \right\Vert _{L^{p}\left( \omega \right)
}\lesssim 2^{-\eta s}\left\Vert \left( \sum_{I\in \mathcal{D}
}\sum_{I^{\prime }\in \mathfrak{C}_{\mathcal{D}}\left( I\right) }\left(
E_{I^{\prime }}^{\sigma }\left\vert \bigtriangleup _{I;\kappa }^{\sigma
}f\right\vert \right) ^{2}\left( \frac{\left\vert I^{\prime }\right\vert
_{\sigma }}{\left\vert I^{\prime }\right\vert ^{1-\frac{\lambda }{n}}}
\right) ^{2}\mathbf{1}_{I^{\prime }}\left( x\right) \right) ^{\frac{1}{2}
}\right\Vert _{L^{p}\left( \omega \right) } \\
&\lesssim &2^{-\eta s}A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left(
\sigma ,\omega \right) \left\Vert \left( \sum_{I\in \mathcal{D}
}\sum_{I^{\prime }\in \mathfrak{C}_{\mathcal{D}}\left( I\right) }\left(
E_{I^{\prime }}^{\sigma }\left\vert \bigtriangleup _{I;\kappa }^{\sigma
}f\right\vert \right) ^{2}\mathbf{1}_{I^{\prime }}\left( y\right) \right) ^{
\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) } \\
&\lesssim &2^{-\eta s}A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left(
\sigma ,\omega \right) \left\Vert \left( \sum_{I\in \mathcal{D}}\left\vert
\bigtriangleup _{I;\kappa }^{\sigma }f\left( y\right) \right\vert
^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) }\lesssim
2^{-\eta s}A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega
\right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\ .
\end{eqnarray*}
Finally, combining the above estimates we obtain
\begin{equation*}
\left\vert \mathsf{B}_{\limfunc{stop};\kappa ,s}\left( f,g\right)
\right\vert \lesssim \left\Vert S\left( x\right) \right\Vert _{L^{p}\left(
\omega \right) }\left\Vert \left( \sum_{J\in \mathcal{D}}\left\vert
\bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \right\vert
^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p^{\prime }}\left( \omega \right)
}\lesssim 2^{-\eta s}A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left(
\sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right)
}\left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }\ ,
\end{equation*}
and provided $\varepsilon <\frac{\kappa }{n+\kappa -\lambda }$, i.e. $\eta
>0 $, summing in $s$ gives
\begin{equation*}
\left\vert \mathsf{B}_{\limfunc{stop};\kappa }\left( f,g\right) \right\vert
\leq \sum_{s=0}^{\infty }\left\vert \mathsf{B}_{\limfunc{stop};\kappa
,s}\left( f,g\right) \right\vert \lesssim C_{n,\kappa ,\lambda
}A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right)
\left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert
g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }\ ,
\end{equation*}
which is the required bound for the stopping form.

\subsection{Far below form}

Recall that we decomposed the far below form $\mathsf{T}_{\limfunc{far}
\limfunc{below}}\left( f,g\right) $ as $\mathsf{T}_{\limfunc{far}\limfunc{
below}}^{1}\left( f,g\right) +\mathsf{T}_{\limfunc{far}\limfunc{below}
}^{2}\left( f,g\right) $, where we claimed that the second form $\mathsf{T}_{
\limfunc{far}\limfunc{below}}^{2}\left( f,g\right) $ was controlled by the
disjoint and comparable forms $\mathsf{B}_{\cap }\left( f,g\right) $ and $
\mathsf{B}_{\diagup }\left( f,g\right) $ upon noting the porisms following (
\ref{routine'}) and (\ref{routine''}).
Indeed, if $\bigtriangleup _{J;\kappa }^{\omega }g$ is not identically zero,
then $J$ must be good, and in that case the condition ``$J\subset I$ but $
J\not\Subset _{\rho ,\varepsilon }I$'' implies that the pair of cubes $I,J$
is included in \textbf{either} the sum defining the disjoint down form $
\mathsf{B}_{\cap }^{\limfunc{down}}\left( f,g\right) $ \textbf{or} in the
sum defining the comparable below form $\mathsf{B}_{\diagup }^{\func{below}
}\left( f,g\right) $. The first far below form $\mathsf{T}_{\limfunc{far}
\limfunc{below}}^{1}\left( f,g\right) $ is handled by the following
Intertwining Proposition.

\begin{proposition}[The Intertwining Proposition]
\label{Int Prop}Suppose $\sigma ,\omega $ are positive locally finite Borel
measures on $\mathbb{R}^{n}$, that $\sigma $ is doubling, and that $\mathcal{
F}$ satisfies a $\sigma $-Carleson condition. Then for a smooth $\lambda $
-fractional singular integral $T^{\lambda }$, and for $\limfunc{good}$
functions $f\in L^{2}\left( \sigma \right) \cap L^{p}\left( \sigma \right) $
and $g\in L^{2}\left( \omega \right) \cap L^{p^{\prime }}\left( \omega
\right) $, and with $\kappa _{1},\kappa _{2}\geq 1$ sufficiently large, we
have the following bound for $\mathsf{T}_{\limfunc{far}\limfunc{below}
}^{1}\left( f,g\right) =\sum_{F\in \mathcal{F}}\ \sum_{I:\ I\supsetneqq F}\
\left\langle T_{\sigma }^{\lambda }\bigtriangleup _{I;\kappa _{1}}^{\sigma
}f,\mathsf{P}_{\mathcal{C}_{F}^{\tau -\limfunc{shift}}}^{\omega
}g\right\rangle _{\omega }$:
\begin{equation}
\left\vert \mathsf{T}_{\limfunc{far}\limfunc{below}}^{1}\left( f,g\right)
\right\vert \lesssim A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\ \left\Vert
f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert
_{L^{p^{\prime }}\left( \omega \right) }.
\label{far below est} \end{equation} \end{proposition} \begin{proof} We write \begin{eqnarray*} f_{F} &\equiv &\sum_{I:\ I\supsetneqq F}\bigtriangleup _{I;\kappa }^{\sigma }f=\sum_{m=1}^{\infty }\sum_{I:\ \pi _{\mathcal{F}}^{m}F\subsetneqq I\subset \pi _{\mathcal{F}}^{m+1}F}\bigtriangleup _{I;\kappa }^{\sigma }f \\ &=&\sum_{m=1}^{\infty }\sum_{I:\ \pi _{\mathcal{F}}^{m}F\subsetneqq I\subset \pi _{\mathcal{F}}^{m+1}F}\mathbf{1}_{\theta \left( I\right) }\left( \mathbb{ E}_{I;\kappa }^{\sigma }f-\mathbb{E}_{\pi _{\mathcal{F}}^{m+1}F;\kappa }^{\sigma }f\right) \\ &=&\sum_{m=1}^{\infty }\sum_{I:\ \pi _{\mathcal{F}}^{m}F\subsetneqq I\subset \pi _{\mathcal{F}}^{m+1}F}\mathbf{1}_{\theta \left( I\right) }\left( \mathbb{ E}_{I;\kappa }^{\sigma }f\right) -\sum_{m=1}^{\infty }\mathbf{1}_{\pi _{ \mathcal{F}}^{m+1}F\setminus \pi _{\mathcal{F}}^{m}F}\left( \mathbb{E}_{\pi _{\mathcal{F}}^{m+1}F;\kappa }^{\sigma }f\right) \\ &\equiv &\beta _{F}-\gamma _{F}\ , \end{eqnarray*} and then \begin{equation*} \sum_{F\in \mathcal{F}}\ \left\langle T_{\sigma }^{\lambda }f_{F},g_{F}\right\rangle _{\omega }=\sum_{F\in \mathcal{F}}\ \left\langle T_{\sigma }^{\lambda }\beta _{F},g_{F}\right\rangle _{\omega }-\sum_{F\in \mathcal{F}}\ \left\langle T_{\sigma }^{\lambda }\gamma _{F},g_{F}\right\rangle _{\omega }\ . 
\end{equation*} Now we use the Poisson inequality (\ref{e.Jsimeq}), namely \begin{equation*} \mathrm{P}_{\kappa }^{\lambda }\left( J,\sigma \mathbf{1}_{K\setminus I}\right) \lesssim \left( \frac{\ell \left( J\right) }{\ell \left( I\right) } \right) ^{\kappa -\varepsilon \left( n+\kappa -\lambda \right) }\mathrm{P} _{\kappa }^{\lambda }\left( I,\sigma \mathbf{1}_{K\setminus I}\right) , \end{equation*} to obtain that \begin{eqnarray*} &&\left\vert \sum_{F\in \mathcal{F}}\left\langle T_{\sigma }^{\lambda }\gamma _{F},g_{F}\right\rangle _{\omega }\right\vert =\left\vert \sum_{F\in \mathcal{F}}\int_{\mathbb{R}^{n}}T_{\sigma }^{\lambda }\left( \sum_{m=1}^{\infty }\mathbf{1}_{\pi _{\mathcal{F}}^{m+1}F\setminus \pi _{ \mathcal{F}}^{m}F}\left( \mathbb{E}_{\pi _{\mathcal{F}}^{m+1}F;\kappa }^{\sigma }f\right) \right) \left( x\right) \ \left( \sum_{J\in \mathcal{C} _{F}^{\omega ,\tau \text{-}\func{shift}}}\bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \right) \ d\omega \left( x\right) \right\vert \\ &=&\left\vert \int_{\mathbb{R}^{n}}\sum_{J\in \mathcal{D}}\left\{ \sum_{F\in \mathcal{F}}\bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\left( \sum_{m=1}^{\infty }\mathbf{1}_{\pi _{\mathcal{F}}^{m+1}F\setminus \pi _{\mathcal{F}}^{m}F}\left( \mathbb{E}_{\pi _{\mathcal{F}}^{m+1}F;\kappa }^{\sigma }f\right) \right) \left( x\right) \ \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \right\} d\omega \left( x\right) \right\vert \\ &\leq &\int_{\mathbb{R}^{n}}\left( \sum_{J\in \mathcal{D}}\left\vert \sum_{F\in \mathcal{F}}\bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\left( \sum_{m=1}^{\infty }\mathbf{1}_{\pi _{\mathcal{F} }^{m+1}F\setminus \pi _{\mathcal{F}}^{m}F}\left( \mathbb{E}_{\pi _{\mathcal{F }}^{m+1}F;\kappa }^{\sigma }f\right) \right) \left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\left( \sum_{J\in \mathcal{D}}\left\vert \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}d\omega \left( x\right) 
\\ &\leq &\left\Vert \left( \sum_{J\in \mathcal{D}}\left\vert \sum_{F\in \mathcal{F}}\bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\left( \sum_{m=1}^{\infty }\mathbf{1}_{\pi _{\mathcal{F}}^{m+1}F\setminus \pi _{\mathcal{F}}^{m}F}\left( \mathbb{E}_{\pi _{\mathcal{F}}^{m+1}F;\kappa }^{\sigma }f\right) \right) \left( x\right) \right\vert ^{2}\right) ^{\frac{1 }{2}}\right\Vert _{L^{p}\left( \omega \right) }\left\Vert \left( \sum_{J\in \mathcal{D}}\left\vert \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p^{\prime }}\left( \omega \right) }, \end{eqnarray*} where the second factor is equivalent to $\left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }$, and then using the Pivotal Lemma \ref{ener}, the first factor $S$ is dominated by \begin{eqnarray*} S &\lesssim &\left\Vert \left( \sum_{J\in \mathcal{D}}\left\vert \sum_{F\in \mathcal{F}:\ J\in \mathcal{C}_{F}^{\omega ,\tau \text{-}\func{shift} }}\sum_{m=1}^{\infty }\mathrm{P}_{\kappa }^{\lambda }\left( J,\mathbf{1} _{\pi _{\mathcal{F}}^{m+1}F\setminus \pi _{\mathcal{F}}^{m}F}\left\vert \mathbb{E}_{\pi _{\mathcal{F}}^{m+1}F;\kappa }^{\sigma }f\right\vert \sigma \right) \right\vert ^{2}\mathbf{1}_{J}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) } \\ &=&\left\Vert \left( \sum_{J\in \mathcal{D}}\left\vert \sum_{m=1}^{\infty }\sum_{F\in \mathcal{F}:\ J\in \mathcal{C}_{F}^{\omega ,\tau \text{-}\func{ shift}}}\left\Vert \mathbb{E}_{\pi _{\mathcal{F}}^{m+1}F;\kappa }^{\sigma }f\right\Vert _{\infty }\left( \frac{\ell \left( J\right) }{\ell \left( \pi _{\mathcal{F}}^{m}F\right) }\right) ^{\kappa -\varepsilon \left( n+\kappa -\lambda \right) }\mathrm{P}_{\kappa }^{\lambda }\left( \pi _{\mathcal{F} }^{m}F,\mathbf{1}_{\pi _{\mathcal{F}}^{m+1}F\setminus \pi _{\mathcal{F} }^{m}F}\sigma \right) \right\vert ^{2}\mathbf{1}_{J}\right) ^{\frac{1}{2} }\right\Vert _{L^{p}\left( \omega \right) } \\ &\leq &\sum_{m=1}^{\infty }\left\Vert 
\left( \sum_{J\in \mathcal{D}
}\left\vert \sum_{F\in \mathcal{F}:\ J\in \mathcal{C}_{F}^{\omega ,\tau
\text{-}\func{shift}}}\left\Vert \mathbb{E}_{\pi _{\mathcal{F}
}^{m+1}F;\kappa }^{\sigma }f\right\Vert _{\infty }\left( \frac{\ell \left(
J\right) }{\ell \left( \pi _{\mathcal{F}}^{m}F\right) }\right) ^{\kappa
-\varepsilon \left( n+\kappa -\lambda \right) }\frac{\left\vert \pi _{
\mathcal{F}}^{m}F\right\vert _{\sigma }}{\left\vert \pi _{\mathcal{F}
}^{m}F\right\vert ^{1-\frac{\lambda }{n}}}\right\vert ^{2}\mathbf{1}
_{J}\left( x\right) \right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega
\right) },
\end{eqnarray*}
where in the last line we have used (\ref{kappa large}). Now we note that
for each $J\in \mathcal{D}$ the number of cubes $F\in \mathcal{F}$ such that
$J\in \mathcal{C}_{F}^{\tau -\func{shift}}$ is at most $\tau $. So without
loss of generality, we may simply suppose that there is just one such cube
denoted $F\left[ J\right] $. Thus for each $m\in \mathbb{N}$, the above norm
is at most
\begin{equation*}
\left( 2^{-m}\right) ^{\kappa -\varepsilon \left( n+\kappa -\lambda \right)
}\left\Vert \left( \sum_{J\in \mathcal{D}}\left\vert \left\Vert \mathbb{E}
_{\pi _{\mathcal{F}}^{m+1}F\left[ J\right] ;\kappa }^{\sigma }f\right\Vert
_{\infty }\left( \frac{\ell \left( J\right) }{\ell \left( F\left[ J\right]
\right) }\right) ^{\kappa -\varepsilon \left( n+\kappa -\lambda \right) }
\frac{\left\vert \pi _{\mathcal{F}}^{m}F\left[ J\right] \right\vert _{\sigma
}}{\left\vert \pi _{\mathcal{F}}^{m}F\left[ J\right] \right\vert ^{1-\frac{
\lambda }{n}}}\right\vert ^{2}\mathbf{1}_{J}\left( x\right) \right) ^{\frac{1
}{2}}\right\Vert _{L^{p}\left( \omega \right) },
\end{equation*}
and the sum inside the parentheses equals
\begin{eqnarray*}
&&\sum_{F\in \mathcal{F}}\sum_{J\in \mathcal{C}_{F}^{\omega ,\tau \text{-}
\func{shift}}:\ x\in J\subset F}\left( \frac{\ell \left( J\right) }{\ell
\left( F\left[ J\right] \right) }\right) ^{\kappa -\varepsilon \left(
n+\kappa -\lambda
\right) }\left\vert \left\Vert \mathbb{E}_{\pi _{\mathcal{F }}^{m+1}F\left[ J\right] ;\kappa }^{\sigma }f\right\Vert _{\infty }\frac{ \left\vert \pi _{\mathcal{F}}^{m}F\left[ J\right] \right\vert _{\sigma }}{ \left\vert \pi _{\mathcal{F}}^{m}F\left[ J\right] \right\vert ^{1-\frac{ \lambda }{n}}}\right\vert ^{2}\mathbf{1}_{J}\left( x\right) \\ &\lesssim &\sum_{F\in \mathcal{F}}\sum_{J\in \mathcal{C}_{F}^{\omega ,\tau \text{-}\func{shift}}:\ x\in J\subset F}\left( \frac{\ell \left( J\right) }{ \ell \left( F\right) }\right) ^{\kappa -\varepsilon \left( n+\kappa -\lambda \right) }\left\vert \left\Vert \mathbb{E}_{\pi _{\mathcal{F}}^{m+1}F;\kappa }^{\sigma }f\right\Vert _{\infty }\frac{\left\vert \pi _{\mathcal{F} }^{m}F\right\vert _{\sigma }}{\left\vert \pi _{\mathcal{F}}^{m}F\right\vert ^{1-\frac{\lambda }{n}}}\right\vert ^{2}\mathbf{1}_{J}\left( x\right) \\ &\lesssim &\sum_{F\in \mathcal{F}}\left\vert \left\Vert \mathbb{E}_{\pi _{ \mathcal{F}}^{m+1}F;\kappa }^{\sigma }f\right\Vert _{\infty }\frac{ \left\vert \pi _{\mathcal{F}}^{m}F\right\vert _{\sigma }}{\left\vert \pi _{ \mathcal{F}}^{m}F\right\vert ^{1-\frac{\lambda }{n}}}\right\vert ^{2}\mathbf{ 1}_{J}\left( x\right) . 
\end{eqnarray*} Altogether then, using the quadratic offset $A_{p}^{\lambda ,\ell ^{2}, \limfunc{offset}}$ condition and doubling, we have \begin{eqnarray*} S &\lesssim &\sum_{m=1}^{\infty }\left( 2^{-m}\right) ^{\kappa -\varepsilon \left( n+\kappa -\lambda \right) }\left\Vert \left( \sum_{F\in \mathcal{F} }\left\vert \left\Vert \mathbb{E}_{\pi _{\mathcal{F}}^{m+1}F;\kappa }^{\sigma }f\right\Vert _{\infty }\frac{\left\vert \pi _{\mathcal{F} }^{m}F\right\vert _{\sigma }}{\left\vert \pi _{\mathcal{F}}^{m}F\right\vert ^{1-\frac{\lambda }{n}}}\right\vert ^{2}\mathbf{1}_{F}\left( x\right) \right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) } \\ &\lesssim &A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \sum_{m=1}^{\infty }\left( 2^{-m}\right) ^{\kappa -\varepsilon \left( n+\kappa -\lambda \right) }\left\Vert \left( \sum_{F\in \mathcal{F} }\left\vert \bigtriangleup _{\pi _{\mathcal{F}}^{m+1}F;\kappa }^{\sigma }f\left( x\right) \right\vert ^{2}\mathbf{1}_{F}\left( x\right) \right) ^{ \frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) }, \end{eqnarray*} and we can continue with \begin{eqnarray*} &=&A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \sum_{m=1}^{\infty }\left( 2^{-m}\right) ^{\kappa -\varepsilon \left( n+\kappa -\lambda \right) }\left\Vert \left( \sum_{G\in \mathcal{F} }\sum_{F\in \mathcal{F}:\ \pi _{\mathcal{F}}^{m+1}F=G}\left\vert \bigtriangleup _{G;\kappa }^{\sigma }f\left( x\right) \right\vert ^{2} \mathbf{1}_{F}\left( x\right) \right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) } \\ &\leq &A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \sum_{m=1}^{\infty }\left( 2^{-m}\right) ^{\kappa -\varepsilon \left( n+\kappa -\lambda \right) }\left\Vert \left( \sum_{G\in \mathcal{F} }\left\vert \bigtriangleup _{G;\kappa }^{\sigma }f\left( x\right) \right\vert ^{2}\mathbf{1}_{G}\left( x\right) \right) ^{\frac{1}{2} }\right\Vert _{L^{p}\left( \sigma \right) } \\ &\leq 
&A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega
\right) \sum_{m=1}^{\infty }\left( 2^{-m}\right) ^{\kappa -\varepsilon
\left( n+\kappa -\lambda \right) }\left\Vert f\right\Vert _{L^{p}\left(
\sigma \right) }=C_{\eta }A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left(
\sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right)
}.
\end{eqnarray*}
Thus provided the Alpert parameter $\kappa $ satisfies
\begin{equation}
\kappa >\frac{\varepsilon \left( n-\lambda \right) }{1-\varepsilon },
\label{kappa needed}
\end{equation}
equivalently $\kappa -\varepsilon \left( n+\kappa -\lambda \right) >0$, we
have proved the estimate
\begin{equation*}
\left\vert \sum_{F\in \mathcal{F}}\left\langle T_{\sigma }^{\lambda }\gamma
_{F},g_{F}\right\rangle _{\omega }\right\vert \lesssim A_{p}^{\lambda ,\ell
^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert f\right\Vert
_{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert _{L^{p^{\prime
}}\left( \omega \right) }.
\end{equation*}
It remains to bound $\sum_{F\in \mathcal{F}}\left\langle T_{\sigma
}^{\lambda }\beta _{F},g_{F}\right\rangle _{\omega }$ where
\begin{equation*}
\beta _{F}=\sum_{m=1}^{\infty }\sum_{I:\ \pi _{\mathcal{F}}^{m}F\subsetneqq
I\subset \pi _{\mathcal{F}}^{m+1}F}\mathbf{1}_{\theta \left( I\right)
}\left( \mathbb{E}_{I;\kappa }^{\sigma }f\right) \text{ and }g_{F}\left(
x\right) =\sum_{J\in \mathcal{C}_{F}^{\omega ,\tau \text{-}\func{shift}
}}\bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) .
\end{equation*} The difference between the previous estimate and this one is that the averages $\mathbf{1}_{\pi _{\mathcal{F}}^{m+1}F\setminus \pi _{\mathcal{F} }^{m}F}\left\vert \mathbb{E}_{\pi _{\mathcal{F}}^{m+1}F;\kappa }^{\sigma }f\right\vert $ inside the Poisson kernel have been replaced with the sum of averages $\sum_{I:\ \pi _{\mathcal{F}}^{m}F\subsetneqq I\subset \pi _{ \mathcal{F}}^{m+1}F}\mathbf{1}_{\theta \left( I\right) }\left\vert \mathbb{E} _{I;\kappa }^{\sigma }f\right\vert $, but where the sum is taken over pairwise disjoint sets $\left\{ \theta \left( I\right) \right\} _{\pi _{ \mathcal{F}}^{m}F\subsetneqq I\subset \pi _{\mathcal{F}}^{m+1}F}$. Just as in the previous estimate we start with \begin{eqnarray*} &&\left\vert \sum_{F\in \mathcal{F}}\left\langle T_{\sigma }^{\lambda }\beta _{F},g_{F}\right\rangle _{\omega }\right\vert =\left\vert \sum_{F\in \mathcal{F}}\int_{\mathbb{R}^{n}}T_{\sigma }^{\lambda }\left( \sum_{m=1}^{\infty }\sum_{I:\ \pi _{\mathcal{F}}^{m}F\subsetneqq I\subset \pi _{\mathcal{F}}^{m+1}F}\mathbf{1}_{\theta \left( I\right) }\left( \mathbb{ E}_{I;\kappa }^{\sigma }f\right) \right) \left( x\right) \ \left( \sum_{J\in \mathcal{C}_{F}^{\omega ,\tau \text{-}\func{shift}}}\bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \right) \ d\omega \left( x\right) \right\vert \\ &=&\left\vert \int_{\mathbb{R}^{n}}\sum_{J\in \mathcal{D}}\left\{ \sum_{F\in \mathcal{F}}\bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\left( \sum_{m=1}^{\infty }\sum_{I:\ \pi _{\mathcal{F}}^{m}F\subsetneqq I\subset \pi _{\mathcal{F}}^{m+1}F}\mathbf{1}_{\theta \left( I\right) }\left( \mathbb{E}_{I;\kappa }^{\sigma }f\right) \right) \left( x\right) \ \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \right\} d\omega \left( x\right) \right\vert \\ &\leq &\int_{\mathbb{R}^{n}}\left( \sum_{J\in \mathcal{D}}\left\vert \sum_{F\in \mathcal{F}}\bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\left( \sum_{m=1}^{\infty }\sum_{I:\ \pi _{\mathcal{F} 
}^{m}F\subsetneqq I\subset \pi _{\mathcal{F}}^{m+1}F}\mathbf{1}_{\theta \left( I\right) }\left( \mathbb{E}_{I;\kappa }^{\sigma }f\right) \right) \left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\left( \sum_{J\in \mathcal{D}}\left\vert \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}d\omega \left( x\right) \\ &\leq &\left\Vert \left( \sum_{J\in \mathcal{D}}\left\vert \sum_{F\in \mathcal{F}}\bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\left( \sum_{m=1}^{\infty }\sum_{I:\ \pi _{\mathcal{F}}^{m}F\subsetneqq I\subset \pi _{\mathcal{F}}^{m+1}F}\mathbf{1}_{\theta \left( I\right) }\left( \mathbb{E}_{I;\kappa }^{\sigma }f\right) \right) \left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) }\left\Vert \left( \sum_{J\in \mathcal{D}}\left\vert \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2} }\right\Vert _{L^{p^{\prime }}\left( \omega \right) }. \end{eqnarray*} The second factor is equivalent to $\left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }$, and the first factor $S$ is dominated by \begin{eqnarray*} S &\lesssim &\left\Vert \left( \sum_{J\in \mathcal{D}}\left\vert \sum_{F\in \mathcal{F}:\ J\in \mathcal{C}_{F}^{\omega ,\tau \text{-}\func{shift} }}\sum_{m=1}^{\infty }\sum_{I:\ \pi _{\mathcal{F}}^{m}F\subsetneqq I\subset \pi _{\mathcal{F}}^{m+1}F}\mathrm{P}_{\kappa }^{\lambda }\left( J,\mathbf{1} _{\theta \left( I\right) }\left( \mathbb{E}_{I;\kappa }^{\sigma }f\right) \sigma \right) \right\vert ^{2}\mathbf{1}_{J}\right) ^{\frac{1}{2} }\right\Vert _{L^{p}\left( \omega \right) } \\ &\lesssim &\sum_{m=1}^{\infty }\left\Vert \left( \sum_{J\in \mathcal{D} }\left\vert \sum_{F\in \mathcal{F}:\ J\in \mathcal{C}_{F}^{\omega ,\tau \text{-}\func{shift}}}\sum_{I:\ \pi _{\mathcal{F}}^{m}F\subsetneqq I\subset \pi _{\mathcal{F}}^{m+1}F}\left\Vert \mathbb{E}_{I;\kappa }^{\sigma }f\right\Vert _{\infty }\mathrm{P}_{\kappa 
}^{\lambda }\left( J,\mathbf{1} _{\theta \left( I\right) }\sigma \right) \right\vert ^{2}\mathbf{1} _{J}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) }. \end{eqnarray*} Then we use \begin{eqnarray*} \sum_{I:\ \pi _{\mathcal{F}}^{m}F\subsetneqq I\subset \pi _{\mathcal{F} }^{m+1}F}\left\Vert \mathbb{E}_{I;\kappa }^{\sigma }f\right\Vert _{\infty } \mathrm{P}_{\kappa }^{\lambda }\left( J,\mathbf{1}_{\theta \left( I\right) }\sigma \right) &\leq &\left( \sup_{I:\ \pi _{\mathcal{F}}^{m}F\subsetneqq I\subset \pi _{\mathcal{F}}^{m+1}F}\left\Vert \mathbb{E}_{I;\kappa }^{\sigma }f\right\Vert _{\infty }\right) \mathrm{P}_{\kappa }^{\lambda }\left( J,\sum_{I:\ \pi _{\mathcal{F}}^{m}F\subsetneqq I\subset \pi _{\mathcal{F} }^{m+1}F}\mathbf{1}_{\theta \left( I\right) }\sigma \right) \\ &=&\left( \sup_{I:\ \pi _{\mathcal{F}}^{m}F\subsetneqq I\subset \pi _{ \mathcal{F}}^{m+1}F}\left\Vert \mathbb{E}_{I;\kappa }^{\sigma }f\right\Vert _{\infty }\right) \mathrm{P}_{\kappa }^{\lambda }\left( J,\mathbf{1}_{\pi _{ \mathcal{F}}^{m+1}F\setminus \pi _{\mathcal{F}}^{m}F}\sigma \right) , \end{eqnarray*} and obtain that \begin{equation*} S\lesssim \sum_{m=1}^{\infty }\left\Vert \left( \sum_{J\in \mathcal{D} }\left\vert \sum_{F\in \mathcal{F}:\ J\in \mathcal{C}_{F}^{\omega ,\tau \text{-}\func{shift}}}\left( \sup_{I:\ \pi _{\mathcal{F}}^{m}F\subsetneqq I\subset \pi _{\mathcal{F}}^{m+1}F}\left\Vert \mathbb{E}_{I;\kappa }^{\sigma }f\right\Vert _{\infty }\right) \mathrm{P}_{\kappa }^{\lambda }\left( J, \mathbf{1}_{\pi _{\mathcal{F}}^{m+1}F\setminus \pi _{\mathcal{F} }^{m}F}\sigma \right) \right\vert ^{2}\mathbf{1}_{J}\right) ^{\frac{1}{2} }\right\Vert _{L^{p}\left( \omega \right) }. 
\end{equation*} Now we define $G_{m}\left[ F\right] \in \left( \pi _{\mathcal{F}}^{m}F,\pi _{ \mathcal{F}}^{m+1}F\right] $ so that $\sup_{I:\ \pi _{\mathcal{F} }^{m}F\subsetneqq I\subset \pi _{\mathcal{F}}^{m+1}F}\left\Vert \mathbb{E} _{I;\kappa }^{\sigma }f\right\Vert _{\infty }=\left\Vert \mathbb{E}_{G_{m} \left[ F\right] ;\kappa }^{\sigma }f\right\Vert _{\infty }$, and dominate $S$ by \begin{eqnarray*} &&\sum_{m=1}^{\infty }\left\Vert \left( \sum_{J\in \mathcal{D}}\left\vert \sum_{F\in \mathcal{F}:\ J\in \mathcal{C}_{F}^{\omega ,\tau \text{-}\func{ shift}}}\left\Vert \mathbb{E}_{G_{m}\left[ F\right] ;\kappa }^{\sigma }f\right\Vert _{\infty }\mathrm{P}_{\kappa }^{\lambda }\left( J,\mathbf{1} _{\pi _{\mathcal{F}}^{m+1}F\setminus \pi _{\mathcal{F}}^{m}F}\sigma \right) \right\vert ^{2}\mathbf{1}_{J}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) } \\ &\lesssim &\sum_{m=1}^{\infty }\left\Vert \left( \sum_{J\in \mathcal{D} }\left\vert \sum_{F\in \mathcal{F}:\ J\in \mathcal{C}_{F}^{\omega ,\tau \text{-}\func{shift}}}\left\Vert \mathbb{E}_{G_{m}\left[ F\right] ;\kappa }^{\sigma }f\right\Vert _{\infty }\left( \frac{\ell \left( J\right) }{\ell \left( G_{m}\left[ F\right] \right) }\right) ^{\eta }\mathrm{P}_{\kappa }^{\lambda }\left( G_{m}\left[ F\right] ,\mathbf{1}_{\pi _{\mathcal{F} }^{m+1}F\setminus \pi _{\mathcal{F}}^{m}F}\sigma \right) \right\vert ^{2} \mathbf{1}_{J}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) } \\ &\lesssim &\sum_{m=1}^{\infty }2^{-m\eta }\left\Vert \left( \sum_{J\in \mathcal{D}}\left\vert \sum_{F\in \mathcal{F}:\ J\in \mathcal{C}_{F}^{\omega ,\tau \text{-}\func{shift}}}\left\Vert \mathbb{E}_{G_{m}\left[ F\right] ;\kappa }^{\sigma }f\right\Vert _{\infty }\left( \frac{\ell \left( J\right) }{\ell \left( F\right) }\right) ^{\eta }\mathrm{P}_{\kappa }^{\lambda }\left( G_{m}\left[ F\right] ,\mathbf{1}_{\pi _{\mathcal{F}}^{m+1}F\setminus \pi _{\mathcal{F}}^{m}F}\sigma \right) \right\vert ^{2}\mathbf{1}_{J}\right) 
^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) }, \end{eqnarray*} where $\eta =\kappa -\varepsilon \left( n+\kappa -\lambda \right) $ is the constant appearing in (\ref{e.Jsimeq}). Just as above we note that for each $J\in \mathcal{D}$ the number of cubes $ F\in \mathcal{F}$ such that $J\in \mathcal{C}_{F}^{\omega ,\tau \text{-} \func{shift}}$ is at most $\tau $. So without loss of generality, we may simply suppose that there is just one such cube denoted $F\left[ J\right] $. Thus for each $m\in \mathbb{N}$, the above norm is at most \begin{equation*} \left\Vert \left( \sum_{J\in \mathcal{D}}\left\vert \left\Vert \mathbb{E} _{G_{m}\left[ F\left[ J\right] \right] ;\kappa }^{\sigma }f\right\Vert _{\infty }\left( \frac{\ell \left( J\right) }{\ell \left( F\left[ J\right] \right) }\right) ^{\eta }\frac{\left\vert G_{m}\left[ F\left[ J\right] \right] \right\vert _{\sigma }}{\left\vert G_{m}\left[ F\left[ J\right] \right] \right\vert ^{1-\frac{\lambda }{n}}}\right\vert ^{2}\mathbf{1} _{J}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) }, \end{equation*} and the sum inside the parentheses equals \begin{eqnarray*} &&\sum_{J\in \mathcal{D}}\left\vert \left\Vert \mathbb{E}_{G_{m}\left[ F \left[ J\right] \right] ;\kappa }^{\sigma }f\right\Vert _{\infty }\frac{ \left\vert G_{m}\left[ F\left[ J\right] \right] \right\vert _{\sigma }}{ \left\vert G_{m}\left[ F\left[ J\right] \right] \right\vert ^{1-\frac{ \lambda }{n}}}\right\vert ^{2}\left( \frac{\ell \left( J\right) }{\ell \left( F\left[ J\right] \right) }\right) ^{2\eta }\mathbf{1}_{J}\left( x\right) \\ &\lesssim &\left\vert \left\Vert \mathbb{E}_{G_{m}\left[ F\right] ;\kappa }^{\sigma }f\right\Vert _{\infty }\frac{\left\vert G_{m}\left[ F\right] \right\vert _{\sigma }}{\left\vert G_{m}\left[ F\right] \right\vert ^{1- \frac{\lambda }{n}}}\right\vert ^{2}\mathbf{1}_{G_{m}\left[ F\right] }\left( x\right) . 
\end{eqnarray*}
Altogether then, using the quadratic offset $A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}$ condition and doubling, we have
\begin{eqnarray*}
&&S\lesssim A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \sum_{m=1}^{\infty }2^{-m\eta }\left\Vert \left( \sum_{F\in \mathcal{F}}\left\vert \left\Vert \mathbb{E}_{G_{m}\left[ F\right] ;\kappa }^{\sigma }f\right\Vert _{\infty }\frac{\left\vert G_{m}\left[ F\right] \right\vert _{\sigma }}{\left\vert G_{m}\left[ F\right] \right\vert ^{1-\frac{\lambda }{n}}}\right\vert ^{2}\mathbf{1}_{G_{m}\left[ F\right] }\left( x\right) \right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) } \\
&\lesssim &A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \sum_{m=1}^{\infty }2^{-m\eta }\left\Vert \left( \sum_{F\in \mathcal{F}}\left\vert \bigtriangleup _{G_{m}\left[ F\right] ;\kappa }^{\sigma }f\left( x\right) \right\vert ^{2}\mathbf{1}_{F}\left( x\right) \right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) }
\end{eqnarray*}
and we can continue with
\begin{eqnarray*}
S &\leq &A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \sum_{m=1}^{\infty }2^{-m\eta }\left\Vert \left( \sum_{G\in \mathcal{G}}\sum_{F\in \mathcal{F}:\ G_{m}\left[ F\right] =G}\left\vert \bigtriangleup _{G;\kappa }^{\sigma }f\left( x\right) \right\vert ^{2}\mathbf{1}_{G}\left( x\right) \right) ^{\frac{1}{2}}\right\Vert
_{L^{p}\left( \sigma \right) } \\ &\leq &A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \sum_{m=1}^{\infty }2^{-m\eta }\left\Vert \left( \sum_{G\in \mathcal{ G}}\left\vert \bigtriangleup _{G;\kappa }^{\sigma }f\left( x\right) \right\vert ^{2}\mathbf{1}_{G}\left( x\right) \right) ^{\frac{1}{2} }\right\Vert _{L^{p}\left( \sigma \right) } \\ &\leq &A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \sum_{m=1}^{\infty }2^{-m\eta }\left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }=C_{\varepsilon ,n,\kappa ,\lambda }A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }, \end{eqnarray*} provided (\ref{kappa needed}) holds. Thus we have proved the estimate \begin{equation*} \left\vert \sum_{F\in \mathcal{F}}\left\langle T_{\sigma }^{\lambda }\beta _{F},g_{F}\right\rangle _{\omega }\right\vert \lesssim A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }, \end{equation*} which together with the corresponding estimate for $\sum_{F\in \mathcal{F} }\left\langle T_{\sigma }^{\lambda }\gamma _{F},g_{F}\right\rangle _{\omega } $ proved above, completes the proof of the Intertwining Proposition. \end{proof} Thus we have controlled both the first and second far below forms $\mathsf{T} _{\limfunc{far}\limfunc{below}}^{1}\left( f,g\right) $ and $\mathsf{T}_{ \limfunc{far}\limfunc{below}}^{2}\left( f,g\right) $ by the quadratic offset Muckenhoupt constant $A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}$. 
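We note in passing that the convergence of the geometric series $\sum_{m=1}^{\infty }2^{-m\eta }$ used in both estimates requires only that $\eta >0$. Assuming $0<\varepsilon <1$, and recalling $\eta =\kappa -\varepsilon \left( n+\kappa -\lambda \right) $ from (\ref{e.Jsimeq}), this positivity amounts to the elementary computation
\begin{equation*}
\kappa -\varepsilon \left( n+\kappa -\lambda \right) >0\iff \left( 1-\varepsilon \right) \kappa >\varepsilon \left( n-\lambda \right) \iff \kappa >\frac{\varepsilon \left( n-\lambda \right) }{1-\varepsilon },
\end{equation*}
in which case $\sum_{m=1}^{\infty }2^{-m\eta }=\frac{1}{2^{\eta }-1}<\infty $.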
\subsection{Neighbour form} We begin with $M_{I^{\prime };\kappa }=\mathbf{1}_{I^{\prime }}\bigtriangleup _{I;\kappa }^{\sigma }f$ to obtain \begin{align*} & \mathsf{B}_{\limfunc{neighbour};\kappa }\left( f,g\right) =\sum_{F\in \mathcal{F}}\mathsf{B}_{\limfunc{neighbour};\kappa }^{F}\left( f,g\right) \\ & =\sum_{F\in \mathcal{F}}\sum_{\substack{ I\in \mathcal{C}_{F}\text{ and } J\in \mathcal{C}_{F}^{\tau -\limfunc{shift}} \\ J\Subset _{\rho ,\varepsilon }I}}\sum_{\theta \left( I_{J}\right) \in \mathfrak{C}_{\mathcal{ D}}\left( I\right) \setminus \left\{ I_{J}\right\} }\int_{\mathbb{R} ^{n}}T_{\sigma }^{\lambda }\left( \mathbf{1}_{\theta \left( I_{J}\right) }\bigtriangleup _{I;\kappa }^{\sigma }f\right) \left( x\right) \ \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \ d\omega \left( x\right) \\ & =\int_{\mathbb{R}^{n}}\sum_{J\in \mathcal{D}}\left\{ \sum_{F\in \mathcal{F} }\sum_{\substack{ I\in \mathcal{C}_{F}\text{ and }J\in \mathcal{C}_{F}^{\tau -\limfunc{shift}} \\ J\Subset _{\rho ,\varepsilon }I}}\sum_{\theta \left( I_{J}\right) \in \mathfrak{C}_{\mathcal{D}}\left( I\right) \setminus \left\{ I_{J}\right\} }\right\} T_{\sigma }^{\lambda }\left( \mathbf{1}_{\theta \left( I_{J}\right) }\bigtriangleup _{I;\kappa }^{\sigma }f\right) \left( x\right) \ \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \ d\omega \left( x\right) \\ & =\int_{\mathbb{R}^{n}}\sum_{J\in \mathcal{D}}\sum_{I\succ J}\bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\left( \mathbf{1}_{\theta \left( I_{J}\right) }\bigtriangleup _{I;\kappa }^{\sigma }f\right) \left( x\right) \ \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \ d\omega \left( x\right) , \end{align*} where for $J\in \mathcal{D}$ we write $I\succ J$ if $I$ satisfies \begin{equation*} \text{there is }F\in \mathcal{F}\text{ such that }I\in \mathcal{C}_{F}\text{ , }J\in \mathcal{C}_{F}^{\tau -\limfunc{shift}}\text{ and }J\Subset _{\rho ,\varepsilon }I. 
\end{equation*} Applying the Cauchy-Schwarz and H\"{o}lder inequalities gives \begin{eqnarray*} &&\left\vert \mathsf{B}_{\limfunc{neighbour};\kappa }\left( f,g\right) \right\vert \leq \int_{\mathbb{R}^{n}}\left( \sum_{J\in \mathcal{D} }\sum_{I\succ J}\left\vert \bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\left( \mathbf{1}_{\theta \left( I_{J}\right) }\bigtriangleup _{I;\kappa }^{\sigma }f\right) \left( x\right) \right\vert ^{2}\right) ^{ \frac{1}{2}}\left( \sum_{J\in \mathcal{D}}\sum_{I\succ J}\left\vert \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}d\omega \left( x\right) \\ &\leq &\left\Vert \left( \sum_{J\in \mathcal{D}}\sum_{I\succ J}\left\vert \bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\left( \mathbf{1} _{\theta \left( I_{J}\right) }\bigtriangleup _{I;\kappa }^{\sigma }f\right) \left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) }\left\Vert \left( \sum_{J\in \mathcal{D} }\sum_{I\succ J}\left\vert \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p^{\prime }}\left( \omega \right) }, \end{eqnarray*} where the final factor is dominated by $\left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }$. 
Using the pivotal bound (\ref{piv lemma}) and the estimate $\left\Vert M_{I^{\prime };\kappa }\right\Vert _{L^{\infty }\left( \sigma \right) }\approx \frac{1}{\sqrt{\left\vert I^{\prime }\right\vert _{\sigma }}}\left\vert \widehat{f}\left( I\right) \right\vert $ from (\ref{analogue'}), we have
\begin{align*}
\left\vert \bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\left( M_{I^{\prime };\kappa }\mathbf{1}_{I^{\prime }}\right) \left( x\right) \right\vert & \lesssim \mathrm{P}_{\kappa }^{\lambda }\left( J,\left\Vert M_{I^{\prime };\kappa }\right\Vert _{L^{\infty }\left( \sigma \right) }\mathbf{1}_{I^{\prime }}\sigma \right) \mathbf{1}_{J}\left( x\right) \\
& \lesssim \frac{1}{\sqrt{\left\vert I^{\prime }\right\vert _{\sigma }}}\left\vert \widehat{f}\left( I\right) \right\vert \mathrm{P}_{\kappa }^{\lambda }\left( J,\mathbf{1}_{I^{\prime }}\sigma \right) \mathbf{1}_{J}\left( x\right) .
\end{align*}
Now we pigeonhole the side lengths of $I$ and $J$ by $\ell \left( J\right) =2^{-s}\ell \left( I\right) $ and use goodness, followed by (\ref{kappa large}), to obtain
\begin{eqnarray*}
&&\left\Vert \left( \sum_{J\in \mathcal{D}}\sum_{I\succ J}\left\vert \bigtriangleup _{J;\kappa }^{\omega }T_{\sigma }^{\lambda }\left( \mathbf{1}_{\theta \left( I_{J}\right) }\bigtriangleup _{I;\kappa }^{\sigma }f\right) \left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) } \\
&\lesssim &\left\Vert \left( \sum_{J\in \mathcal{D}}\sum_{I\succ J:\ \ell \left( J\right) =2^{-s}\ell \left( I\right) }\left\vert \frac{1}{\sqrt{\left\vert I^{\prime }\right\vert _{\sigma }}}\left\vert \widehat{f}\left( I\right) \right\vert \mathrm{P}_{\kappa }^{\lambda }\left( J,\mathbf{1}_{I^{\prime }}\sigma \right) \mathbf{1}_{J}\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) } \\
&\lesssim &2^{-\eta s}\left\Vert \left( \sum_{J\in \mathcal{D}}\sum_{I\succ J:\ \ell \left( J\right) =2^{-s}\ell \left( I\right)
}\left\vert \frac{1}{\sqrt{\left\vert I^{\prime }\right\vert _{\sigma }}}\left\vert \widehat{f}\left( I\right) \right\vert \mathrm{P}_{\kappa }^{\lambda }\left( I_{J},\mathbf{1}_{I^{\prime }}\sigma \right) \mathbf{1}_{J}\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) } \\
&\lesssim &2^{-\eta s}\left\Vert \left( \sum_{I\in \mathcal{D}}\left\vert \left\Vert \mathbb{E}_{I^{\prime }}^{\sigma }\bigtriangleup _{I;\kappa }^{\sigma }f\left( I\right) \right\Vert _{\infty }\frac{\left\vert I^{\prime }\right\vert _{\sigma }}{\left\vert I^{\prime }\right\vert ^{1-\frac{\lambda }{n}}}\mathbf{1}_{I^{\prime }}\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) },
\end{eqnarray*}
where again $\eta $ is the exponent from (\ref{e.Jsimeq}). By the quadratic offset Muckenhoupt condition, this last norm is dominated by
\begin{equation*}
2^{-\eta s}A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert \left( \sum_{I\in \mathcal{D}}\left\vert \left\Vert \mathbb{E}_{I^{\prime }}^{\sigma }\bigtriangleup _{I;\kappa }^{\sigma }f\left( I\right) \right\Vert _{\infty }\mathbf{1}_{I^{\prime }}\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) }\lesssim 2^{-\eta s}A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }.
\end{equation*}
Summing in $s\geq 0$ proves the required bound for the neighbour form,
\begin{equation}
\left\vert \mathsf{B}_{\limfunc{neighbour};\kappa }\left( f,g\right) \right\vert \lesssim A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }.
\label{neigh est}
\end{equation}

\subsection{Commutator form}

Here we use the quadratic offset Muckenhoupt conditions to control the commutator form
\begin{equation}
\mathsf{B}_{\limfunc{commutator};\kappa }^{F}\left( f,g\right) \equiv \sum_{\substack{ I\in \mathcal{C}_{F}\text{ and }J\in \mathcal{C}_{F}^{\tau -\limfunc{shift}} \\ J\Subset _{\rho ,\varepsilon }I}}\left\langle \left[ T_{\sigma }^{\lambda },M_{I_{J}}\right] \mathbf{1}_{I_{J}},\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega }\ ,  \label{def comm form}
\end{equation}
where for $I^{\prime }\in \mathfrak{C}_{\mathcal{D}}\left( I\right) $, $M_{I^{\prime };\kappa }\equiv \left( \bigtriangleup _{I;\kappa }^{\sigma }f\right) \mathbf{1}_{I^{\prime }}$. While we will not explicitly use the Pivotal Lemma, we will nevertheless apply Taylor expansions at several places in the argument. For the most part, we follow the structural ideas for the case $p=2$ in \cite{AlSaUr}, but there are some significant differences that arise in connection with exploiting vector-valued inequalities, in particular at the end of the argument, where a new pigeon-holing argument is introduced.

Fix $\kappa \geq 1$. Assume that $K^{\lambda }$ is a general standard $\lambda $-fractional kernel in $\mathbb{R}^{n}$, that $T^{\lambda }$ is the associated Calder\'{o}n-Zygmund operator, and that $P_{\alpha ,a,I^{\prime }}\left( x\right) =\left( \frac{x-a}{\ell \left( I^{\prime }\right) }\right) ^{\alpha }=\left( \frac{x_{1}-a_{1}}{\ell \left( I^{\prime }\right) }\right) ^{\alpha _{1}}\cdots \left( \frac{x_{n}-a_{n}}{\ell \left( I^{\prime }\right) }\right) ^{\alpha _{n}}$, where $1\leq \left\vert \alpha \right\vert \leq \kappa -1$ (since when $|\alpha |=0$, $P_{\alpha ,a,I^{\prime }}$ commutes with $T^{\lambda }$) and $I^{\prime }\in \mathfrak{C}_{\mathcal{D}}\left( I\right) $, $I\in \mathcal{C}_{F}$.
We use the well-known formula \begin{equation*} x^{\alpha }-y^{\alpha }=\sum_{k=1}^{n}\left( x_{k}-y_{k}\right) \dsum\limits_{\left\vert \beta \right\vert +\left\vert \gamma \right\vert =\left\vert \alpha \right\vert -1}c_{\alpha ,\beta ,\gamma }x^{\beta }y^{\gamma }, \end{equation*} to write \begin{align*} & \mathbf{1}_{I^{\prime }}\left( x\right) \left[ P_{\alpha ,a,I^{\prime }}\ ,T_{\sigma }^{\lambda }\right] \mathbf{1}_{I^{\prime }}\left( x\right) = \mathbf{1}_{I^{\prime }}\left( x\right) \int_{\mathbb{R}^{n}}K^{\lambda }\left( x-y\right) \left\{ P_{\alpha ,a,I^{\prime }}\left( x\right) -P_{\alpha ,a,I^{\prime }}\left( y\right) \right\} \mathbf{1}_{I^{\prime }}\left( y\right) d\sigma \left( y\right) \\ & =\mathbf{1}_{I^{\prime }}\left( x\right) \int_{\mathbb{R}^{n}}K^{\lambda }\left( x-y\right) \left\{ \sum_{k=1}^{n}\left( \frac{x_{k}-y_{k}}{\ell \left( I^{\prime }\right) }\right) \dsum\limits_{\left\vert \beta \right\vert +\left\vert \gamma \right\vert =\left\vert \alpha \right\vert -1}c_{\alpha ,\beta ,\gamma }\left( \frac{x-a}{\ell \left( I^{\prime }\right) }\right) ^{\beta }\left( \frac{y-a}{\ell \left( I^{\prime }\right) } \right) ^{\gamma }\right\} \mathbf{1}_{I^{\prime }}\left( y\right) d\sigma \left( y\right) \\ & =\sum_{k=1}^{n}\dsum\limits_{\left\vert \beta \right\vert +\left\vert \gamma \right\vert =\left\vert \alpha \right\vert -1}c_{\alpha ,\beta ,\gamma }\mathbf{1}_{I^{\prime }}\left( x\right) \left[ \int_{\mathbb{R} ^{n}}\Phi _{k}^{\lambda }\left( x-y\right) \left\{ \left( \frac{y-a}{\ell \left( I^{\prime }\right) }\right) ^{\gamma }\right\} \mathbf{1}_{I^{\prime }}\left( y\right) d\sigma \left( y\right) \right] \left( \frac{x-a}{\ell \left( I^{\prime }\right) }\right) ^{\beta }, \end{align*} where $\Phi _{k}^{\lambda }\left( x-y\right) =K^{\lambda }\left( x-y\right) \left( \frac{x_{k}-y_{k}}{\ell \left( I^{\prime }\right) }\right) $. 
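In dimension $n=1$, for example, the well-known formula above is just the familiar factorization
\begin{equation*}
x^{\alpha }-y^{\alpha }=\left( x-y\right) \sum_{\beta +\gamma =\alpha -1}x^{\beta }y^{\gamma },\qquad \alpha \geq 1,
\end{equation*}
in which all of the constants $c_{\alpha ,\beta ,\gamma }$ equal $1$.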
Thus $ \left[ P_{\alpha ,a,I^{\prime }},T_{\sigma }^{\lambda }\right] \mathbf{1} _{I^{\prime }}\left( x\right) $ is a `polynomial' of degree $\left\vert \alpha \right\vert -1$ with \emph{variable} coefficients. Recall that we are considering pairs $\left( I^{\prime },J\right) \in \mathcal{C}_{F}\times \mathcal{C}_{F}^{\tau -\limfunc{shift}}$ for $F\in \mathcal{F}$, with the property that $J\Subset _{\rho ,\varepsilon }I^{\prime }$, and for convenience in notation we denote this collection of pairs by $\mathcal{P}_{F}$. Integrating the above commutator against $ \bigtriangleup _{J;\kappa }^{\omega }g$ for some $J\in \mathcal{C}_{F}^{\tau -\limfunc{shift}}$ with $J\subset I^{\prime }$, we get, \begin{align} & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left\langle \left[ P_{\alpha ,a,I^{\prime }}\ ,T_{\sigma }^{\lambda }\right] \mathbf{1}_{I^{\prime }},\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega }=\int_{\mathbb{R}^{n}}\left[ P_{\alpha ,a,I^{\prime }}\ ,T_{\sigma }^{\lambda }\right] \mathbf{1} _{I^{\prime }}\left( x\right) \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) d\omega \left( x\right) \label{two pieces} \\ & =\sum_{k=1}^{n}\dsum\limits_{\left\vert \beta \right\vert +\left\vert \gamma \right\vert =\left\vert \alpha \right\vert -1}c_{\alpha ,\beta ,\gamma }\mathbf{1}_{I^{\prime }}\left( x\right) \left[ \int_{\mathbb{R} ^{n}}\Phi _{k}^{\lambda }\left( x-y\right) \left\{ \left( \frac{y-a}{\ell \left( I^{\prime }\right) }\right) ^{\gamma }\right\} \mathbf{1}_{I^{\prime }\setminus 2J}\left( y\right) d\sigma \left( y\right) \right] \left( \frac{ x-a}{\ell \left( I^{\prime }\right) }\right) ^{\beta } \notag \\ & +\sum_{k=1}^{n}\dsum\limits_{\left\vert \beta \right\vert +\left\vert \gamma \right\vert =\left\vert \alpha \right\vert -1}c_{\alpha ,\beta ,\gamma }\mathbf{1}_{I^{\prime }}\left( x\right) \left[ \int_{\mathbb{R} ^{n}}\Phi _{k}^{\lambda }\left( x-y\right) \left\{ \left( \frac{y-a}{\ell \left( I^{\prime }\right) }\right) ^{\gamma }\right\} 
\mathbf{1}_{2J}\left( y\right) d\sigma \left( y\right) \right] \left( \frac{x-a}{\ell \left( I^{\prime }\right) }\right) ^{\beta }  \notag \\
& \equiv \func{Int}^{\lambda ,\natural }\left( I^{\prime },J\right) +\func{Int}^{\lambda ,\flat }\left( I^{\prime },J\right) ,  \notag
\end{align}
where we are suppressing the dependence on $\alpha $ as well as $F\in \mathcal{F}$.

For the first term $\func{Int}^{\lambda ,\natural }\left( I^{\prime },J\right) $ we write
\begin{equation}
\func{Int}^{\lambda ,\natural }\left( I^{\prime },J\right) =\sum_{k=1}^{n}\dsum\limits_{\left\vert \beta \right\vert +\left\vert \gamma \right\vert =\left\vert \alpha \right\vert -1}c_{\alpha ,\beta ,\gamma }\func{Int}_{k,\beta ,\gamma }^{\lambda ,\natural }\left( I^{\prime },J\right) ,  \label{beta gamma}
\end{equation}
where, choosing $a=c_{J}$ to be the center of $J$, we define
\begin{equation}
\func{Int}_{k,\beta ,\gamma }^{\lambda ,\natural }\left( I^{\prime },J\right) \equiv \int_{J}\bigtriangleup _{J;\kappa }^{\omega }\left[ \left( \frac{x-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\beta }\int_{I^{\prime }\setminus 2J}\Phi _{k}^{\lambda }\left( x-y\right) \left( \frac{y-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\gamma }d\sigma \left( y\right) \right] \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) d\omega \left( x\right) .  \label{coefficients}
\end{equation}
While the terms $\bigtriangleup _{J;\kappa }^{\omega }\left[ ...\right] $ need no longer vanish for operators other than the Hilbert transform, we will show below that they are suitably small. Indeed, we exploit as usual that the operator $\bigtriangleup _{J;\kappa }^{\omega }\left( \frac{x-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\beta }$ is supported in $J$ and has vanishing $\omega $-means up to order $\kappa -\left\vert \beta \right\vert -1$, and that the function $\Phi _{k}^{\lambda }\left( z\right) $ is appropriately smooth away from $z=0$, i.e.
$\left\vert \nabla ^{m}\Phi _{k}^{\lambda }\left( z\right) \right\vert \leq C_{m,n}\frac{1}{\left\vert z\right\vert ^{m+n-\lambda -1}\ell \left( I\right) }$. But first we must encode information regarding $f$, by recalling the orthonormal basis $\left\{ h_{I;\kappa }^{\sigma ,a}\right\} _{a\in \Gamma _{I,n,\kappa }}$ of $L_{I;\kappa }^{2}\left( \sigma \right) $, and the notation $\left\vert \widehat{f}\left( I\right) \right\vert =\left\Vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\Vert _{L^{2}\left( \sigma \right) }$ from (\ref{analogue'}). Consider the polynomial
\begin{equation*}
Q_{I^{\prime };\kappa }^{\sigma }\equiv \frac{1}{\left\vert \widehat{f}\left( I\right) \right\vert }\mathbf{1}_{I^{\prime }}\bigtriangleup _{I;\kappa }^{\sigma }f=\frac{1}{\left\vert \widehat{f}\left( I\right) \right\vert }M_{I^{\prime };\kappa },
\end{equation*}
which is a renormalization of the polynomial $M_{I^{\prime };\kappa }$ appearing in the commutator form (\ref{def comm form}). From (\ref{analogue'}) we have $\left\Vert Q_{I^{\prime };\kappa }^{\sigma }\right\Vert _{\infty }\approx \frac{1}{\sqrt{\left\vert I\right\vert _{\sigma }}}$. Hence for $J\subset I^{\prime }$ with center $c_{J}$, if we write
\begin{equation*}
Q_{I^{\prime };\kappa }^{\sigma }\left( y\right) =\sum_{\left\vert \alpha \right\vert <\kappa }b_{\alpha }\left( \frac{y-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\alpha }=\sum_{\left\vert \alpha \right\vert <\kappa }b_{\alpha }P_{\alpha ,c_{J},I^{\prime }}\left( y\right) ,
\end{equation*}
and then rescale to the unit cube and invoke the fact that any two norms on a finite-dimensional vector space are equivalent, we obtain
\begin{equation}
\sum_{\left\vert \alpha \right\vert <\kappa }\left\vert b_{\alpha }\right\vert \approx \left\Vert Q_{I^{\prime };\kappa }^{\sigma }\right\Vert _{\infty }\approx \frac{1}{\sqrt{\left\vert I\right\vert _{\sigma }}}.
\label{we obtain} \end{equation} We then bound \begin{align*} & \left\vert \left\langle \left[ Q_{I^{\prime };\kappa },T_{\sigma }^{\lambda }\right] \mathbf{1}_{I^{\prime }\setminus 2J},\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega }\right\vert \leq \sum_{\left\vert \alpha \right\vert <\kappa }\left\vert b_{\alpha }\left\langle \left[ P_{\alpha ,c_{J},I^{\prime }},T_{\sigma }^{\lambda } \right] \mathbf{1}_{I^{\prime }\setminus 2J},\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega }\right\vert \\ & \lesssim \frac{1}{\sqrt{\left\vert I\right\vert _{\sigma }}} \max_{\left\vert \alpha \right\vert <\kappa }\left\vert \left\langle \left[ P_{\alpha ,c_{J},I^{\prime }},T_{\sigma }^{\lambda }\right] \mathbf{1} _{I^{\prime }\setminus 2J},\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega }\right\vert , \end{align*} and estimate each inner product $\func{Int}^{\lambda ,\natural }\left( I^{\prime },J\right) $ by \begin{equation*} \left\vert \func{Int}^{\lambda ,\natural }\left( I^{\prime },J\right) \right\vert =\left\vert \sum_{k=1}^{n}\dsum\limits_{\left\vert \beta \right\vert +\left\vert \gamma \right\vert =\left\vert \alpha \right\vert -1}c_{\alpha ,\beta ,\gamma }\func{Int}_{k,\beta ,\gamma }^{\lambda ,\natural }\left( I^{\prime },J\right) \right\vert \lesssim \max_{\left\vert \beta \right\vert +\left\vert \gamma \right\vert =\left\vert \alpha \right\vert -1}\left\vert \func{Int}_{k,\beta ,\gamma }^{\lambda ,\natural }\left( I^{\prime },J\right) \right\vert , \end{equation*} where $\left\vert \beta \right\vert +\left\vert \gamma \right\vert =\left\vert \alpha \right\vert -1\geq 0$. 
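The equivalence of norms invoked to obtain (\ref{we obtain}) is simply the comparability of the two norms $\sum_{\left\vert \alpha \right\vert <\kappa }\left\vert b_{\alpha }\right\vert $ and $\left\Vert \cdot \right\Vert _{\infty }$ on the finite-dimensional space of polynomials of degree less than $\kappa $, namely
\begin{equation*}
\sum_{\left\vert \alpha \right\vert <\kappa }\left\vert b_{\alpha }\right\vert \approx \left\Vert \sum_{\left\vert \alpha \right\vert <\kappa }b_{\alpha }z^{\alpha }\right\Vert _{L^{\infty }\left( Q\right) },
\end{equation*}
where $Q$ denotes the image of $I^{\prime }$ under the affine change of variable $z=\frac{y-c_{J}}{\ell \left( I^{\prime }\right) }$, a unit cube at bounded distance from the origin, so that the implicit constants depend only on $n$ and $\kappa $.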
We now sum in $F\in \mathcal{F}$, $J\in \mathcal{C}_{F}^{\tau -\limfunc{shift }}$, $I\in \mathcal{C}_{F}$, and $I^{\prime }\in \mathfrak{C}_{\mathcal{D} }\left( I\right) $, and apply the Cauchy-Schwarz inequality to obtain \begin{eqnarray*} &&\left\vert \mathsf{B}_{\limfunc{commutator};\kappa }^{F,\natural }\left( f,g\right) \right\vert \leq \sum_{k,\beta ,\gamma }\left\vert \widehat{f} \left( I\right) \right\vert \left\vert \left\langle \left[ T_{\sigma }^{\lambda },Q_{I_{J}}\right] \mathbf{1}_{I_{J}},\bigtriangleup _{J;\kappa }^{\omega }g\right\rangle _{\omega }\right\vert \leq \sum_{k,\beta ,\gamma }\left\vert \sum_{F\in \mathcal{F}}\sum_{\left( I^{\prime },J\right) \in \mathcal{P}_{F}}\frac{\left\vert \widehat{f}\left( I\right) \right\vert }{ \sqrt{\left\vert I\right\vert _{\sigma }}}\func{Int}_{k,\beta ,\gamma }^{\lambda ,\natural }\left( I^{\prime },J\right) \right\vert \\ &=&\sum_{k,\beta ,\gamma }\left\vert \int_{\mathbb{R}^{n}}\sum_{F\in \mathcal{F}}\sum_{\left( I^{\prime },J\right) \in \mathcal{P}_{F}}\frac{ \left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\bigtriangleup _{J;\kappa }^{\omega }\left[ \left( \frac{x-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\beta }\int_{I^{\prime }\setminus 2J}\Phi _{k}^{\lambda }\left( x-y\right) \left( \frac{y-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\gamma }d\sigma \left( y\right) \right] \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) d\omega \left( x\right) \right\vert \\ &\leq &\sum_{k,\beta ,\gamma }\int_{\mathbb{R}^{n}}\left( \sum_{F\in \mathcal{F}}\sum_{\left( I^{\prime },J\right) \in \mathcal{P}_{F}}\left\vert \frac{\left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\bigtriangleup _{J;\kappa }^{\omega }\left[ \left( \frac{z-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\beta }\int_{I^{\prime }\setminus 2J}\Phi _{k}^{\lambda }\left( z-y\right) \left( \frac{y-c_{J}}{\ell \left( I^{\prime }\right) }\right) 
^{\gamma }d\sigma \left( y\right) \right] \left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2} } \\ &&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times \left( \sum_{F\in \mathcal{F}}\sum_{J\in \mathcal{C}_{F}^{\tau - \limfunc{shift}}}\left\vert \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}d\omega \left( x\right) , \end{eqnarray*} and then apply H\"{o}lder's inequality to conclude that \begin{eqnarray*} &&\left\vert \mathsf{B}_{\limfunc{commutator};\kappa }^{F,\natural }\left( f,g\right) \right\vert \\ &\leq &\sum_{k,\beta ,\gamma }\left\Vert \left( \sum_{F\in \mathcal{F} }\sum_{\left( I^{\prime },J\right) \in \mathcal{P}_{F}}\left\vert \widehat{f} \left( I\right) \bigtriangleup _{J;\kappa }^{\omega }\left[ \left( \frac{ z-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\beta }\int_{I^{\prime }\setminus 2J}\Phi _{k}^{\lambda }\left( z-y\right) \left( \frac{y-c_{J}}{ \ell \left( I^{\prime }\right) }\right) ^{\gamma }d\sigma \left( y\right) \right] \left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) } \\ &&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times \left\Vert \left( \sum_{F\in \mathcal{F}}\sum_{J\in \mathcal{C} _{F}^{\tau -\limfunc{shift}}}\left\vert \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p^{\prime }}\left( \omega \right) }. 
\end{eqnarray*} The second factor on the right hand side is bounded by $\left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }$ using the square function estimate in Theorem \ref{Alpert square thm}, and we now turn to estimating the first factor, \begin{equation*} \left\Vert \left( \sum_{F\in \mathcal{F}}\sum_{\left( I^{\prime },J\right) \in \mathcal{P}_{F}}\left\vert \frac{\left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\bigtriangleup _{J;\kappa }^{\omega }\left[ \left( \frac{z-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\beta }\int_{I^{\prime }\setminus 2J}\Phi _{k}^{\lambda }\left( z-y\right) \left( \frac{y-c_{J}}{\ell \left( I^{\prime }\right) } \right) ^{\gamma }d\sigma \left( y\right) \right] \left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) }. \end{equation*} Using Taylor's formula, and keeping in mind that $y\in I^{\prime }\setminus 2J$ and $x\in J$, and that $\left( \frac{x-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\beta }\mathbf{h}_{J;\kappa }^{\omega }\left( x\right) $ has vanishing $\omega $-means up to order $\kappa -1-\left\vert \beta \right\vert \,$, we have with $\mathbf{h}_{J;\kappa }^{\omega }=\left\{ h_{J;\kappa }^{\omega ,a}\right\} _{a\in \Gamma _{J,n,\kappa }}$, \begin{eqnarray*} &&\bigtriangleup _{J;\kappa }^{\omega }\left[ \left( \frac{z-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\beta }\int_{I^{\prime }\setminus 2J}\Phi _{k}^{\lambda }\left( z-y\right) \left( \frac{y-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\gamma }d\sigma \left( y\right) \right] \left( x\right) \\ &=&\left[ \int_{I^{\prime }\setminus 2J}\left\{ \int_{J}\left( \frac{z-c_{J} }{\ell \left( I^{\prime }\right) }\right) ^{\beta }\Phi _{k}^{\lambda }\left( z-y\right) \mathbf{h}_{J;\kappa }^{\omega }\left( z\right) d\omega \left( z\right) \right\} \left( \frac{y-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\gamma }d\sigma \left( y\right) \right] 
\mathbf{h} _{J;\kappa }^{\omega }\left( x\right) , \end{eqnarray*} where \begin{align*} & \left\vert \int_{J}\left( \frac{z-c_{J}}{\ell \left( I^{\prime }\right) } \right) ^{\beta }\Phi _{k}^{\lambda }\left( z-y\right) \mathbf{h}_{J;\kappa }^{\omega }\left( z\right) d\omega \left( z\right) \right\vert \\ & =\left\vert \int_{J}\frac{1}{\left( \kappa -\left\vert \beta \right\vert \right) !}\left( \left( z-c_{J}\right) \cdot \nabla \right) ^{\kappa -\left\vert \beta \right\vert }\Phi _{k}^{\lambda }\left( \eta _{J}^{\omega }\right) \left( \frac{z-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\beta }\mathbf{h}_{J;\kappa }^{\omega }\left( z\right) d\omega \left( z\right) \right\vert \\ & \lesssim \left\Vert \left( \frac{z-c_{J}}{\ell \left( I^{\prime }\right) } \right) ^{\beta }\mathbf{h}_{J;\kappa }^{\omega }\left( z\right) \right\Vert _{L^{1}\left( \omega \right) }\frac{\ell \left( J\right) ^{\kappa -\left\vert \beta \right\vert }}{\left[ \ell \left( J\right) +\limfunc{dist} \left( y,J\right) \right] ^{\kappa -\left\vert \beta \right\vert +n-\lambda -1}\ell \left( I^{\prime }\right) } \\ & \lesssim \left( \frac{\ell \left( J\right) }{\ell \left( I^{\prime }\right) }\right) ^{\left\vert \beta \right\vert }\frac{\ell \left( J\right) ^{\kappa -\left\vert \beta \right\vert }}{\left[ \ell \left( J\right) + \limfunc{dist}\left( y,J\right) \right] ^{\kappa -\left\vert \beta \right\vert +n-\lambda -1}\ell \left( I^{\prime }\right) }\sqrt{\left\vert J\right\vert _{\omega }}, \end{align*} since \begin{equation*} \left\Vert \left( \frac{z-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\beta }\mathbf{h}_{J;\kappa }^{\omega }\left( z\right) \right\Vert _{L^{1}\left( \omega \right) }=\int_{J}\left\vert \left( \frac{z-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\beta }\mathbf{h}_{J;\kappa }^{\omega }\left( z\right) \right\vert d\omega \left( z\right) \leq \left( \frac{\ell \left( J\right) }{\ell \left( I^{\prime }\right) }\right) ^{\left\vert \beta \right\vert }\left\Vert 
\mathbf{h}_{J;\kappa }^{\omega }\right\Vert _{L^{1}\left( \omega \right) }\lesssim \left( \frac{\ell \left( J\right) }{ \ell \left( I^{\prime }\right) }\right) ^{\left\vert \beta \right\vert } \sqrt{\left\vert J\right\vert _{\omega }}. \end{equation*} Thus we have \begin{eqnarray*} &&\left\vert \bigtriangleup _{J;\kappa }^{\omega }\left[ \left( \frac{z-c_{J} }{\ell \left( I^{\prime }\right) }\right) ^{\beta }\int_{I^{\prime }\setminus 2J}\Phi _{k}^{\lambda }\left( z-y\right) \left( \frac{y-c_{J}}{ \ell \left( I^{\prime }\right) }\right) ^{\gamma }d\sigma \left( y\right) \right] \left( x\right) \right\vert \\ &\lesssim &\int_{I^{\prime }\setminus 2J}\left( \frac{\ell \left( J\right) }{ \ell \left( I^{\prime }\right) }\right) ^{\left\vert \beta \right\vert } \frac{\ell \left( J\right) ^{\kappa -\left\vert \beta \right\vert }}{\left[ \ell \left( J\right) +\limfunc{dist}\left( y,J\right) \right] ^{\kappa -\left\vert \beta \right\vert +n-\lambda -1}\ell \left( I^{\prime }\right) } \sqrt{\left\vert J\right\vert _{\omega }}\left\vert \mathbf{h}_{J;\kappa }^{\omega }\left( x\right) \right\vert \left( \frac{\ell \left( J\right) + \limfunc{dist}\left( y,J\right) }{\ell \left( I^{\prime }\right) }\right) ^{\left\vert \gamma \right\vert }d\sigma \left( y\right) \\ &=&\int_{I^{\prime }\setminus 2J}\left( \frac{\ell \left( J\right) }{\ell \left( I^{\prime }\right) }\right) ^{\left\vert \alpha \right\vert -1}\left( \frac{\ell \left( J\right) }{\ell \left( J\right) +\limfunc{dist}\left( y,J\right) }\right) ^{\kappa -\left\vert \alpha \right\vert +1}\frac{1}{ \left[ \ell \left( J\right) +\limfunc{dist}\left( y,J\right) \right] ^{n-\lambda -1}\ell \left( I^{\prime }\right) }d\sigma \left( y\right) \sqrt{ \left\vert J\right\vert _{\omega }}\left\vert \mathbf{h}_{J;\kappa }^{\omega }\left( x\right) \right\vert \\ &=&\left( \frac{\ell \left( J\right) }{\ell \left( I^{\prime }\right) } \right) ^{\left\vert \alpha \right\vert -1}\left\{ \int_{I^{\prime }\setminus 2J}\left( \frac{\ell 
\left( J\right) }{\ell \left( J\right) + \limfunc{dist}\left( y,J\right) }\right) ^{\kappa -\left\vert \alpha \right\vert +1}\frac{1}{\left[ \ell \left( J\right) +\limfunc{dist}\left( y,J\right) \right] ^{n-\lambda -1}\ell \left( I^{\prime }\right) }d\sigma \left( y\right) \right\} \mathbf{1}_{J}\left( x\right) , \end{eqnarray*} and now we estimate, \begin{eqnarray*} &&\func{Int}_{k,\beta ,\gamma }^{\lambda ,\natural }f\left( x\right) ^{2}\equiv \sum_{F\in \mathcal{F}}\sum_{\left( I^{\prime },J\right) \in \mathcal{P}_{F}}\left\vert \frac{\left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\bigtriangleup _{J;\kappa }^{\omega }\left[ \left( \frac{z-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\beta }\int_{I^{\prime }\setminus 2J}\Phi _{k}^{\lambda }\left( z-y\right) \left( \frac{y-c_{J}}{\ell \left( I^{\prime }\right) } \right) ^{\gamma }d\sigma \left( y\right) \right] \left( x\right) \right\vert ^{2} \\ &\lesssim &\sum_{F\in \mathcal{F}}\sum_{\left( I^{\prime },J\right) \in \mathcal{P}_{F}}\left\vert \frac{\left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\left( \frac{\ell \left( J\right) }{\ell \left( I^{\prime }\right) }\right) ^{\left\vert \alpha \right\vert -1}\right. \times \\ &&\ \ \ \ \ \ \ \ \ \ \left. \left\{ \int_{I^{\prime }\setminus 2J}\left( \frac{\ell \left( J\right) }{\ell \left( J\right) +\limfunc{dist}\left( y,J\right) }\right) ^{\kappa -\left\vert \alpha \right\vert +1}\frac{1}{ \left[ \ell \left( J\right) +\limfunc{dist}\left( y,J\right) \right] ^{n-\lambda -1}\ell \left( I^{\prime }\right) }d\sigma \left( y\right) \right\} \mathbf{1}_{J}\left( x\right) \right\vert ^{2}. 
\end{eqnarray*} Now we fix $t\in \mathbb{N}$, and estimate the sum over those $J\subset I^{\prime }$ with $\ell \left( J\right) =2^{-t}\ell \left( I^{\prime }\right) $ by splitting the integration in $y$ according to the size of $ \ell \left( J\right) +\limfunc{dist}\left( y,J\right) $, to obtain the following bound: \begin{align*} & \func{Int}_{k,\beta ,\gamma ,t}^{\lambda ,\natural }f\left( x\right) ^{2}\equiv \sum_{F\in \mathcal{F}}\sum_{\substack{ \left( I^{\prime },J\right) \in \mathcal{P}_{F} \\ \ell \left( J\right) =2^{-t}\ell \left( I\right) }}\left\vert \frac{\left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\bigtriangleup _{J;\kappa }^{\omega }\left[ \left( \frac{z-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\beta }\int_{I^{\prime }\setminus 2J}\Phi _{k}^{\lambda }\left( z-y\right) \left( \frac{y-c_{J}}{\ell \left( I^{\prime }\right) } \right) ^{\gamma }d\sigma \left( y\right) \right] \left( x\right) \right\vert ^{2} \\ & \lesssim 2^{-2t\left( \left\vert \alpha \right\vert -1\right) }\sum_{F\in \mathcal{F}}\sum_{\substack{ \left( I^{\prime },J\right) \in \mathcal{P}_{F} \\ \ell \left( J\right) =2^{-t}\ell \left( I\right) }}\left\vert \frac{ \left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\int_{I^{\prime }\setminus 2J}\left( \frac{\ell \left( J\right) }{\ell \left( J\right) +\limfunc{dist}\left( y,J\right) } \right) ^{\kappa -\left\vert \alpha \right\vert +1}\frac{d\sigma \left( y\right) }{\left[ \ell \left( J\right) +\limfunc{dist}\left( y,J\right) \right] ^{n-\lambda -1}\ell \left( I^{\prime }\right) }\mathbf{1}_{J}\left( x\right) \right\vert ^{2} \\ & \lesssim 2^{-2t\left( \left\vert \alpha \right\vert -1\right) }\sum_{F\in \mathcal{F}}\sum_{\substack{ \left( I^{\prime },J\right) \in \mathcal{P}_{F} \\ \ell \left( J\right) =2^{-t}\ell \left( I\right) }}\left\vert \frac{ \left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma 
}}}\left\{ \sum_{s=1}^{t}\int_{2^{s+1}J\setminus 2^{s}J}\left( 2^{-s}\right) ^{\kappa -\left\vert \alpha \right\vert +1}\frac{ d\sigma \left( y\right) }{\left( 2^{s}\ell \left( J\right) \right) ^{n-\lambda -1}\ell \left( I^{\prime }\right) }\right\} \mathbf{1}_{J}\left( x\right) \right\vert ^{2} \\ & \lesssim 2^{-2t\left\vert \alpha \right\vert }\sum_{F\in \mathcal{F}}\sum _{\substack{ \left( I^{\prime },J\right) \in \mathcal{P}_{F} \\ \ell \left( J\right) =2^{-t}\ell \left( I\right) }}\left\vert \frac{\left\vert \widehat{f }\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}} \sum_{s=1}^{t}\left( 2^{-s}\right) ^{\kappa -\left\vert \alpha \right\vert +1}2^{-s\left( n-\lambda -1\right) }\frac{\left\vert 2^{s}J\right\vert _{\sigma }}{\ell \left( J\right) ^{n-\lambda }}\mathbf{1}_{J}\left( x\right) \right\vert ^{2}. \end{align*} Now we pigeonhole the sum in $J$ according to membership in the grandchildren $K$ of $I^{\prime }$ at depth $t-s$, and introduce the factor $ 2^{-s\varepsilon }2^{s\varepsilon }$ for some $0<\varepsilon <1$ in order to apply Cauchy-Schwarz in the sum over $s$, and then use $\ell \left( K\right) =2^{s-t}\ell \left( I\right) =2^{s-t}2^{t}\ell \left( J\right) $, to obtain \begin{eqnarray*} &&\left\vert \func{Int}_{k,\beta ,\gamma ,t}^{\lambda ,\natural }f\left( x\right) \right\vert \lesssim 2^{-t\left\vert \alpha \right\vert }\left( \sum_{F\in \mathcal{F}}\sum_{\substack{ \left( I^{\prime },J\right) \in \mathcal{P}_{F} \\ \ell \left( J\right) =2^{-t}\ell \left( I\right) }} \left\vert \frac{\left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{ \left\vert I\right\vert _{\sigma }}}\sum_{s=1}^{t}\left( 2^{-s}\right) ^{\kappa -\left\vert \alpha \right\vert +1}2^{-s\left( n-\lambda -1\right) } \frac{\left\vert 2^{s}J\right\vert _{\sigma }}{\ell \left( J\right) ^{n-\lambda }}\mathbf{1}_{J}\left( x\right) \right\vert ^{2}\right) ^{\frac{1 }{2}} \\ &\lesssim &C_{\varepsilon }2^{-t\left\vert \alpha \right\vert }\left( \sum_{F\in 
\mathcal{F}}\sum_{\left( I^{\prime },J\right) \in \mathcal{P} _{F}}\sum_{s=1}^{t}\sum_{\substack{ K\in \mathfrak{C}_{\mathcal{D}}^{\left( t-s\right) }\left( I^{\prime }\right) \\ J\in \mathfrak{C}_{\mathcal{D} }^{\left( s\right) }\left( K\right) }}\left\vert \frac{\left\vert \widehat{f} \left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}} \left( 2^{-s}\right) ^{\kappa -\left\vert \alpha \right\vert +n-\lambda -\varepsilon }\frac{\left\vert 2^{s}J\right\vert _{\sigma }}{\ell \left( J\right) ^{n-\lambda }}\mathbf{1}_{J}\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}} \\ &\lesssim &2^{-t\left\vert \alpha \right\vert }\left( \sum_{F\in \mathcal{F} }\sum_{\left( I^{\prime },J\right) \in \mathcal{P}_{F}}\sum_{s=1}^{t}\left( 2^{-s}\right) ^{2\left( \kappa -\left\vert \alpha \right\vert -\varepsilon \right) }\sum_{K\in \mathfrak{C}_{\mathcal{D}}^{\left( t-s\right) }\left( I^{\prime }\right) }\left\vert \frac{\left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\frac{\left\vert K\right\vert _{\sigma }}{\ell \left( K\right) ^{n-\lambda }}\mathbf{1} _{K}\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}} \\ &\lesssim &2^{-t\left\vert \alpha \right\vert }\sum_{s=1}^{t}\left( 2^{-s}\right) ^{2\left( \kappa -\left\vert \alpha \right\vert -\varepsilon \right) }\left( \sum_{F\in \mathcal{F}}\sum_{I^{\prime }\in \mathcal{C} _{F}}\sum_{K\in \mathfrak{C}_{\mathcal{D}}^{\left( t-s\right) }\left( I^{\prime }\right) }\left\vert \frac{\left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\frac{\left\vert K\right\vert _{\sigma }}{\left\vert K\right\vert ^{1-\frac{\lambda }{n}}} \mathbf{1}_{K}\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}, \end{eqnarray*} and by Minkowski's inequality and the quadratic Muckenhoupt condition, we obtain \begin{eqnarray*} &&\left\Vert \func{Int}_{k,\beta ,\gamma ,t}^{\lambda ,\natural }f\right\Vert _{L^{p}\left( \omega \right) }\lesssim 
2^{-t\left\vert \alpha \right\vert }\sum_{s=1}^{t}\left( 2^{-s}\right) ^{2\left( \kappa -\left\vert \alpha \right\vert -\varepsilon \right) }\left\Vert \left( \sum_{F\in \mathcal{F}}\sum_{I^{\prime }\in \mathcal{C}_{F}}\sum_{K\in \mathfrak{C}_{ \mathcal{D}}^{\left( t-s\right) }\left( I^{\prime }\right) }\left\vert \frac{ \left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\frac{\left\vert K\right\vert _{\sigma }}{ \left\vert K\right\vert ^{1-\frac{\lambda }{n}}}\mathbf{1}_{K}\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) } \\ &\lesssim &2^{-t\left\vert \alpha \right\vert }\sum_{s=1}^{t}\left( 2^{-s}\right) ^{2\left( \kappa -\left\vert \alpha \right\vert -\varepsilon \right) }A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert \left( \sum_{F\in \mathcal{F}}\sum_{I^{\prime }\in \mathcal{C}_{F}}\sum_{K\in \mathfrak{C}_{\mathcal{D}}^{\left( t-s\right) }\left( I^{\prime }\right) }\left\vert \frac{\left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\mathbf{1} _{K}\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) } \\ &\lesssim &2^{-t\left\vert \alpha \right\vert }\sum_{s=1}^{t}\left( 2^{-s}\right) ^{2\left( \kappa -\left\vert \alpha \right\vert -\varepsilon \right) }A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert \left( \sum_{F\in \mathcal{F}}\sum_{I^{\prime }\in \mathcal{C}_{F}}\left\vert \frac{\left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\mathbf{1} _{I^{\prime }}\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2} }\right\Vert _{L^{p}\left( \sigma \right) } \\ &\lesssim &2^{-t\left\vert \alpha \right\vert }A_{p}^{\lambda ,\ell ^{2}, \limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert \mathcal{S}_{ \limfunc{Alpert}}f\right\Vert _{L^{p}\left( \sigma \right) }\lesssim 
2^{-t\left\vert \alpha \right\vert }A_{p}^{\lambda ,\ell ^{2},\limfunc{offset }}\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\ , \end{eqnarray*} since $1\leq \left\vert \alpha \right\vert \leq \kappa -1$ implies in particular that $\kappa -\left\vert \alpha \right\vert \geq 1>\varepsilon >0$. At this point we use the inequalities (\ref{analogue}) and (\ref{analogue'}) to obtain \begin{eqnarray*} \left\vert \widehat{f}\left( I\right) \right\vert ^{2} &\lesssim &\left\Vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\Vert _{\infty }^{2}\left\vert I\right\vert _{\sigma }=\left\Vert \sum_{I^{\prime }\in \mathfrak{C}_{ \mathcal{D}}\left( I\right) }\mathbb{E}_{I^{\prime };\kappa }^{\sigma }\bigtriangleup _{I;\kappa }^{\sigma }f\right\Vert _{\infty }^{2}\left\vert I\right\vert _{\sigma } \\ &\lesssim &\sum_{I^{\prime }\in \mathfrak{C}_{\mathcal{D}}\left( I\right) }\left\Vert \mathbb{E}_{I^{\prime };\kappa }^{\sigma }\bigtriangleup _{I;\kappa }^{\sigma }f\right\Vert _{\infty }^{2}\left\vert I^{\prime }\right\vert _{\sigma }\lesssim \sum_{I^{\prime }\in \mathfrak{C}_{\mathcal{D }}\left( I\right) }\left( E_{I^{\prime }}^{\sigma }\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\vert \right) ^{2}\left\vert I^{\prime }\right\vert _{\sigma } \\ &\leq &\sum_{I^{\prime }\in \mathfrak{C}_{\mathcal{D}}\left( I\right) }\left( \inf_{x\in I^{\prime }}M_{\sigma }\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\vert \left( x\right) \right) ^{2}\left\vert I^{\prime }\right\vert _{\sigma }\lesssim \left( \inf_{x\in I}M_{\sigma }\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\vert \left( x\right) \right) ^{2}\left\vert I\right\vert _{\sigma }\ , \end{eqnarray*} where the final inequality uses that $\sigma $ is doubling, thus giving \begin{equation} \frac{\left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\lesssim \inf_{x\in I}M_{\sigma }\left\vert \bigtriangleup _{I;\kappa }^{\sigma
}f\right\vert \left( x\right) . \label{giving} \end{equation} As a consequence we obtain \begin{eqnarray*} &&\left\Vert \left( \sum_{I\in \mathcal{D}}\left( \frac{\left\vert \widehat{f }\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}} \right) ^{2}\mathbf{1}_{I}\left( x\right) \right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) }\lesssim \left\Vert \left( \sum_{I\in \mathcal{ D}}\left( M_{\sigma }\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\left( x\right) \right\vert \right) ^{2}\mathbf{1}_{I}\left( x\right) \right) ^{ \frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) } \\ &\lesssim &\left\Vert \left( \sum_{I\in \mathcal{D}}\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\left( x\right) \right\vert ^{2} \mathbf{1}_{I}\left( x\right) \right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) }=\left\Vert \mathcal{S}_{\limfunc{Alpert} ,\kappa }f\right\Vert _{L^{p}\left( \sigma \right) }\approx \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }, \end{eqnarray*} by the Fefferman-Stein vector-valued maximal theorem \cite{GrLiYa}, and the Alpert square function equivalence. Plugging this estimate back into a previous estimate yields \begin{equation*} \left\Vert \left\Vert \func{Int}_{k,\beta ,\gamma ,t}^{\lambda ,\natural }f\right\Vert _{\ell ^{2}}\right\Vert _{L^{p}\left( \omega \right) }\lesssim 2^{-t\left\vert \alpha \right\vert }\sum_{s=1}^{t}\left( 2^{-s}\right) ^{\kappa -\left\vert \alpha \right\vert }A_{p}^{\lambda ,\ell ^{2},\limfunc{ offset}}\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\lesssim 2^{-t\left\vert \alpha \right\vert }A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }, \end{equation*} as required. 
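Let us also record the elementary estimate behind the insertion of the factor $2^{-s\varepsilon }2^{s\varepsilon }$ above: for nonnegative reals $a_{s}$, Cauchy-Schwarz in the sum over $s$ gives
\begin{equation*}
\left( \sum_{s=1}^{t}a_{s}\right) ^{2}=\left( \sum_{s=1}^{t}2^{-s\varepsilon }\cdot 2^{s\varepsilon }a_{s}\right) ^{2}\leq \left( \sum_{s=1}^{t}2^{-2s\varepsilon }\right) \left( \sum_{s=1}^{t}2^{2s\varepsilon }a_{s}^{2}\right) \leq C_{\varepsilon }\sum_{s=1}^{t}2^{2s\varepsilon }a_{s}^{2},\qquad C_{\varepsilon }=\frac{1}{2^{2\varepsilon }-1},
\end{equation*}
which accounts for both the constant $C_{\varepsilon }$ and the loss of $\varepsilon $ in the geometric decay.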
In analogy with $\func{Int}_{k,\beta ,\gamma ,t}^{\lambda ,\natural }f$, we define $\func{Int}_{k,\beta ,\gamma ,t}^{\lambda ,\flat }f$ to consist of only that part of the sum inside $\func{Int}_{k,\beta ,\gamma }^{\lambda ,\flat }$ taken over $J\subset I^{\prime }$ with $\ell \left( J\right) =2^{-t}\ell \left( I^{\prime }\right) $, and we claim the same estimate holds for $\func{Int}_{k,\beta ,\gamma ,t}^{\lambda ,\flat }f$ as we just proved for $\func{Int}_{k,\beta ,\gamma ,t}^{\lambda ,\natural }f$, namely \begin{eqnarray} \left\Vert \func{Int}_{k,\beta ,\gamma ,t}^{\lambda ,\flat }f\right\Vert _{L^{p}\left( \omega \right) } &\lesssim &2^{-t\left\vert \alpha \right\vert }\sum_{s=1}^{t}\left( 2^{-s}\right) ^{\kappa -\left\vert \alpha \right\vert }A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) } \label{ie} \\ &\lesssim &2^{-t\left\vert \alpha \right\vert }A_{p}^{\lambda ,\ell ^{2}, \limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }. \notag \end{eqnarray} We will also let \begin{equation*} \mathcal{P}_{F}^{t}\equiv \left\{ \left( I^{\prime },J\right) \in \mathcal{P} _{F}:\ell \left( J\right) =2^{-t}\ell \left( I^{\prime }\right) \right\} . 
\end{equation*} Once we have proved (\ref{ie}), we can complete the proof of the required estimate, \begin{eqnarray*} &&\left\vert \mathsf{B}_{\limfunc{commutator};\kappa }^{F}\left( f,g\right) \right\vert \leq \sum_{t=r}^{\infty }\left( \left\Vert \left\Vert \func{Int} _{k,\beta ,\gamma ,t}^{\lambda ,\natural }f\right\Vert _{\ell ^{2}}\right\Vert _{L^{p}\left( \omega \right) }+\left\Vert \left\Vert \func{ Int}_{k,\beta ,\gamma ,t}^{\lambda ,\flat }f\right\Vert _{\ell ^{2}}\right\Vert _{L^{p}\left( \omega \right) }\right) \left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) } \\ &\lesssim &\sum_{t=r}^{\infty }2^{-t\left\vert \alpha \right\vert }A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }\lesssim A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }\ . 
\end{eqnarray*} So it remains only to prove (\ref{ie}), and for this we start by proceeding exactly as we did for $\func{Int}_{k,\beta ,\gamma ,t}^{\lambda ,\natural }f$ , noting that in the integral defining \begin{equation*} \func{Int}^{\lambda ,\flat }\left( I^{\prime },J\right) =\sum_{k=1}^{n}\dsum\limits_{\left\vert \beta \right\vert +\left\vert \gamma \right\vert =\left\vert \alpha \right\vert -1}c_{\alpha ,\beta ,\gamma } \mathbf{1}_{I^{\prime }}\left( x\right) \left[ \int \Phi _{k}^{\lambda }\left( x-y\right) \left\{ \left( \frac{y-a}{\ell \left( I^{\prime }\right) } \right) ^{\gamma }\right\} \mathbf{1}_{2J}\left( y\right) d\sigma \left( y\right) \right] \left( \frac{x-a}{\ell \left( I^{\prime }\right) }\right) ^{\beta }, \end{equation*} the indicator $\mathbf{1}_{2J}\left( y\right) $ appears instead of the indicator $\mathbf{1}_{I^{\prime }\setminus 2J}\left( y\right) $ that appears in the integral defining $\func{Int}^{\lambda ,\natural }\left( I^{\prime },J\right) $. In particular, the above arguments show that \begin{eqnarray*} &&\left\vert \mathsf{B}_{\limfunc{commutator};\kappa }^{F,\flat }\left( f,g\right) \right\vert \leq \sum_{k,\beta ,\gamma }\left\vert \sum_{F\in \mathcal{F}}\sum_{\left( I^{\prime },J\right) \in \mathcal{P}_{F}}\frac{ \left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\func{Int}_{k,\beta ,\gamma }^{\lambda ,\flat }\left( I^{\prime },J\right) \right\vert \\ &\leq &\sum_{k,\beta ,\gamma }\left\Vert \left( \sum_{F\in \mathcal{F} }\sum_{\left( I^{\prime },J\right) \in \mathcal{P}_{F}}\left\vert \frac{ \left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\bigtriangleup _{J;\kappa }^{\omega }\left[ \left( \frac{z-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\beta }\int_{2J}\Phi _{k}^{\lambda }\left( z-y\right) \left( \frac{y-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\gamma }d\sigma \left( y\right) \right] \left( x\right) \right\vert 
^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) } \\ &&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times \left\Vert \left( \sum_{F\in \mathcal{F}}\sum_{J\in \mathcal{C} _{F}^{\tau -\limfunc{shift}}}\left\vert \bigtriangleup _{J;\kappa }^{\omega }g\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p^{\prime }}\left( \omega \right) }, \end{eqnarray*} where again, the only difference in this estimate as compared to that for $ \mathsf{B}_{\limfunc{commutator};\kappa }^{F,\natural }\left( f,g\right) $ is that the integration in $y$ with respect to $\sigma $ is now taken over $ 2J$ instead of over $I^{\prime }\setminus 2J$. We now turn to estimating the first factor \begin{equation} \left\Vert \left( \sum_{F\in \mathcal{F}}\sum_{\left( I^{\prime },J\right) \in \mathcal{P}_{F}}\left\vert \frac{\left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\bigtriangleup _{J;\kappa }^{\omega }\left[ \left( \frac{z-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\beta }\int_{2J}\Phi _{k}^{\lambda }\left( z-y\right) \left( \frac{y-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\gamma }d\sigma \left( y\right) \right] \right\vert ^{2}\right) ^{\frac{1}{2} }\right\Vert _{L^{p}\left( \omega \right) }. 
\label{first factor} \end{equation*} In the case $p=2$ in \cite{AlSaUr}, the bound \begin{equation} \left\vert \Phi _{k}^{\lambda }\left( z-y\right) \right\vert =\left\vert K^{\lambda }\left( z-y\right) \left( \frac{z_{k}-y_{k}}{\ell \left( I^{\prime }\right) }\right) \right\vert \lesssim \frac{1}{\ell \left( I^{\prime }\right) }\frac{1}{\left\vert z-y\right\vert ^{n-\lambda -1}}, \label{k bound} \end{equation} was exploited together with the scalar Muckenhoupt condition to prove \begin{equation} \int_{J}\int_{2J}\frac{d\sigma \left( y\right) d\omega \left( z\right) }{ \left\vert z-y\right\vert ^{n-\lambda -1}}\lesssim \sqrt{A_{2}^{\lambda }} \ell \left( J\right) \sqrt{\left\vert J\right\vert _{\sigma }\left\vert J\right\vert _{\omega }}. \label{double int est} \end{equation} However, invoking the scalar Muckenhoupt condition this early, before passing from an $\omega $-integration to a $\sigma $-integration, results in an upper bound that is too large. At this point a new idea is needed: we exploit (\ref{k bound}) to dominate (\ref{first factor}) by an expression to which the quadratic Muckenhoupt condition applies. For this we note that the Fefferman-Stein vector-valued inequality for the maximal function \cite{GrLiYa} shows that the mixed norm in (\ref{first factor}) is dominated by \begin{equation} \left\Vert \left( \sum_{F\in \mathcal{F}}\sum_{\left( I^{\prime },J\right) \in \mathcal{P}_{F}}\left\vert \frac{\left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\left( \frac{x-c_{J} }{\ell \left( I^{\prime }\right) }\right) ^{\beta }\int_{2J}\Phi _{k}^{\lambda }\left( x-y\right) \left( \frac{y-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\gamma }d\sigma \left( y\right) \right\vert ^{2}\mathbf{1 }_{J}\left( x\right) \right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) }, \label{first factor'} \end{equation} and now we introduce one last pigeon-holing.
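Before doing so, we record for convenience the form of the Fefferman-Stein vector-valued maximal inequality \cite{GrLiYa} being invoked, stated here for a doubling measure $\mu $, which covers both cases $\mu =\omega $ and $\mu =\sigma $ arising in this proof: for $1<p<\infty $ and any sequence of functions $\left\{ f_{j}\right\} _{j}$,
\begin{equation*}
\left\Vert \left( \sum_{j}\left( M_{\mu }f_{j}\right) ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \mu \right) }\lesssim \left\Vert \left( \sum_{j}\left\vert f_{j}\right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \mu \right) },
\end{equation*}
where $M_{\mu }$ denotes the maximal operator adapted to $\mu $.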
For every $J\in \mathcal{D}$, $x\in J$, and integer $s\geq 0$, define the dyadic subcube $J_{x}^{ \left[ s\right] }$ to be the unique cube in $\mathcal{D}$ that contains $x$, is contained in $J$, and has $\ell \left( J_{x}^{\left[ s\right] }\right) =2^{-s}\ell \left( J\right) $. Thus the sequence $\left\{ J_{x}^{\left[ s \right] }\right\} _{s=0}^{\infty }$ is a tower of dyadic cubes containing $x$, with side lengths at most $\ell \left( J\right) $. Additional geometric gain arises through this pigeon-holing in the following estimate: \begin{equation*} \frac{\left\vert J\right\vert _{\sigma }}{\ell \left( J\right) ^{n-\lambda }} \mathbf{1}_{J}\left( x\right) =\sum_{s=0}^{\infty }2^{-s\left( n-\lambda \right) }\frac{\left\vert J_{x}^{\left[ s\right] }\setminus J_{x}^{\left[ s+1 \right] }\right\vert _{\sigma }}{\ell \left( J_{x}^{\left[ s\right] }\right) ^{n-\lambda }}\lesssim \sum_{s=0}^{\infty }2^{-s\left( n-\lambda \right) } \frac{\left\vert J_{x}^{\left[ s\right] }\right\vert _{\sigma }}{\ell \left( J_{x}^{\left[ s\right] }\right) ^{n-\lambda }}\mathbf{1}_{J_{x}^{\left[ s \right] }}\left( x\right) .
\end{equation*} Now we use that $\sigma $ is doubling together with the previous estimates to write \begin{eqnarray*} &&\left\Vert \left( \sum_{F\in \mathcal{F}}\sum_{\left( I^{\prime },J\right) \in \mathcal{P}_{F}}\left\vert \frac{\left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\bigtriangleup _{J;\kappa }^{\omega }\left[ \left( \frac{z-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\beta }\int_{2J}\Phi _{k}^{\lambda }\left( z-y\right) \left( \frac{y-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\gamma }d\sigma \left( y\right) \right] \left( z\right) \right\vert ^{2}\right) ^{ \frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) } \\ &\lesssim &\sum_{t=r}^{\infty }\left\Vert \left( \sum_{F\in \mathcal{F} }\sum_{\left( I^{\prime },J\right) \in \mathcal{P}_{F}^{t}}\left\vert \frac{ \left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\left( \frac{x-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\beta }\int_{2J}\Phi _{k}^{\lambda }\left( x-y\right) \left( \frac{y-c_{J}}{\ell \left( I^{\prime }\right) }\right) ^{\gamma }d\sigma \left( y\right) \mathbf{1}_{J}\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) } \\ &\lesssim &\sum_{t=r}^{\infty }\left\Vert \left( \sum_{F\in \mathcal{F} }\sum_{\left( I^{\prime },J\right) \in \mathcal{P}_{F}^{t}}\left\vert \frac{ \left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\int_{J}\frac{1}{\left\vert x-y\right\vert ^{n-\lambda }}\frac{\ell \left( J\right) }{\ell \left( I^{\prime }\right) } d\sigma \left( y\right) \mathbf{1}_{J}\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) } \\ &\lesssim &\sum_{t=r}^{\infty }2^{-t}\sum_{s=0}^{\infty }2^{-s\left( n-\lambda \right) }\left\Vert \left( \sum_{F\in \mathcal{F}}\sum_{\left( I^{\prime },J\right) \in \mathcal{P}_{F}^{t}}\left\vert \frac{\left\vert \widehat{f}\left( 
I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\frac{\left\vert J_{x}^{\left[ s\right] }\right\vert _{\sigma }}{ \ell \left( J_{x}^{\left[ s\right] }\right) ^{n-\lambda }}\mathbf{1}_{J_{x}^{ \left[ s\right] }}\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2} }\right\Vert _{L^{p}\left( \omega \right) }. \end{eqnarray*} We finish by recalling that $I^{\prime }\in \mathfrak{C}_{\mathcal{D}}\left( I\right) $, and reorganizing the sum inside the $\omega $-norm for each fixed $t$ and $s$, to obtain \begin{eqnarray*} &&\left\Vert \left( \sum_{F\in \mathcal{F}}\sum_{\left( I^{\prime },J\right) \in \mathcal{P}_{F}^{t}}\left\vert \frac{\left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\frac{ \left\vert J_{x}^{\left[ s\right] }\right\vert _{\sigma }}{\ell \left( J_{x}^{\left[ s\right] }\right) ^{n-\lambda }}\mathbf{1}_{J_{x}^{\left[ s \right] }}\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) } \\ &=&\left\Vert \left( \sum_{K\in \mathcal{D}}\sum_{\substack{ F\in \mathcal{F} \text{ and }\left( I^{\prime },J\right) \in \mathcal{P}_{F}^{t} \\ K=J_{x}^{ \left[ s\right] }}}\left\vert \frac{\left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\frac{\left\vert K\right\vert _{\sigma }}{\ell \left( K\right) ^{n-\lambda }}\mathbf{1} _{K}\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) } \\ &\leq &A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert \left( \sum_{K\in \mathcal{D}}\sum_{\substack{ F\in \mathcal{F}\text{ and }\left( I^{\prime },J\right) \in \mathcal{P}_{F}^{t} \\ K=J_{x}^{\left[ s\right] }}}\frac{\left\vert \widehat{f}\left( I\right) \right\vert ^{2}}{\left\vert I\right\vert _{\sigma }}\mathbf{1}_{K}\left( x\right) \right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) }, \end{eqnarray*} and we now reorganize the sum again inside the 
$\sigma $-norm to bound this in turn by \begin{eqnarray*} &&A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert \left( \sum_{K\in \mathcal{D}}\sum_{\substack{ F\in \mathcal{F} \text{ and }\left( I^{\prime },J\right) \in \mathcal{P}_{F}^{t} \\ K=J_{x}^{ \left[ s\right] }}}\frac{\left\vert \widehat{f}\left( I\right) \right\vert ^{2}}{\left\vert I\right\vert _{\sigma }}\mathbf{1}_{K}\left( x\right) \right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) } \\ &=&A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert \left( \sum_{I\in \mathcal{D}}\sum_{F\in \mathcal{F}\text{ and } \left( I^{\prime },J\right) \in \mathcal{P}_{F}^{t}}\frac{\left\vert \widehat{f}\left( I\right) \right\vert ^{2}}{\left\vert I\right\vert _{\sigma }}\mathbf{1}_{J_{x}^{\left[ s\right] }}\left( x\right) \right) ^{ \frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) } \\ &\leq &A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert \left( \sum_{I\in \mathcal{D}}\frac{\left\vert \widehat{f} \left( I\right) \right\vert ^{2}}{\left\vert I\right\vert _{\sigma }}\mathbf{ 1}_{I}\left( x\right) \right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) }. 
\end{eqnarray*} Now, using (\ref{giving}), i.e. $\frac{\left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\lesssim \inf_{x\in I}M_{\sigma }\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\vert \left( x\right) $, we again obtain \begin{eqnarray*} &&A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert \left( \sum_{I\in \mathcal{D}}\left( \frac{\left\vert \widehat{f} \left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}} \right) ^{2}\mathbf{1}_{I}\left( x\right) \right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) }\lesssim A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert \left( \sum_{I\in \mathcal{ D}}\left( M_{\sigma }\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\right\vert \left( x\right) \right) ^{2}\mathbf{1}_{I}\left( x\right) \right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) } \\ &\lesssim &A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert \left( \sum_{I\in \mathcal{D}}\left\vert \bigtriangleup _{I;\kappa }^{\sigma }f\left( x\right) \right\vert ^{2} \mathbf{1}_{I}\left( x\right) \right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) }=A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert \mathcal{S}_{\limfunc{Alpert} ,\kappa }f\right\Vert _{L^{p}\left( \sigma \right) }\approx A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }, \end{eqnarray*} by the Fefferman-Stein vector-valued maximal theorem and the Alpert square function equivalence.
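We also note the elementary source of the factor $\frac{\ell \left( J\right) }{\ell \left( I^{\prime }\right) }$ appearing in the kernel estimates above: for $x\in J$ and $y\in 2J$ we have $\left\vert x-y\right\vert \lesssim \ell \left( J\right) $, so that (\ref{k bound}) yields
\begin{equation*}
\left\vert \Phi _{k}^{\lambda }\left( x-y\right) \right\vert \lesssim \frac{1}{\ell \left( I^{\prime }\right) }\frac{1}{\left\vert x-y\right\vert ^{n-\lambda -1}}=\frac{\left\vert x-y\right\vert }{\ell \left( I^{\prime }\right) }\frac{1}{\left\vert x-y\right\vert ^{n-\lambda }}\lesssim \frac{\ell \left( J\right) }{\ell \left( I^{\prime }\right) }\frac{1}{\left\vert x-y\right\vert ^{n-\lambda }},
\end{equation*}
which, together with the trivial bounds $\left\vert \frac{x-c_{J}}{\ell \left( I^{\prime }\right) }\right\vert ^{\left\vert \beta \right\vert }\lesssim 1$ and $\left\vert \frac{y-c_{J}}{\ell \left( I^{\prime }\right) }\right\vert ^{\left\vert \gamma \right\vert }\lesssim 1$, accounts for the integrand $\frac{1}{\left\vert x-y\right\vert ^{n-\lambda }}\frac{\ell \left( J\right) }{\ell \left( I^{\prime }\right) }$ above.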
Altogether we have now proved that the mixed norm in (\ref{first factor}) is dominated by \begin{eqnarray*} &&\sum_{t=r}^{\infty }2^{-t}\sum_{s=0}^{\infty }2^{-s\left( n-\lambda \right) }\left\Vert \left( \sum_{F\in \mathcal{F}}\sum_{\left( I^{\prime },J\right) \in \mathcal{P}_{F}^{t}}\left\vert \frac{\left\vert \widehat{f}\left( I\right) \right\vert }{\sqrt{\left\vert I\right\vert _{\sigma }}}\frac{\left\vert J_{x}^{\left[ s\right] }\right\vert _{\sigma }}{\ell \left( J_{x}^{\left[ s\right] }\right) ^{n-\lambda }}\mathbf{1}_{J_{x}^{\left[ s\right] }}\left( x\right) \right\vert ^{2}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) } \\ &\lesssim &\sum_{t=r}^{\infty }2^{-t}\sum_{s=0}^{\infty }2^{-s\left( n-\lambda \right) }A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\lesssim A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }, \end{eqnarray*} which, as pointed out earlier, completes the proof of the estimate \begin{equation*} \left\vert \mathsf{B}_{\limfunc{commutator};\kappa }^{F}\left( f,g\right) \right\vert \lesssim A_{p}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \sigma ,\omega \right) \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }.
\end{equation*} \subsection{Conclusion of the proof} An examination of the schematic diagram at the beginning of the section on organization of the proof, together with all the estimates proved so far, completes the proof that \begin{equation*} \left\vert \left\langle T_{\sigma }^{\lambda }f,g\right\rangle _{\omega }\right\vert \lesssim \left[ \Gamma _{T^{\lambda },p}^{\ell ^{2}}+\mathcal{ AWBP}_{T^{\lambda },p}^{\ell ^{2},\kappa ,\rho }\left( \sigma ,\omega \right) \right] \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }, \end{equation*} where the constant $\Gamma _{T^{\lambda },p}^{\ell ^{2}}$ is the sum of the scalar testing and quadratic Muckenhoupt offset conditions \begin{equation*} \Gamma _{T^{\lambda },p}^{\ell ^{2}}\equiv \mathfrak{T}_{T^{\lambda },p}\left( \sigma ,\omega \right) +\mathfrak{T}_{T^{\lambda ,\ast },p^{\prime }}\left( \omega ,\sigma \right) +A_{p}^{\lambda ,\ell ^{2}, \limfunc{offset}}\left( \sigma ,\omega \right) +A_{p^{\prime }}^{\lambda ,\ell ^{2},\limfunc{offset}}\left( \omega ,\sigma \right) . 
\end{equation*} Now we invoke Lemma \ref{stronger} to obtain that for all $0<\varepsilon <1$, there is a constant $C_{\varepsilon }$ such that \begin{equation*} \left\vert \left\langle T_{\sigma }^{\lambda }f,g\right\rangle _{\omega }\right\vert \lesssim \left\{ C_{\varepsilon }\left[ \Gamma _{T^{\lambda },p}^{\ell ^{2}}+\mathcal{WBP}_{T^{\lambda },p}^{\ell ^{2}}\left( \sigma ,\omega \right) \right] +\varepsilon \mathfrak{N}_{T^{\lambda },p}\left( \sigma ,\omega \right) \right\} \left\Vert f\right\Vert _{L^{p}\left( \sigma \right) }\left\Vert g\right\Vert _{L^{p^{\prime }}\left( \omega \right) }, \end{equation*} from which we conclude that \begin{equation*} \mathfrak{N}_{T^{\lambda },p}\left( \sigma ,\omega \right) \lesssim C_{\varepsilon }\left[ \Gamma _{T^{\lambda },p}^{\ell ^{2}}+\mathcal{WBP}_{T^{\lambda },p}^{\ell ^{2}}\left( \sigma ,\omega \right) \right] +\varepsilon \mathfrak{N}_{T^{\lambda },p}\left( \sigma ,\omega \right) . \end{equation*} At this point, a standard argument using the definition of the two weight norm inequality (\ref{two weight'}), for which see e.g. \cite[Section 6]{AlSaUr}, shows that for any smooth truncation of $T^{\lambda }$, we can absorb the term $\varepsilon \mathfrak{N}_{T^{\lambda },p}\left( \sigma ,\omega \right) $ into the left hand side and obtain (\ref{main inequ}), \begin{equation*} \mathfrak{N}_{T^{\lambda },p}\left( \sigma ,\omega \right) \lesssim \Gamma _{T^{\lambda },p}^{\ell ^{2}}+\mathcal{WBP}_{T^{\lambda },p}^{\ell ^{2}}\left( \sigma ,\omega \right) . \end{equation*} This completes the proof of Theorem \ref{main}.
\section{Appendix} Regarding the quadratic Muckenhoupt condition in the case $p=2$, we clearly have \begin{equation*} A_{2}^{\lambda ,\ell ^{2}}\left( \sigma ,\omega \right) +A_{2}^{\lambda ,\ell ^{2}}\left( \omega ,\sigma \right) \leq A_{2}^{\lambda }\left( \sigma ,\omega \right) , \end{equation*} for any pair of locally finite positive Borel measures. However, this fails when $\lambda =0$ and $1<p<\infty $ with $p\neq 2$, as we now show. Let $1<p<\infty $, $0<\alpha \leq 1$ and define \begin{equation*} f\left( x\right) \equiv \frac{1}{x\left( \ln \frac{1}{x}\right) ^{1+\alpha }}\mathbf{1}_{\left( 0,\frac{1}{2}\right) }\left( x\right) , \end{equation*} and note that \begin{equation*} Mf\left( x\right) \mathbf{1}_{\left( 0,\frac{1}{2}\right) }\left( x\right) \approx \frac{1}{x\left( \ln \frac{1}{x}\right) ^{\alpha }}\mathbf{1}_{\left( 0,\frac{1}{2}\right) }\left( x\right) . \end{equation*} Then define \begin{eqnarray*} v\left( x\right) &\equiv &f\left( x\right) ^{1-p}=\left[ x\left( \ln \frac{1}{x}\right) ^{1+\alpha }\right] ^{p-1}\mathbf{1}_{\left( 0,\frac{1}{2}\right) }\left( x\right) , \\ w\left( x\right) &\equiv &Mf\left( x\right) ^{1-p}\approx \left[ x\left( \ln \frac{1}{x}\right) ^{\alpha }\right] ^{p-1}\mathbf{1}_{\left( 0,\frac{1}{2}\right) }\left( x\right) , \end{eqnarray*} so that \begin{eqnarray*} \int_{0}^{\frac{1}{2}}\left\vert f\left( x\right) \right\vert ^{p}v\left( x\right) dx &=&\int_{0}^{\frac{1}{2}}f\left( x\right) dx=\int_{0}^{\frac{1}{2}}\frac{1}{x\left( \ln \frac{1}{x}\right) ^{1+\alpha }}dx<\infty , \\ \int_{0}^{\frac{1}{2}}\left\vert Mf\left( x\right) \right\vert ^{p}w\left( x\right) dx &=&\int_{0}^{\frac{1}{2}}Mf\left( x\right) dx\approx \int_{0}^{\frac{1}{2}}\frac{1}{x\left( \ln \frac{1}{x}\right) ^{\alpha }}dx=\infty .
\end{eqnarray*} On the other hand, using $\left( p-1\right) \left( 1-p^{\prime }\right) =-1$ we have for $0<r<\frac{1}{2}$, \begin{eqnarray*} &&\left( \frac{1}{r}\int_{0}^{r}w\left( x\right) dx\right) \left( \frac{1}{r} \int_{0}^{r}v\left( x\right) ^{1-p^{\prime }}dx\right) ^{p-1} \\ &=&\left( \frac{1}{r}\int_{0}^{r}\left[ x\left( \ln \frac{1}{x}\right) ^{\alpha }\right] ^{p-1}dx\right) \left( \frac{1}{r}\int_{0}^{r}\frac{1}{ x\left( \ln \frac{1}{x}\right) ^{1+\alpha }}dx\right) ^{p-1} \\ &\approx &\left( \frac{1}{r}r^{p}\left( \ln \frac{1}{r}\right) ^{\alpha \left( p-1\right) }\right) \left( \frac{1}{r}\left( \ln \frac{1}{r}\right) ^{-\alpha }\right) ^{p-1}=1, \end{eqnarray*} and it follows easily that \begin{equation*} \sup_{I\subset \left( 0,\frac{1}{2}\right) }\left( \frac{1}{\left\vert I\right\vert }\int_{I}w\left( x\right) dx\right) \left( \frac{1}{\left\vert I\right\vert }\int_{I}v\left( x\right) ^{1-p^{\prime }}dx\right) ^{p-1}<\infty . \end{equation*} Thus if we set \begin{eqnarray*} d\omega _{p,\alpha }\left( x\right) &\equiv &\left[ x\left( \ln \frac{1}{x} \right) ^{\alpha }\right] ^{p-1}\mathbf{1}_{\left( 0,\frac{1}{2}\right) }\left( x\right) dx, \\ d\sigma _{p,\alpha }\left( x\right) &\equiv &\frac{1}{x\left( \ln \frac{1}{x} \right) ^{1+\alpha }}\mathbf{1}_{\left( 0,\frac{1}{2}\right) }\left( x\right) dx, \end{eqnarray*} then we have both \textbf{finiteness} of the Muckenhoupt constant $A_{p}^{ \func{local}}\left( \sigma ,\omega \right) $ localized to $\left( 0,\frac{1}{ 2}\right) $, and \textbf{failure} of the norm inequality \begin{equation*} \int_{\mathbb{R}}\left\vert M\left( f\sigma \right) \left( x\right) \right\vert ^{p}d\omega \left( x\right) \lesssim \int_{\mathbb{R}}\left\vert f\left( x\right) \right\vert ^{p}d\sigma \left( x\right) . 
\end{equation*} Now we investigate the local \emph{quadratic} Muckenhoupt constant \begin{equation*} A_{p}^{\ell ^{2},\func{local}}\left( \sigma ,\omega \right) +A_{p^{\prime }}^{\ell ^{2},\func{local}}\left( \omega ,\sigma \right) \end{equation*} when $\lambda =0$, i.e. where \begin{eqnarray*} \left\Vert \left( \sum_{k=1}^{\infty }\left\vert a_{k}\frac{\left\vert I_{k}\right\vert _{\sigma }}{\left\vert I_{k}\right\vert }\right\vert ^{2}\mathbf{1}_{I_{k}}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) } &\leq &A_{p}^{\ell ^{2},\func{local}}\left( \sigma ,\omega \right) \left\Vert \left( \sum_{k=1}^{\infty }\left\vert a_{k}\right\vert ^{2}\mathbf{1}_{I_{k}}\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) }, \\ \left\Vert \left( \sum_{k=1}^{\infty }\left\vert a_{k}\frac{\left\vert I_{k}\right\vert _{\omega }}{\left\vert I_{k}\right\vert }\right\vert ^{2}\mathbf{1}_{I_{k}}\right) ^{\frac{1}{2}}\right\Vert _{L^{p^{\prime }}\left( \sigma \right) } &\leq &A_{p^{\prime }}^{\ell ^{2},\func{local}}\left( \omega ,\sigma \right) \left\Vert \left( \sum_{k=1}^{\infty }\left\vert a_{k}\right\vert ^{2}\mathbf{1}_{I_{k}}\right) ^{\frac{1}{2}}\right\Vert _{L^{p^{\prime }}\left( \omega \right) }, \end{eqnarray*} for all sequences $\left\{ I_{k}\right\} _{k=1}^{\infty }$ of intervals contained in $\left( 0,\frac{1}{2}\right) $ and all sequences $\left\{ a_{k}\right\} _{k=1}^{\infty }$ of numbers. We have \begin{eqnarray*} \left\vert \left[ 0,r\right] \right\vert _{\sigma } &=&\int_{0}^{r}\frac{1}{x\left( \ln \frac{1}{x}\right) ^{1+\alpha }}dx\approx \frac{1}{\left( \ln \frac{1}{r}\right) ^{\alpha }}, \\ \left\vert \left[ 0,r\right] \right\vert _{\omega } &=&\int_{0}^{r}\left[ x\left( \ln \frac{1}{x}\right) ^{\alpha }\right] ^{p-1}dx\approx r^{p}\left( \ln \frac{1}{r}\right) ^{\alpha \left( p-1\right) }.
\end{eqnarray*} Thus if we take $I_{k}=\left( 0,2^{-k}\right) $, the inequality becomes \begin{equation*} \left\Vert \left( \sum_{k=1}^{\infty }\left\vert a_{k}2^{k}\frac{1}{ k^{\alpha }}\right\vert ^{2}\mathbf{1}_{\left( 0,2^{-k}\right) }\right) ^{ \frac{1}{2}}\right\Vert _{L^{p}\left( \omega \right) }\leq A_{p}^{\ell ^{2}, \func{local}}\left( \sigma ,\omega \right) \left\Vert \left( \sum_{k=1}^{\infty }\left\vert a_{k}\right\vert ^{2}\mathbf{1}_{\left( 0,2^{-k}\right) }\right) ^{\frac{1}{2}}\right\Vert _{L^{p}\left( \sigma \right) }. \end{equation*} Now the $p^{th}$ power of the right hand side is \begin{eqnarray*} &&\int_{0}^{\frac{1}{2}}\left( \sum_{k=1}^{\infty }\left\vert a_{k}\right\vert ^{2}\mathbf{1}_{\left( 0,2^{-k}\right) }\left( x\right) \right) ^{\frac{p}{2}}\frac{1}{x\left( \ln \frac{1}{x}\right) ^{1+\alpha }} dx=\sum_{k=1}^{\infty }\int_{2^{-k-1}}^{2^{-k}}\left( \sum_{\ell =1}^{k}\left\vert a_{\ell }\right\vert ^{2}\right) ^{\frac{p}{2}}\frac{1}{ x\left( \ln \frac{1}{x}\right) ^{1+\alpha }}dx \\ &\approx &\sum_{k=1}^{\infty }\left( \sum_{\ell =1}^{k}\left\vert a_{\ell }\right\vert ^{2}\right) ^{\frac{p}{2}}\left( \frac{1}{k^{\alpha }}-\frac{1}{ \left( k+1\right) ^{\alpha }}\right) \approx \sum_{k=1}^{\infty }\left( \sum_{\ell =1}^{k}\left\vert a_{\ell }\right\vert ^{2}\right) ^{\frac{p}{2}} \frac{1}{k^{1+\alpha }}, \end{eqnarray*} and the $p^{th}$ power of the left hand side is \begin{eqnarray*} &&\int_{0}^{\frac{1}{2}}\left( \sum_{k=1}^{\infty }\left\vert a_{k}2^{k} \frac{1}{k^{\alpha }}\right\vert ^{2}\mathbf{1}_{\left( 0,2^{-k}\right) }\left( x\right) \right) ^{\frac{p}{2}}\left[ x\left( \ln \frac{1}{x}\right) ^{\alpha }\right] ^{p-1}dx \\ &=&\sum_{k=1}^{\infty }\int_{2^{-k-1}}^{2^{-k}}\left( \sum_{\ell =1}^{k}\left\vert a_{\ell }2^{\ell }\frac{1}{\ell ^{\alpha }}\right\vert ^{2}\right) ^{\frac{p}{2}}\left[ x\left( \ln \frac{1}{x}\right) ^{\alpha } \right] ^{p-1}dx\approx \sum_{k=1}^{\infty }\left( \sum_{\ell =1}^{k}\left\vert a_{\ell }2^{\ell 
}\frac{1}{\ell ^{\alpha }}\right\vert ^{2}\right) ^{\frac{p}{2}}2^{-kp}k^{\alpha \left( p-1\right) }. \end{eqnarray*} Thus the right hand side will be finite if \begin{equation*} a_{\ell }=\ell ^{\eta },\ \ \ \ \ \text{where }2\eta +1=\left( \alpha -\varepsilon \right) \frac{2}{p}>0, \end{equation*} since then \begin{equation*} \sum_{\ell =1}^{k}\left\vert a_{\ell }\right\vert ^{2}=\sum_{\ell =1}^{k}\ell ^{2\eta }\approx k^{2\eta +1}=k^{\left( \alpha -\varepsilon \right) \frac{2}{p}}\text{ and hence }\left( \sum_{\ell =1}^{k}\left\vert a_{\ell }\right\vert ^{2}\right) ^{\frac{p}{2}}=\frac{k^{1+\alpha }}{ k^{1+\varepsilon }}, \end{equation*} and so \begin{equation*} \sum_{k=1}^{\infty }\left( \sum_{\ell =1}^{k}\left\vert a_{\ell }\right\vert ^{2}\right) ^{\frac{p}{2}}\frac{1}{k^{1+\alpha }}=\sum_{k=1}^{\infty }\frac{1 }{k^{1+\varepsilon }}<\infty . \end{equation*} On the other hand, with this choice of $a_{\ell }$, the $p^{th}$ power of the left hand side is \begin{eqnarray*} &&\sum_{k=1}^{\infty }\left( \sum_{\ell =1}^{k}\left\vert a_{\ell }2^{\ell } \frac{1}{\ell ^{\alpha }}\right\vert ^{2}\right) ^{\frac{p}{2} }2^{-kp}k^{\alpha \left( p-1\right) }=\sum_{k=1}^{\infty }\left( \sum_{\ell =1}^{k}\left\vert 2^{\ell }\ell ^{\eta -\alpha }\right\vert ^{2}\right) ^{ \frac{p}{2}}2^{-kp}k^{\alpha \left( p-1\right) } \\ &\approx &\sum_{k=1}^{\infty }\left( \left\vert 2^{k}k^{\eta -\alpha }\right\vert ^{2}\right) ^{\frac{p}{2}}2^{-kp}k^{\alpha \left( p-1\right) }=\sum_{k=1}^{\infty }2^{kp}k^{\left( \eta -\alpha \right) p}2^{-kp}k^{\alpha \left( p-1\right) }=\sum_{k=1}^{\infty }k^{\eta p-\alpha p}k^{\alpha p-\alpha }=\sum_{k=1}^{\infty }k^{\eta p-\alpha }, \end{eqnarray*} which will be infinite if $\eta p-\alpha >-1$, and since $2\eta +1=\left( \alpha -\varepsilon \right) \frac{2}{p}$, this will be the case provided \begin{eqnarray*} -1 &<&\eta p-\alpha =\left[ \frac{\left( \alpha -\varepsilon \right) \frac{2 }{p}-1}{2}\right] p-\alpha =\left( \alpha -\varepsilon \right) 
-\frac{p}{2} -\alpha =-\varepsilon -\frac{p}{2}, \\ \text{i.e. }0 &<&\varepsilon <\frac{2-p}{2}. \end{eqnarray*} Thus we have a counterexample to the implication $A_{p}^{\func{local}}\left( \sigma ,\omega \right) \Longrightarrow A_{p}^{\ell ^{2},\func{local}}\left( \sigma ,\omega \right) +A_{p^{\prime }}^{\ell ^{2},\func{local}}\left( \omega ,\sigma \right) $ when $1<p<2$, provided we choose $\left( \sigma ,\omega \right) =\left( \sigma _{p,\alpha },\omega _{p,\alpha }\right) $ with $0<\alpha \leq 1$. \begin{proposition} Let $p\in \left( 1,\infty \right) \setminus \left\{ 2\right\} $. There is a weight pair $\left( \sigma ,\omega \right) $ such that \begin{eqnarray*} A_{p}^{\func{local}}\left( \sigma ,\omega \right) &<&\infty , \\ A_{p}^{\ell ^{2},\func{local}}\left( \sigma ,\omega \right) +A_{p^{\prime }}^{\ell ^{2},\func{local}}\left( \omega ,\sigma \right) &=&\infty . \end{eqnarray*} \end{proposition} \begin{proof} Let $\left( \sigma _{p,\alpha },\omega _{p,\alpha }\right) $ be the weight pair constructed above. If $1<p<2$, we can take $\left( \sigma ,\omega \right) =\left( \sigma _{p,1},\omega _{p,1}\right) $. If $2<p<\infty $, then $1<p^{\prime }<2$ and we can take $\left( \sigma ,\omega \right) =\left( \omega _{p^{\prime },1},\sigma _{p^{\prime },1}\right) $. \end{proof} \begin{remark} If we take $0<\alpha \leq 1$, then the two weight norm inequality for the maximal function fails with weights $\sigma _{p^{\prime },\alpha }$ and $ \omega _{p^{\prime },\alpha }$. \end{remark} \end{document}
April: 1x 01 May: 2x 02 May: 2x 03 May: 2x 04 May: 2x 05 May: 2x 06 May: 2x 07 May: 2x 08 May: 2x 09 May: 2x 10 May: 2x 11 May: 2x 12 May: 2x 13 May: 2x 14 May: 2x 15 May: 2x 16 May: 2x 17 May: 2x 18 May: 2x 19 May: 2x 20 May: 2x 21 May: 2x 22 May: 2x 23 May: 2x 24 May: 2x 25 May: 2x 26 May: 2x 27 May: 2x 28 May: 2x 29 May: 2x 30 May: 2x 1 June: 2x 2 June: 2x 3 June: 2x 4 June: 2x 5 June: 1x 6 June: 2x 7 June: 2x 8 June: 2x 9 June: 2x 10 June: 2x 11 June: 2x 12 June: 2x 13 June: 2x 14 June: 2x 15 June: 2x 16 June: 2x 17 June: 2x 18 June: 2x 19 June: 2x 20 June: 2x 22 June: 2x 21 June: 2x 02 July: 2x 03 July: 2x 04 July: 2x 05 July: 2x 06 July: 2x 07 July: 2x 08 July: 2x 09 July: 2x 10 July: 2x 11 July: 2x 12 July: 2x 13 July: 2x 14 July: 2x 15 July: 2x 16 July: 2x 17 July: 2x 18 July: 2x 19 July: 2x 20 July: 2x 21 July: 2x 22 July: 2x 23 July: 2x 24 July: 2x 25 July: 2x 26 July: 2x 27 July: 2x 28 July: 2x 29 July: 2x 30 July: 2x 31 July: 2x 01 August: 2x 02 August: 2x 03 August: 2x 04 August: 2x 05 August: 2x 06 August: 2x 07 August: 2x 08 August: 2x 09 August: 2x 10 August: 2x 11 August: 2x 12 August: 2x 13 August: 2x 14 August: 2x 15 August: 2x 16 August: 2x 17 August: 2x 18 August: 2x 19 August: 2x 20 August: 2x 21 August: 2x 22 August: 2x 23 August: 2x 24 August: 2x 25 August: 2x 26 August: 1x 27 August: 2x 28 August: 2x 29 August: 2x 30 August: 2x 31 August: 2x 01 September: 2x 02 September: 2x 03 September: 2x 04 September: 2x 05 September: 2x 06 September: 2x 07 September: 2x 08 September: 2x 09 September: 2x 10 September: 2x 11 September: 2x 12 September: 2x 13 September: 2x 14 September: 2x 15 September: 2x 16 September: 2x 17 September: 2x 18 September: 2x 19 September: 2x 20 September: 2x 21 September: 2x 22 September: 2x 23 September: 2x 24 September: 2x 25 September: 2x 26 September: 2x 27 September: 2x 28 September: 2x 29 September: 2x 30 September: 2x October 01 October: 2x 02 October: 2x 03 October: 2x 04 October: 2x 05 October: 2x 06 October: 2x 07 
October: 2x 08 October: 2x 09 October: 2x 10 October: 2x 11 October: 2x 12 October: 2x 13 October: 2x 14 October: 2x 15 October: 2x 16 October: 2x 17 October: 2x 18 October: 2x 20 October: 2x 21 October: 2x 22 October: 2x 23 October: 2x 24 October: 2x 25 October: 2x 26 October: 2x 27 October: 2x 28 October: 2x 29 October: 2x 30 October: 2x 31 October: 2x 01 November: 2x 02 November: 2x 03 November: 2x 04 November: 2x 05 November: 2x 06 November: 2x 07 November: 2x 08 November: 2x 09 November: 2x 10 November: 2x 11 November: 2x 12 November: 2x 13 November: 2x 14 November: 2x 15 November: 2x 16 November: 2x 17 November: 2x 18 November: 2x 19 November: 2x 20 November: 2x 21 November: 2x 22 November: 2x 23 November: 2x 24 November: 2x 25 November: 2x 26 November: 2x 27 November: 2x 28 November: 2x 29 November: 2x 30 November: 2x 01 December: 2x 02 December: 2x 03 December: 2x 04 December: 2x 05 December: 2x 06 December: 2x 07 December: 2x 08 December: 2x 09 December: 2x 10 December: 2x 11 December: 2x 12 December: 2x 13 December: 2x 14 December: 2x 15 December: 2x 16 December: 2x 17 December: 2x 18 December: 2x 19 December: 2x 20 December: 2x 21 December: 2x 22 December: 2x 23 December: 2x 24 December: 2x 25 December: 2x ran out, last day: 25 December 2017 –> Terms and Conditions: The content and products found at feedabrain.com, adventuresinbraininjury.com, the Adventures in Brain Injury Podcast, or provided by Cavin Balaster or others on the Feed a Brain team is intended for informational purposes only and is not provided by medical professionals. The information on this website has not been evaluated by the food & drug administration or any other medical body. We do not aim to diagnose, treat, cure or prevent any illness or disease. Information is shared for educational purposes only. 
Despite some positive findings, a lot of studies find no effects of enhancers in healthy subjects. For instance, although some studies suggest moderate enhancing effects in well-rested subjects, modafinil mostly shows enhancing effects in cases of sleep deprivation. A recent study by Martha Farah and colleagues found that Adderall (mixed amphetamine salts) had only small effects on cognition but users believed that their performance was enhanced when compared to placebo. Manually mixing powders is too annoying, and pre-mixed pills are expensive in bulk. So if I'm not actively experimenting with something, and not yet rich, the best thing is to make my own pills, and if I'm making my own pills, I might as well make a custom formulation using the ones I've found personally effective. And since making pills is tedious, I want to not have to do it again for years. 3 years seems like a good interval - 1095 days. Since one is often busy and mayn't take that day's pills (there are enough ingredients it has to be multiple pills), it's safe to round it down to a nice even 1000 days. What sort of hypothetical stack could I make? What do the prices come out to be, and what might we omit in the interests of protecting our pocketbook? "Who doesn't want to maximize their cognitive ability? Who doesn't want to maximize their muscle mass?" asks Murali Doraiswamy, who has led several trials of cognitive enhancers at Duke University Health System and has been an adviser to pharmaceutical and supplement manufacturers as well as the Food and Drug Administration.
He attributes the demand to an increasingly knowledge-based society that values mental quickness and agility above all else. Looking at the prices, the overwhelming expense is for modafinil. It's a powerful stimulant - possibly the single most effective ingredient in the list - but dang expensive. Worse, there's anecdotal evidence that one can develop tolerance to modafinil, so we might be wasting a great deal of money on it. (And for me, modafinil isn't even very useful in the daytime: I can't even notice it.) If we drop it, the cost drops by a full $800 from $1761 to $961 (almost halving) and to $0.96 per day. A remarkable difference, and if one were genetically insensitive to modafinil, one would definitely want to remove it. I stayed up late writing some poems and about how [email protected] kills, and decided to make a night of it. I took the armodafinil at 1 AM; the interesting bit is that this was the morning/evening after what turned out to be an Adderall (as opposed to placebo) trial, so perhaps I will see how well or ill they go together. A set of normal scores from a previous day was 32%/43%/51%/48%. At 11 PM, I scored 39% on DNB; at 1 AM, I scored 50%/43%; 5:15 AM, 39%/37%; 4:10 PM, 42%/40%; 11 PM, 55%/21%/38%. (▂▄▆▅ vs ▃▅▄▃▃▄▃▇▁▃) Noopept shows a much greater affinity for certain receptor sites in the brain than racetams, allowing doses as small as 10-30mg to provide increased focus, improved logical thinking function, enhanced short and long-term memory functions, and increased learning ability including improved recall. In addition, users have reported a subtle psychostimulatory effect. During the 1920s, Amphetamine was being researched as an asthma medication when its cognitive benefits were accidentally discovered. In many years that followed, this enhancer was exploited in a number of medical and nonmedical applications, for instance, to enhance alertness in military personnel, treat depression, improve athletic performance, etc. 
The above are all reasons to expect that even if I do excellent single-subject design self-experiments, there will still be the old problem of internal validity versus external validity: an experiment may be wrong or erroneous or unlucky in some way (lack of internal validity) or be right but not matter to anyone else (lack of external validity). For example, alcohol makes me sad & depressed; I could run the perfect blind randomized experiment for hundreds of trials and be extremely sure that alcohol makes me less happy, but would that prove that alcohol makes everyone sad or unhappy? Of course not, and as far as I know, for a lot of people alcohol has the opposite effect. So my hypothetical alcohol experiment might have tremendous internal validity (it does prove that I am sadder after inebriating), and zero external validity (someone who has never tried alcohol learns nothing about whether they will be depressed after imbibing). Keep this in mind if you are minded to take the experiments too seriously. Power-wise, the effects of testosterone are generally reported to be strong and unmistakable. Even a short experiment should work. I would want to measure DNB scores & Mnemosyne review averages as usual, to verify no gross mental deficits; the important measures would be physical activity, so either pedometer or miles on treadmill, and general productivity/mood. The former 2 variables should remain the same or increase, and the latter 2 should increase. Phenserine, as well as the drugs Aricept and Exelon, which are already on the market, work by increasing the level of acetylcholine, a neurotransmitter that is deficient in people with the disease. A neurotransmitter is a chemical that allows communication between nerve cells in the brain. In people with Alzheimer's disease, many brain cells have died, so the hope is to get the most out of those that remain by flooding the brain with acetylcholine. Each nootropic comes with a recommended amount to take. 
This is almost always based on a healthy adult male with an average weight and 'normal' metabolism. Nootropics (and many other drugs) are almost exclusively tested on healthy men. If you are a woman, older, smaller or in any other way not the 'average' man, always take into account that the quantity could be different for you. Modafinil, sold under the name Provigil, is a stimulant that some have dubbed the "genius pill." It is a wakefulness-promoting agent, often discussed alongside glutamate activators (ampakines). Originally developed as a treatment for narcolepsy and other sleep disorders, it is now prescribed "off-label" by physicians to cellists, judges, airline pilots, and scientists to enhance attention, memory and learning. According to Scientific American, "scientific efforts over the past century [to boost intelligence] have revealed a few promising chemicals, but only modafinil has passed rigorous tests of cognitive enhancement." A stimulant, it is a controlled substance with limited availability in the U.S. At dose #9, I've decided to give up on kratom. It is possible that it is helping me in some way that careful testing (e.g. dual n-back over weeks) would reveal, but I don't have a strong belief that kratom would help me (I seem to benefit more from stimulants, and I'm not clear on how an opiate-bearer like kratom could stimulate me). So I have no reason to do careful testing. Oh well. The goal of this article has been to synthesize what is known about the use of prescription stimulants for cognitive enhancement and what is known about the cognitive effects of these drugs. We have eschewed discussion of ethical issues in favor of simply trying to get the facts straight. Although ethical issues cannot be decided on the basis of facts alone, neither can they be decided without relevant facts.
Personal and societal values will dictate whether success through sheer effort is as good as success with pharmacologic help, whether the freedom to alter one's own brain chemistry is more important than the right to compete on a level playing field at school and work, and how much risk of dependence is too much risk. Yet these positions cannot be translated into ethical decisions in the real world without considerable empirical knowledge. Do the drugs actually improve cognition? Under what circumstances and for whom? Who will be using them and for what purposes? What are the mental and physical health risks for frequent cognitive-enhancement users? For occasional users? That said, there are plenty of studies out there that point to its benefits. One study, published in the British Journal of Pharmacology, suggests brain function in elderly patients can be greatly improved after regular dosing with Piracetam. Another study, published in the journal Psychopharmacology, found that Piracetam improved memory in most adult volunteers. And another, published in the Journal of Clinical Psychopharmacology, suggests it can help students, especially dyslexic students, improve their nonverbal learning skills, like reading ability and reading comprehension. Basically, researchers know it has an effect, but they don't know what or how, and pinning it down requires additional research. …The Fate of Nicotine in the Body also describes Battelle's animal work on nicotine absorption. Using C14-labeled nicotine in rabbits, the Battelle scientists compared gastric absorption with pulmonary absorption. Gastric absorption was slow, and first pass removal of nicotine by the liver (which transforms nicotine into inactive metabolites) was demonstrated following gastric administration, with consequently low systemic nicotine levels. In contrast, absorption from the lungs was rapid and led to widespread distribution. 
These results show that nicotine absorbed from the stomach is largely metabolized by the liver before it has a chance to get to the brain. That is why tobacco products have to be puffed, smoked or sucked on, or absorbed directly into the bloodstream (i.e., via a nicotine patch). A nicotine pill would not work because the nicotine would be inactivated before it reached the brain. Two studies investigated the effects of MPH on reversal learning in simple two-choice tasks (Clatworthy et al., 2009; Dodds et al., 2008). In these tasks, participants begin by choosing one of two stimuli and, after repeated trials with these stimuli, learn that one is usually rewarded and the other is usually not. The rewarded and nonrewarded stimuli are then reversed, and participants must then learn to choose the new rewarded stimulus. Although each of these studies found functional neuroimaging correlates of the effects of MPH on task-related brain activity (increased blood oxygenation level-dependent signal in frontal and striatal regions associated with task performance found by Dodds et al., 2008, using fMRI and increased dopamine release in the striatum as measured by increased raclopride displacement by Clatworthy et al., 2009, using PET), neither found reliable effects on behavioral performance in these tasks. The one significant result concerning purely behavioral measures was Clatworthy et al.'s (2009) finding that participants who scored higher on a self-report personality measure of impulsivity showed more performance enhancement with MPH. MPH's effect on performance in individuals was also related to its effects on individuals' dopamine activity in specific regions of the caudate nucleus. Before taking any supplement or chemical, people want to know if there will be long-term effects or consequences. When Dr. Corneliu Giurgea first coined the term "nootropics" in 1972, he also outlined the characteristics that define nootropics.
Besides the ability to benefit memory and support the cognitive processes, Dr. Giurgea believed that nootropics should be safe and non-toxic.
Louis Shapiro (mathematician)

Louis Welles Shapiro (born 1941)[1] is an American mathematician working in the fields of combinatorics and finite group theory. He is an emeritus professor at Howard University.[2] Shapiro attended Harvard University for his undergraduate studies and then the University of Maryland, College Park, for graduate school.[3] His doctoral students include Naiomi Cameron. Shapiro is best known for creating the Riordan array, named after mathematician John Riordan,[4] and for developing the theory around it. He has been an organizer of and speaker at the International Conference on Riordan Arrays and Related Topics,[5][6] which has been held annually since 2014.

References
1. "Shapiro, Louis W." Virtual International Authority File.
2. "People | Howard University Department of Mathematics". mathematics.howard.edu.
3. "Louis Welles Shapiro". Mathematics Genealogy Project.
4. Louis W. Shapiro, Seyoum Getu, Wen-Jin Woan, Leon C. Woodson, "The Riordan group", Discrete Applied Mathematics, Volume 34, Issues 1–3, 1991, pages 229–239. https://doi.org/10.1016/0166-218X(91)90088-E
5. "6th International Conference on Riordan Arrays and Related Topics".
6. "5th International Conference on Riordan Arrays and Related Topics".
Large-scale distributed L-BFGS

Maryam M. Najafabadi, Taghi M. Khoshgoftaar, Flavio Villanustre & John Holt

Abstract
With the increasing demand for examining and extracting patterns from massive amounts of data, it is critical to be able to train large models to fulfill the needs that recent advances in the machine learning area create. L-BFGS (Limited-memory Broyden Fletcher Goldfarb Shanno) is a numeric optimization method that has been effectively used for parameter estimation to train various machine learning models. As the number of parameters increases, implementing this algorithm on a single machine can be insufficient, due to the limited number of computational resources available. In this paper, we present a parallelized implementation of the L-BFGS algorithm on a distributed system which includes a cluster of commodity computing machines. We use the open source HPCC Systems (High-Performance Computing Cluster) platform as the underlying distributed system to implement the L-BFGS algorithm. We initially provide an overview of the HPCC Systems framework and how it allows for the parallel and distributed computations important for Big Data analytics and, subsequently, we explain our implementation of the L-BFGS algorithm on this platform. Our experimental results show that our large-scale implementation of the L-BFGS algorithm can easily scale from training models with millions of parameters to models with billions of parameters by simply increasing the number of commodity computational nodes.

Introduction
A wide range of machine learning algorithms use optimization methods to train the model parameters [1]. In these algorithms, the training phase is formulated as an optimization problem. An objective function, created based on the parameters, needs to be optimized to train the model. An optimization method finds parameter values which minimize the objective function.
New advances in the machine learning area, such as deep learning [2], have made the interplay between optimization methods and machine learning one of the most important aspects of advanced computational science. Optimization methods are proving to be vital in order to train models which are able to extract information and patterns from huge volumes of data. With the recent interest in Big Data analytics, it is critical to be able to scale machine learning techniques to train large-scale models [3]. In addition, recent breakthroughs in representation learning and deep learning show that large models dramatically improve performance [4]. As the number of model parameters increases, classic implementations of optimization methods on one single machine are no longer feasible. Many applications require solving optimization problems with a large number of parameters. Problems of this scale are very common in the Big Data era [5,6,7]. Therefore, it is important to study the problem of large-scale optimization on distributed systems. One of the optimization methods extensively employed in machine learning is stochastic gradient descent (SGD) [8, 9]. SGD is simple to implement and it works fast when the number of training instances is high, as SGD does not use the whole training data in each iteration. However, SGD has its drawbacks: hyperparameters such as the learning rate or the convergence criteria need to be tuned manually. If one is not familiar with the application at hand, it can be very difficult to determine a good learning rate or convergence criteria. A standard approach is to train the model with different hyperparameters and test them on a validation dataset. The hyperparameters which give the best performance results on the validation dataset are picked. Considering that the search space for SGD hyperparameters can be large, this approach can be computationally expensive and time consuming, especially on large-scale optimizations.
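To make the tuning burden concrete, here is a minimal mini-batch SGD sketch on a toy least-squares problem. This is our own illustration, not code from the paper; the function names, data, and the learning rate `lr` (the hand-tuned hyperparameter discussed above) are all assumptions.

```python
import numpy as np

def sgd(grad, w0, data, lr=0.01, epochs=10, batch_size=32, seed=0):
    """Minimal mini-batch SGD: `grad(w, batch)` returns the gradient of
    the loss on one batch; `lr` is a fixed, hand-tuned step size."""
    rng = np.random.default_rng(seed)
    w = w0.copy()
    n = len(data)
    for _ in range(epochs):
        idx = rng.permutation(n)               # reshuffle each epoch
        for start in range(0, n, batch_size):
            batch = data[idx[start:start + batch_size]]
            w -= lr * grad(w, batch)           # too large diverges,
    return w                                   # too small crawls

# toy least-squares problem: recover w_true from noiseless samples
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
data = np.hstack([X, y[:, None]])              # each row: features + target
g = lambda w, b: 2.0 * b[:, :3].T @ (b[:, :3] @ w - b[:, 3]) / len(b)
w_hat = sgd(g, np.zeros(3), data, lr=0.05, epochs=50)
```

With `lr=0.05` the iterates recover `w_true` closely; raising the rate past the curvature of the problem makes the same loop diverge, which is exactly the manual sensitivity described above.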
Batch methods such as the L-BFGS algorithm, along with the presence of a line search method [10] to automatically find the learning rate, are usually more stable and easier to check for convergence than SGD [11]. L-BFGS uses approximated second order gradient information, which provides a faster convergence toward the minimum. It is a popular algorithm for parameter estimation in machine learning and some works have shown its effectiveness over other optimization algorithms [11,12,13]. In a large-scale model, the parameters, their gradients, and the L-BFGS historical vectors are too large to fit in the memory of one single computational machine. This also makes the computations too complex to be handled by the processor. Due to this, there is a need for distributed computational platforms which allow parallelized implementations of advanced machine learning algorithms. Consequently, it is important to scale and parallelize L-BFGS effectively in a distributed system to train a large-scale model. In this paper, we explain a parallelized implementation of the L-BFGS algorithm on the HPCC Systems platform. HPCC Systems is an open source, massive parallel-processing computing platform for Big Data processing and analytics [14]. The HPCC Systems platform provides a distributed file storage system based on hardware clusters of commodity servers, system software, parallel application processing, and parallel programming development tools in an integrated system. Another notable existing large-scale tool for distributed implementations is MapReduce [15] and its open source implementation, Hadoop [16]. However, MapReduce was designed for parallel processing and it is ill-suited for the iterative computations inherent in optimization algorithms [4, 17].
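The "historical vectors" mentioned above drive the textbook two-loop recursion at the core of L-BFGS, which rebuilds an approximate inverse-Hessian-vector product from the stored update pairs. The following single-machine sketch shows that recursion under our own simplifying assumptions (a fixed step length stands in for the line search, and a toy quadratic objective stands in for a real model):

```python
import numpy as np

def lbfgs_direction(g, s_hist, y_hist):
    """Two-loop recursion: approximate H^{-1} g from the stored pairs
    s_k = x_{k+1} - x_k and y_k = grad_{k+1} - grad_k."""
    q = g.astype(float).copy()
    stack = []
    for s, y in zip(reversed(s_hist), reversed(y_hist)):   # first loop
        rho = 1.0 / y.dot(s)
        alpha = rho * s.dot(q)
        q -= alpha * y
        stack.append((rho, alpha, s, y))
    if s_hist:  # scale by gamma = s.y / y.y as the initial Hessian guess
        s, y = s_hist[-1], y_hist[-1]
        q *= s.dot(y) / y.dot(y)
    for rho, alpha, s, y in reversed(stack):               # second loop
        beta = rho * y.dot(q)
        q += (alpha - beta) * s
    return q  # the search direction is -q

# tiny driver on the quadratic f(x) = ||x - 1||^2 (a stand-in problem;
# a real implementation would use a Wolfe line search, not a fixed step)
grad = lambda x: 2.0 * (x - 1.0)
x, S, Y, m = np.zeros(5), [], [], 10       # m = history length
for _ in range(25):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:
        break
    d = -lbfgs_direction(g, S[-m:], Y[-m:])
    x_new = x + 0.5 * d
    s, y = x_new - x, grad(x_new) - g
    if y.dot(s) > 1e-12:                   # keep only curvature-positive pairs
        S.append(s); Y.append(y)
    x = x_new
```

Only the last `m` pairs are ever stored, which is what makes the method "limited-memory"; in a large-scale model each `s` and `y` is as long as the parameter vector itself, which motivates the distributed storage discussed later.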
HPCC Systems allows for parallelized iterative computations without the need to add any new framework over the current platform and without the limitation of adapting the algorithm to a specific platform (such as MapReduce key-value pairs). Our approach in implementing L-BFGS over the HPCC Systems platform distributes the parameter vector over many computing nodes. Therefore, a larger number of parameters can be handled by increasing the number of computational nodes. This makes our approach more scalable compared to the typical approaches in parallelized implementations of optimization algorithms, where the global gradient is computed by aggregating the local gradients which are computed on many machines [18]. Each machine maintains the whole parameter vector in order to calculate the local gradients on a specific subset of data examples. Thus, handling a larger number of parameters requires increasing the memory on each computational node, which makes these approaches harder or even infeasible to scale when the number of parameters is very large. On the other hand, our approach can scale to handle a very large number of parameters by simply increasing the number of commodity computational nodes (for example, by increasing the number of instances on an Amazon Web Services cluster). The remainder of this paper is organized as follows. In section "Related work", we discuss related work on the topic of distributed implementation of optimization algorithms. Section "HPCC Systems platform" explains the HPCC Systems platform and how it provides capabilities for a distributed implementation of the L-BFGS algorithm. Section "L-BFGS algorithm" provides theoretical details of the L-BFGS algorithm. In section "Implementation of L-BFGS on HPCC Systems", we explain our implementation details. In section "Results", we provide our experimental results. Finally, in section "Conclusion and discussion", we conclude our work and provide suggestions for future research.
Related work

Optimization algorithms are the heart of many modern machine learning algorithms [19]. Some works have explored the scaling of optimization algorithms to build large-scale models with numerous parameters through distributed computing and parallelization [9, 18, 20, 21]. These methods focus on linear, convex models where global gradients are obtained by adding up the local gradients which are calculated on each computational node. The main limitation of these solutions is that each computational node needs to store the whole parameter vector to be able to calculate the local gradients. This can be infeasible when the number of parameters is very large. In another study, Niu et al. [22] only focus on optimization problems where the gradient is sparse, meaning that most gradient updates only modify small parts of the parameter vector. Such solutions are not general and can only work for a subset of problems. The research most closely related to ours is that of [18] and [4]. Agarwal et al. [18] present a system for learning linear predictors with convex losses on a cluster of 1000 machines. The key component in their system is a communication infrastructure called AllReduce which accumulates and broadcasts values over all nodes. They developed an implementation that is compatible with Hadoop. Each node maintains a local copy of the parameter vector. The L-BFGS algorithm runs locally on each node to accumulate the gradient values and the global gradient is obtained by AllReduce. This restricts the parameter vector size to the available memory of a single node. Due to this constraint, their solution only works up to 16 million parameters. Dean et al. [4] present the Sandblaster batch optimization framework for distributed implementation of L-BFGS. The key idea is to have a centralized sharded parameter server where the parameter vector is stored and manipulated in a distributed manner.
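The two designs just described can be contrasted in a few lines. The following is only a single-process toy simulation of our own, not code from either system: the AllReduce pattern exchanges full-length gradient vectors while every node keeps a complete parameter copy, whereas a sharded parameter vector keeps each node's slice local and only exchanges scalars such as the dot products an L-BFGS update needs.

```python
import numpy as np

# Pattern 1 (AllReduce, as in [18]): every "node" holds the FULL vector;
# full-length local gradients are summed and the sum is re-broadcast.
def allreduce_sum(local_grads):
    total = np.sum(local_grads, axis=0)            # accumulate
    return [total.copy() for _ in local_grads]     # broadcast

local_grads = [np.array([1.0, 2.0, 3.0, 4.0]) for _ in range(3)]
global_grads = allreduce_sum(local_grads)          # identical on every node

# Pattern 2 (sharded vector, as in [4] and this paper): each node stores
# only a slice; global reductions exchange one scalar per node.
w_shards = np.array_split(np.arange(8.0), 4)       # 4 "nodes", 2 entries each
g_shards = np.array_split(np.ones(8), 4)

def dist_dot(a_shards, b_shards):
    """Each node computes a partial dot product on its own shard;
    only the per-node scalars are aggregated."""
    return sum(float(a.dot(b)) for a, b in zip(a_shards, b_shards))

def dist_axpy(alpha, x_shards, y_shards):
    """y <- y + alpha * x, applied independently on every shard."""
    return [y + alpha * x for x, y in zip(x_shards, y_shards)]

full_dot = dist_dot(w_shards, g_shards)            # matches the undistributed dot
w_shards = dist_axpy(-0.1, g_shards, w_shards)     # a purely shard-local update
```

In pattern 1 the per-node memory grows with the total parameter count; in pattern 2 it grows only with the shard size, which is the property the rest of the paper builds on.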
To implement distributed L-BFGS, a coordinator process issues commands which are performed independently on each parameter vector shard. Our approach also utilizes a vector partitioning method to store the parameter vector across multiple machines. Unlike [18], where the number of parameters is limited to the available memory on one machine, the parameter vector is distributed on many machines, which increases the number of parameters that can be stored. The approach presented in [4] requires a new framework with a parameter server and a coordinator to implement batch optimization algorithms. The approach presented in [18] requires the AllReduce platform on top of MapReduce. However, we do not design or add any new framework on top of the HPCC Systems platform for our implementations. The HPCC Systems platform provides a framework for a general solution for large-scale processing which is not limited to a specific implementation. It allows manipulation of the data locally on each node (similar to the parameter server in [4]). The computational commands are sent to all the computational nodes by a master node (similar to the coordinator approach in [4]). It also allows for aggregating and broadcasting the result globally (similar to AllReduce in [18]). Having all these capabilities makes the HPCC Systems platform a perfect solution for parallel and large-scale computations. Since it is an open source platform, it allows practitioners to implement parallelized and distributed computations on large amounts of data without the need to design their own specific distributed platform.

HPCC Systems platform

Parallel relational database technology has proven ineffective in analyzing massive amounts of data [23,24,25]. As a result, several organizations developed new technologies which utilize large clusters of commodity servers to provide the underlying platform to process and analyze massive data.
Some of these technologies include MapReduce [23, 24, 25], Hadoop [16] and the open-source HPCC Systems. MapReduce is a basic system architecture designed by Google for processing and analyzing large datasets on commodity computing clusters. The MapReduce programming model allows distributed and parallelized transformations and aggregations over a cluster of machines. The Map function converts the input data to groups according to a key-value pair, and the Reduce function performs aggregation by key-value on the output of the Map function. For more complex computations, multiple MapReduce calls must be linked in a sequence. In contrast to MapReduce, in a DataFlow architecture a graph represents a programming unit that performs some kind of transformation on the data. Each node in the graph is an operation, and nodes are connected by edges representing DataFlow queues; data transfer is accomplished by connecting the DataFlow queues. An example of a DataFlow graph is shown in Fig. 1. First, the data is read from the disk. The next two operations sort and group the data records. Finally, some aggregation metrics are extracted from the groups of data and the results are written to disk. This shows how the data flows from top to bottom via the connectors of the nodes.

Fig. 1: An example of a DataFlow computation

Although MapReduce provides basic functionality for many data processing operations, users are limited since they need to adapt their applications to the MapReduce model to achieve parallelism. This can include the implementation of multiple sequenced operations, which can add overhead to the overall processing time. In addition, many processing operations do not naturally fit into the group-by-aggregation model using single key-value pairs. Even simple applications such as selection and projection must fit into this model, and users need to provide custom MapReduce functions for such operations, which is more error prone and limits re-usability [24].
Some high-level languages, such as Sawzall [26] and Yahoo's Pig system [27], address some of the limitations of the MapReduce model by providing an external DataFlow-oriented programming language that is eventually translated into MapReduce processing sequences. Even though these languages provide standard data processing operators, so users do not have to implement custom Map and Reduce functions, they are externally implemented and not integral to the MapReduce architecture. Thus, they rely on the same infrastructure and limited execution model provided by MapReduce.

Thor processing cluster

The HPCC Systems platform, on the other hand, is an open-source integrated system environment which excels at both extract, transform and load (ETL) tasks and complex analytics using a common data-centric parallel processing language called Enterprise Control Language (ECL). The HPCC Systems platform is based on a DataFlow programming model. LexisNexis Risk Solutions (footnote 1) independently developed and implemented this platform as a solution to large-scale data-intensive computing. Similar to Hadoop, the HPCC Systems platform uses commodity clusters of hardware running on top of the Linux operating system. It also includes additional system software and middleware components to meet the requirements of data-intensive computing, such as comprehensive job execution, distributed query and file system support. The data refinery cluster in HPCC Systems (the Thor system cluster) is designed for processing massive volumes of raw data, ranging from data cleansing and ETL processing to developing machine learning algorithms and building large-scale models. It functions as a distributed file system with parallel processing power spread across the nodes (machines). A Thor cluster can scale from a single node to thousands of nodes. HPCC Systems also provides another type of cluster, called ROXIE [14], for rapid data delivery, which is outside the scope of this paper.
The Thor cluster is implemented using a master/slave topology with a single master and multiple slave processes, which provide a parallel job execution environment for programs coded in ECL. Each slave provides localized data storage and processing power within the distributed file system cluster. The Thor master monitors and coordinates the processing activities of the slave nodes and communicates status information. ECL programs are compiled into optimized C++ source code, which is subsequently linked into executable machine code and distributed to the slave processes of a Thor cluster. The distribution of the code is done by the Thor master process. Figure 2 shows a representation of a physical Thor processing cluster. The distributed file system (DFS) used in the Thor cluster is record oriented, which is somewhat different from the block format used in MapReduce clusters. Each record represents one data instance. Records can be fixed or variable length, and support a variety of standard (fixed record size, CSV, XML) and custom formats, including nested child datasets. The files are usually transferred to a landing zone, and from there they are partitioned and distributed as evenly as possible, with records in sequential order, across the available processes in the cluster.

ECL programming language

The ECL language is a data-centric, declarative language which allows developers to define parallel data processing on HPCC Systems. ECL is a flexible language, whose ease of use and development speed make HPCC Systems distinguishable from other data-intensive solutions. Some key benefits of ECL can be summarized as follows [14]: it incorporates transparent and implicit data parallelism, regardless of the size of the computing cluster, reducing the complexity of parallel programming; and it was specifically designed for the manipulation of large amounts of data.
It enables the implementation of data-intensive applications with complex data flows and huge volumes of data. Since ECL is a higher-level abstraction over C++, it provides greater productivity for programmers than languages such as Java and C++; the ECL compiler generates highly optimized C++ for execution. The ECL programming language is a key factor in the flexibility and capabilities of the HPCC Systems processing environment. ECL is designed following the DataFlow model, with the purpose of being a transparent and implicitly parallel programming language for data-intensive applications. It is a declarative language which allows the programmer to define the flow of data between operations and the DataFlow transformations that are necessary to achieve the results. In a declarative language, execution is determined not by the order of the language statements, but by the sequence of operations and transformations in a DataFlow, represented by the data dependencies. This is very similar to the declarative style recently introduced by Google TensorFlow [28]. DataFlows defined in ECL are parallelized across the slave nodes, which process partitions of the data. ECL includes extensive capabilities for data definition, filtering and data transformations. ECL is compiled into optimized C++, and it allows in-line C++ functions to be incorporated into ECL statements. This allows the general data transformation and flow to be represented with ECL code, while more complex internal manipulations on data records can be implemented as in-line C++ functions. This makes the ECL language distinguishable from other programming languages for data-centric implementations. ECL transform functions operate on a single record or a pair of records at a time, depending on the operation.
Built-in transform operations in the ECL language which process entire datasets include PROJECT, ITERATE, ROLLUP, JOIN, COMBINE, FETCH, NORMALIZE, DENORMALIZE, and PROCESS. For example, the transformation function for the JOIN operation receives two records at a time and performs the join operation on them. The join operation can be as simple as finding the minimum of two values or as complex as a complicated user-defined in-line C++ function. The Thor system allows data transformation operations to be performed either locally on each physical node or globally across all nodes. For example, a global maximum can be found by aggregating all the local maximums obtained on each node. This is similar to the MapReduce approach; however, the big advantage of ECL is that this is done naturally and there is no need to define any key-value pair or any Map or Reduce functions.

L-BFGS algorithm

Most optimization methods start with an initial guess for x in order to minimize an objective function f(x). They iteratively generate a sequence of improving approximate solutions for x until a termination criterion is satisfied. In each iteration, the algorithm finds a direction \(p_k\) and moves along this direction from the current iterate \(x_k\) to a new iterate \(x_{k+1}\) that has a lower function value. The distance \(\alpha_k\) to move along \(p_k\) can be a constant value provided as a hyper-parameter to the optimization algorithm (e.g., the SGD algorithm) or it can be calculated using a line search method (e.g., the L-BFGS algorithm). The iteration is given by: $$\begin{aligned} x_{k+1} = x_k + \alpha_k p_k \end{aligned}$$ where \(x_k\) is the current point and \(x_{k+1}\) is the new/updated point. Based on the terminology provided in [10], \(p_k\) is called the step direction and \(\alpha_k\) is called the step length. Different optimization methods calculate these two values differently.
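As a concrete illustration of this update rule, the Python sketch below performs one iterate \(x_{k+1} = x_k + \alpha_k p_k\), choosing the step length with a simple backtracking (sufficient-decrease) line search. This is only an illustrative stand-in: the L-BFGS implementation discussed in this paper uses a Wolfe line search, and the function names here are our own.

```python
import numpy as np

def descent_step(f, grad_f, x, p, alpha0=1.0, c=1e-4, tau=0.5):
    """One iteration x_{k+1} = x_k + alpha_k * p_k.

    The step length alpha_k is found by backtracking: start at alpha0
    and shrink by tau until the sufficient-decrease (Armijo) condition
    f(x + alpha*p) <= f(x) + c*alpha*grad_f(x).p holds.
    """
    fx = f(x)
    slope = np.dot(grad_f(x), p)  # directional derivative along p
    alpha = alpha0
    while f(x + alpha * p) > fx + c * alpha * slope:
        alpha *= tau
    return x + alpha * p

# Minimize f(x) = ||x||^2 with steepest-descent directions p_k = -grad f
f = lambda x: float(np.dot(x, x))
grad_f = lambda x: 2.0 * x
x = np.array([1.0, -2.0])
for _ in range(20):
    x = descent_step(f, grad_f, x, -grad_f(x))
```

For this quadratic the iterates reach the minimizer at the origin almost immediately; in the L-BFGS setting, the same update is used with the quasi-Newton direction \(p_k\) described next.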
Newton optimization methods use second-order gradient information to calculate the step direction, which requires computing the inverse of the Hessian matrix. In a high-dimensional setting, where the parameter vector x is very large, calculating the inverse of the Hessian matrix can become too expensive. Quasi-Newton methods overcome this problem by continuously updating an approximation of the inverse of the Hessian matrix in each iteration. The most popular quasi-Newton algorithm is the BFGS method, named for its discoverers, Broyden, Fletcher, Goldfarb, and Shanno [10]. In this method, \(\alpha_k\) is chosen to satisfy the Wolfe conditions [29], so there is no need to manually select a constant value for \(\alpha_k\). The step direction \(p_k\) is calculated based on an approximation of the inverse of the Hessian matrix. In each iteration, the approximation of the inverse of the Hessian matrix is updated based on the current \(s_k\) and \(y_k\) values, where \(s_k\) represents the position difference and \(y_k\) represents the gradient difference in the iteration. These vectors have the same length as the vector x. $$\begin{aligned} s_k = x_{k+1} - x_k \end{aligned}$$ $$\begin{aligned} y_k = \nabla f_{k+1} - \nabla f_k \end{aligned}$$ BFGS needs to keep an approximation of the inverse of the Hessian matrix in each iteration (an \(n \times n\) matrix), where n is the length of the parameter vector x. It becomes infeasible to store this matrix in memory for large values of n. The L-BFGS (limited-memory BFGS) algorithm modifies BFGS to obtain Hessian approximations that can be stored in just a few vectors of length n. Instead of storing a fully dense \(n \times n\) approximation, L-BFGS stores just m vectors (\(m \ll n\)) of length n that implicitly represent the approximation. The main idea is to use curvature information from the most recent iterations.
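The way the stored pairs implicitly define the step direction is the standard two-loop recursion from the textbook L-BFGS algorithm [10]. Here is a hedged NumPy sketch of that recursion for illustration; it is not taken from the paper's ECL code:

```python
import numpy as np

def two_loop_recursion(grad, s_list, y_list):
    """Compute the L-BFGS step direction p ~ -H^{-1} grad from the m
    most recent curvature pairs (s_i, y_i), oldest first in the lists.
    Standard two-loop recursion; only O(m*n) storage is needed."""
    q = grad.copy()
    m = len(s_list)
    alpha = np.empty(m)
    rho = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    # First loop: newest pair to oldest
    for i in range(m - 1, -1, -1):
        alpha[i] = rho[i] * np.dot(s_list[i], q)
        q -= alpha[i] * y_list[i]
    # Initial Hessian scaling gamma = s^T y / y^T y from the newest pair
    if m > 0:
        gamma = np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
    else:
        gamma = 1.0
    r = gamma * q
    # Second loop: oldest pair to newest
    for i in range(m):
        beta = rho[i] * np.dot(y_list[i], r)
        r += (alpha[i] - beta) * s_list[i]
    return -r  # step direction p_k
```

With no stored pairs the recursion reduces to steepest descent (\(p = -\nabla f\)); as pairs accumulate, the direction approaches the Newton direction for the local quadratic model.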
Curvature information from earlier iterations is considered less likely to be relevant to the Hessian behavior at the current iteration and is discarded to save memory. In L-BFGS, the \(\{s_k, y_k\}\) pairs are stored from the last m iterations, so the algorithm needs \(2 \times m \times n\) storage, compared to \(n \times n\) storage in the BFGS algorithm. The \(2 \times m\) memory vectors, along with the gradient at the current point, are used in the L-BFGS two-loop recursion algorithm to calculate the step direction. The L-BFGS algorithm and its two-loop recursion are shown in Algorithms 1 and 2, respectively. The next section covers our implementation of this algorithm on the HPCC Systems platform.

Implementation of L-BFGS on HPCC Systems

Main idea

We used the ECL language to implement the main DataFlow in the L-BFGS algorithm. We also implemented in-line C++ functions as required to perform some local computations. We used the HPCC Systems platform without adding any new framework on top of it or modifying any underlying platform configuration. To implement a large-scale L-BFGS algorithm where the length of the parameter vector x is very large, a natural solution is to store and manipulate the vector x on several computing machines/nodes. If we use N machines, the parameter vector is divided into N non-overlapping partitions. Each partition is stored and manipulated locally on each machine. For example, if there are 100 machines available and the length of the vector x is \(10^{10}\) (80 GB at 8 bytes per parameter), each machine ends up storing \(\frac{1}{100}\) of the parameter vector, which requires 0.8 GB of memory. By using this approach, the problem of handling a parameter vector of size 80 GB is broken down to handling only 0.8 GB partial vectors locally across 100 computational nodes.
Even a machine with enough memory to store such a large parameter vector would need even more memory for the intermediate computations and would take a significant amount of time to run even one iteration. Distributing the storage and computations over several machines benefits both memory requirements and computational durations. The main idea in the implementation of our parallelized L-BFGS algorithm is to distribute the parameter vector over many machines. Each machine manipulates its locally assigned portion of the parameter vector. The L-BFGS caches (the \(\{s_i, y_i\}\) pairs) are also stored locally on the machines. For example, if the \(j\text{th}\) machine stores the \(j\text{th}\) partition of the parameter vector, it also ends up storing the \(j\text{th}\) partitions of the \(s_i\) and \(y_i\) vectors by performing all the computations locally. Each machine performs most of the operations independently. For instance, the summation of two vectors that are both distributed over several machines consists of adding up their corresponding partitions on each machine locally. Looking at the L-BFGS two-loop recursion shown in Algorithm 2, it is clear that all the computations can be performed locally, except the dot product calculation. As mentioned earlier, each machine stores a partition of the parameter vector and all the corresponding partitions of the \(\{s_i, y_i\}\) pairs. Therefore, we calculate the partial dot products on each machine locally. We then add up the local dot product results globally to obtain the final dot product result. Figure 3 shows the dot product computation. In the ECL language, a result which is computed by a global aggregation is automatically accessible on all the nodes locally; there is no need to include any explicit ECL statement in the code to broadcast the global result to all the nodes.

ECL implementation

In this subsection, we explain our implementation using the ECL language by providing some examples from the code.
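The partial-dot-product scheme of Fig. 3 can be simulated in a few lines of Python. This is only an illustrative analogue of the distributed computation (np.array_split stands in for the per-node partitions), not the actual ECL code:

```python
import numpy as np

def partitioned_dot(x, y, n_nodes):
    """Simulate the distributed dot product: each 'node' holds one
    partition of x and y, computes a local dot product, and the local
    results are summed globally -- the only non-local step in L-BFGS."""
    x_parts = np.array_split(x, n_nodes)  # per-node partitions of x
    y_parts = np.array_split(y, n_nodes)  # matching partitions of y
    local = [float(np.dot(xp, yp)) for xp, yp in zip(x_parts, y_parts)]
    return sum(local)                     # global aggregation step
```

The result agrees with a single-machine np.dot; only the list of scalar local results would cross machine boundaries in the distributed setting.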
The goal is to demonstrate the simplicity of the ECL language as a language which provides parallelized computations. We refer the interested reader to the ECL manual [30] for a detailed explanation of the ECL language. As mentioned in "HPCC Systems platform", the distributed file system (DFS) used in a Thor cluster is record oriented. Therefore, we represent each partition of the parameter vector x as a single record. Each record is stored locally on one computational node. Each record consists of several fields. For example, one field represents the node id on which the record is stored; another field holds the partition values as a set of real numbers. The record definition in the ECL language is shown below. It should be noted that since ECL is a declarative language, the \({:=}\) sign implies a declaration and should be read "is defined as". The record can include other fields as required by the computations. For simplicity, we only show the records related to the actual data and its distribution over several nodes. The aggregation of the records stored in all the nodes builds a dataset which represents the vector x. The above statement defines vector x as a dataset of records where each record has the x_record format. The "…" includes the actual parameter vector x values, which can be a file that contains the initial parameter values or a predefined dataset defined in ECL. We exclude that part for simplicity. Distributing this dataset over several machines is as easy as using a DISTRIBUTE statement in ECL and providing the node_id field as the distribution key. Since we only have one record per node_id, each record is stored on one machine. At this point, our initial parameter vector x is represented by a dataset called x_distributed, where each record represents one partition and is distributed to one individual node.
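As an analogue of this record layout, the Python snippet below models the distributed dataset as one record per node, each carrying a node_id and its partition_values. The field names mirror those in the text, but the representation itself is only an illustration of the ECL record model, not ECL code:

```python
import numpy as np

n_nodes = 4
x = np.arange(12, dtype=float)  # toy parameter vector

# One record per node: a stand-in for the ECL x_record dataset after
# DISTRIBUTE(..., node_id) -- each record holds one partition locally.
x_distributed = [
    {"node_id": i, "partition_values": part.tolist()}
    for i, part in enumerate(np.array_split(x, n_nodes))
]
```

Concatenating the partition_values of all records, in node_id order, reassembles the original vector x.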
The ECL language allows computational operations to be performed on the dataset records in the form of PROJECTs, JOINs, ROLLUPs, etc., locally or globally. For example, scaling the vector x by a constant number can be done as a PROJECT operation. The PROJECT operation processes all records in the record set and performs a TRANSFORM function on each record in turn. The TRANSFORM function can be defined to multiply each vector partition by a constant number on each machine locally. The LOCAL keyword specifies that the operation is performed on each computational node independently, without requiring interaction with other nodes to acquire data. The TRANSFORM(…) defines the type of transformation that should be done on each record. In this case, the SET of real values in the "partition_values" field of each record is multiplied by a constant value and the "node_id" field remains the same. The local computations result in the corresponding records from the initial and the result datasets ending up on the same node. All the local computations in our implementation of the L-BFGS algorithm are performed in the same manner. As a result, the corresponding partitions of the L-BFGS cache information vectors (the \(\{s_k, y_k\}\) pairs), the parameter vector itself and its gradient end up on the same node with the same node_id values, which is important for the JOIN operation used in the dot product computation shown below. The dot product operation can be done using the JOIN statement, which pairs the two datasets x_distributed and y_distributed, both of which are distributed over several machines. The JOIN operation is performed locally by pairing the records from the left dataset (x_distributed) and the right dataset (y_distributed) that have the same node_id values. The LOCAL keyword results in the two records being joined locally. The transform function returns the local dot product value for each node_id.
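A Python analogue of this LOCAL JOIN, again an illustration rather than the ECL source: partitions with matching node_id values are paired, and each pair emits one local dot-product record, which a final global sum then aggregates.

```python
def local_join_dot(x_recs, y_recs):
    """Pair records with the same node_id (the LOCAL JOIN analogue) and
    emit one local dot-product record per node."""
    y_by_node = {r["node_id"]: r["partition_values"] for r in y_recs}
    return [
        {"node_id": r["node_id"],
         "local_dot": sum(a * b for a, b in
                          zip(r["partition_values"],
                              y_by_node[r["node_id"]]))}
        for r in x_recs
    ]

# Toy distributed vectors: two nodes, each holding one partition
x_recs = [{"node_id": 0, "partition_values": [1.0, 2.0]},
          {"node_id": 1, "partition_values": [3.0, 4.0]}]
y_recs = [{"node_id": 0, "partition_values": [5.0, 6.0]},
          {"node_id": 1, "partition_values": [7.0, 8.0]}]

local_records = local_join_dot(x_recs, y_recs)
dot = sum(r["local_dot"] for r in local_records)  # the global SUM step
```

Only the per-node scalars cross node boundaries; all vector arithmetic stays local, which is the point of the design.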
Using a simple SUM statement provides the final dot product result. The dot product result can then be used in any operation without any explicit reference to the fact that it is a global value that needs to be broadcast to the local machines; the HPCC Systems platform implicitly broadcasts such global values to the local machines.

Table 1: Dataset characteristics
Fig. 4: L-BFGS convergence on the lshtc-small dataset (λ = 0.0001)
Fig. 5: L-BFGS convergence on the lshtc-large dataset (λ = 0.0001)
Fig. 6: L-BFGS convergence on the wikipedia-medium dataset (\(m = 5\))

To showcase the effectiveness of our implementation, we consider three different datasets with an increasing number of parameters: lshtc-small, lshtc-large and wikipedia-medium, which are large-scale text classification datasets (footnote 2). The characteristics of the datasets are shown in Table 1. Each instance in the wikipedia-medium dataset can belong to more than one class; to build the SoftMax objective function for this dataset, we only considered the very first class among the multiple classes listed for each sample as its label. We used the implemented L-BFGS algorithm to optimize the SoftMax regression objective function [31] for these datasets. SoftMax regression (or multinomial logistic regression) is a generalization of logistic regression to the case where there are multiple classes to be classified. The number of parameters for SoftMax regression is equal to the number of classes multiplied by the number of features. We use double precision to represent real numbers (8 bytes). The parameter size column in Table 1 approximates the memory needed to store the parameter vector by multiplying the number of parameters by 8. Since the parameter vector is not sparse, we store it as a dense vector of continuous real values. We used a cluster of 20 machines, each with 4 GB of RAM, for the lshtc-small dataset.
We used an AWS (Amazon Web Services) cluster with 16 instances of r3.8xlarge (footnote 3), each instance running 25 Thor nodes, for the lshtc-large and wikipedia-medium datasets, where each node has almost 9 GB of RAM. Figures 4 and 5 show the difference from the optimal solution as the number of iterations increases, for different values of m in the L-BFGS algorithm, for the lshtc-small and lshtc-large datasets, respectively. We chose the regularization parameter λ = 0.0001 for these two datasets; this value causes the L-BFGS algorithm not to converge too quickly, so we can demonstrate more iterations in our results. Figure 6 shows the difference from the optimal solution as the number of iterations increases for \(m = 5\) for the wikipedia-medium dataset. For this dataset, we considered a second value of λ in addition to λ = 0.0001, because the L-BFGS algorithm converges very fast in the λ = 0.0001 case. We only considered \(m = 5\) for this dataset because the number of L-BFGS iterations is small. Tables 2, 3, and 4 present the corresponding information for the results shown in Figs. 4, 5, and 6, respectively. The number of iterations is the value at which the L-BFGS algorithm reached the optimum point. We define the stopping criteria for our L-BFGS algorithm in the same way as the criteria defined in the minFunc library [32]. Since in each iteration the Wolfe line search might need to evaluate the objective function more than once to find the best step length, the overall number of objective function evaluations is usually larger than the number of L-BFGS iterations. The total memory usage in these tables is the memory required by the L-BFGS algorithm: it includes the memory required to store the updated parameter vector, the gradient vector, and the \(2 \times m\) L-BFGS cache vectors. The results indicate that increasing the value of m in the L-BFGS algorithm causes the algorithm to reach the optimum point in fewer iterations.
However, the time it takes for the algorithm to reach the optimum point does not necessarily decrease. The reason is that increasing m causes the calculation of the step direction in the L-BFGS two-loop recursion algorithm to take more time. Our results show that the implemented L-BFGS algorithm on HPCC Systems can easily scale from handling millions of parameters on dozens of computational nodes to handling billions of parameters on hundreds of machines in a reasonable amount of time. Handling the lshtc-small dataset on 20 computational nodes takes less than 15 min. The lshtc-large dataset, with more than 4 billion parameters, and the wikipedia-medium dataset, with more than 10 billion parameters, take almost 1 h and half an hour, respectively, on a cluster of 400 nodes. Although wikipedia-medium is a larger dataset than lshtc-large, the L-BFGS algorithm converges in a shorter time because it requires a smaller number of iterations to reach the optimum.

Table 2: Results description for the lshtc-small dataset
Table 3: Results description for the lshtc-large dataset
Table 4: Results description for the wikipedia-medium dataset

In this paper, we explained a parallelized, distributed implementation of L-BFGS which works for training large-scale models with billions of parameters. The L-BFGS algorithm is an effective parameter optimization method which can be used for parameter estimation in various machine learning problems. We implemented the L-BFGS algorithm on HPCC Systems, an open-source, data-intensive computing system platform originally developed by LexisNexis Risk Solutions. Our main idea for implementing the L-BFGS algorithm for large-scale models, where the number of parameters is very large, is to divide the parameter vector into partitions. Each partition is stored and manipulated locally on one computational node.
In the L-BFGS algorithm, all the computations can be performed locally on each partition except the dot product computation, which requires the computational nodes to share their information. The ECL language of the HPCC Systems platform simplifies implementing parallel computations that are done locally on each computational node, as well as performing global computations where computational nodes share information. We explained how we used these capabilities to implement the L-BFGS algorithm on the HPCC Systems platform. Our experimental results show that our implementation of the L-BFGS algorithm can scale from handling millions of parameters on dozens of machines to billions of parameters on hundreds of machines. The implemented L-BFGS algorithm can be used for parameter estimation in machine learning problems with a very large number of parameters. Additionally, it can be used in image or text classification applications, where the large number of features and classes naturally increases the number of model parameters, especially for models such as deep neural networks. Compared to Google's parallelized implementation of L-BFGS, called Sandblaster, the HPCC Systems implementation does not require adding any new component, such as a parameter server, to the framework. HPCC Systems is an open-source platform which already provides the data-centric parallel computing capabilities. It can be used by practitioners to implement their large-scale models without the need to design a new framework. In future work, we want to use the HPCC Systems parallelization capabilities on each node, which is done through multithreaded processing, to further speed up our implementations.

Footnotes:
1. http://www.lexisnexis.com/
2. http://lshtc.iit.demokritos.gr/
3. https://aws.amazon.com/ec2/instance-types/

References
Bennett KP, Parrado-Hernández E. The interplay of optimization and machine learning research. J Mach Learn Res. 2006;7:1265–81.
Najafabadi MM, Villanustre F, Khoshgoftaar TM, Seliya N, Wald R, Muharemagic E. Deep learning applications and challenges in big data analytics. J Big Data. 2015;2(1):1–21. Xing EP, Ho Q, Xie P, Wei D. Strategies and principles of distributed machine learning on big data. Engineering. 2016;2(2):179–95. Dean J, Corrado G, Monga R, Chen K, Devin M, Mao M, Senior A, Tucker P, Yang K, Le QV, et al. Large scale distributed deep networks. In: Advances in neural information processing systems. Lake Tahoe, Nevada: Curran Associates Inc.; 2012. p. 1223–31. Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. In: Pereira F, Burges CJC, Bottou L, Weinberger KQ, editors. Advances in neural information processing systems. Lake Tahoe, Nevada: Curran Associates, Inc.; 2012. p. 1097–105. Dong L, Lin Z, Liang Y, He L, Zhang N, Chen Q, Cao X, Izquierdo E. A hierarchical distributed processing framework for big image data. IEEE Trans Big Data. 2016;2(4):297–309. Sliwinski TS, Kang SL. Applying parallel computing techniques to analyze terabyte atmospheric boundary layer model outputs. Big Data Res. 2017;7:31–41. Shalev-Shwartz S, Singer Y, Srebro N. Pegasos: primal estimated sub-gradient solver for svm. In: Proceedings of the 24th international conference on machine learning. New York: ACM; 2007. p. 807–14. Zinkevich M, Weimer M, Li L, Smola AJ. Parallelized stochastic gradient descent. In: Lafferty JD, Williams CKI, Shawe-Taylor J, Zemel RS, Culotta A, editors. Advances in neural information processing systems. Vancouver, British Columbia, Canada: Curran Associates Inc.; 2010. p. 2595–603. Nocedal J, Wright SJ. Numerical optimization. 2nd ed. New York: Springer; 2006. Ngiam J, Coates A, Lahiri A, Prochnow B, Le QV, Ng AY. On optimization methods for deep learning. In: Proceedings of the 28th international conference on machine learning (ICML-11). 2011. p. 265–72. Schraudolph NN, Yu J, Günter S, et al.
A stochastic quasi-Newton method for online convex optimization. Artif Intell Stat Conf. 2007;7:436–43. Daumé III H. Notes on CG and LM-BFGS optimization of logistic regression. 2004. http://www.umiacs.umd.edu/~hal/docs/daume04cg-bfgs; implementation: http://www.umiacs.umd.edu/~hal/megam/. Middleton A. HPCC Systems: introduction to HPCC (high-performance computing cluster). White paper, LexisNexis Risk Solutions. 2011. http://cdn.hpccsystems.com/whitepapers/wp_introduction_HPCC.pdf. White T. Hadoop: the definitive guide. 3rd ed. 2012. Zaharia M, Chowdhury M, Das T, Dave A, Ma J, McCauley M, Franklin MJ, Shenker S, Stoica I. Resilient distributed datasets: a fault-tolerant abstraction for in-memory cluster computing. University of California, Berkeley. Agarwal A, Chapelle O, Dudík M, Langford J. A reliable effective terascale linear learning system. J Mach Learn Res. 2014;15(1):1111–33. Sra S, Nowozin S, Wright SJ. Optimization for machine learning. Cambridge: The MIT Press; 2011. Dekel O, Gilad-Bachrach R, Shamir O, Xiao L. Optimal distributed online prediction using mini-batches. J Mach Learn Res. 2012;13:165–202. Teo CH, Smola A, Vishwanathan S, Le QV. A scalable modular convex solver for regularized risk minimization. In: Proceedings of the 13th ACM SIGKDD international conference on knowledge discovery and data mining. New York: ACM; 2007. p. 727–36. Recht B, Re C, Wright S, Niu F. Hogwild: a lock-free approach to parallelizing stochastic gradient descent. In: Shawe-Taylor J, Zemel RS, Bartlett PL, Pereira F, Weinberger KQ, editors. Advances in neural information processing systems. Granada, Spain: Curran Associates, Inc.; 2011. p. 693–701. Dean J, Ghemawat S. Mapreduce: a flexible data processing tool. Commun ACM. 2010;53(1):72–7. Chaiken R, Jenkins B, Larson P-Å, Ramsey B, Shakib D, Weaver S, Zhou J. Scope: easy and efficient parallel processing of massive data sets. Proc VLDB Endow. 2008;1(2):1265–76.
Stonebraker M, Abadi D, DeWitt DJ, Madden S, Paulson E, Pavlo A, Rasin A. Mapreduce and parallel dbmss: friends or foes? Commun ACM. 2010;53(1):64–71. Pike R, Dorward S, Griesemer R, Quinlan S. Interpreting the data: parallel analysis with sawzall. Sci Program. 2005;13(4):277–98. Gates AF, Natkovich O, Chopra S, Kamath P, Narayanamurthy SM, Olston C, Reed B, Srinivasan S, Srivastava U. Building a high-level dataflow system on top of map-reduce: the pig experience. Proc VLDB Endow. 2009;2(2):1414–25. Abadi M, Agarwal A, Barham P, Brevdo E, Chen Z, Citro C, Corrado GS, Davis A, Dean J, Devin M, et al. Tensorflow: large-scale machine learning on heterogeneous distributed systems. arXiv preprint. 2016. arXiv:1603.04467. Wolfe P. Convergence conditions for ascent methods. SIAM Rev. 1969;11(2):226–35. Team BRD. ECL language reference. White paper, LexisNexis Risk Solutions. 2015. http://cdn.hpccsystems.com/install/docs/3_4_0_1/ECLLanguageReference.pdf. Bishop CM. Pattern recognition and machine learning (information science and statistics). Secaucus: Springer; 2006. Schmidt M. minFunc: unconstrained differentiable multivariate optimization in Matlab. 2005. http://www.cs.ubc.ca/~schmidtm/Software/minFunc.html. MMN carried out the conception and design of the research, performed the implementations and drafted the manuscript. TMK, FV and JH provided reviews of the manuscript. JH set up the experimental framework on AWS and provided expert advice on ECL. All authors read and approved the final manuscript. Florida Atlantic University, 777 Glades Road, Boca Raton, FL, USA: Maryam M. Najafabadi & Taghi M. Khoshgoftaar. LexisNexis Business Information Solutions, 245 Peachtree Center Avenue, Atlanta, GA, USA: Flavio Villanustre & John Holt. Correspondence to Maryam M. Najafabadi. Najafabadi, M.M., Khoshgoftaar, T.M., Villanustre, F. et al.
Large-scale distributed L-BFGS . J Big Data 4, 22 (2017) doi:10.1186/s40537-017-0084-5 Large-scale L-BFGS implementation Parallel and distributed processing HPCC systems
CommonCrawl
Collective interaction effects associated with mammalian behavioral traits reveal genetic factors connecting fear and hemostasis

Hyung Jun Woo & Jaques Reifman

Investigation of the genetic architectures that influence the behavioral traits of animals can provide important insights into human neuropsychiatric phenotypes. These traits, however, are often highly polygenic, with individual loci contributing only small effects to the overall association. The polygenicity makes it challenging to explain, for example, the widely observed comorbidity between stress and cardiac disease. We present an algorithm for inferring the collective association of a large number of interacting gene variants with a quantitative trait. Using simulated data, we demonstrate that by taking into account the non-uniform distribution of genotypes within a cohort, we can achieve greater power than regression-based methods for high-dimensional inference. We analyzed genome-wide data sets of outbred mice and pet dogs, and found neurobiological pathways whose associations with behavioral traits arose primarily from interaction effects: γ-carboxylated coagulation factors and downstream neuronal signaling were highly associated with conditioned fear, consistent with our previous finding in human post-traumatic stress disorder (PTSD) data. Prepulse inhibition in mice was associated with serotonin transporter and platelet homeostasis, and noise-induced fear in dogs with hemostasis. Our findings suggest a novel explanation for the observed comorbidity between PTSD/anxiety and cardiovascular diseases: key coagulation factors modulating hemostasis also regulate synaptic plasticity affecting the learning and extinction of fear.

Mammalian behavioral traits, such as burrowing and parenting in wild mice [1, 2], as well as emotional behaviors in lab mice [3], domesticated animals [3, 4], and pets [5, 6], have strong genetic components that interact tightly with the environment.
Animal models, which allow transgenic experiments and controlled phenotyping, also help us understand the neurobiological bases of human psychiatric disorders, such as schizophrenia, autism, depression, anxiety, and post-traumatic stress disorder (PTSD). Recent developments in genome-wide association studies have made it possible to perform unbiased, high-resolution interrogation of associated loci. However, such studies have largely been limited to human genetics, in which typical linkage disequilibrium (LD) between variants is relatively small and high-quality reference panels of common variants are available [7]. Recent studies of outbred mouse stocks [8, 9] with lower degrees of relatedness than lab mice and of multiple breeds of domesticated animals [5, 6] have demonstrated the feasibility of genome-wide mapping of behavioral traits. Although near gene-level mapping resolution has been achieved, levels of LD between variants in many associated loci remain substantially higher than in typical human studies. With higher overall LD, difficulties in interpreting the results of association tests, in which each single-nucleotide polymorphism (SNP) is treated separately, are more pronounced than in human samples. In this context, analytical approaches that test groups of variants for collective association with traits have the potential to reveal hidden genetic factors otherwise undiscoverable by analyzing independent loci.

Here, we report the development of an analytical method to infer collective genetic associations of a group of SNPs with quantitative traits. In this method, which is analogous to a similar approach for binary case-control phenotypes [10,11,12], pre-selected groups of variants (e.g., genes or pathways) are tested for their association with phenotypes while taking into account the distributions of both genotypes and phenotypes within the cohort.
In conventional association studies, loci highly associated with quantitative traits are first identified using linear regression-based methods, and the putative causal genes near the loci are tested for enrichment in curated databases of pathways. Studies targeting epistatic effects have largely focused on extending single-SNP methods, namely via exhaustive or selective testing of SNP pairs [13,14,15]. Statistical models considering a large number of variants necessarily require regularization or variable selection. Such regularized high-dimensional inferences have a wide range of applications, including inference of gene expression network structures [16]. More specifically, in the context of quantitative trait analysis, such examples include studies aiming to build genomic predictors that employ aggregates of non-interacting, genome-wide SNPs [17]. In our approach for quantitative trait association analysis, we first select all variants proximal to genes belonging to a given pathway, and infer the collective association strength of these variants while taking into account the aggregated effect of interactions between them. This inference goes beyond linear regression and its multi-dimensional extension, ridge regression (RR), by accommodating nonuniform genotype distributions. The high degree of polygenicity observed in human psychiatric disorders [18], as well as evidence that prioritizing variants based on functional annotations enhances power [19, 20], suggests that collective genetic effects uniquely considered in the proposed method potentially make similarly important contributions to mammalian behavioral traits. We first used simulated data to demonstrate that substantially higher power could be achieved by such collective inference than by independent loci inference of quantitative trait associations and RR-based multi-locus tests. 
We chose RR for comparison because it extends linear regression―the main approach used for most association tests using single-variant data―in a manner analogous to how our approach uses penalizer-based regularization. We then applied our method to the recent data sets of behavioral traits in outbred mice [8] and dogs [5], analyzing five behavioral assays for mice (fear conditioning, prepulse inhibition, elevated plus maze, forced swim test, and sleep) and fear-related personality traits for dogs. We tested the association of SNP groups formed by curated pathways, while including interaction effects within each SNP group. The classes of biological processes represented by highly ranked pathways associated with each trait, together with known experimental evidence from the literature, provide a markedly enhanced understanding of key psychiatric conditions, including fear, anxiety, and depression [21]. In particular, our inference results for fear conditioning suggest that γ-carboxylated proteases (thrombin and other coagulation factors) play a central role in modulating fear, consistent with recent experimental findings [22, 23], and offer a possible explanation for the comorbidity of PTSD with elevated blood pressure [24] as well as coronary diseases [25, 26]. Our results from dog personality trait data reported by Ilska et al. [5]—fear of noise and fear of humans/objects—provided further support to this main conclusion.

Continuous discriminant analysis for quantitative traits

We formulated and implemented a collective inference algorithm adapted to genotype-quantitative trait data sets, as described in this subsection. We denote the data as D = {a^k, y_k}, where n is the total number of individuals and k = 1, …, n; a^k denotes the genotype count vector for individual k (components \( {a}_i^k=0,1,2 \) and i = 1, …, m, where m is the number of SNPs); and y_k is a continuous-variable phenotype of individual k.
The log-likelihood of a statistical model is defined as: $$ L=\sum \limits_k\ln \Pr \left({\mathbf{a}}^k,{y}_k\right)=\sum \limits_k\ln \Pr \left({\mathbf{a}}^k|{y}_k\right)+A, $$ where A = ∑ k ln Pr(y k ) is the likelihood of the marginal distribution of the phenotype. We assume that the latter is distributed normally with mean μ and variance σ2. Maximizing L with respect to these two parameters leads to their estimates \( \widehat{\mu} \) and \( {\widehat{\sigma}}^2 \) (the sample mean and variance), which complete the specification of the marginal phenotype distribution. The conditional genotype distribution is modeled as: $$ \Pr \left(\mathbf{a}|y\right)=\frac{e^{H\left(\mathbf{a};y\right)}}{Z(y)}, $$ $$ H\left(\mathbf{a};y\right)=\sum \limits_{l=0}^1{y}^l\left[\sum \limits_i{h}_i^{(l)}\left({a}_i\right)+\sum \limits_{i<j}{J}_{ij}^{(l)}\Big({a}_i,{a}_j\Big)\right] $$ and \( Z(y)=\sum \limits_{\mathbf{a}}{e}^{H\left(\mathbf{a};y\right)} \) is the normalization factor. In Eq. (3), the two terms inside the brackets represent single-SNP and interaction effects, respectively, with parameters \( \theta =\left\{{h}_i^{(l)}(a),{J}_{ij}^{(l)}\left(a,b\right)\right\} \) defined for SNP indices i, j = 1, …, m, genotype indices a, b = 0, 1, 2, and the index representing phenotype-independent and -dependent effects, l = 0, 1. These parameters are set to zero if a = 0 or b = 0. The null hypothesis (no association between genotype a i and phenotype y) is then represented by the condition \( {h}_i^{(1)}(a)={J}_{ij}^{(1)}\left(a,b\right)=0, \) under which Eq. (2) becomes independent of y. Maximization of L in Eq. 
(1) with respect to these parameters involves computing derivatives: $$ \frac{\partial L/n}{\partial {h}_i^{(l)}(a)}={\widehat{f}}_i^{(l)}(a)-{f}_i^{(l)}(a)-{\lambda}_1{h}_i^{(l)}(a), $$ (4a) $$ \frac{\partial L/n}{\partial {J}_{ij}^{(l)}\left(a,b\right)}={\widehat{f}}_{ij}^{(l)}\left(a,b\right)-{f}_{ij}^{(l)}\left(a,b\right)-{\lambda}_2{J}_{ij}^{(l)}\left(a,b\right), $$ (4b) where \( {\widehat{f}}_i^{(l)}(a)={n}^{-1}{\sum}_k{y}_k^l\delta \left({a}_i^k,a\right) \) and \( {\widehat{f}}_{ij}^{(l)}\left(a,b\right)={n}^{-1}{\sum}_k{y}_k^l\delta \left({a}_i^k,a\right)\;\delta \left({a}_j^k,b\right) \) are the (phenotype-weighted if l = 1) sample genotype frequency and covariance, respectively; the Kronecker delta symbol is defined as δ(a, b) = 1 if a = b and 0 otherwise; and the corresponding quantities without the hat are population averages defined by: $$ {f}_i^{(l)}(a)=\frac{1}{n}\sum \limits_k{y}_k^l\sum \limits_{\mathbf{a}}\delta \left({a}_i,a\right)\Pr \left(\mathbf{a}|{y}_k\right), $$ $$ {f}_{ij}^{(l)}\left(a,b\right)=\frac{1}{n}\sum \limits_k{y}_k^l\sum \limits_{\mathbf{a}}\delta \left({a}_i,a\right)\delta \left({a}_j,b\right)\Pr \left(\mathbf{a}|{y}_k\right). $$ The aforementioned convention of setting the parameters to zero whenever a or b is zero makes the total number of unknown parameters equal to that for the sample genotype frequencies and covariances after taking into account constraints associated with their normalization conditions [10]. In Eq. (4), the last terms penalize overfitting under small sample sizes by forcing single-SNP and interaction parameters to be close to 0. The penalizers λ1 and λ2 are determined below by cross-validation. To calculate Eq. (2) and use it for Eqs. (4) and (5), an approximate treatment is necessary.
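The empirical quantities entering Eq. (4) are simple weighted counts over individuals. A minimal Python sketch (genotypes as a list of per-individual count vectors; illustrative only, not the GeDI implementation):

```python
def sample_freq(A, y, l):
    """Phenotype-weighted sample genotype frequencies of Eq. (4):
    f[i][a] = (1/n) * sum_k y_k^l * delta(a_i^k, a), for a in {0, 1, 2}."""
    n, m = len(A), len(A[0])
    f = [[0.0] * 3 for _ in range(m)]
    for k in range(n):
        w = y[k] ** l  # weight is 1 for l = 0, y_k for l = 1
        for i in range(m):
            f[i][A[k][i]] += w / n
    return f

def sample_freq_pair(A, y, l, i, j):
    """Pairwise analogue f-hat_ij^(l)(a, b) for one SNP pair (i, j)."""
    n = len(A)
    f = [[0.0] * 3 for _ in range(3)]
    for k in range(n):
        f[A[k][i]][A[k][j]] += (y[k] ** l) / n
    return f
```

For l = 0 these reduce to ordinary genotype frequencies; for l = 1 each individual's contribution is weighted by its phenotype value.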
We used the pseudo-likelihood (PL) method [27], which replaces the full distribution by a product over single-SNP distributions conditional on the data: $$ \Pr \left(\mathbf{a}|{y}_k\right)\approx \prod \limits_i\Pr \left({a}_i|{a}_{j\ne i}^k,{y}_k\right)=\prod \limits_i\frac{e^{H_i\left({a}_i|{\mathbf{a}}^k;{y}_k\right)}}{\sum_b{e}^{H_i\left(b|{\mathbf{a}}^k;{y}_k\right)}}, $$ $$ {H}_i\left(a|{\mathbf{a}}^k;{y}_k\right)=\sum \limits_{l=0}^1{y}_k^l\left[{h}_i^{(l)}(a)+\sum \limits_{j\ne i}{J}_{ij}^{(l)}\Big(a,{a}_j^k\Big)\right]. $$ The use of Eqs. (6) and (7) in Eq. (5) allows one to avoid full marginalization over genotypes via the use of single-SNP densities conditional on the data, making the computation tractable for large numbers of interacting SNPs. We determined the penalizers λ1 and λ2 by optimizing the Bayes estimator for phenotypes $$ \overline{y}\left(\mathbf{a}\right)=\int y\Pr \left(y|\mathbf{a}\right) dy=\frac{\int y\Pr \left(\mathbf{a}|y\right)\Pr (y) dy}{\int \Pr \left(\mathbf{a}|y\right)\Pr (y) dy} $$ evaluated by the trapezoidal rule under cross-validation, in which we divided the n individuals into training and test groups at a 4:1 ratio, inferred parameters from the training group under the given penalizers, and calculated Eq. (8) for the test group individuals (Fig. 1). We selected λ1 and λ2 that maximized the prediction score, defined as the correlation between predicted and actual phenotype values, \( R=\mathrm{Cor}\left[{y}_k,\overline{y}\left({\mathbf{a}}^k\right)\right] \).

Continuous discriminant analysis for quantitative traits. Paired genotype (a)-phenotype (y) data for individuals are divided into training and test sets. The training set is used to model the conditional distribution Pr(a|y), while including the interaction effects between all m SNPs. Parameters with large magnitudes that often result from insufficient data are made unfavorable by the penalizer λ.
Bayes' rule is then used to obtain Pr(y|a) and applied to predict phenotype values for individuals in the test group. The correlation R between the predicted and actual phenotypes is optimized with respect to λ. Because of the training/test set division, R2 is in general not equal to r2, the proportion of phenotype variance explained by genetic predictors. The latter can be estimated by using the optimized penalizer and repeating the inference.

The software GeDI (Genotype distribution-based inference), which implements the quantitative trait analysis algorithm, is available at https://github.com/BHSAI/GeDI.

Ridge regression

For purposes of comparison, we implemented RR, which fits the data to the model $$ y\left(\mathbf{a}\right)=\alpha +\sum \limits_i{a}_i{\beta}_i+\sum \limits_{i<j}{a}_i{a}_j{\gamma}_{ij}+\varepsilon, $$ where \( \varepsilon \sim \mathrm{N}\left(0,{\sigma}_y^2\right) \), or \( \overline{\mathbf{y}}=\mathbf{X}\kern0.1em \mathbf{b} \), where \( \overline{\mathbf{y}} \) is the column vector with elements y_k, b is the coefficient vector with p = 1 + m + m (m – 1) / 2 elements {α, β, γ}, and X is the n × p data matrix with 1 for the first column, \( {a}_i^k \) for columns 2 to m + 1, and \( {a}_i^k{a}_j^k \) for the rest. This approach can be regarded as approximating the log likelihood of the data as: $$ L=\sum \limits_k\ln \left[\Pr \left({y}_k|{\mathbf{a}}^k\right)\Pr \left({\mathbf{a}}^k\right)\right]\approx \sum \limits_k\ln \Pr \left({y}_k|{\mathbf{a}}^k\right)=-\frac{1}{2{\sigma}_y^2}\sum \limits_k{\left[{y}_k-y\left({\mathbf{a}}^k\right)\right]}^2-\frac{n}{2}\ln \left(2{\pi \sigma}_y^2\right), $$ where the marginal genotype distribution is assumed to be uniform.
We added a penalizer and maximized \( L-{\lambda}^{\prime }{\mathbf{b}}^t\mathbf{b} \) to obtain: $$ \widehat{\mathbf{b}}={\left({\mathbf{X}}^t\mathbf{X}+\lambda \mathbf{I}\right)}^{-1}{\mathbf{X}}^t\mathbf{y}=\mathbf{V}{\left({\mathbf{R}}^t\mathbf{R}+\lambda \mathbf{I}\right)}^{-1}{\mathbf{R}}^t\mathbf{y}, $$ where \( {\widehat{\sigma}}_y^2={\left(\mathbf{y}-\mathbf{Xb}\right)}^t\left(\mathbf{y}-\mathbf{Xb}\right)/n \), \( \lambda ={\lambda}^{\prime }{\widehat{\sigma}}_y^2 \), I is an identity matrix of dimension p, and V is a p × n orthogonal matrix from singular value decomposition [28] of X. The second form of Eq. (11) reduces computational costs for p > n. To enable this form, we chose λ to be uniform for all components of b in RR, whereas in CDA (continuous discriminant analysis) we used two distinct penalizers for the single-SNP and interaction terms, respectively.

Simulated data

We generated simulated data by first randomly assigning parameter values from normal distributions for a given number m of interacting SNPs. Phenotype values for a varying number of individuals (sample size) were sampled from the standard normal distribution. We then used Eq. (2) to calculate the probabilities of all possible genotypes (2^m in total) for each value of y_k and chose one genotype for each individual based on these probabilities. We then applied CDA and RR, performed 5-fold cross-validation, and determined the penalizers by maximizing R. For these simulations, we used the dominant model to reduce the computational cost of enumerating all possible genotypes. For all other computations using animal trait data, we used the genotypic model, which includes the dominant model as a special case and generally enhances the power to infer associations relative to the dominant model [12].

Outbred mice data

We used the genotype data for 1934 mice reported by Nicod et al. [8] and selected animals for which the trait values under consideration were available. The sample size for mice ranged from 1065 to 1716 (Additional file 1: Table S1).
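The closed-form RR update in Eq. (11) above can be sketched in plain Python. This is a toy normal-equations version with a hand-rolled solver, not the SVD-based dual form the paper uses for p > n; the design matrix carries an intercept, the genotype counts, and all pairwise products:

```python
def design_row(a):
    """Row of X: intercept, genotype counts a_i, and products a_i * a_j (i < j)."""
    m = len(a)
    return ([1.0] + [float(v) for v in a]
            + [float(a[i] * a[j]) for i in range(m) for j in range(i + 1, m)])

def solve(A, b):
    """Gaussian elimination with partial pivoting for the normal equations."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ridge_fit(genotypes, y, lam):
    """b-hat = (X^t X + lam * I)^(-1) X^t y, with a uniform penalizer as in the text."""
    X = [design_row(a) for a in genotypes]
    p = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) + (lam if i == j else 0.0)
            for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * yk for r, yk in zip(X, y)) for i in range(p)]
    return solve(XtX, Xty)
```

With a vanishing penalizer and noiseless data generated from the linear-plus-interaction model of Eq. (9), the fit recovers the generating coefficients.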
We used the corrected mean time freezing during the cue and context tests for fear conditioning, average pulse reactivity for prepulse inhibition, fraction of time in open arms for the elevated plus maze, and sleep length in 24 h, as well as the difference in light and dark periods for sleep (Additional file 1: Table S1). Non-integral dosage values for imputed SNPs were rounded off to integral allele counts. The total number of SNPs for all data sets was 359,559. Fractional trait values between 0 and 1 were log-transformed before use with small constants added in the argument to avoid singularities. We examined the quantile-quantile plots of independent-SNP p-values obtained from linear regression for each data set, and sought to eliminate any inflation by stratifying the data set into two sub-samples based on a covariate. The division into male and female mice proved adequate for many traits, whereas forced swim and sleep difference between light and dark periods required different choices (immobility during the first 2 min and average body weight, respectively; Additional file 1: Table S1). We implemented and used a meta-analysis scheme for collective inference involving multiple sub-samples [12, 29] (two in this work). Each sample was first divided into training and test sets, and the training sets were used to infer single-SNP and interaction parameters separately for each sub-sample. These models were then averaged with sample-size weighting and subsequently used to predict phenotypes for the aggregated test animals of all sub-samples. The prediction score R was then optimized with respect to the penalizers.

We used the genotype data for 885 Labrador Retrievers, reported by Ilska et al. [5]. For the two traits we considered (fear of noise and of humans/objects), the sample sizes were 868 and 882, respectively (Additional file 1: Table S1). The values on the questionnaire scale (from 1 to 5) were log-transformed before use, as for mice.
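The sample-size-weighted model averaging in the meta-analysis scheme described above amounts to a weighted mean of the parameter vectors inferred from each sub-sample. A minimal sketch (flat parameter vectors assumed purely for illustration):

```python
def meta_average(params, sizes):
    """Sample-size-weighted average of parameter vectors inferred from
    sub-samples, as in the meta-analysis scheme [12, 29]."""
    total = float(sum(sizes))
    return [sum(n * p[i] for n, p in zip(sizes, params)) / total
            for i in range(len(params[0]))]
```

The averaged model is then used to predict phenotypes for the pooled test animals of all sub-samples.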
We used the first principal component to stratify animals into two groups (large and small principal component values; see Additional file 1: Figure S5) and performed meta-analyses. We chose 110,419 SNPs with known CanFam3 positions [30] for analysis.

Association testing of pathway-based variant groups

We used mouse and dog pathways from the Reactome database [31] (downloaded on December 23, 2016). For each gene set (mouse/dog orthologs of the human genes in the corresponding human pathway), we formed a union of all SNPs whose positions in the genome were within 50 kb of the coding regions of all genes. We considered all pathways with 5 or more SNPs (1502 and 1459 in total for mice and dogs, respectively). The mouse data set typically contained groups of neighboring SNPs with near-perfect LD; before association testing, we used PLINK [32] (window size 50 bp shifted by 5 SNPs, LD threshold 0.9) to prune the SNP set of a given pathway, and then stratified the set into two covariate-dependent subgroups and performed collective inference meta-analysis. We chose this pruning procedure on the basis of our previous work showing that pathway-based association tests are insensitive to local LD, typically from 1.0 to ~ 0.5 [10, 12]. Pruning with a threshold of 0.9 substantially reduced the number of SNPs for each pathway in the mouse data set, allowing for consideration of much larger pathways than without pruning. We used the dog data set without pruning because it did not contain large chunks of SNPs with maximal LD. The main statistic we used for association testing was the prediction score R, defined as the correlation between the predicted and actual phenotypes calculated for individuals in the test set within cross-validation. The use of cross-validation allows us to avoid any bias arising from overfitting. We optimized the prediction score R with respect to λ1 and λ2, which we allowed to vary independently between 0.01 and 100.
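The LD-pruning step above is performed with PLINK in the paper; as a conceptual stand-in (not the PLINK algorithm: window and step are omitted), a greedy pairwise-r² filter can be sketched as follows, keeping a SNP only if its squared correlation with every previously kept SNP is at or below the threshold:

```python
def r_squared(x, y):
    """Squared Pearson correlation between two genotype columns."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    if vx == 0 or vy == 0:
        return 1.0  # monomorphic column carries no information; treat as redundant
    return cov * cov / (vx * vy)

def ld_prune(columns, r2_max=0.9):
    """Return indices of SNP columns kept after greedy LD pruning."""
    kept = []
    for j, col in enumerate(columns):
        if all(r_squared(columns[i], col) <= r2_max for i in kept):
            kept.append(j)
    return kept
```

Perfectly correlated duplicates are collapsed to a single representative, which is the effect the pruning aims for on blocks of near-perfect LD.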
We included in this optimization the special case in which the interactions were turned off (λ2 = ∞). To estimate the p-values of SNP sets, we used the fact that our main statistic is a correlation, for which the null distribution is well-known analytically. We used P = 1 − Φ (z), where Φ is the cumulative distribution function of the standard normal distribution, \( z=\sqrt{n-3}\left(f-{f}_0\right) \), where f = (1/2) ln[(1 + R)/(1 − R)] and f0 = (1/2) ln[(1 + R0)/(1 − R0)] (Fisher's transformation). To obtain the mean correlation R0 under the null hypothesis necessary in this formula, we repeated the inference 10 times with phenotype labels permuted and calculated the mean of the correlations. This mean value was typically close to zero and positive, but negative for some pathways. We tested this null distribution (the Fisher-transformed correlation is normally distributed) for a selection of SNP sets and calculated p-values by phenotype-label permutation directly as the fraction of replicates among ~ 1000 for which R < R0 (Additional file 1: Figure S3). We also tested the possibility of assuming R0 = 0, and found it to yield substantial deviations from the diagonal in the quantile-quantile plots for pathways with P close to 1; the only choice that produced correct p-value distributions was to allow R0 to be negative and to estimate it by phenotype permutation for each pathway (Additional file 1: Figure S4 and Fig. 4). We also compared the results based on the main Reactome pathway set with those based on an updated version (February 2018). We tested the association with fear conditioning (cued test) of 102 pathways that had been added to the main database, and found that the highest ranked pathways had P values on the order of ~ 10^−3 (Additional file 1: Table S2).
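The p-value computation just described uses only Fisher's transformation and the standard normal tail, so it can be written with nothing beyond the math module:

```python
import math

def prediction_p_value(R, R0, n):
    """One-sided p-value P = 1 - Phi(z) for a prediction score R against the
    permutation-estimated null mean R0, with z = sqrt(n - 3) * (f - f0) and
    f = (1/2) ln[(1 + R)/(1 - R)] (Fisher's transformation)."""
    f = 0.5 * math.log((1.0 + R) / (1.0 - R))
    f0 = 0.5 * math.log((1.0 + R0) / (1.0 - R0))
    z = math.sqrt(n - 3) * (f - f0)
    return 0.5 * math.erfc(z / math.sqrt(2.0))  # 1 - Phi(z)
```

A negative permutation mean R0 increases z and therefore decreases P, which is why estimating R0 per pathway (rather than fixing R0 = 0) matters.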
We estimated the broad-sense heritability of a pathway as follows: for a given pathway, we first divided the cohort into two halves and used the first half to identify the optimal values of penalizers λ1 and λ2, which maximize the prediction score R under cross-validation. We then applied the inference for the second half under these penalizer values and calculated the squared correlation between the predicted and actual trait values. For comparison, we used the software GCTA [33] and LDAK [34] to estimate the proportion of variance explained by non-interacting SNPs (narrow-sense heritability) contained in the same pathway (Fig. 8). In the latter calculations, we included sex (cued fear and prepulse inhibition) and the first principal component (fear of noise in dogs) as covariates.

Collective inference for quantitative traits

We implemented the algorithm we termed the continuous discriminant analysis (CDA) procedure for quantitative traits, where the genotype-phenotype data for m SNPs and n individuals were fit to a joint distribution model using the maximum likelihood method (Fig. 1). We first modeled the phenotype data by a normal distribution. We then considered the genotype distribution conditional on phenotypes parameterized by the sum of the additive and interaction terms of all variants. For binary phenotypes, these additive and interaction parameters are defined separately for case and control groups [12]. In contrast, for a single cohort with quantitative traits, we considered these parameters to be linear functions of the phenotype value. The intercept and slope of the additive and interaction parameters of the genotype distribution conditional on phenotypes were then inferred using the maximum likelihood method (see Methods).
For the typical model sizes we considered (m, the number of SNPs, of up to ~ 1000), the total number of model parameters including the interaction terms often greatly exceeded the sample sizes, and regularization was necessary to prevent overfitting. We adopted a cross-validation scheme in which we divided the sample into training and test groups, and performed inference by using only training individuals (Fig. 1) in the presence of penalizers. The genotype distribution conditional on phenotypes was then used to predict the phenotype values for test individuals. We calculated the correlation R between the true and predicted phenotypes as the performance measure and maximized it with respect to the penalizers to determine the optimal fit. Because of the training/test group division, this prediction score R2 is distinct from the usual proportion of variance explained by regression (r2). We estimated the latter by dividing the sample into two parts, using the first half to determine the optimal penalizer values, and using the second half to calculate the squared correlation from self-prediction. We tested our algorithm using simulated data, for which m was small enough so that we could enumerate all possible genotypes. We maximized the prediction score R as a function of the penalizer under conditions where the inferred interaction parameters were closest to the true values for a given sample size (Fig. 2). Cross-validation efficiently identified the regularization conditions that optimized prediction, while avoiding overfitting for small sample sizes. We then compared the power to detect the overall significance of a group of interacting SNPs under CDA and RR [28] (i.e., linear regression, including all interaction terms, and regularized by the same cross-validation scheme as that for CDA). 
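The phenotype-prediction step that underlies the score R is the posterior mean of Eq. (8) in Methods, evaluated by the trapezoidal rule. A minimal sketch, assuming the caller supplies ln Pr(a|y) as a function of y (in the actual method this comes from the pseudo-likelihood model; the prior's normalization constant cancels in the ratio):

```python
import math

def trapezoid(vals, grid):
    """Trapezoidal rule on an arbitrary grid."""
    return sum(0.5 * (vals[i] + vals[i + 1]) * (grid[i + 1] - grid[i])
               for i in range(len(vals) - 1))

def bayes_phenotype(log_pr_a_given_y, mu, sigma2, grid):
    """Posterior mean y-bar(a) = int y Pr(a|y) Pr(y) dy / int Pr(a|y) Pr(y) dy,
    with Pr(y) the fitted marginal N(mu, sigma2)."""
    prior = [math.exp(-(y - mu) ** 2 / (2.0 * sigma2)) for y in grid]
    w = [math.exp(log_pr_a_given_y(y)) * p for y, p in zip(grid, prior)]
    num = trapezoid([y * wi for y, wi in zip(grid, w)], grid)
    return num / trapezoid(w, grid)
```

With an uninformative genotype model the estimate falls back to the marginal mean; a likelihood that tilts linearly in y shifts the posterior mean accordingly.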
The optimized prediction score R was higher for CDA than for RR in all cases; in addition, the differences were greater the smaller the sample size n and the larger the number of SNPs m, which led to higher power (type I error α = 0.05, evaluated over multiple replicates of simulated samples) (Fig. 3). These results suggest that for small sample sizes and high dimensionality (number of interacting variants considered), CDA achieves higher power than regression-based methods to detect collective association strengths of interacting SNPs. The statistical significance of an inference scored by R can be evaluated using p-values obtained from the null distribution of R known for normally distributed data (see Methods).

Regularized inference of genotype-quantitative trait associations for two different sample sizes and varying penalizer values. a–d Simulated data with n = 100, where the overall dependence of prediction score R (correlation between predicted and actual phenotype values for test individuals) is shown in a, and b–d show the comparisons between predicted and true parameter values (single-SNP parameter h and interaction J for each SNP and SNP pairs, respectively) for three different penalizer λ values. Closer to the diagonal is better. Note that the condition λ = 0.1 in c optimizing R (see a) gives the best fit. e–g Analogous results for n = 10^4. The sample size is large enough such that overfitting under small λ is negligible. The number of SNPs was m = 5 and the dominant model was used. Parameters were generated randomly from normal distributions: h(0) ~ N(−0.3, 0.1^2), h(1) ~ N(0.3, 0.1^2), J(0) ~ N(0, 0.05^2), and J(1) ~ N(0.1, 0.05^2). The phenotype values {y_k} for k = 1, …, n were generated from N(0, 1) and, for each individual, the conditional genotype distribution given by Eqs. (2–3) was used to generate genotypes

Collective inference performance. Ridge regression (RR) and CDA were compared using simulated data.
We first sampled phenotype values of n individuals from the standard normal distribution. Restricting ourselves to the number of SNPs (m ≤ 20) allowing for the enumeration of all possible genotypes, we then assigned single-SNP and interaction parameters for m = 10 (a), m = 15 (b), and m = 20 SNPs (c) from normal distributions h_i^{(0)} ~ N(0, 0.01), J_ij^{(0)} ~ N(0, 0.01), h_i^{(1)} ~ N(0, 0.01), and J_ij^{(1)} ~ N(0.1, 0.01), under the dominant model. We next calculated the genotype distribution conditional on phenotypes for all possible genotypes, and chose a genotype for each individual based on this distribution. We repeated this sampling for 100 replicates. For each data set, we applied RR and CDA collective inference, using a single penalizer λ determined by optimizing R by cross-validation (right column). Power was defined as the proportion of replicates for which P < 0.05

Behavioral traits of outbred mice

We applied our algorithm to the genotype-quantitative trait data of outbred mice [8] (Additional file 1: Table S1 and Figure S1). Independent-SNP p-values from CDA for the special case of single SNPs without interaction effects were numerically close to linear regression outcomes over a wide range of significance levels (Additional file 1: Figure S2). To account for the effects of covariates, such as sex, we used meta-analyses with sample-size-weighted averaging of parameters [29], where the sample was sub-divided into two subgroups based on covariate distributions. For each group, we separately performed CDA inference and averaged the inferred parameters over the subgroups. In our inference, the association strength was quantified by the correlation R, and the corresponding p-value was estimated from the known null distribution of normally distributed data (see Methods).
We tested this assumption using a selection of SNP sets and estimating their p-values directly by permutation sampling, and found good agreement (Additional file 1: Figure S3), which indicated that the computationally expensive permutation-based testing can be avoided in general. We then clustered SNPs into 1502 groups of varying sizes corresponding to pathways [31] and tested the association of each group with behavioral traits. (See Additional file 2: Table S3 for top-ranked pathway lists of all traits considered.)

Fear conditioning

We considered two fear conditioning traits: the fraction of time freezing during presentation of a tone (cue test) and that during exposure to the context alone (context test) [8]. Quantile-quantile plots of all pathway gene set-based SNP groups indicated adequate control of genomic inflation under meta-analyses with sex-based subgroups (Fig. 4a). We found stronger associations of top-ranked pathways for the cue test compared to the context test (Fig. 5a–b): for cue testing, the first group of pathways contained two that exceeded the Bonferroni threshold [Effects of phosphatidylinositol 4,5-bisphosphate (PIP2) hydrolysis, P = 2.4×10^−6; Signal transduction, P = 3.3×10^−5; see Additional file 1: Figure S4 for R values]. The presence of Signaling by G protein-coupled receptors (GPCRs; P = 3.6×10^−4), which contains the PIP2 hydrolysis pathway, suggested that the strongest association with cued fear arises from the group of genes involved in post-synaptic signaling by GPCRs during memory consolidation [35]. The PIP2 hydrolysis pathway contained 82 SNPs (after pruning by LD r2 < 0.9) distributed over ~ 20 genes. None of these individual genomic loci were dominant in association strengths without interaction effects (Fig. 6a), indicating the collective nature of the PIP2 hydrolysis-cued fear association.

Quantile-quantile plots of behavioral traits for pathway-based SNP groups from CDA inference.
a Fear conditioning (FC) in cued and context tests. b Prepulse inhibition (PPI). c Forced swim test. d Elevated plus maze. e Sleep (total duration and difference in sleep lengths between lighted and dark periods). f Fear of noise and humans/objects in dogs. Data sets used are outbred mice (a–e) and Labrador Retrievers (f). Colored symbols and filled symbols represent pathways with false discovery rate < 0.05 and with significance exceeding Bonferroni-corrected threshold, respectively

Top-ranked pathways associated with quantitative behavioral traits. Results for mice (a–g) and dogs (h) are shown. a Fear conditioning (FC) cue test. b FC context test. c Prepulse inhibition (PPI). d Elevated plus maze. e Forced swim test. f Sleep duration in 24 h. g Differences in sleep length in light (L) and dark (D) periods. h Fear of noise in dogs. Dashed red lines represent Bonferroni-corrected significance thresholds. Groups of pathways belonging to different classes are labeled with colored texts. ABC, ATP-binding cassette; Activ., activation/activates; alkyl., alkylation; assemb., assembly; biol., biology; biosynth., biosynthesis; catab., catabolism; Cdk, cyclin-dependent kinase; cell., cellular; CL, cardiolipin; clear., clearance; cleav., cleavage; cmplx., complex; cont., containing; demethyl., demethylates; devel., development/developmental; DSCAM, Down syndrome cell adhesion molecule; enab., enables; EPH, erythropoietin-producing human hepatocellular receptor; ER, endoplasmic reticulum; exec., execution; expr., expression; FA, fatty acid; facilit., facilitative; FZD, frizzled protein; gCOO, γ-carboxylation/carboxylated; glycosyl., glycosylation; GPCR, G protein-coupled receptor; homeo., homeostasis; IFN, interferon; IL, interleukin; ind., induces/induced; indep., independent; inhib., inhibition/inhibits; inter., interaction; interconv., interconversion; intermed., intermediate; ISG15, interferon-stimulated gene 15; LPC, lysophosphatidylcholine; LRRFIP1, leucine-rich repeat
flightless-interacting protein 1; MAPK, mitogen-activated protein kinase; mech., mechanism; med., mediated; metab., metabolism; misc., miscellaneous; neg., negative; NFkB, nuclear factor kappa B; NLRP, NACHT, LRR and PYD domains-containing protein; oxidat., oxidation; PAO, polyamine oxidase; phosph., phosphorylation; PI, phosphatidylinositol; PIP2, phosphatidylinositol phosphate 2; pol I, polymerase I; prog., programmed; propept., propeptide; prot., protein; R., receptor/receptors; Rap, Ras-related protein; reg., regulation/regulates; remod., remodeling; remov., removal; repl., replication; resp., response; RSK, ribosomal 6 kinase; rxn., reaction; sig., signal/signaling; stimul., stimulation; synap., synaptic; synth., synthesis; sys., system; TDG, thymine-DNA glycosylase; term., terminal/terminates/termination; TET, ten-eleven translocation methylcytosine dioxygenase; TFAP, transcription factor activating enhancer binding protein; TNFR, tumor necrosis factor R; transcr., transcription; transm., transmembrane; transp., transports/transporter/transportation; TSR, thrombospondin repeat; ubiquit., ubiquitination; UFA, unsaturated fatty acid.

Independent-SNP and collective association levels of variants contributing to pathways. Those highly ranked for mouse behavioral traits are shown. a Manhattan plot for fear conditioning, showing single-SNP p-values for linear regression. The mouse SNPs for genes in two pathways, Effects of PIP2 hydrolysis and γ-carboxylation of protein precursors are shown in color. Horizontal lines show the collective inference p-values for these two pathways. b–c Detailed views of two loci contributing to pathways in a. d–f Prepulse inhibition and three pathways, Platelet homeostasis, Serotonin clearance from synaptic cleft, and Metallothioneins bind metals. The collective p-values of the latter two pathways (bottom horizontal lines) are indistinguishable.
Filled rectangles represent the coding regions of genes indicated.

The second group of highly ranked pathways contained those involved in γ-carboxylated proteins, including their synthesis, transport in the endoplasmic reticulum (ER) and Golgi apparatus, and modifications (Fig. 5a), of which two had false discovery rates (FDR) < 0.05 (Removal of N-terminal propeptides from γ-carboxylated proteins, P = 6.3×10−5; γ-carboxylation of protein precursors, P = 7.6×10−5). The γ-carboxylation of protein precursors pathway contained 10 SNPs after LD pruning near two coagulation factor-coding genes, F2 (thrombin) and F9, in addition to the genes Bglap, Ggcx, Gas6, and Proc. Recent studies have revealed that γ-carboxyglutamate-containing coagulation factors, particularly thrombin, in addition to playing central roles in hemostasis of peripheral blood [36], also regulate synaptic plasticity by stimulating protease-activated receptor 1 (PAR1) [22]. PAR1, a GPCR highly expressed on neurons, is activated by the cleavage of its extracellular N-terminus via the protease action of thrombin [37, 38]. Coupling of PAR1 to Gαq protein activates phospholipase Cβ (PLCβ), which hydrolyzes PIP2 to generate second messenger molecules, inositol 1,4,5-triphosphate (IP3) and diacylglycerol (DAG), leading to the phosphorylation of cytosolic proteins by protein kinase C and mobilization of Ca2+, respectively [37, 39]. Together, the PIP2 hydrolysis and γ-carboxylation pathway groups in Fig. 5a strongly implicate this thrombin-PAR1 signaling pathway in cued fear, and in particular, the dynamic modulation of G protein coupling during long-term potentiation in the amygdala [22]. These results are also consistent with our previous finding of a high association between γ-carboxylation pathways and PTSD in humans [12, 40] (Fig. 4a in Ref. [12]; Transport of γ-carboxylated protein precursors from ER to Golgi apparatus, P = 9.6×10−5 in human PTSD versus P = 1.8×10−4 in the current study for cued fear).
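The figure legends above flag pathways either by FDR < 0.05 or by exceeding the Bonferroni-corrected threshold. As a minimal, self-contained illustration of these two multiple-testing criteria (not the authors' pipeline, whose pathway p-values come from collective inference with permutation), one might write:

```python
def multiple_testing(pvals, alpha=0.05):
    """Illustrative sketch of the two criteria used in the figures:
    the Bonferroni-corrected threshold (p < alpha/m) and the
    Benjamini-Hochberg procedure controlling the false discovery rate."""
    m = len(pvals)
    bonferroni = [p < alpha / m for p in pvals]
    # Benjamini-Hochberg: find the largest rank k (1-based, p ascending)
    # with p_(k) <= alpha * k / m, then reject the k smallest p-values.
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= alpha * rank / m:
            k = rank
    fdr = [False] * m
    for i in order[:k]:
        fdr[i] = True
    return bonferroni, fdr
```

Bonferroni controls the family-wise error rate and is the more conservative of the two, which is why fewer pathways exceed the Bonferroni threshold than the FDR cutoff in the figures.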
The third group of pathways associated with cued fear (P < 10−3) comprised those for transcription and mRNA decay (RNA polymerase I transcription termination, P = 4.1×10−4; mRNA decay by 3′ to 5′ exoribonuclease, P = 8.7×10−4), which are likely relevant in the regulation of synaptic plasticity at the levels of transcription and translation, e.g., by the transport and storage of mRNAs in distal dendrites [41]. Gene sets associated with the amount of freezing during the context test showed a distribution similar to those during the cue test (Fig. 4a) but lacked pronounced groups of highly ranked pathways (Fig. 5b). The highest-ranked pathways included those for cell cycle and axon guidance, which likely affect neural development and thereby fear responses.

Prepulse inhibition

The inference results for prepulse inhibition (Fig. 5c) were dominated by the top-ranked pathway, Platelet homeostasis (P = 5.6×10−7), whose strong association was clearly collective in nature, containing a large number of variants of individually low association levels scattered across different chromosomes (Fig. 6d). A pathway lower in association strength but nonetheless notable was Serotonin clearance from the synaptic cleft (P = 2.6×10−4), which describes the action of the serotonin transporter encoded by Slc6a4, the target gene of numerous antidepressant drugs known as serotonin reuptake inhibitors [42, 43]. SNPs for this pathway consisted solely of those near Slc6a4, whose individual association levels were negligible in contrast to their collective p-value (Fig. 6e). Serotonin transporter gene variants have previously been linked to startle responses in human subjects [44].
The large body of evidence implicating Slc6a4 in behavioral traits and psychiatric disorders [42, 43], along with the strong association of Platelet homeostasis, suggests that serotonin and its signaling play key roles in prepulse inhibition: in addition to modulating brain functions, serotonin is abundantly stored in platelets outside the brain and regulates vasoconstriction, dilation, and other cardiac functions [45]. A pathway similar in association strength was Metallothioneins bind metals (P = 2.7×10−4), which contained SNPs near genes Mt1–4 (Fig. 6f).

Elevated plus maze

For elevated plus maze test data, we chose the fraction of time spent in closed arms as the trait representing anxiety. A relatively large portion of pathways showed substantial deviations from the null distribution for this trait, while the sex-based meta-analysis still adequately controlled for inflation in pathways (P > 0.1) (Fig. 4d). The highest-ranked pathways (Fig. 5d) were Interconversion of polyamines (P = 2.2×10−6), arising from SNPs near the Smox gene (Fig. 7b), and Hydrolysis of lysophosphatidylcholine (LPC; P = 7.7×10−6), containing SNPs near the Pla2g4a and Gpcpd1 genes (Fig. 7b–c). In contrast to these pathways, whose association appeared to arise from SNPs near genes located in one of the loci with strong LD, Interleukin (IL)-10 signaling (P = 1.5×10−5) was highly polygenic, similar to Effects of PIP2 hydrolysis and Platelet homeostasis (Fig. 6a,d).

Independent-SNP and collective association levels of variants contributing to pathways associated with elevated plus maze. a Manhattan plot and variants in three pathways, Interconversion of polyamines, Hydrolysis of lysophosphatidylcholine, and Interleukin-10 signaling. b–c Detailed views of two loci contributing to pathways in a.
Filled rectangles represent the coding regions of genes indicated.

Forced swim test and sleep-related traits

Inference for the forced swim test (immobility during the last 4 min as a measure of depression) indicated signs of inflation under sex-based meta-analysis. We performed meta-analyses by sub-dividing the cohort into two groups of high and low immobility during the first 2 min and inferred the association with immobility during the last 4 min separately, so that only the component of depression traits induced by stress (forced swim) would be tested. This choice removed genomic inflation (Fig. 4c) and yielded a top-ranked pathway (Fig. 5e), Negative feedback regulation of mitogen-activated protein kinase (MAPK) pathway (P = 2.8×10−4), along with cell cycle pathways (G2/M DNA damage checkpoint, P = 2.6×10−4). Our finding of the association of MAPK signaling and its negative regulators is consistent with reported evidence for their roles in depression [46, 47]. We additionally tested two sleep-related traits—overall duration and difference in sleep length between light and dark periods [8]—and found Regulation of Frizzled by ubiquitination (P = 9×10−5) of Wnt signaling and Regulation of signaling by Nodal (P = 5×10−5) to be highly associated, respectively, with each trait (Fig. 5f–g).

Behavioral traits of dogs

To gain further insight into the genetic pathways associated with fear-related traits, we additionally analyzed recent dog personality trait data reported for Labrador Retrievers by Ilska et al. [5]. We chose two dog traits for analysis: fear of noise and of humans/objects. Independent SNP analysis by linear regression indicated substantial inflation from the population structure. We stratified the cohort into two groups by principal component analysis (Additional file 1: Figure S5), and used meta-analysis, which reduced inflation to levels comparable to those from linear mixed model analysis [48] for independent SNPs (Additional file 1: Figure S6).
Quantile-quantile plots of pathways under collective inference indicated distributions (Fig. 4f) comparable to those of elevated plus maze for mice (Fig. 4d). Overall, fear of noise, the trait most strongly associated under independent-SNP analyses [5], was also more strongly associated with pathways than was fear of humans/objects. The pathway most strongly associated with fear of noise (Fig. 5h) was Hemostasis (P = 7.4×10−7), providing further support to the suggested contributions of coagulation factors to cued fear in mice (Fig. 5a) and serotonin in platelets to prepulse inhibition (Fig. 5c). Hemostasis is a large pathway (3592 SNPs) whose association was entirely collective (Additional file 1: Figure S7). Two pathways among those exceeding the Bonferroni threshold for fear of noise were Reversal of alkylation damage by DNA dioxygenases (P = 1.6×10−5) and LRR FLII-interacting protein 1 activates type I interferon production (P = 1.5×10−5). One possible route through which polymorphisms in these DNA damage and innate immune pathways could affect the fear response is via the neural development of cortical interneurons [49], whose disruption can lead to variations in the ability to control fear. Further relevance of these pathway groups to fear response is suggested by recent findings linking stress-hormone action to DNA damage and cytosolic detection of DNA [50, 51]. Pathways highly ranked for fear of humans/objects (Additional file 1: Figure S8) represented a range of similar and other developmentally relevant processes, including apoptosis (Breakdown of nuclear lamina, P = 1.9×10−5). We found that Synthesis of inositol phosphates (IPs) in the nucleus was also relatively highly ranked for fear of humans/objects albeit without exceeding the Bonferroni threshold (P = 4.9×10−5), consistent with the high association between Effects of PIP2 hydrolysis and cued fear in mice (Fig. 5a). 
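The stratification-plus-meta-analysis strategy described above for the dog cohort can be sketched as follows. This is a hedged illustration under simplifying assumptions: two strata split at the median of the first genotype principal component, a single SNP tested by ordinary regression, and METAL-style sample-size-weighted combination of z-scores. The function name and all choices here are hypothetical and do not reproduce the authors' CDA-based inference.

```python
import numpy as np

def stratified_meta_analysis(genotypes, phenotype):
    """Illustrative sketch: split the cohort into two strata by the first
    genotype principal component, test one SNP by simple regression in each
    stratum, and combine the per-stratum z-scores with sample-size weights
    (the scheme used by METAL)."""
    X = genotypes - genotypes.mean(axis=0)
    # Leading right singular vector = PC1 loadings; project samples onto it.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    pc1 = X @ vt[0]
    zs, ws = [], []
    for mask in (pc1 <= np.median(pc1), pc1 > np.median(pc1)):
        g, y = X[mask, 0], phenotype[mask]      # test the first SNP only
        beta = np.cov(g, y)[0, 1] / np.var(g)   # regression slope
        resid = y - y.mean() - beta * (g - g.mean())
        se = np.sqrt(np.var(resid) / (len(g) * np.var(g)))
        zs.append(beta / se)
        ws.append(np.sqrt(mask.sum()))          # weight by sqrt(stratum size)
    w, z = np.array(ws), np.array(zs)
    return float((w * z).sum() / np.sqrt((w ** 2).sum()))
```

Because each stratum is analyzed separately and only the z-scores are combined, spurious associations driven by mean phenotype differences between subpopulations are suppressed, which is why this reduces genomic inflation.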
These pathways highly ranked for fear in dogs were all predominantly collective in nature, with no constituent SNPs dominant in independent-SNP association levels (Additional file 1: Figure S7).

Narrow- and broad-sense heritability estimates for pathways

We estimated the proportion of variance explained by interacting SNPs for a selection of top-ranked pathways associated with mouse and dog behavioral traits (Fig. 8). In contrast to standard linear regression analyses involving low-dimensional predictors, in our approach, the proportion of variance explained, r2, was obtained by evaluating the correlation between predicted and observed phenotypes using the optimal penalizing conditions determined from cross-validation (Fig. 1). This definition also implies that the genetic component of r2 corresponds to the broad-sense heritability, which is, in general, non-additive. We compared these broad-sense heritability estimates for pathways highly ranked for cued fear and prepulse inhibition in mice (Fig. 8a–b) with the additive (narrow-sense) heritability computed by GCTA [33] and LDAK [34]. The narrow-sense heritability values computed by the two methods were similar, with those obtained by LDAK being relatively larger in magnitude overall. In contrast, broad-sense heritability was substantially larger to a varying degree, but typically more so for larger pathways, which hold more room for non-additive effects (Fig. 8a).

Broad-sense heritability of pathways compared to proportion of additive variance explained. a Fear conditioning (cue test) in mice. b Prepulse inhibition (PPI) in mice. The top-ranked pathways in Fig. 5a,c are shown in the same order. CDA values represent r2 estimated using regularization conditions determined from cross-validation applied to half of the whole cohort and repeating the inference for the other half. Error bars represent the 95% c.i.
The GCTA and LDAK outcomes represent the proportion of variance explained by the same set of SNPs but without interaction effects. For pathways in which the GCTA/LDAK p-values were higher than 0.05, the proportion of variance was set to zero. The CDA p-values are all smaller than 10−3 (Fig. 5).

We introduced a quantitative trait-mapping approach that targets collective associations of a group of SNPs while taking into account inter-variant interaction effects as well as the effects of non-uniform, empirical high-dimensional distributions of genotypes within the cohort. Performance tests of the algorithm suggested a substantial enhancement in power compared to regression-based methods, similar to the finding for binary phenotypes [10]. Although the approach is marginally more demanding computationally than case-control analyses, the usual advantage of quantitative trait inference, of requiring smaller sample sizes to achieve similar levels of power, is expected to apply under collective inference as well. In addition to the quantitative trait data covered here and the binary case-control data considered in a previous study, one could also analyze categorical data with multiple discrete phenotypes. Categorical data can be treated by an extension of the binary phenotype formulation (Supplementary Text 1 in [10]). A limitation shared by both quantitative and discrete phenotype versions of our method is the reliance on a pathway database, which presumably influences the results strongly. We demonstrated the practical utility of our approach by applying it to the genotype-phenotype data sets of outbred mice [8] and pet dogs [5]. In contrast to human studies for which reference panels of common variants are available and typical LD values decay rapidly within genomic loci, these early mammalian genomic data still contain much higher degrees of LD, limiting the resolution of standard SNP-based analyses and making the identification of causal genes or SNPs challenging.
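The cross-validated estimate of the proportion of variance explained, r2, described in the heritability section above can be sketched as below. Ridge regression stands in here for the authors' penalized CDA model (which additionally includes SNP-SNP interaction terms); the penalty grid, fold count, and function name are illustrative assumptions, not the published procedure.

```python
import numpy as np

def cv_r2(G, y, lambdas=(0.1, 1.0, 10.0, 100.0), k=5, seed=0):
    """Sketch of the r^2 estimate: choose a ridge penalty by k-fold
    cross-validation, then report the squared correlation between
    cross-validated predictions and observed phenotypes."""
    rng = np.random.default_rng(seed)
    n = len(y)
    folds = rng.permutation(n) % k                      # balanced fold labels
    G = (G - G.mean(axis=0)) / (G.std(axis=0) + 1e-12)  # standardize SNPs
    yc = y - y.mean()

    def ridge_predict(lam):
        preds = np.empty(n)
        for f in range(k):
            tr, te = folds != f, folds == f
            A = G[tr].T @ G[tr] + lam * np.eye(G.shape[1])
            beta = np.linalg.solve(A, G[tr].T @ yc[tr])
            preds[te] = G[te] @ beta                    # out-of-fold predictions
        return preds

    # Pick the penalty maximizing out-of-fold correlation, then report r^2.
    best = max(lambdas, key=lambda lam: np.corrcoef(ridge_predict(lam), yc)[0, 1])
    r = np.corrcoef(ridge_predict(best), yc)[0, 1]
    return float(r ** 2)
```

Because predictions are made only on held-out folds, the resulting r2 is a prediction-based estimate rather than an in-sample fit, which is what allows it to be read as (broad-sense) heritability explained by the SNP set.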
Our collective inference approach has the potential to reveal groups of variants whose associations with a given trait are non-additive and therefore are relatively insensitive to the spatial extent of fine-scale correlations within a locus. Behavioral traits, for which typically SNP-based inferences yield relatively few dominant associations and whose genetic architectures are often highly polygenic, are especially suited to the approach. Our inference outcomes for major behavioral traits (Fig. 5) suggest that the nature of genetic associations of a given pathway can span the range between the additive limit, where a few dominant SNPs independently account for the association, to the purely collective limit, where a large number of SNPs spanning multiple loci of negligible individual association levels combine to produce a strong signal. Examples of additive groups are γ-carboxylation of protein precursors for cued fear in mice, for which the variants near the F2 gene had p-values close to that of the pathway as a whole (Fig. 6b), Interconversion of polyamines and Hydrolysis of LPC for anxiety in mice (Fig. 7a), where one or more of the genes located within associated loci likely raised the association strengths of pathways containing them. Examples of pathways with purely collective association for mice are Effects of PIP2 hydrolysis (Fig. 6a), Platelet homeostasis (Fig. 6d), and IL-10 signaling (Fig. 7a), whose associations cannot be reduced to a few SNPs or genes. Notably, genetic factors associated with fear in dogs (Fig. 5h) were all predominantly collective (Additional file 1: Figure S7). One of our major findings on the genetics of behavioral traits is for fear conditioning in the cued test (Fig. 5a): γ-carboxylated proteases (thrombin coded by F2, Fig. 6b) activate neuronal PAR1, triggering G protein-coupled signaling cascades (Effects of PIP2 hydrolysis, Fig. 5a) and long-term potentiation. 
Our previous observation that the same pathway groups were associated with human PTSD [12] provides strong support not only for our current interpretation, but also for the relevance of fear conditioning in mice as a model of PTSD. Bourgognon et al. explicitly demonstrated that this PAR1-G protein coupling activated by thrombin occurs in amygdala neurons, allowing for dynamic modulation of fear in mice [22]; they found that in fear-naive mice, PAR1 couples with Gαq (excitatory) and Gαo (inhibitory) proteins, whereas the latter becomes more important after conditioning. Our finding of the high association of PIP2 hydrolysis, which is downstream of the Gαq protein pathway, suggests that genetic polymorphisms affecting the generation of second messenger molecules during the excitatory phase of long-term potentiation contributes significantly to the heritability of conditioned fear responses. The role of the thrombin-PAR1 pathway in long-term potentiation and fear conditioning, furthermore, suggests a possible explanation for the commonly observed comorbidity of PTSD and cardiovascular diseases [25, 26]: individuals with collections of genetic polymorphisms that affect this neuronal pathway would also be at higher risk of impaired hemostasis and cardiovascular functions. The Hemostasis pathway was also found to be most strongly associated with gene sets differentially expressed in blood from PTSD subjects [52]. A second arm that is likely also contributing to this comorbidity involves the role of serotonin in neuronal functions and psychiatric disorders, including fear conditioning [44, 53] as well as in platelet homeostasis [45]. We found that these overlapping functions of serotonin were associated with prepulse inhibition in mice (Fig. 5c) and further replicated the association between hemostasis and fear from the analysis of the fear of noise in dogs (Fig. 5h). The association of polyamine pathways with the elevated plus maze test (Fig. 
5d) is consistent with the known roles of polyamines in anxiety and depression, as for instance demonstrated in studies of high- and low-anxiety mice [54]. The numerous other pathways highly ranked for the behavioral traits of mice and dogs (Fig. 5) belong to classes of processes including cell cycle, axon guidance and migration, DNA repair, innate immune response, apoptosis, and cellular stress response. Together, they are consistent with the view that individual variation in behavioral traits are strongly affected by disruptions to neurodevelopmental processes, owing to the collective effects of polymorphisms, which likely result in impaired development of key neuronal structures such as cortical interneurons [12, 49]. The strong associations of DNA repair and type I interferon-mediated immune response pathways with fear of noise in dogs (Fig. 5h), furthermore, support recent experimental findings suggesting that stress is linked to inflammation via DNA damage and the resulting recognition of damaged DNA in the cytosol [50, 51]. Although the genetic architectures of mice/dogs differ markedly from those of humans, the overall picture suggested by our results on fear-related traits is likely to be relevant to human genetics, given the common evolutionary origin of fear responses shared by all mammals. Results using animal genetic data may also offer avenues for experimental validation. For instance, pharmacological experiments that target neuronal pathways involving γ-carboxylated proteases in mice [22] could benefit from the genetic screening results in this work, and may help identify similar drug candidates for treating human PTSD and other psychiatric conditions. Our estimates of the variance explained by interacting SNPs (Fig. 8) demonstrate that broad-sense heritability can be computed from genomic data for unrelated individuals. Furthermore, extensive epistatic effects among many SNPs make the heritability of different pathways non-additive. 
We presented a novel method to infer collective association of a large number of variants with quantitative traits while taking into account interaction effects. Applications to mammalian behavioral trait data revealed pathways linking stress-related phenotypes and hemostasis: neuronal signaling by γ-carboxylated proteases. Our work provides evidence suggesting that behavioral traits are strongly influenced by large-scale interaction effects among genetic variants.

Abbreviations

CDA: Continuous discriminant analysis; DAG: Diacylglycerol; ER: Endoplasmic reticulum; GPCR: G protein-coupled receptor; IP: Inositol phosphate; IP3: Inositol 1,4,5-triphosphate; LD: Linkage disequilibrium; MAPK: Mitogen-activated protein kinase; PAR1: Protease-activated receptor 1; PIP2: Phosphatidylinositol 4,5-phosphate; PL: Pseudo-likelihood; PTSD: Post-traumatic stress disorder; SNP: Single-nucleotide polymorphism

References

Bendesky A, Kwon YM, Lassance JM, Lewarch CL, Yao S, Peterson BK, et al. The genetic basis of parental care evolution in monogamous mice. Nature. 2017;544:434–9.
Weber JN, Peterson BK, Hoekstra HE. Discrete genetic modules are responsible for complex burrow evolution in Peromyscus mice. Nature. 2013;493:402–5.
Sousa N, Almeida OF, Wotjak CT. A hitchhiker's guide to behavioral analysis in laboratory rodents. Genes Brain Behav. 2006;5 Suppl 2:5–24.
Wang GD, Xie HB, Peng MS, Irwin D, Zhang YP. Domestication genomics: evidence from animals. Annu Rev Anim Biosci. 2014;2:65–84.
Ilska J, Haskell MJ, Blott SC, Sanchez-Molano E, Polgar Z, Lofgren SE, et al. Genetic characterization of dog personality traits. Genetics. 2017;206:1101–11.
Zapata I, Serpell JA, Alvarez CE. Genetic mapping of canine fear and aggression. BMC Genomics. 2016;17:572.
1000 Genomes Project Consortium, Auton A, Brooks LD, Durbin RM, Garrison EP, Kang HM, et al. A global reference for human genetic variation. Nature. 2015;526:68–74.
Nicod J, Davies RW, Cai N, Hassett C, Goodstadt L, Cosgrove C, et al. Genome-wide association of multiple complex traits in outbred mice by ultra-low-coverage sequencing. Nat Genet. 2016;48:912–8.
Parker CC, Gopalakrishnan S, Carbonetto P, Gonzales NM, Leung E, Park YJ, et al. Genome-wide association study of behavioral, physiological and gene expression traits in outbred CFW mice. Nat Genet. 2016;48:919–26.
Woo HJ, Yu C, Kumar K, Gold B, Reifman J. Genotype distribution-based inference of collective effects in genome-wide association studies: insights to age-related macular degeneration disease mechanism. BMC Genomics. 2016;17:695.
Woo HJ, Yu C, Reifman J. Collective genetic interaction effects and the role of antigen-presenting cells in autoimmune diseases. PLoS One. 2017;12:e0169918.
Woo HJ, Yu C, Kumar K, Reifman J. Large-scale interaction effects reveal missing heritability in schizophrenia, bipolar disorder and posttraumatic stress disorder. Transl Psychiatry. 2017;7:e1089.
Laurie C, Wang S, Carlini-Garcia LA, Zeng ZB. Mapping epistatic quantitative trait loci. BMC Genomics. 2014;15:112.
Crawford L, Zeng P, Mukherjee S, Zhou X. Detecting epistasis with the marginal epistasis test in genetic mapping studies of quantitative traits. PLoS Genet. 2017;13:e1006869.
Wei WH, Hemani G, Haley CS. Detecting epistasis in human complex traits. Nat Rev Genet. 2014;15:722–33.
Städler N, Dondelinger F, Hill SM, Akbani R, Lu Y, Mills GB, Mukherjee S. Molecular heterogeneity at the network level: high-dimensional testing, clustering and a TCGA case study. Bioinformatics. 2017;33:2890–6.
de Los Campos G, Vazquez AI, Fernando R, Klimentidis YC, Sorensen D. Prediction of complex human traits using the genomic best linear unbiased predictor. PLoS Genet. 2013;9:e1003608.
Schizophrenia Working Group of the Psychiatric Genomics Consortium. Biological insights from 108 schizophrenia-associated genetic loci. Nature. 2014;511:421–7.
Gagliano SA. It's all in the brain: a review of available functional genomic annotations. Biol Psychiatry. 2017;81:478–83.
Gagliano SA, Ravji R, Barnes MR, Weale ME, Knight J.
Smoking gun or circumstantial evidence? Comparison of statistical learning methods using functional annotations for prioritizing risk variants. Sci Rep. 2015;5:13373.
Crawley JN. Behavioral phenotyping strategies for mutant mice. Neuron. 2008;57:809–18.
Bourgognon JM, Schiavon E, Salah-Uddin H, Skrzypiec AE, Attwood BK, Shah RS, et al. Regulation of neuronal plasticity and fear by a dynamic change in PAR1-G protein coupling in the amygdala. Mol Psychiatry. 2013;18:1136–45.
Ben Shimon M, Lenz M, Ikenberg B, Becker D, Shavit Stein E, Chapman J, et al. Thrombin regulation of synaptic transmission and plasticity: implications for health and disease. Front Cell Neurosci. 2015;9:151.
Liang J, Le TH, Edwards DRV, Tayo BO, Gaulton KJ, Smith JA, et al. Single-trait and multi-trait genome-wide association analyses identify novel loci for blood pressure in African-ancestry populations. PLoS Genet. 2017;13:e1006728.
Edmondson D, von Kanel R. Post-traumatic stress disorder and cardiovascular disease. Lancet Psychiatry. 2017;4:320–9.
Pollard HB, Shivakumar C, Starr J, Eidelman O, Jacobowitz DM, Dalgard CL, et al. "Soldier's heart": a genetic basis for elevated cardiovascular disease risk associated with post-traumatic stress disorder. Front Mol Neurosci. 2016;9:87.
Aurell E, Ekeberg M. Inverse Ising inference using all the data. Phys Rev Lett. 2012;108:090201.
Hastie T, Tibshirani R, Friedman J. The elements of statistical learning. 2nd ed. New York: Springer; 2009.
Willer CJ, Li Y, Abecasis GR. METAL: fast and efficient meta-analysis of genomewide association scans. Bioinformatics. 2010;26:2190–1.
Lindblad-Toh K, Wade CM, Mikkelsen TS, Karlsson EK, Jaffe DB, Kamal M, et al. Genome sequence, comparative analysis and haplotype structure of the domestic dog. Nature. 2005;438:803–19.
Fabregat A, Sidiropoulos K, Garapati P, Gillespie M, Hausmann K, Haw R, et al. The Reactome pathway knowledgebase. Nucleic Acids Res. 2016;44:D481–7.
Chang CC, Chow CC, Tellier LC, Vattikuti S, Purcell SM, Lee JJ. Second-generation PLINK: rising to the challenge of larger and richer datasets. Gigascience. 2015;4:7.
Yang J, Lee SH, Goddard ME, Visscher PM. GCTA: a tool for genome-wide complex trait analysis. Am J Hum Genet. 2011;88:76–82.
Speed D, Hemani G, Johnson MR, Balding DJ. Improved heritability estimation from genome-wide SNPs. Am J Hum Genet. 2012;91:1011–21.
Pape HC, Pare D. Plastic synaptic networks of the amygdala for the acquisition, expression, and extinction of conditioned fear. Physiol Rev. 2010;90:419–63.
Hoffman M, Monroe DM. Coagulation 2006: a modern view of hemostasis. Hematol Oncol Clin North Am. 2007;21:1–11.
Noorbakhsh F, Vergnolle N, Hollenberg MD, Power C. Proteinase-activated receptors in the nervous system. Nat Rev Neurosci. 2003;4:981–90.
Ossovskaya VS, Bunnett NW. Protease-activated receptors: contribution to physiology and disease. Physiol Rev. 2004;84:579–621.
Coughlin SR. Thrombin signalling and protease-activated receptors. Nature. 2000;407:258–64.
Nievergelt CM, Maihofer AX, Mustapic M, Yurgil KA, Schork NJ, Miller MW, et al. Genomic predictors of combat stress vulnerability and resilience in U.S. marines: a genome-wide association study across multiple ancestries implicates PRTFDC1 as a potential PTSD gene. Psychoneuroendocrinology. 2015;51:459–71.
Bramham CR, Wells DG. Dendritic mRNA: transport, translation and function. Nat Rev Neurosci. 2007;8:776–89.
Lesch KP, Bengel D, Heils A, Sabol SZ, Greenberg BD, Petri S, et al. Association of anxiety-related traits with a polymorphism in the serotonin transporter gene regulatory region. Science. 1996;274:1527–31.
Murphy DL, Lesch KP. Targeting the murine serotonin transporter: insights into human neurobiology. Nat Rev Neurosci. 2008;9:85–96.
Brocke B, Armbruster D, Muller J, Hensch T, Jacob CP, Lesch KP, et al. Serotonin transporter gene variation impacts innate fear processing: acoustic startle response and emotional startle.
Mol Psychiatry. 2006;11:1106–12.
Berger M, Gray JA, Roth BL. The expanded biology of serotonin. Annu Rev Med. 2009;60:355–66.
Duric V, Banasr M, Licznerski P, Schmidt HD, Stockmeier CA, Simen AA, et al. A negative regulator of MAP kinase causes depressive behavior. Nat Med. 2010;16:1328–32.
Duman CH, Schlesinger L, Kodama M, Russell DS, Duman RS. A role for MAP kinase signaling in behavioral models of depression and antidepressant treatment. Biol Psychiatry. 2007;61:661–70.
Zhou X, Stephens M. Genome-wide efficient mixed-model analysis for association studies. Nat Genet. 2012;44:821–4.
Marin O. Interneuron dysfunction in psychiatric disorders. Nat Rev Neurosci. 2012;13:107–20.
Hara MR, Kovacs JJ, Whalen EJ, Rajagopal S, Strachan RT, Grant W, et al. A stress response pathway regulates DNA damage through beta2-adrenoreceptors and beta-arrestin-1. Nature. 2011;477:349–53.
Hartlova A, Erttmann SF, Raffi FA, Schmalz AM, Resch U, Anugula S, et al. DNA damage primes the type I interferon system via the cytosolic DNA sensor STING to promote anti-microbial innate immunity. Immunity. 2015;42:332–43.
Breen MS, Maihofer AX, Glatt SJ, Tylee DS, Chandler SD, Tsuang MT, et al. Gene networks specific for innate immunity define post-traumatic stress disorder. Mol Psychiatry. 2015;20:1538–45.
Marcinkiewcz CA, Mazzone CM, D'Agostino G, Halladay LR, Hardaway JA, DiBerto JF, et al. Serotonin engages an anxiety and fear-promoting circuit in the extended amygdala. Nature. 2016;537:97–101.
Ditzen C, Varadarajulu J, Czibere L, Gonik M, Targosz BS, Hambsch B, et al. Proteomic-based genotyping in a mouse model of trait anxiety exposes disease-relevant pathways. Mol Psychiatry. 2010;15:702–11.

Acknowledgements

We thank Joy Hoffman for computational resource management and Tatsuya Oyama for editorial assistance. Computations were performed using the high-performance computing resources at the U.S. Army Research Laboratory, the U.S. Air Force Research Laboratory, and the U.S.
Army Engineer Research and Development Center. This work was supported by the U.S. Army Medical Research and Materiel Command (Ft. Detrick, Maryland). The opinions and assertions contained herein are the private views of the authors and are not to be construed as official or as reflecting the views of the U.S. Army or of the U.S. Department of Defense. This paper has been approved for public release with unlimited distribution. The funding body was not involved in any of the following aspects of the research: study design; data collection, analysis, and interpretation; writing of the manuscript; any other function beyond direct funding of the research or its presentation.

The mouse data we used [8] are available at European Nucleotide Archive (ENA) accession number ERP001040. The dog data we used [5] are available at https://doi.org/10.5061/dryad.171q5. The software we used is available at https://github.com/BHSAI/GeDI.

Biotechnology High Performance Computing Software Applications Institute, Telemedicine and Advanced Technology Research Center, U.S. Army Medical Research and Materiel Command, Fort Detrick, MD, USA: Hyung Jun Woo & Jaques Reifman

HJW implemented the algorithm, wrote the software, performed the analyses, and wrote the manuscript. JR supervised the work, participated in the analyses, and contributed to manuscript writing. Both authors read and approved the final manuscript.

Correspondence to Jaques Reifman.

Table S1. Sample sizes of behavioral trait data considered for outbred mice [8] and dogs [5].

Figure S1 Distributions of quantitative trait values used for association testing. See Table S1. c, e, h, and i have been log-transformed (c and e from percentages, and h and i from a scale of 1 to 5).

Figure S2 Comparison of single-SNP p-values from linear regression (LR) and continuous discriminant analysis (CDA).
Figure S3 Comparison of empirical p-values of groups of interacting SNPs estimated by permutation of phenotype labels (symbols) and the use of the null distribution of R for normally distributed data. Figure S4 Optimized prediction scores of 10 top-ranked pathways for fear conditioning (FC; cued test). Figure S5 Population stratification of Labrador Retrievers using principal component (PC) analysis. Figure S6 Quantile-quantile plots of independent SNP p-values for Labrador Retriever data. Figure S7 Independent-SNP and collective association levels of SNPs in pathways highly ranked for fear in dogs. Figure S8 Top-ranked pathways for fear of humans/objects (dogs). Table S2 Additional pathways from Reactome database (Feb. 2018) ranked by association strengths with respect to fear conditioning (cued test). (PDF 1811 kb) Table S3. Lists of highly ranked pathways for mouse and dog behavioral traits. Pathways with P < 0.05 are shown for fear conditioning (cued and context), prepulse inhibition, elevated plus maze, swim test, sleep duration and difference in light and dark periods, noise and human/object-oriented fear in dogs. (XLSX 120 kb) Woo, H.J., Reifman, J. Collective interaction effects associated with mammalian behavioral traits reveal genetic factors connecting fear and hemostasis. BMC Psychiatry 18, 175 (2018). https://doi.org/10.1186/s12888-018-1753-4 Quantitative trait Epistasis Behavioral genetics Psychiatric molecular genetics
CommonCrawl
\begin{document} \title{Tensor products of subspace lattices and rank one density} \author{S. Papapanayides and I. G. Todorov} \date{28 March 2012} \address{Department of Pure Mathematics, Queen's University Belfast, Belfast BT7 1NN, United Kingdom} \email{[email protected]} \address{Department of Pure Mathematics, Queen's University Belfast, Belfast BT7 1NN, United Kingdom} \email{[email protected]} \begin{abstract} We show that, if $\cl M$ is a subspace lattice with the property that the rank one subspace of its operator algebra is weak* dense, and $\cl L$ is a commutative subspace lattice, then $\cl L\otimes\cl M$ possesses property (p), introduced in \cite{todorovshulman1}. If $\cl M$ is moreover an atomic Boolean subspace lattice while $\cl L$ is any subspace lattice, we provide a concrete lattice-theoretic description of $\cl L \otimes \cl M$ in terms of projection-valued functions defined on the set of atoms of $\cl M$. As a consequence, we show that the Lattice Tensor Product Formula holds for $\Alg \cl M$ and any other reflexive operator algebra and give several further corollaries of these results. \end{abstract} \maketitle \section{Introduction}\label{s_intro} Let $\cl A$ and $\cl B$ be unital operator algebras acting on Hilbert space. The Lattice Tensor Product Formula (LTPF) problem asks whether the invariant subspace lattice $\Lat (\cl A\otimes\cl B)$ of the (weak* spatial) tensor product of $\cl A$ and $\cl B$ is the tensor product of the invariant subspace lattices $\Lat\cl A$ and $\Lat \cl B$. The origins of this problem can be found in the Tomita Commutation Theorem, which asserts that the \lq\lq dual'' statement, namely the Algebra Tensor Product Formula, holds for the projection lattices of von Neumann algebras. The LTPF problem is related to the question of reflexivity for subspace lattices, which asks whether a given lattice of projections on Hilbert space is the invariant subspace lattice of some operator algebra (see P. R.
Halmos' pivotal paper \cite{boolHalmos}). Although reflexivity questions have attracted considerable attention in the literature, little progress has been made on the LTPF problem since the initiation of its study in \cite{ltpfintro}. One of the reasons for this is the lack of useful descriptions of the tensor product of two subspace lattices which, in its own right, is due to the lack of compatibility between the lattice operations and the strong operator topology. It is known, however, that the LTPF problem has an affirmative answer if $\cl A$ and $\cl B$ are von Neumann algebras one of which is injective \cite{todorovshulman1}, if $\cl A$ is a completely distributive CSL algebra, while $\cl B$ is any other operator algebra \cite{Harrison}, as well as when both $\cl A$ and $\cl B$ are CSL algebras \cite{todorovir}. Even the special case, where $\cl A$ consists of the scalar multiples of the identity operator, is in general open, although several partial results were obtained in \cite{todorovshulman1}. Properties related to the subspace generated by the rank one operators in a given operator algebra $\cl A$ (hereafter referred to as the rank one subspace of $\cl A$) have been widely studied (see, e.g. \cite[Chapter 23]{dav-book}). In this paper, we continue the study of the LTPF problem by considering the case where one of the algebras has the property that its rank one subspace is weak* dense. The class of operator algebras with this property is rather large; it includes as a special case the algebras of all operators leaving two fixed non-trivial subspaces invariant \cite{decKatavolos}, \cite{Papadakis}, as well as the operator algebras of more general atomic Boolean subspace lattices \cite{memoirs}. 
The paper is organised as follows: in Section \ref{s_p}, we show that if $\cl M$ is a subspace lattice such that the rank one subspace of the algebra $\cl A = \Alg \cl M$ is weak* dense in $\cl A$, then the tensor product of $\cl M$ with the full projection lattice on an infinite dimensional separable Hilbert space is reflexive. This establishes the LTPF for the algebras $\cl A$ and $\bb{C}I$. The result is then extended to lattices of the form $\cl M\otimes\cl L$, where $\cl L$ is a CSL, thus generalising a corresponding result proved earlier in \cite{todorovshulman1}. In Section \ref{s_tabsl}, we restrict our attention to the case where $\cl M$ is an atomic Boolean subspace lattice (ABSL), and achieve a convenient description of the tensor product $\cl M\otimes\cl L$, where $\cl L$ is an arbitrary subspace lattice, showing that it is isomorphic to the lattice of $\cl L$-valued maps defined on the set of atoms of $\cl M$. We also show that the property of semistrong closedness of subspace lattices, introduced and studied in \cite{todorovshulman1}, is preserved under tensoring with $\cl M$ (see Proposition \ref{p_see} for the complete statement). In Section \ref{s_ltpf}, we show that if $\cl L$ is any reflexive subspace lattice then the LTPF holds for the algebras $\cl A$ and $\Alg \cl L$. Some further consequences of the description of the tensor product from Section \ref{s_tabsl} are also included in Section \ref{s_ltpf}. In the next section, we collect some preliminaries and fix notation. \section{Preliminaries}\label{s_prel} Let $H$ be a Hilbert space and $\cl S_H$ be the set of all closed subspaces of $H$. The set $\cl S_H$ is a complete lattice with respect to the operations of intersection $\wedge$ and closed linear span $\vee$.
Using the bijective correspondence between $\cl S_H$ and the set $\cl P_H$ of all orthogonal projections on $H$, under which a closed subspace $\cl F$ corresponds to the projection with range $\cl F$, we transfer the lattice structure of $\cl S_H$ to $\cl P_H$, and denote the lattice operations on $\cl P_H$ obtained in this way again by $\wedge$ and $\vee$. A \emph{subspace lattice} on $H$ is a sublattice $\cl L$ of $\cl P_H$ containing $0$ and $I$ and closed in the strong operator topology. Let $\cl B(H)$ be the algebra of all bounded linear operators acting on $H$. If $\cl A\subseteq \cl B(H)$, it is customary to denote by $\Lat\cl A$ the set of all projections on $H$ whose ranges are invariant under all operators in $\cl A$. It is easy to show that $\Lat \cl A$ is a subspace lattice. Conversely, given any set of projections $\cl L\subseteq \cl P_H$, let $\Alg\cl L$ be the set of all operators on $H$ leaving invariant each element of $\cl L$. It is easy to see that $\Alg\cl L$ is a unital subalgebra of $\cl B(H)$ closed in the weak operator topology. A subspace lattice $\cl L$ is called \emph{reflexive} if $\cl L = \Lat\Alg\cl L$. Similarly, an operator algebra $\cl A$ is called \emph{reflexive} if $\cl A = \Alg\Lat \cl A$. A subspace lattice $\cl L$ is called a \emph{commutative subspace lattice} (or \emph{CSL} for short) if $PQ = QP$ for all $P,Q\in \cl L$ \cite{arvenson}. If $H_1$ and $H_2$ are Hilbert spaces and $\cl L_1\subseteq \cl P_{H_1}$ and $\cl L_2\subseteq \cl P_{H_2}$ are subspace lattices, we denote by $\cl L_1\otimes\cl L_2$ the subspace lattice generated by the projections of the form $L_1\otimes L_2$ acting on the Hilbert space tensor product $H_1\otimes H_2$, where $L_1\in \cl L_1$ and $L_2\in \cl L_2$. 
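As an elementary illustration of the latter notion, if $\cl L_1 = \{0,I_{H_1}\}$ is the trivial lattice on $H_1$, then the set $\{I_{H_1}\otimes L : L\in \cl L_2\}$ contains $0$ and $I$, is stable under $\wedge$ and $\vee$, and is strongly closed; hence $$\cl L_1\otimes\cl L_2 = \{I_{H_1}\otimes L : L\in \cl L_2\},$$ the ampliation of $\cl L_2$ to $H_1\otimes H_2$.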
Given operator algebras $\cl A_1\subseteq \cl B(H_1)$ and $\cl A_2\subseteq \cl B(H_2)$, we let $\cl A_1\otimes\cl A_2$ be the weak* closed operator subalgebra of $\cl B(H_1\otimes H_2)$ generated by the elementary tensors $A_1\otimes A_2$, with $A_1\in \cl A_1$ and $A_2\in \cl A_2$. We denote by $I$ the identity operator acting on a separable infinite dimensional Hilbert space, and set $1\otimes \cl A = \bb{C}I\otimes\cl A$, where $\bb{C}I = \{\lambda I : \lambda\in \bb{C}\}$. We say that the Lattice Tensor Product Formula (LTPF) holds for $\cl A_1$ and $\cl A_2$ if $$\Lat (\cl A_1\otimes\cl A_2) = \Lat\cl A_1\otimes \Lat \cl A_2.$$ Similarly, the Algebra Tensor Product Formula (ATPF) is said to hold for the subspace lattices $\cl L_1$ and $\cl L_2$ if $$\Alg (\cl L_1\otimes\cl L_2) = \Alg\cl L_1\otimes\Alg\cl L_2.$$ The following notion will play an essential role in this paper. \begin{definition}[\cite{todorovshulman1}]\label{d_p} A subspace lattice $\cl L$ is said to possess \emph{property (p)} if the lattice $\cl P_{\ell^2} \otimes \cl L$ is reflexive. \end{definition} \noindent It follows from \cite[Proposition 4.2]{todorovshulman1} that $\cl L$ possesses property (p) if and only if $\cl P_{\ell^2} \otimes \cl L = \Lat(1 \otimes \Alg\cl L)$. If $x,y\in H$, we denote by $R_{x,y}$ the rank one operator on $H$ given by $R_{x,y}(z) = (z,y)x$, $z\in H$. It was shown in \cite{Stronglyreflat} that the rank one operator $R_{x,y}$ belongs to $\Alg\cl L$ if and only if there exists $L\in \cl L$ such that $x = Lx$ and $L_- y = 0$, where $$L_- = \vee\{P\in \cl L : L\not\leq P\}.$$ We say that a subspace lattice $\cl L$ possesses the \emph{rank one density property} if the subspace of $\Alg\cl L$ generated by the rank one operators contained in $\Alg\cl L$ is weak* dense in $\Alg\cl L$. It was shown in \cite{unknown} that if $\cl L$ possesses the rank one density property then it is completely distributive. 
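To keep a simple example in mind, let $\cl L = \{0,L,I\}$, where $0 < L < I$. Then $L_- = 0$ and $I_- = L$, and hence $R_{x,y}\in \Alg\cl L$ if and only if $x\in LH$ or $y\in (LH)^{\perp}$. With respect to the decomposition $H = LH\oplus (LH)^{\perp}$, the algebra $\Alg\cl L$ consists of all block upper triangular operators, the span of the above rank one operators contains every finite rank element of $\Alg\cl L$, and a routine approximation argument shows that $\cl L$ possesses the rank one density property.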
An atomic Boolean subspace lattice (ABSL) is a distributive and complemented subspace lattice $\cl L$ for which there exists a set $\cl E = \{E_j\}_{j\in J}\subseteq \cl L$ of minimal projections (called \emph{atoms}) such that for every $L\in \cl L$ there exists $J_L\subseteq J$ with $L = \vee_{j\in J_L} E_j$ \cite{boolHalmos}, \cite{memoirs}. A special case of interest arises when $\cl E$ has two elements, see \cite{decKatavolos} and \cite{Papadakis}. Along with the strong operator topology, we will also use the notion of semistrong convergence introduced in \cite{limsupsHalmos}. Namely, a sequence $(P_n)_{n\in \bb{N}}$ of projections acting on a Hilbert space $H$ is said to converge semistrongly to a projection $P$ on $H$, if (a) for every $x\in PH$ there exists a sequence $(x_n)_{n\in \bb{N}}\subseteq H$ with $x_n \in P_n H$, $n\in \bb{N}$, such that $x_n\rightarrow_{n\rightarrow\infty} x$, and (b) if $(x_{k})_{k\in \bb{N}}\subseteq H$ is a convergent sequence of vectors such that $x_{k}\in P_{n_k}H$, for some increasing sequence $(n_k)_{k\in \bb{N}}\subseteq \bb{N}$, then $\lim_{k \rightarrow \infty} x_k \in PH$. It was shown in \cite{limsupsHalmos} that $P_n\rightarrow_{n\rightarrow \infty} P$ in the strong operator topology if and only if $P_n\rightarrow_{n\rightarrow \infty} P$ semistrongly and $P_n^{\perp}\rightarrow_{n\rightarrow \infty} P^{\perp}$ semistrongly, where, for a projection $Q$, we let $Q^{\perp} = I - Q$ be its orthogonal complement. The weak operator (resp. strong operator, weak*) topology will be denoted by {\it w} (resp. {\it s}, {\it w*}). \section{Property (p)}\label{s_p} Let $H$ and $K$ be Hilbert spaces, with $K$ infinite dimensional and separable, let $\cl P = \cl P_K$ be the full projection lattice on $K$ and let $\cl L\subseteq \cl P$ be a subspace lattice. For any subset $\cl E\subseteq \cl P_H$, we let $m(\cl E,\cl L)$ be the set of all maps from $\cl E$ to $\cl L$.
If $f,g \in m(\cl E,\cl L)$, we define $f \vee g$ and $f \wedge g$ to be the elements of $m(\cl E,\cl L)$ given by $$(f \vee g)(E) = f(E) \vee g(E) \text{ and } (f \wedge g)(E) = f(E) \wedge g(E), \ \ \ E \in \cl E.$$ It is clear that, under these operations, $m(\cl E,\cl L)$ is a complete lattice. Let $\phi_{\cl E,\cl L} : \cl P_{K\otimes H} \rightarrow m(\cl E,\cl L)$ be the map sending a projection $Q$ on $K\otimes H$ to the map $f_Q$ given by \begin{equation}\label{eq_uli} f_Q(E) = \vee \{P\in \cl L : P \otimes E \leq Q\}, \ \ \ \ E \in \cl E. \end{equation} We note that if $E_1,E_2\in \cl E$ are such that $E_1\wedge E_2\in \cl E$, then \begin{equation}\label{eq_ul} f_Q(E_1) \vee f_Q(E_2) \leq f_Q(E_1\wedge E_2). \end{equation} Dually, let $\theta : m(\cl E,\cl L)\rightarrow \cl P_{K\otimes H}$ be the map given by $$\theta(f) = \vee \{f(E) \otimes E : E \in \cl E\}, \ \ \ \ f\in m(\cl E,\cl L).$$ For the rest of this section, fix a subspace lattice $\cl M\subseteq \cl P_H$ and let $\cl A = \Alg \cl M\subseteq \cl B(H)$. It is clear that the map $\theta$ sends $m(\cl M,\cl L)$ into $\cl L \otimes \cl M$, for every subspace lattice $\cl L\subseteq \cl P$. We first note that $\theta$ is $\vee$-preserving; the proof is straightforward and we omit it. \begin{proposition}\label{ultralemma2} If $(f_{\alpha})_{\alpha\in \bb{A}} \subseteq m(\cl M,\cl L)$ then $\theta(\vee_{\alpha\in \bb{A}} f_{\alpha}) = \vee_{\alpha\in \bb{A}} \theta(f_{\alpha})$. \end{proposition} \begin{lemma}\label{l_cyclic} Let $\cl M\subseteq \cl P_H$ be a subspace lattice with the rank one density property, $\cl A = \Alg \cl M$ and $\xi \in K\otimes H$. There exists $f\in m(\cl M,\cl P)$ such that the projection onto the cyclic subspace $\overline{(1\otimes\cl A)\xi}$ coincides with $\theta(f)$. \end{lemma} \begin{proof} Let $\xi = {\sum}_{j=1}^{\infty} e_j \otimes x_j$, where $(e_j)_{j \in \mathbb{N}}$ is an orthonormal basis of $K$ and $(x_j)_{j \in \mathbb{N}}$ is a square-summable sequence in $H$.
Let $f\in m(\cl M,\cl P)$ be the mapping which sends the projection $L \in \cl M$ to the projection $f(L)$ onto the subspace $$\overline{\left\{ \sum_{j=1}^{\infty} (x_j,q) e_j : q\in H, L_{-}q = 0 \right\}}.$$ Let $\cl R$ be the rank one subspace of $\cl A$ and $F\in \cl R$. By \cite{Stronglyreflat}, there exist $m\in \bb{N}$, pairwise distinct projections $L_i \in \mathcal{M}$, $i=1,\dots ,m$, and vectors $p^{(i)}_{k} = L_i p^{(i)}_{k}$ and $q^{(i)}_k = {(L_i)}_{-}^{\perp} q^{(i)}_k$, $k=1, \dots, l_i$, $l_i\in \bb{N}$, such that $F = {\sum}_{i=1}^m ({\sum}_{k=1}^{l_i} R_{p^{(i)}_{k},q^{(i)}_k})$. We have $$(I \otimes F) (\xi) = \sum_{i=1}^{m} \left(\sum_{k=1}^{l_i} \left(\left(I \otimes R_{p^{(i)}_{k},q^{(i)}_k}\right) \left(\sum_{j=1}^{\infty} e_j \otimes x_j\right)\right)\right)$$ $$= \sum_{i=1}^m \left(\sum_{k=1}^{l_i}\left( \sum_{j=1}^{\infty} (x_j, q^{(i)}_{k}) e_j \right) \otimes p^{(i)}_{k}\right).$$ It follows that $(I \otimes F) (\xi) \in \theta (f)(K \otimes H)$ and since $F$ is an arbitrary element of $\cl R$, we have that $(1 \otimes \mathcal{R}) \xi \subseteq \theta (f)(K \otimes H)$. The property $\overline{\cl R}^{w^*} = \cl A$ easily implies that $\overline{1\otimes \cl R}^{w} = 1\otimes \cl A$. A standard application of the Hahn--Banach Theorem shows that $\overline{(1\otimes\cl A)\xi} = \overline{(1\otimes\cl R)\xi}$. Thus, $\overline{(1 \otimes \mathcal{A}) \xi} \subseteq \theta (f)(K \otimes H)$. On the other hand, if $L \in \mathcal{M}$, $p \in LH$ and $q \in (L_{-})^{\perp} H$, then $$\left({\sum}_{j=1}^{\infty} (x_j, q) e_j\right) \otimes p = (I \otimes R_{p,q}) \xi \in \overline{(1 \otimes \mathcal{A}) \xi}.$$ Hence, $(f(L) \otimes L)(K \otimes H) \subseteq \overline{(1 \otimes \mathcal{A}) \xi}$ and so we have that $\theta (f)(K \otimes H) \subseteq \overline{(1 \otimes \mathcal{A}) \xi}$; thus, $\overline{(1 \otimes \mathcal{A}) \xi} = \theta (f)(K \otimes H)$ and the proof is complete.
\end{proof} \begin{theorem}\label{A} Let $\cl M$ be a subspace lattice with the rank one density property. The restriction of the map $\phi = \phi_{\cl M,\cl P}$ to $\Lat(1\otimes \cl A)$ is injective, $\wedge$-preserving and \begin{equation}\label{eq_lat1} \theta \circ \phi|_{\Lat(1\otimes \cl A)} = \id|_{\Lat(1\otimes \cl A)}. \end{equation} In particular, $\cl M$ has property (p) and every element of $\cl P\otimes\cl M$ has the form $\vee_{M\in \cl M} f(M)\otimes M$, for some map $f : \cl M\rightarrow \cl P$. \end{theorem} \begin{proof} Let $Q \in \Lat(1 \otimes \mathcal{A})$ and $P_L = f_Q(L)$, $L \in \mathcal{M}$ (see (\ref{eq_uli}) for the definition of the map $f_Q$). Obviously $$N \overset{def}{=} \theta(\phi(Q))= \underset{L \in \mathcal{M}}{\vee}(P_L \otimes L) \leq Q.$$ Assume, by way of contradiction, that there exists $\xi \in Q (K \otimes H) \backslash N (K \otimes H)$. By Lemma \ref{l_cyclic}, there exists $f\in m(\cl M,\cl P)$ such that $\overline{(1 \otimes \mathcal{A}) \xi} = \theta (f)(K \otimes H)$. There exists $M \in \mathcal{M}$ such that $f(M) \nleq P_M$, for otherwise we would have that $\xi \in N (K \otimes H)$. Thus, $$\underset{L \in \mathcal{M}}{\vee}((f(L) \vee P_L) \otimes L) = (\underset{L \in \mathcal{M}}{\vee}(P_L \otimes L)) \vee (\underset{L \in \mathcal{M}}{\vee}(f(L) \otimes L))\leq Q;$$ in particular we have that $(f(M) \vee P_M) \otimes M \leq Q$, contradicting the maximality of $P_M$. This proves that $Q = N = \theta (\phi(Q))$. Since the range of $\theta$ is contained in $\cl P\otimes \cl M$, (\ref{eq_lat1}) implies that $\Lat (1 \otimes \cl A) \subseteq \mathcal{P} \otimes \cl M$. Since the converse inclusion is trivial, we conclude that $\Lat (1 \otimes \cl A) = \mathcal{P} \otimes \cl M$, that is, that $\cl M$ has property (p). We next observe that if $E_1, E_2 \in \mathcal{P} \otimes \mathcal{M}$ then \begin{equation}\label{eq_claim} E_1 \leq E_2 \ \Longleftrightarrow \ \phi(E_1) \leq \phi (E_2). 
\end{equation} Indeed, if $\phi(E_1) \leq \phi (E_2)$ then, by (\ref{eq_lat1}), $$E_1 = \theta(\phi(E_1)) \leq \theta(\phi (E_2)) = E_2.$$ The converse direction follows directly from the definition of $\phi$, and (\ref{eq_claim}) is proved. It follows from (\ref{eq_claim}) that $\phi|_{\cl P \otimes\cl M}$ is injective. It remains to show that $\phi$ is $\wedge$-preserving. Let $\{E_j\}_{j\in J} \subseteq \cl P \otimes\cl M$, $f_j = \phi(E_j), j \in J$, and $f = \phi(\underset{j \in J}{\wedge}E_j)$. By (\ref{eq_claim}) and the fact that $\underset{i \in J}{\wedge}E_i \leq E_j$ for all $j \in J$, we have that $f \leq f_j$ for all $j\in J$. Thus, \begin{equation}\label{eq_fle} f \leq \underset{i \in J}{\wedge} f_i. \end{equation} Now let $g = \phi(\theta(\wedge_{i\in J} f_i))$. By the definition of $\phi$, we have that $\wedge_{i\in J} f_i \leq g$. On the other hand, for every $j\in J$, we have by (\ref{eq_lat1}) that $$\theta(\wedge_{i\in J} f_i) \leq \theta(f_j) = \theta(\phi(E_j)) = E_j.$$ Hence, $\theta(\wedge_{i\in J} f_i) \leq \wedge_{i\in J} E_i$. By (\ref{eq_claim}), $g \leq f$ and hence $\wedge_{i\in J} f_i \leq f$; now (\ref{eq_fle}) implies that $\wedge_{i\in J} f_i = f$, showing that $\phi$ is $\wedge$-preserving. \end{proof} \noindent {\bf Remarks (i) } In Theorem \ref{A}, the assumption that $\cl M$ have the rank one density property is essential. Indeed, let $\cl D_0$ (resp. $\cl D$) be the multiplication masa of $L^{\infty}(0,1)$ (resp. $L^{\infty}([0,1]^2)$) acting on $L^2(0,1)$ (resp. $L^2(0,1)\otimes L^2(0,1)$), and let $\cl N_0$ (resp. $\cl N$) be the projection lattice of $\cl D_0$ (resp. $\cl D$). We have that $\cl N \equiv \cl N_0 \otimes \cl N_0 \subseteq \cl P \otimes \cl N_0$. For a measurable subset $\gamma$ of $(0,1)$ or of $(0,1)^2$, we write $M_{\gamma}$ for the projection of multiplication by the characteristic function of $\gamma$. 
Let $C$ be a non-null Cantor subset of $[0,1)$ and equip $[0,1)$ with the group operation of addition modulo $1$. Set $$F = \{(x,y) \in [0,1) \times [0,1) : x-y \in C\}.$$ The set $F$ is clearly non-null; we claim that it does not contain any non-trivial measurable rectangles. Indeed, suppose, by way of contradiction, that there exist non-null measurable subsets $\alpha$ and $\beta$ of $[0,1)$ such that $\alpha \times \beta \subseteq F$. It follows by the definition of $F$ that $\alpha - \beta = \{a-b : a \in \alpha, b \in \beta\}$ is contained in $C$. By a well-known version of Steinhaus' Theorem, we have that $\alpha - \beta$ contains an open interval. However, $\alpha - \beta \subseteq C$ and $C$ has empty interior, a contradiction. Thus, $F$ does not contain any non-trivial measurable rectangles and hence there exist no non-null subsets $\alpha$ and $\beta$ of $[0,1)$ such that $M_{\alpha} \otimes M_{\beta} \leq M_F$. We will prove that $\phi(M_F)(M_{\beta}) = 0$ for every measurable $\beta$. Fix such a $\beta$ and set $P = \phi(M_F)(M_{\beta})$. Let $P_1$ be the projection onto $\overline{\cl D_0 PK}$; then $P_1\in \cl N_0$ and $P\leq P_1$. Since $M_F$ commutes with $\cl D_0\otimes I$, we have that $P_1\otimes M_{\beta}\leq M_F$; by the previous paragraph, $P_1 = 0$, showing that $P = 0$. It follows that identity (\ref{eq_lat1}) from Theorem \ref{A} does not hold in the case $\cl M = \cl N_0$. {\bf (ii) } Let $\cl M$ be an ABSL with the rank one density property, and $E_1$ and $E_2$ be atoms of $\cl M$. Also let $L_i \in \cl P$, $i=1,2$, be such that $L_1 \wedge L_2 \neq 0$ and $M = (L_1 \otimes E_1) \vee (L_2 \otimes E_2)$. Clearly $$M = (L_1 \otimes E_1) \vee (L_2 \otimes E_2) \vee ((L_1 \wedge L_2) \otimes (E_1 \vee E_2))$$ and thus the representation in Theorem \ref{A} is not unique. The map $\phi|_{\cl P \otimes \cl M}$ is not $\vee$-preserving and thus not a lattice homomorphism. Indeed, it is easy to check that $\phi (L_i \otimes E_i)(E_i) = L_i$, $i= 1,2$, and $\phi (L_i \otimes E_i)(E_1\vee E_2) = 0$.
Also, $\phi (M)(E_1 \vee E_2) \geq L_1 \wedge L_2 > 0$. Thus, $$\phi (M)(E_1 \vee E_2) \neq 0 = (\phi (L_1 \otimes E_1)(E_1 \vee E_2)) \vee (\phi (L_2 \otimes E_2)(E_1 \vee E_2))$$ and hence $\phi (M) \neq (\phi (L_1 \otimes E_1)) \vee (\phi (L_2 \otimes E_2))$. Theorem \ref{A} can now be extended as follows. \begin{theorem}\label{4elcsl3} Let $\mathcal{L}$ be a separably acting CSL and $\mathcal{M}$ be a subspace lattice with the rank one density property. Then $\mathcal{L} \otimes \mathcal{M}$ possesses property (p). \end{theorem} \begin{proof} By Theorem \ref{A}, $\mathcal{P} \otimes \mathcal{M}$ is reflexive. If $\mathcal{L}$ is a finite CSL then $\cl L$ is totally atomic and \cite[Theorem 12, Corollary 2]{Harrison} imply that $\mathcal{L} \otimes \mathcal{P} \otimes \mathcal{M}$ is reflexive. Hence, $\mathcal{L} \otimes \mathcal{M}$ has property (p). Now let $\mathcal{L}$ be an arbitrary separably acting CSL, $\{L_i\}_{i \in \bb{N}}$ be a strongly dense subset of $\cl L$, and $\cl L_n$ be the subspace lattice generated by the set $\{L_i\}_{i=1}^n$, $n\in \bb{N}$; clearly, $\cl L_n$ is finite for all $n \in \bb{N}$. Since $\cl L = \overline{\underset{n \in \bb{N}}{\cup}\cl L_n}^s$, we have that $\mathcal{L} \otimes \mathcal{M} = \underset{n \in \mathbb{N}}{\vee} ({\mathcal{L}}_n \otimes \mathcal{M})$. By the previous paragraph, ${\mathcal{L}}_n \otimes \mathcal{M}$ has property (p) for all $n \in \mathbb{N}$. By the strict approximativity of property (p) (see \cite[Proposition 4.1]{todorovshulman2}), $\cl L \otimes \cl M$ has property (p). \end{proof} \section{Tensoring with atomic Boolean subspace lattices} \label{s_tabsl} In this section, we restrict our attention to the case where $\cl M$ is an atomic Boolean subspace lattice (ABSL) possessing the rank one density property.
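A concrete finite dimensional example is obtained by letting $H = \bb{C}^2$ and letting $E_1$ and $E_2$ be the orthogonal projections onto $\bb{C}(1,0)$ and $\bb{C}(1,1)$, respectively: $\{0,E_1,E_2,I\}$ is then an ABSL with atoms $E_1$ and $E_2$, its algebra consists of the operators that are diagonal with respect to the (non-orthogonal) basis $\{(1,0),(1,1)\}$, and, being the linear span of two rank one idempotents, this algebra trivially has weak* dense rank one subspace.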
Two-atom ABSLs, namely lattices of the form $\{0,P,Q,I\}$, where $P\wedge Q = 0$ and $P\vee Q = I$, satisfy this property \cite{Papadakis} and it is not difficult to show that the rank one density property is preserved under taking meshed products (see \cite{memoirs} for the definition and properties of this construction). Our aim is to show that, if $\cl M$ is an ABSL with the rank one density property, $\cl E$ is the set of its atoms, and $\cl L$ is an arbitrary subspace lattice, then the map $\theta$ is an isomorphism from $m(\cl E,\cl L)$ onto $\cl L\otimes\cl M$. We first establish an important special case. \begin{lemma}\label{4elgen02} Let $\mathcal{M}$ be an ABSL acting on a Hilbert space $H$ having the rank one density property and let $\cl E = \{E_j : j \in J\}$ be the set of its atoms. Then $\theta|_{m(\cl E,\cl P)}$ is a complete lattice isomorphism of $m(\cl E,\cl P)$ onto $\cl P\otimes\cl M$ with inverse $\phi_{\cl E,\cl P}$. \end{lemma} \begin{proof} Let ${\mathcal{M}}_j = \{L \in \mathcal{M} : E_j \leq L\}$, $j\in J$. Fix $M \in \cl P \otimes \cl M$ and let $f = \phi_{\cl M, \cl P}(M)$. By Theorem \ref{A}, \begin{eqnarray*} \nonumber M &=& \underset{L \in \mathcal{M}}{\vee}(f(L) \otimes L) = \underset{L \in \mathcal{M}}{\vee}(f(L) \otimes (\underset{E_j \leq L}{\vee} E_j)) \\ \nonumber &=& \underset{L \in \mathcal{M}}{\vee}(\underset{E_j \leq L}{\vee}(f(L) \otimes E_j)) = \underset{j \in J}{\vee}((\underset{L \in {\mathcal{M}}_j}{\vee} f(L)) \otimes E_j)\\ & = & \underset{j \in J}{\vee} f(E_j) \otimes E_j, \end{eqnarray*} where the last identity follows from (\ref{eq_ul}). Thus, \begin{equation}\label{eq_rem} (\theta\circ \phi_{\cl E,\cl P})(M) = M, \ \ \ M\in \cl P\otimes\cl M. \end{equation} Let $\phi = \phi_{\cl E, \cl P}|_{\cl P\otimes\cl M}$ for brevity. We next check that \begin{equation}\label{eq_phith} (\phi \circ \theta)(f) = f, \ \ \ f\in m(\cl E,\cl P).
\end{equation} Let $f\in m(\cl E,\cl P)$, $g = \phi \circ \theta (f)$, $M_j = f(E_j)$ and $P_j = g(E_j)$, $j\in J$. Set $M = \theta(f)$; by (\ref{eq_rem}), \begin{equation}\label{eq_nn} M = \theta(\phi(M)) = \theta(g). \end{equation} By the definition of $\phi$, we have that $f\leq g$, that is, $M_j \leq P_j$ for all $j\in J$. Suppose that there exists $i \in J$ such that $M_i < P_i$. We have that \begin{eqnarray*} M &=& \underset{j \in J}{\vee} (M_j \otimes E_j) \leq (M_i \otimes E_i) \vee \left({\underset{j \neq i}{\bigvee}} (I \otimes E_j)\right) \\ &=& (M_i \otimes E_i) \vee \left(I \otimes \left({\underset{j \neq i}{\bigvee}} E_j\right)\right) \\ &=& (M_i \otimes E_i) \vee \left(M_i \otimes \left({\underset{j \neq i}{\bigvee}} E_j\right)\right) \vee \left({M_i}^{\perp} \otimes \left({\underset{j \neq i}{\bigvee}} E_j\right)\right)\\ &=& (M_i \otimes I) \oplus \left({M_i}^{\perp} \otimes \left({\underset{j \neq i}{\bigvee}} E_j\right)\right), \end{eqnarray*} where for the last equality we have used the fact that $M_i \otimes I$ and ${M_i}^{\perp} \otimes \left({\underset{j \neq i}{\bigvee}} E_j\right)$ are orthogonal. Let now $0 \neq p \in (P_iK) \ominus (M_iK)$ and $0 \neq e \in E_i H$. Using (\ref{eq_nn}), we have that $p \otimes e \in (P_i \otimes E_i)(K \otimes H) \subseteq M(K \otimes H)$ and that $(M_i \otimes I)(p \otimes e) = 0$. Hence $$p \otimes e \in \left({M_i}^{\perp} \otimes \left({\underset{j \neq i}{\bigvee}} E_j\right)\right)(K \otimes H)$$ and therefore $$0 \neq e \in \left(\left({\underset{j \neq i}{\bigvee}} E_j\right)H\right) \wedge (E_iH) = \{0\},$$ a contradiction. Hence $f = g = \phi(\theta (f))$ and (\ref{eq_phith}) is proved. By Proposition \ref{ultralemma2}, $\theta|_{m(\cl E,\cl P)}$ is $\vee$-preserving. By Theorem \ref{A}, $\phi$ is $\wedge$-preserving. Let $(f_{\alpha})_{\alpha\in \bb{A}}\subseteq m(\cl E,\cl P)$.
Using (\ref{eq_phith}), we have $$\phi(\theta(\wedge_{\alpha\in \bb{A}} f_{\alpha})) = \wedge_{\alpha\in \bb{A}} f_{\alpha} = \wedge_{\alpha\in \bb{A}} (\phi\circ\theta)(f_{\alpha}) = \phi(\wedge_{\alpha\in \bb{A}}\theta(f_{\alpha})).$$ By (\ref{eq_rem}), $\phi$ is injective and so $\theta(\wedge_{\alpha\in \bb{A}} f_{\alpha}) = \wedge_{\alpha\in \bb{A}}\theta(f_{\alpha})$. The proof is complete. \end{proof} It will be helpful to isolate the following statement contained in Lemma \ref{4elgen02} for future reference. \begin{corollary}\label{c_cup} Let $\mathcal{M}$ be an ABSL acting on a Hilbert space $H$ having the rank one density property and let $\cl E = \{E_j : j \in J\}$ be the set of its atoms. If $M \in \mathcal{P} \otimes \mathcal{M}$, then there exists a unique family $(P_j)_{j \in J} \subseteq \mathcal{P}$ such that $M = \underset{j \in J}{\vee} (P_j \otimes E_j)$. \end{corollary} \begin{lemma}\label{ssop1} Let $H$ be a Hilbert space, let $\cl M$ be an ABSL on $H$ having the rank one density property, with set of atoms $\cl E = \{E_j : j \in J\}$, and let $\{f, f_n : n\in \bb{N}\}\subseteq m(\cl E,\cl P)$. (i) \ If $\theta(f_n)\rightarrow_{n\rightarrow\infty} \theta(f)$ semistrongly then $f_n(E_j)\rightarrow_{n\rightarrow\infty} f(E_j)$ semistrongly for every $j\in J$. (ii) If $f_n(E_j)\rightarrow_{n\rightarrow\infty} f(E_j)$ semistrongly for every $j\in J$ then there exists a subsequence $(\theta(f_{n_k}))_{k\in \bb{N}}$ of $(\theta(f_n))_{n\in \bb{N}}$ such that $\theta(f_{n_k}) \rightarrow_{k\rightarrow\infty} \theta(f)$ semistrongly. \end{lemma} \begin{proof} Let $L_n^j = f_n(E_j)$ and $L_j = f(E_j)$, $j\in J$, $n\in \bb{N}$. (i) Fix $k \in J$ and let $(x_i)_{i \in \mathbb{N}}$ be a sequence such that $x_i \in L_{n_i}^kK$, $i \in \mathbb{N}$, and $x_i \rightarrow x$ (where the sequence $(n_i)_{i \in \bb N}\subseteq \bb{N}$ is strictly increasing). Fix a non-zero vector $p \in E_kH$. It follows that $x_i \otimes p \rightarrow x \otimes p$.
Clearly, $x_i \otimes p \in \theta(f_{n_i})(K \otimes H)$ for all $i \in \mathbb{N}$ and thus, by hypothesis, $x \otimes p \in \theta(f)(K \otimes H)$. Let $$\cl W = \{y : y \otimes p \in \theta(f)(K\otimes H) \text{ for all } p \in E_kH\}.$$ Clearly, $\cl W$ is a closed subspace such that $L_k K\subseteq \cl W$ and $x \in \cl W$. Also, $\cl W \otimes E_k H \subseteq \theta(f)(K \otimes H)$. By Lemma \ref{4elgen02}, $$\cl W \otimes E_kH \subseteq ((\underset{j \in J}{\vee} (L_j \otimes E_j))(K \otimes H)) \wedge (K \otimes E_kH) = L_kK \otimes E_kH.$$ It follows that $\cl W\subseteq L_kK$ and so $x\in L_kK$. Let $q$ be a non-zero vector in $H$ such that $(\underset{j \neq k}{\vee}E_j) q = (E_k)_{-}q = 0$. Write $q = p_0 + p_0'$ where $p_0 = E_k p_0$ and $E_k p_0' = 0$. Since $E_k^{\perp} \wedge (\underset{j \neq k}{\vee}E_j)^{\perp} = (E_k \vee (\underset{j \neq k}{\vee}E_j))^{\perp} = 0$, it follows that $p_0 \neq 0$ and thus $(p_0,q) \neq 0$. Let $p = \frac{p_0}{(p_0,q)}$; we have that $R_{p,q} \in \Alg \cl M$. Clearly $R_{p,q} p = p$ and $R_{p,q}$ annihilates $\underset{j \neq k}{\vee}E_j$. Fix $x \in L_kK$. By hypothesis, there exists a sequence $(\xi_n)_{n \in \mathbb{N}}$ such that $\xi_n = \theta(f_n) \xi_n$, $n\in \bb{N}$, and $\xi_n \rightarrow_{n\rightarrow\infty} x\otimes p$. Thus, $$x\otimes p = (I\otimes R_{p,q})(x\otimes p) = \lim_{n\rightarrow \infty} (I\otimes R_{p,q})\xi_n.$$ By the definition of $R_{p,q}$, we have that $(I\otimes R_{p,q})\xi_n \in (L_n^k\otimes E_k)(K\otimes H)$. Let $\psi : K \otimes H\rightarrow K$ be the bounded linear operator such that $$\psi(x_1 \otimes x_2) = \frac{(x_2,p)}{\|p\|^2}x_1, \text{ } x_1 \in K, \ x_2 \in H.$$ Clearly, $\psi (I \otimes R_{p,q}) \xi_n \in L_n^k K$ for all $n \in \mathbb{N}$, and $\psi((I \otimes R_{p,q})\xi_n)\rightarrow \psi(x\otimes p) = x$. This shows that $L_n^k \rightarrow L_k$ semistrongly. (ii) Suppose that $f_n(E_j)\rightarrow f(E_j)$ semistrongly for all $j\in J$.
By the weak compactness of the unit ball of $\cl B(K\otimes H)$ (see, e.g. \cite[Proposition 5.5]{conway}), there exists a subsequence $(\theta(f_{n_k}))_{k\in \bb{N}}$ of $(\theta(f_n))_{n\in \bb{N}}$ and a positive contraction $W$ on $K\otimes H$ such that $\theta(f_{n_k}) \rightarrow_{k\rightarrow\infty} W$ in the weak operator topology. By \cite{limsupsHalmos}, $(\theta(f_{n_k}))_{k\in \bb{N}}$ converges semistrongly to the orthogonal projection $Q$ onto $\ker (I-W)$. By Theorem \ref{A}, $\cl P \otimes \cl M$ is reflexive and, by \cite[Proposition 3.1]{todorovshulman1}, it is semistrongly closed. Thus, $Q\in \cl P\otimes\cl M$ and, by Lemma \ref{4elgen02}, $Q = \theta(g)$ for some $g\in m(\cl E,\cl P)$. By (i), $f_{n_k}(E_j)\rightarrow_{k\rightarrow\infty} g(E_j)$ semistrongly. By the uniqueness of the semistrong limit, $f(E_j) = g(E_j)$ for all $j \in J$, that is, $f = g$ and so $\theta(f_{n_k})\rightarrow_{k\rightarrow\infty} \theta(f)$ semistrongly. \end{proof} The next proposition is certainly well-known; since we were not able to find a corresponding reference, we include its short proof for the convenience of the reader. \begin{proposition}\label{ultraweakABSL} Let $\cl M$ be an ABSL, $\cl E = \{E_j\}_{j \in J}$ be the set of its atoms, and let $D_j = \wedge_{i \neq j}({E_i}^\perp)$, $j\in J$. Then $\cl M^\perp \stackrel{def}{=} \{L^\perp : L \in \cl M\}$ is an ABSL whose set of atoms is $\cl D = \{D_j\}_{j \in J}$. \end{proposition} \begin{proof} It is a direct consequence of the de Morgan laws that $\cl M^{\perp}$ is distributive and that if $L \in \cl M$ and $L' \in \cl M$ is the complement of $L$ in $\cl M$, then ${L'}^\perp$ is a complement of $L^{\perp}$ in $\cl M^{\perp}$. Let $L \in \cl M^\perp$. If $0 \leq L < D_j$ for some $j \in J$, then $\vee_{i \neq j}E_i = {D_j}^\perp < L^\perp \in \cl M$. Since $L^\perp$ is equal to the closed linear span of the atoms that it majorises, it must contain $E_j$ and hence $L^\perp = I$, that is, $L = 0$.
Thus, $D_j$ is an atom of $\cl M^{\perp}$, for each $j\in J$. If $L \in \cl M^\perp$, then there exists $S \subseteq J$ such that $L^\perp = \vee_{j \in S}E_j$. By distributivity, $$L = \vee_{j \notin S}(\wedge_{i \neq j}E_i^\perp) = \vee_{j \notin S} D_j.$$ We have thus shown that $\cl M^\perp$ is an ABSL with atoms $\{D_j : j \in J\}$. \end{proof} In the rest of the section, we adopt the notation from Proposition \ref{ultraweakABSL}. If $f\in m(\cl E,\cl P)$, let $f^{\perp}\in m(\cl D,\cl P)$ be the map given by $f^{\perp}(D_j) = f(E_j)^{\perp}$, $j\in J$. \begin{lemma}\label{l_perpe} Let $\cl M$ be an ABSL with the rank one density property and $\cl E$ be the set of its atoms. If $f\in m(\cl E,\cl P)$ then $\theta(f)^{\perp} = \theta(f^{\perp})$. \end{lemma} \begin{proof} Since $\cl M$ has the rank one density property, the identity $\Alg \cl M^{\perp} = (\Alg\cl M)^*$ implies that $\cl M^{\perp}$ has the rank one density property as well. Let $f\in m(\cl E,\cl P)$ and $L_j = f(E_j)$, $j\in J$. Then \begin{eqnarray*} \theta(f)^{\perp} & = & (\underset{j \in J}{\vee}(L_j \otimes E_j))^{\perp} = \underset{j \in J}{\wedge} (L_j \otimes E_j)^\perp \nonumber \\ & = & \underset{j \in J}{\wedge} (({L_j}^{\perp} \otimes I) \vee (L_j \otimes {E_j}^\perp)) \nonumber \\ & = & \underset{j \in J}{\wedge} (({L_j}^{\perp} \otimes D_j) \vee ({L_j}^{\perp} \otimes {E_j}^\perp) \vee (L_j \otimes {E_j}^\perp)) \nonumber \\ &=& \underset{j \in J}{\wedge} (({L_j}^{\perp} \otimes D_j) \vee (I \otimes {E_j}^\perp)) \nonumber \\ &=& \underset{j \in J}{\wedge} (({L_j}^{\perp} \otimes D_j) \vee (I \otimes (\underset{i \neq j}{\vee}D_i))) \nonumber \\ &=& \underset{j \in J}{\wedge} (({L_j}^{\perp} \otimes D_j) \vee (\underset{i \neq j}{\vee}(I \otimes D_i))) = \underset{j \in J}{\vee} ({L_j}^{\perp} \otimes D_j) = \theta(f^{\perp}), \end{eqnarray*} where at the second-to-last equality we used Lemma \ref{4elgen02}. \end{proof} The main result of this section is the following. 
\begin{theorem}\label{4elang6} Let $\mathcal{L}$ be a subspace lattice acting on a Hilbert space $K$ and $\mathcal{M}$ be an ABSL with the rank one density property. Let $\cl E = \{E_j: j \in J\}$ be the set of atoms of $\cl M$. Then $\theta|_{m(\cl E,\cl L)}$ is a complete lattice isomorphism of $m(\cl E,\cl L)$ onto $\cl L\otimes\cl M$ with inverse $\phi_{\cl E,\cl P}|_{\cl L\otimes\cl M}$. \end{theorem} \begin{proof} Let $$\mathcal{F} = \theta(m(\cl E,\cl L)) = \{\underset{j \in J}{\vee} (L_j \otimes E_j) : L_j \in \cl L, j \in J \}.$$ By Lemma \ref{4elgen02}, $\cl F$ is a projection lattice. We will show that $\mathcal{F}$ is strongly closed. Let $f_n\in m(\cl E,\cl L)$, $n\in \bb{N}$, and let $Q$ be a projection with $\theta(f_n)\rightarrow Q$ in the strong operator topology. Since $\theta(f_n) \in \cl P\otimes\cl M$ for all $n$, we have that $Q\in \cl P\otimes\cl M$. By Lemma \ref{4elgen02}, $Q = \theta(f)$ for some $f\in m(\cl E,\cl P)$. Since $\theta(f_n)\rightarrow_{n\rightarrow\infty} \theta(f)$ semistrongly \cite{limsupsHalmos}, Lemma \ref{ssop1} (i) implies that $f_n(E_j)\rightarrow_{n\rightarrow\infty} f(E_j)$ semistrongly, for all $j\in J$. Since $\cl M$ has the rank one density property, $\cl M^{\perp}$ does so as well. By \cite{limsupsHalmos}, $\theta(f_n)^{\perp}\rightarrow_{n\rightarrow\infty} \theta(f)^{\perp}$ semistrongly and by Lemma \ref{l_perpe}, $\theta(f_n^{\perp})\rightarrow_{n\rightarrow\infty} \theta(f^{\perp})$ semistrongly. By Lemma \ref{ssop1} (i), $f_n^{\perp}(D_j)\rightarrow_{n\rightarrow\infty} f^{\perp}(D_j)$ semistrongly for all $j\in J$, that is, $f_n(E_j)^{\perp}\rightarrow_{n\rightarrow\infty} f(E_j)^{\perp}$ semistrongly for all $j\in J$. By \cite{limsupsHalmos}, $f_n(E_j)\rightarrow_{n\rightarrow\infty} f(E_j)$ in the strong operator topology and, since $\cl L$ is strongly closed, we conclude that $f(E_j)\in \cl L$ for all $j\in J$. Thus, $\cl F$ is strongly closed. It follows that $\cl F = \cl L\otimes\cl M$. 
If $Q\in \cl L\otimes\cl M$ then by Corollary \ref{c_cup}, there exists a unique $f\in m(\cl E,\cl P)$ such that $\theta(f) = Q$. Since $\cl L\otimes\cl M = \theta(m(\cl E,\cl L))$, we have that $f\in m(\cl E,\cl L)$. Thus, $\phi_{\cl E,\cl P}(Q)\in m(\cl E,\cl L)$, and the rest of the statements follow from Lemma \ref{4elgen02}. \end{proof} We include some immediate corollaries of Theorem \ref{4elang6}. \begin{corollary}\label{c_cup2} Let $\mathcal{M}$ be an ABSL acting on a Hilbert space $H$ having the rank one density property, $\cl E = \{E_j : j \in J\}$ be the set of its atoms and $\cl L$ be any subspace lattice. If $M \in \mathcal{L} \otimes \mathcal{M}$, then there exists a unique family $(P_j)_{j \in J} \subseteq \mathcal{P}$ such that $M = \underset{j \in J}{\vee} (P_j \otimes E_j)$. Moreover, $P_j\in \cl L$ for each $j\in J$. \end{corollary} \begin{corollary}\label{c_dist} Let $\mathcal{L}$ be a subspace lattice and $\mathcal{M}$ be an ABSL with the rank one density property. If $\cl L$ is distributive then so is $\cl L\otimes\cl M$. \end{corollary} We finish this section with a stability result about semistrong closedness. We refer the reader to \cite{todorovshulman1}, where semistrongly closed subspace lattices were studied in detail. \begin{proposition}\label{p_see} Let $\mathcal{L}$ be a subspace lattice acting on a Hilbert space $K$ and $\mathcal{M}$ be an ABSL acting on a Hilbert space $H$, having the rank one density property. The lattice $\cl L$ is semistrongly closed if and only if the lattice $\cl L\otimes\cl M$ is semistrongly closed. \end{proposition} \begin{proof} Suppose that $\cl L$ is semistrongly closed and assume that $\{Q_n : n\in \bb{N}\}\subseteq \cl L\otimes \cl M$ with $Q_n\rightarrow Q$ semistrongly for some projection $Q$ on $K\otimes H$. By Theorem \ref{A}, $\cl P\otimes \cl M$ is reflexive, and by \cite{todorovshulman1}, it is semistrongly closed; hence, $Q\in \cl P\otimes\cl M$. 
Thus, by Lemma \ref{4elgen02}, $Q = \theta(f)$ for some $f\in m(\cl E,\cl P)$, where $\cl E$ is the set of atoms of $\cl M$. By Theorem \ref{4elang6}, there exist $f_n\in m(\cl E,\cl L)$ such that $Q_n = \theta(f_n)$, $n\in \bb{N}$. By Lemma \ref{ssop1} (i), $f_n(E_j)\rightarrow f(E_j)$ semistrongly and since $\cl L$ is semistrongly closed, $f(E_j)\in \cl L$; therefore, $Q\in \cl L\otimes\cl M$. Conversely, suppose that $\cl L\otimes\cl M$ is semistrongly closed. Fix an atom $E$ of $\cl M$. Suppose that $(L_n)_{n\in \bb{N}}\subseteq \cl L$ and that $L_n\rightarrow_{n\rightarrow \infty} L$ semistrongly for some projection $L\in \cl P$. By Lemma \ref{ssop1} (ii), there exists a subsequence $(n_k)_{k\in \bb{N}}$ with $L_{n_k}\otimes E\rightarrow_{k\rightarrow\infty} L\otimes E$ semistrongly. Since $\cl L\otimes\cl M$ is semistrongly closed, $L\otimes E\in \cl L\otimes\cl M$ and, by Corollary \ref{c_cup2}, $L\in \cl L$. \end{proof} \section{LTPF and other consequences}\label{s_ltpf} The next theorem and Corollary \ref{4elgen4} are the main results of this section. We also give some more consequences of the results from the previous sections. \begin{theorem}\label{4elgen3minus} Let $\mathcal{L}$ be a subspace lattice acting on a Hilbert space $K$ and $\mathcal{M}$ be an ABSL acting on a Hilbert space $H$ and having the rank one density property. Let $\cl E = \{E_j : j \in J\}$ be the set of atoms of $\cl M$. Then \begin{eqnarray}\label{eq_long} \nonumber \Lat\Alg(\mathcal{L} \otimes \mathcal{M}) & = & (\Lat\Alg \mathcal{L}) \otimes \mathcal{M}\\ & = & \{ \underset{j \in J}{\vee}(f(E_j) \otimes E_j) : f \in m(\cl E, \Lat\Alg\mathcal{L})\}. \end{eqnarray} \end{theorem} \begin{proof} The second equality follows from Corollary \ref{c_cup2}. By hypothesis, the subalgebra of $\mathcal{A} = \Alg \cl M$ generated by the rank one operators in $\mathcal{A}$ is dense in $\mathcal{A}$ in the ultraweak topology. 
Hence, it follows from \cite[Theorem 2.1 and Proposition 1.1]{Kraus} that $\Alg(\mathcal{L} \otimes \mathcal{M}) = (\Alg\mathcal{L}) \otimes \mathcal{A}$. Thus, \begin{eqnarray*} (\Lat\Alg \mathcal{L}) \otimes \mathcal{M} &=& \Lat\Alg \mathcal{L} \otimes \Lat\Alg \mathcal{M} \\ &\subseteq& \Lat(\Alg \mathcal{L} \otimes \mathcal{A}) = \Lat\Alg(\mathcal{L} \otimes \mathcal{M}). \end{eqnarray*} It remains to prove the inclusion $\Lat\Alg(\mathcal{L} \otimes \mathcal{M}) \subseteq (\Lat\Alg \mathcal{L}) \otimes \mathcal{M}$. Let $k \in J$; then $(E_k)_{-} = \underset{j \neq k}{\vee} E_j$. Fix $E \in \Lat\Alg(\mathcal{L} \otimes \mathcal{M})$. Using Theorem \ref{A}, we have $$\Lat\Alg(\mathcal{L} \otimes \mathcal{M}) \subseteq \Lat\Alg(\mathcal{P} \otimes \mathcal{M}) = \cl P\otimes\cl M;$$ Corollary \ref{c_cup} now implies that there are unique projections $P_j \in \mathcal{P}$, $j \in J$ such that $E = \underset{j \in J}{\vee} (P_j \otimes E_j)$. The proof will be complete if we show that $P_j \in \Lat\Alg \cl L$ for all $j \in J$. Let $\mathcal{S}$ be the set of all rank one operators $R_{x,y}$ such that $E_k x = x$ and $(E_k)_- y = 0$. Clearly, $\cl S \subseteq \mathcal{A}$. Also let $T \in \Alg \mathcal{L}$ and $0 \neq S \in \mathcal{S}$. It is straightforward that $T \otimes S$ annihilates $(P_j \otimes E_j)(K \otimes H)$ for all $j \neq k$ and belongs to $\Alg(\cl L \otimes \cl M)$. Thus, \begin{eqnarray*} \overline{T P_kK} \otimes E_kH & = & \underset{S \in \mathcal{S}}{\vee} (\overline{T P_kK} \otimes \overline{SH}) = \underset{S \in \mathcal{S}}{\vee}(\overline{T P_kK} \otimes \overline{S E_kH})\\ & = & \underset{S \in \mathcal{S}}{\vee}\overline{(T \otimes S)E (K \otimes H)} \subseteq E(K\otimes H). \end{eqnarray*} Let $x\in P_k K$. For every $y\in E_k H$, we have by the last inclusion that $Tx \otimes y\in E(K \otimes H)$. 
Denoting by $[Tx]$ the projection on the subspace $\{\lambda Tx : \lambda \in \mathbb{C}\}$, we have that \begin{eqnarray*} & & ((P_k \vee [Tx]) \otimes E_k)\vee (\underset{j \neq k}{\vee} P_j \otimes E_j)\\ & = & ([Tx] \otimes E_k) \vee (P_k \otimes E_k) \vee (\underset{j \neq k}{\vee} (P_j \otimes E_j)) \subseteq E = \underset{j \in J}{\vee} (P_j \otimes E_j). \end{eqnarray*} By Corollary \ref{c_cup2}, $P_k \vee [Tx] = P_k$ and thus $Tx \in P_kK$. This shows that $P_k \in \Lat\Alg \mathcal{L}$ and (\ref{eq_long}) is proved. \end{proof} \begin{corollary}\label{4elgen4} Let $K$ be a Hilbert space, $\mathcal{L}$ be a reflexive subspace lattice acting on $K$ and $\mathcal{M}$ be an ABSL acting on a Hilbert space $H$, having the rank one density property. Then the LTPF holds for $\Alg \cl L$ and $\Alg \cl M$. \end{corollary} \begin{proof} The ATPF holds for $\cl L$ and $\cl M$ because $\cl M$ has the ultraweak rank one density property (see \cite[Theorem 2.1 and Proposition 1.1]{Kraus}). Let $\cl A = \Alg \cl M$ and $\cl B = \Alg \cl L$. Using Theorem \ref{4elgen3minus}, we have $$\Lat(\cl B \otimes \cl A) = \Lat\Alg (\cl L\otimes\cl M) = (\Lat\Alg \cl L)\otimes\cl M = (\Lat \cl B)\otimes(\Lat \cl A).$$ \end{proof} \begin{corollary}\label{c_reff} Let $\mathcal{M}$ be an ABSL having the rank one density property. A subspace lattice $\cl L$ is reflexive if and only if $\cl L\otimes \cl M$ is reflexive. \end{corollary} \begin{proof} If $\cl L$ is reflexive then $\cl L\otimes\cl M$ is reflexive by Theorem \ref{4elgen3minus}. Conversely, suppose that $\cl L\otimes\cl M$ is reflexive. Let $L\in \Lat\Alg\cl L$ and $E\in \cl M$ be an atom. By Theorem \ref{4elgen3minus}, $L\otimes E\in \cl L\otimes\cl M$ and, by Corollary \ref{c_cup2}, $L\in \cl L$. \end{proof} \begin{corollary}\label{thesis0ncor} If $\mathcal{L}$ is a subspace lattice having property (p) and $\mathcal{M}$ is an ABSL having the rank one density property, then $\cl L \otimes \cl M$ has property (p). 
\end{corollary} \begin{proof} By hypothesis, we have that $\cl P \otimes \cl L$ is reflexive. It follows from Corollary \ref{4elgen4} that $\cl P \otimes \cl L \otimes \cl M$ is reflexive, that is, $\cl L\otimes\cl M$ has property (p). \end{proof} \begin{corollary}\label{4elgen3} Let $H$ be a Hilbert space and $P$ and $Q$ be projections acting on $H$ such that $P \wedge Q = 0$ and $P \vee Q = I$. If $\mathcal{M}=\{0,P,Q,I\}$ and $\mathcal{L}$ is a subspace lattice acting on a Hilbert space $K$, then $$\Lat\Alg(\mathcal{L} \otimes \mathcal{M}) = \{(L_1 \otimes P) \vee (L_2 \otimes Q): L_1,L_2 \in \Lat\Alg\mathcal{L} \}.$$ Furthermore, if $\cl L$ is reflexive, then the LTPF holds for $\Alg \cl L$ and $\Alg \cl M$, and the lattice $\cl L\otimes\cl P$ is reflexive. \end{corollary} \begin{proof} The statement is immediate from Theorem \ref{4elgen3minus}, Corollary \ref{4elgen4} and the fact that two-atom ABSLs satisfy the rank one density property \cite[Theorem 2.1]{Papadakis}. \end{proof} We finish this section with the following additional consequence of the above results. \begin{theorem} Let $\mathcal{L}$ and $\mathcal{M}$ be ABSLs with sets of atoms $\{D_i : i \in I\}$ and $\{E_j : j \in J\}$, respectively. If either $\cl L$ or $\cl M$ has the rank one density property, then $\mathcal{L} \otimes \mathcal{M}$ is an ABSL whose set of atoms is $\{D_i \otimes E_j : (i,j) \in I \times J\}$. \end{theorem} \begin{proof} Without loss of generality, we assume that $\mathcal{M}$ has the rank one density property. 
By Corollary \ref{4elgen4} and the fact that every ABSL is reflexive \cite{boolHalmos}, we have that $$\mathcal{L} \otimes \mathcal{M} = \{ \underset{j \in J}{\vee}(P_j \otimes E_j): P_j \in \mathcal{L}, \ j \in J \}.$$ On the other hand, for $j\in J$, we have that $$P_j \otimes E_j = (\vee_{D_i\leq P_j} D_i)\otimes E_j = \vee_{D_i\leq P_j} D_i \otimes E_j.$$ Thus, every element in $\mathcal{L} \otimes \mathcal{M}$ is the span of elements of the set $\{D_i \otimes E_j : (i,j) \in I \times J\}$. Suppose that $L = \underset{j \in J}{\vee}(P_j \otimes E_j) \subsetneq (D_{i_0} \otimes E_{j_0})$ where $(i_0,j_0) \in I \times J$. Since $\cl L$ is an ABSL, we have that either $P_{j_0} \wedge D_{i_0} = 0$, or $D_{i_0} \subseteq P_{j_0}$. If $D_{i_0} \subseteq P_{j_0}$, then $D_{i_0} \otimes E_{j_0} \subseteq L$ and thus $D_{i_0} \otimes E_{j_0} = L$. By hypothesis, $D_{i_0} \otimes E_{j_0} \neq L$, hence $P_{j_0} \wedge D_{i_0} = 0$. By Theorem \ref{4elang6}, $$L = L \wedge (D_{i_0} \otimes E_{j_0}) = (P_{j_0} \wedge D_{i_0}) \otimes E_{j_0} = 0.$$ Thus, $D_i\otimes E_j$ is an atom of $\cl L\otimes\cl M$ for all $i$ and $j$. It remains to prove that $\mathcal{L} \otimes \mathcal{M}$ is complemented and distributive. Let $L = \underset{j \in J}{\vee}(P_j \otimes E_j)$, where $P_j \in \mathcal{L}$, $j \in J$, and let $P'_j$ be the complement of $P_j$ in $\cl L$, for all $j \in J$. If $L' = \underset{j \in J}{\vee}({P'}_j \otimes E_j)$, then $$L \vee L' = \underset{j \in J}{\vee}((P_j \vee {P'}_j) \otimes E_j) = \underset{j \in J}{\vee}(I \otimes E_j) = I$$ and, by Theorem \ref{4elang6}, $$L \wedge L' = \underset{j \in J}{\vee}((P_j \wedge {P'}_j) \otimes E_j) = 0.$$ Hence $L'$ is a complement for $L$. Finally, the distributivity of $\cl L\otimes\cl M$ follows from Corollary \ref{c_dist}. \end{proof} \noindent {\bf Acknowledgements} We would like to thank A. Katavolos and V.S. Shulman for their remarks which helped us improve the exposition of the paper. \end{document}
Convert the base-$64$ number $100_{64}$ to base $62$.

The number $100_{64}$ is, by definition, $64^2$. We can rewrite this as $(62+2)^2$, then use algebra to expand it out as $62^2 + 4\cdot 62 + 4$. Writing this in base $62$, we obtain $\boxed{144}$ (that is, $144_{62}$).
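As a quick sanity check of the arithmetic above (the helper below is hypothetical, not part of the solution), both digit strings evaluate to the same integer:

```python
def from_digits(digits, base):
    """Evaluate a digit list (most significant digit first) in the given base."""
    value = 0
    for d in digits:
        value = value * base + d
    return value

# 100 in base 64 and 144 in base 62 both denote 4096.
assert from_digits([1, 0, 0], 64) == 4096
assert from_digits([1, 4, 4], 62) == 4096
```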
Learning Certified Control using Contraction Metric

Abstract: In this paper, we solve the problem of finding a certified control policy that drives a robot from any given initial state and under any bounded disturbance to the desired reference trajectory, with guarantees on the convergence or bounds on the tracking error. Such a controller is crucial in safe motion planning. We leverage the advanced theory in Control Contraction Metric and design a learning framework based on neural networks to co-synthesize the contraction metric and the controller for control-affine systems. We further provide methods to validate the convergence and bounded error guarantees. We demonstrate the performance of our method using a suite of challenging robotic models, including models with learned dynamics as neural networks. We compare our approach with leading methods using sum-of-squares programming, reinforcement learning, and model predictive control. Results show that our methods indeed can handle a broader class of systems with less tracking error and faster execution speed. Code is available at https://github.com/sundw2014/C3M. The paper is available at https://arxiv.org/abs/2011.12569. 
Formally, given a control-affine system $\dot{x} = f(x) + B(x)u$, we want to find a feedback controller $u(\cdot,\cdot,\cdot)$ so that, for any reference $(x^*(t), u^*(t))$ solving the ODE, the closed-loop system perturbed by $d$, $$ \dot{x} = f(x(t)) + B(x(t))u(x(t),x^*(t),u^*(t)) + d(t),$$ satisfies that the tracking error $|x(t) - x^*(t)|$ is upper bounded, for all $|d(t)| \leq \epsilon$ and all initial conditions $x(0) \in \mathcal{X}$.

Contraction analysis

Contraction analysis can be viewed as a differential version of Lyapunov's theory. It analyzes incremental stability by considering the evolution of the distance between two neighboring trajectories of the system. Lyapunov's theory considers whether the trajectories will finally converge to a point (equilibrium), while contraction theory considers whether all the trajectories converge to a common trajectory. Control contraction metric (CCM) theory extends the analysis to cases with control input, which enables tracking controller synthesis. Consider a system $\dot{x} = f(x) + B(x) u$; its virtual displacement $\delta_x$ between any pair of arbitrarily close neighboring trajectories evolves as $\dot{\delta}_x = A(x, u) \delta_x + B(x) \delta_u$, where $A(x,u) := \frac{\partial f}{\partial x} + \sum_{i=1}^{m}u^i\frac{\partial b_i}{\partial x}$. We say $M : \mathcal{X} \mapsto \mathbb{S}_n^{\geq 0}$ is a CCM if there exists a controller $u(x,x^*,u^*)$ s.t. $\forall x, x^*, u^* \in \mathcal{X} \times \mathcal{X} \times \mathcal{U}$, $\forall \delta_x \in \mathcal{T}_{x}{\mathcal{X}}$, $$\frac{d}{dt}\left(\delta_x^\intercal M(x) \delta_x\right) \leq -2\lambda \delta_x^\intercal M(x) \delta_x,$$ which implies $\|\delta_x\|_M := \sqrt{\delta_x^\intercal M(x) \delta_x}$ converges to zero exponentially at rate $\lambda$. Such a closed-loop system is said to be contracting (under metric $M$), which implies good tracking performance. 
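As a toy illustration of the CCM condition (a sketch under our own assumptions, not code from the paper): for an autonomous linear system $\dot{x} = Ax$ with a constant metric $M$, the condition reduces to the matrix inequality $A^\intercal M + MA + 2\lambda M \preceq 0$, and a valid $M$ can be obtained from a Lyapunov equation:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hurwitz example matrix (eigenvalues -1 and -2); any stable A would do.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Q = np.eye(2)

# Solve A^T M + M A = -Q for a constant, positive-definite metric M.
M = solve_continuous_lyapunov(A.T, -Q)

# With A^T M + M A = -Q = -I, the contraction rate lambda is valid
# whenever 2 * lambda * lambda_max(M) <= 1; pick the boundary value.
lam = 0.5 / np.linalg.eigvalsh(M).max()

# Contraction condition: A^T M + M A + 2*lambda*M must be negative semi-definite.
C = A.T @ M + M @ A + 2 * lam * M
assert np.linalg.eigvalsh(C).max() <= 1e-8
```

This only checks the condition at a single constant metric; the paper's setting requires the inequality to hold for a state-dependent $M(x)$ over the whole domain, which is what the learning framework below enforces by sampling.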
The contraction condition $$\frac{d}{dt}\left(\delta_x^\intercal M(x) \delta_x\right) \leq -2\lambda \delta_x^\intercal M(x) \delta_x,\, \forall \delta_x,\, \forall x,x^*,u^*,$$ subject to the variational dynamics $$\dot{\delta}_x = A(x, u) \delta_x + B(x) \delta_u,$$ is equivalent to: $\forall x,x^*,u^*$, $$\dot{M} + \mathtt{sym}\left(M(A+BK)\right) + 2 \lambda M \preceq 0,~~~~~(*)$$ where $K := \frac{\partial u}{\partial x}$ and $\mathtt{sym}\left(A\right) = A + A^\intercal$.

Learning a contraction metric and a controller

(Figure: the learning framework.) The loss function is designed as follows: $$\mathcal{L}_{M,u} (\theta_M, \theta_u) := \mathop{\mathbb{E}}_{x,x^*,u^* \sim \mathtt{Unif}}\left[L_{NSD}\left(\dot{M} + \mathtt{sym}\left(M(A+BK)\right) + 2 \lambda M\right)\right],$$ where $L_{NSD} : \mathbb{R}^{n \times n} \mapsto \mathbb{R}_{\geq 0}$ and $L_{NSD}(A) = 0$ if and only if the matrix $A$ is negative semi-definite. Assuming the LHS of $(*)$ is continuous, $$\mathcal{L}_{M,u} = 0~~~\Longleftrightarrow~~~~~(*)\text{ holds } \forall x,x^*,u^* \in \mathcal{X} \times \mathcal{X} \times \mathcal{U}.$$

Experimental results

The figures above show the tracking error of the proposed method and several others; the proposed method outperforms all of them. It is worth mentioning that $\texttt{Quadrotor}$ is a $9$-dimensional system, and part of the dynamics of $\texttt{Neural Lander}$ is represented as an NN.
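A minimal sketch of a penalty with the stated property — zero exactly when its (symmetric) argument is negative semi-definite — is to sum the positive part of the spectrum. This is our own assumption about one possible choice of $L_{NSD}$, not necessarily the paper's exact implementation:

```python
import numpy as np

def l_nsd(A):
    """Return 0 iff the symmetric part of A is negative semi-definite,
    and the sum of its positive eigenvalues otherwise."""
    S = 0.5 * (A + A.T)            # definiteness is a property of the symmetric part
    eig = np.linalg.eigvalsh(S)    # real eigenvalues of a symmetric matrix
    return float(np.clip(eig, 0.0, None).sum())

assert l_nsd(-np.eye(3)) == 0.0            # negative definite -> zero loss
assert l_nsd(np.diag([1.0, -2.0])) > 0.0   # indefinite -> positive loss
```

In a learning setting this quantity (or a smoothed surrogate of it) would be averaged over sampled $(x, x^*, u^*)$ to form the loss $\mathcal{L}_{M,u}$.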
\begin{document} \begin{center} {\Large\bf Considering Relativistic Symmetry as the First Principle of Quantum Mechanics} \\ \ \\ {\large Takuya Kawahara \footnote[1]{k\[email protected]}} \\ \ \\ \textit{GAIA System Solutions Inc., Tokyo 141-0022, Japan} \end{center} \noindent \textbf{Abstract:} \ On the basis of the relativistic symmetry of Minkowski space, we derive a Lorentz invariant equation for a spread electron. This equation slightly differs from the Dirac equation and includes additional terms originating from the spread of an electron. Further, we calculate the anomalous magnetic moment based on these terms. These calculations do not include any divergence; therefore, renormalization procedures are unnecessary. In addition, the relativistic symmetry existing among coordinate systems will provide a new prospect for the foundations of quantum mechanics like the measurement process. \\ \noindent \textit{Keywords: \ relativistic symmetry, Minkowski space, Lorentz invariant equation, anomalous magnetic moment, measurement theory, EPR correlation} \\ \noindent \textit{PACS(2006): \ 03.65.Pm, 13.40.Em, 03.65.Ta, 03.65.Ud} \section{\label{sec:level1}Introduction} \hspace*{0.6cm}There are many problems associated with a relativistic quantum field theory. In particular, the issue of infinity accompanied by radiative correction is troublesome. Renormalization methods allow most of the divergence to be eliminated; however, it is difficult to accept this method as the final solution. In addition, much work has been done on the study of Dirac particles \cite{spohn}. Nevertheless, even today, quantum field theory continues to be problematic with regard to its relationship with the theory of relativity. Therefore, apart from the conventional approach, we will directly derive a Lorentz invariant equation for an electron on the basis of the symmetry of Minkowski space. 
Thus, we assume two fundamental principles, instead of the usual rules of quantum mechanics, as follows: \hangindent=2\parindent (\,i\,) An electron has an inherent relativistic symmetry, i.e., the behavior of an electron is described as the function of only an invariant parameter in Minkowski space. \hangindent=2\parindent (ii) An electron has a finite size as the world length that is proportional to the reciprocal of the inertial mass. \noindent These principles imply that an electron identifies the Minkowski space as one-dimensional; this must be a cause of the quantum behavior of an electron. In Sec.2, we extract a relativistic invariant parameter in Minkowski space. In Sec.3, based on these principles, we derive a relativistic equation for a spread electron. This equation slightly differs from the Dirac equation and includes additional terms originating from the spread of an electron. These terms are interpreted as an enhanced Pauli term, which is related to the anomalous magnetic moment \cite{weinberg}. Up to now, the Pauli term has been disregarded because it makes renormalization impossible. Nevertheless, in Sec.4, we calculate the corrections in magnetic moment based on these terms, without renormalization. In addition, in Sec.5, the measurement process in quantum theory is discussed based on the relativistic symmetry existing among coordinate systems. \section{\label{sec:level2}Extraction of a Relativistic Invariant Parameter} \hspace*{0.6cm}In this paper, we use Einstein's summation convention for indices $ {\mu },{\nu }, $ and $ {\xi } \; ( = 0, 1, 2, 3 ) $. Using the Minkowski metric $ \textit{\textsf{g}}_{\mu \nu } \! = diag(+,-,-,-) $, we define the relation between a covariant vector $ a_{\mu } $ and a contravariant vector $ a^{\nu } $ as follows: \begin{equation} a_{\mu } = \textit{\textsf{g}}_{\mu \nu } a^{\nu } . \label{201} \end{equation} Further, we substitute $ \hbar = c = 1 $ as a rule. 
According to the special theory of relativity, world length squared $ \delta s ^{2 } $ is a Lorentz invariant. For any inertial coordinate system, the following identity holds between a world length $ \delta s $ and coordinate intervals $ \delta x ^{\mu } $: \begin{equation} \delta s^{2 } = \textit{\textsf{g}}_{\mu \nu } \delta x^{\mu } \delta x^{\nu } , \ \ \ \mbox{where} \ \quad x^{\mu } \equiv ( x^{0 } , x^{1 } , x^{2 } , x^{3 }) \equiv ( t , \boldsymbol{x } ) . \label{202} \end{equation} For convenience, we designate the origin of the inertial coordinate system as $ s = 0 $; thus, the quadratic form (\ref{202}) becomes \begin{equation} s^2 = \textit{\textsf{g}}_{\mu \nu } x^{\mu } x^{\nu } . \label{203} \end{equation} In order to extract the relativistic invariant parameter $ s $, we take the square root of (\ref{203}) in the linear form. Now, we assume that the quadratic form (\ref{203}) is decomposed as follows: \begin{equation} s = \gamma _{\mu } x^{\mu } . \label{204} \end{equation} Let us square both sides of Eq.(\ref{204}): \begin{equation} {s }^{2 } \! \! = \! \! \sum_{ {\mu } > {\nu } } ( \gamma _{\mu } \gamma _{\nu } + \gamma _{\nu } \gamma _{\mu } ) x^{\mu } x^{\nu } \! \! + \! \frac{1 }{2 } \! \sum_{ {\mu } = {\nu } } ( \gamma _{\mu } \gamma _{\nu } + \gamma _{\nu } \gamma _{\mu } ) x^{\mu } x^{\nu } \! . \label{205} \end{equation} On the other hand, the quadratic form (\ref{203}) can be rewritten as follows: \begin{equation} s^2 = \sum_{ {\mu } > {\nu } } 2 \textit{\textsf{g}}_{\mu \nu } x^{\mu } x^{\nu } + \frac{1 }{2 } \sum_{ {\mu } = {\nu } } 2 \textit{\textsf{g}}_{\mu \nu } x^{\mu } x^{\nu } . \label{206} \end{equation} Expressions (\ref{205}) and (\ref{206}) are equivalent when $ \gamma _{\mu } $ satisfies the following relation: \begin{equation} \gamma _{\mu } \gamma _{\nu } + \gamma _{\nu } \gamma _{\mu } = 2 \textit{\textsf{g}}_{\mu \nu } . 
\label{207} \end{equation} It implies that $ \gamma _{\mu } $ are isomorphic forms of the Dirac $ \gamma $-matrices. We now introduce the rule of raising and lowering indices of the $ \gamma $-matrices as well as the vectors: \begin{equation} \gamma ^{0 } = \gamma _{0 } , \ \gamma ^{k } = - \gamma _{k } \quad ( k = 1, 2, 3 ) . \label{208} \end{equation} Similar to Eq.(\ref{204}), the following identical equation holds: \begin{equation} \frac{d }{ds } \Psi = \gamma ^{\mu } \:\! \partial _{\mu } \Psi , \ \mbox{where} \ \ \ \partial _{\mu } \equiv \frac{\partial }{\partial x^{\mu } } \equiv \Bigl( \frac{\partial }{\partial x^{0 } }, \frac{\partial } {\partial x^{1 } }, \frac{\partial }{\partial x^{2 } }, \frac{\partial }{\partial x^{3 } } \Bigr) \equiv \Bigl( \frac{\partial}{\partial t}, \boldsymbol{\nabla } \Bigr) . \label{209} \end{equation} When $ \Psi $ is a function only of argument $ s $, \begin{equation} \frac{d \Psi (s ) }{d s } = \frac{\partial x^{\mu } }{\partial s } \, \frac{\partial \Psi (s ) }{\partial x^{\mu } } = \gamma ^{\mu } \:\! \partial _{\mu } \Psi , \ \mbox{where} \ \ \ \frac{\partial x^{\mu } }{\partial s } = \Bigl( \frac{\partial s }{\partial x^{\mu } } \Bigr)^{-1 } = \gamma _{\mu }^{-1 } = \gamma ^{\mu } . \label{210} \end{equation} In this case, Eq.(\ref{209}) is valid. However, since the identical equation (\ref{209}) contains $ \gamma $-matrices in the operator, we consider $ \Psi $ as a four-component function. \section{\label{sec:level3}Derivation of Equations for a Spread Electron} \hspace*{0.6cm}In this section, we derive a relativistic difference equation for a spread electron. Further, we derive a wave equation from this difference equation. The wave equation agrees with the Dirac equation except for the existence of additional terms originating from the spread of an electron. 
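The algebraic facts derived above can be checked numerically. The following sketch (our own check, using the standard Dirac representation, which is one concrete realization of relation (\ref{207})) confirms the anticommutation relation and the hermiticity property $\gamma^0(\gamma^\mu)^\dagger\gamma^0 = \gamma^\mu$ that is used later in the derivation:

```python
import numpy as np

# Standard Dirac representation of the gamma matrices (an explicit choice;
# any set of matrices satisfying the Clifford relation would do).
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

gamma0 = np.block([[I2, Z2], [Z2, -I2]])
gammas = [gamma0] + [np.block([[Z2, s], [-s, Z2]]) for s in sigma]

g = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric (+,-,-,-)

# Clifford relation: gamma_mu gamma_nu + gamma_nu gamma_mu = 2 g_{mu nu} I.
for mu in range(4):
    for nu in range(4):
        anti = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(anti, 2 * g[mu, nu] * np.eye(4))

# Hermiticity property: gamma^0 (gamma^mu)^dagger gamma^0 = gamma^mu.
for gm in gammas:
    assert np.allclose(gamma0 @ gm.conj().T @ gamma0, gm)
```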
\subsection{\label{subsec:level31}Derivation of the difference equation} Based on the principles referred to in Sec.1, we identify the space-time behavior of an electron using the following equation: \begin{equation} \rho ( s + \delta s ) = \rho (s ) , \label{301} \end{equation} where $ \rho $ denotes the density scalar and $ \delta s $ denotes the world length of the electron. This equation implies the conservation law for the existing probability of a spread electron under a proper time evolution. We transform Eq.(\ref{301}) into an equation for any inertial coordinate system as follows: We introduce an adjoint of $ \Psi $, $ \overline{\Psi } \equiv \Psi ^{\dagger } \gamma ^{0 } $, and define the density scalar $ \rho $ as follows: \begin{equation} \rho (s ) \; {\equiv } \; \overline{\Psi } (s ) \Psi (s ) . \label{302} \end{equation} Here, $ \Psi $ is a wave function of an electron, which will be clarified later. From definition (\ref{302}), $ \overline{\Psi } \Psi $, and not $ \Psi $, evidently relates to the existing probability of an electron. By substituting (\ref{302}) in Eq.(\ref{301}), we obtain \begin{equation} \overline{\Psi } ( s + \delta s ) \Psi ( s + \delta s ) = \overline{\Psi } (s ) \Psi (s ) . \label{303} \end{equation} $ \Psi (s + \delta s ) $ is expressed as the Maclaurin series: \begin{equation} \Psi ( s + \delta s ) \to \exp \Big( \delta s \frac{d }{ds } \Big) \Psi (s ) . \label{304} \end{equation} Since $ \Psi $ depends only on $ s $, (\ref{209}) can be substituted in (\ref{304}): \begin{equation} \exp \Big( \delta s \frac{d }{ds } \Big) \Psi (s ) \to \exp ( \delta s \, \gamma ^{\mu } \:\! {\partial }_{\mu } ) \Psi . 
\label{305} \end{equation} Similar to Eqs.(\ref{304}) and (\ref{305}), using the relation $ \gamma ^{0 } ( \gamma ^{\mu } )^{\dagger } \gamma ^{0 } = \gamma ^{\mu } $, we obtain \begin{eqnarray} \begin{aligned} \overline{\Psi } ( s + \delta s ) \equiv \Psi^{\dagger } ( s + \delta s ) \gamma ^{0 } \to \overline{\Psi } \, \exp ( \delta s \, \gamma ^{\mu } \overleftarrow{{\partial } }_{ \!\! \mu }) , \hspace{5mm} \label{306} \\ \mbox{where} \quad \overline{\Psi } \gamma ^{\mu } \overleftarrow{{\partial } }_{ \!\! \mu } \; {\equiv } \; {\partial }_{\mu } \overline{\Psi } \gamma ^{\mu } . \end{aligned} \end{eqnarray} By introducing the $ 4 \times 4 $ square matrix $ U $ of parameter $ \delta s $, we assume the following equation: \begin{equation} \exp ( \delta s \, \gamma ^{\mu } \:\! {\partial }_{\mu } ) \Psi = U ( \delta s ) \Psi . \label{307} \end{equation} Then, an adjoint equation of Eq.(\ref{307}) is expressed as follows: \begin{equation} \overline{\Psi } \exp ( \delta s \, \gamma ^{\mu } \overleftarrow{{\partial } }_{ \!\! \mu } ) = \overline{\Psi } \gamma ^{0 } U^{\dagger } ( \delta s ) \gamma ^{0 } . \label{308} \end{equation} Multiplying each side of Eq.(\ref{308}) from the left with the corresponding sides of Eq.(\ref{307}), we obtain the following expression: \begin{equation} \overline{\Psi } \exp ( \delta s \, \gamma ^{\nu } \overleftarrow{{\partial } }_{ \!\! \nu } ) \exp ( \delta s \, \gamma ^{\mu } \:\! {\partial }_{\mu } ) \Psi = \overline{\Psi } \gamma ^{0 } U^{\dagger } ( \delta s ) \gamma ^{0 } U ( \delta s ) \Psi . \label{309} \end{equation} Therefore, matrix $ U $ should satisfy the relation: \begin{equation} \gamma ^{0 } U^{\dagger } ( \delta s ) \gamma ^{0 } U ( \delta s ) = {\rm I } , \label{310} \end{equation} such that Eq.(\ref{309}) is equivalent to Eq.(\ref{303}). 
Here, we assume that matrix $ U (\delta s ) $ is continuous for $ \delta s $ and is expressed as follows by the introduction of the $ 4 \times 4 $ square matrix $ M $: \begin{subequations} \begin{eqnarray} U \! & = & \! \exp ( {\rm i } \, \delta s M ) \label{311a} \\ \mbox{and} \hspace{10mm} U^{\dagger } \! \! \! & = & \! \exp ( - {\rm i } \, \delta s M^{\dagger } ) . \hspace{15mm} \label{311b} \end{eqnarray} \end{subequations} Matrix $ U^{\dagger } $ can be expressed as \begin{equation} U^{\dagger } ( \delta \;\!\! s ) \! = \exp ( - {\rm i } \, \delta \;\!\! s M^{\dagger } ) = \lim _{n \to \infty } \Bigl( {\rm I } - \frac{ {\rm i } \, \delta \;\!\! s } {n} M^{\dagger } \Bigr)^n . \label{313} \end{equation} If matrix $ M $ is chosen to satisfy the relation: \begin{equation} \gamma ^{0 } M^{\dagger } \gamma ^{0 } = M , \label{312} \end{equation} we get \begin{eqnarray} \begin{aligned} \gamma ^{0 } U^{\dagger } \gamma ^{0 } U \! = \lim _{n \to \infty } \gamma ^{0 } \Bigl( {\rm I } - \frac{ {\rm i } \, \delta \;\!\! s }{n} M^{\dagger } \Bigr)^n \gamma ^{0 } U \hspace{30mm} \\ = \gamma ^{0 } \Bigl( {\rm I } - \frac{ {\rm i } \, \delta \;\!\! s } {n} M^{\dagger } \Bigr) \gamma ^{0 } \gamma ^{0 } \Bigl( {\rm I } - \frac{ {\rm i } \, \delta \;\!\! s } {n} M^{\dagger } \Bigr) \gamma ^{0 } \cdots U \hspace{2.5mm} \\ = \Bigl( {\rm I } - \frac{ {\rm i } \, \delta \;\!\! s }{n} M \Bigr) \Bigl( {\rm I } - \frac{ {\rm i } \, \delta \;\!\! s }{n} M \Bigr) \cdots U \hspace{21.5mm} \\ = \lim _{n \to \infty } \Bigl( {\rm I } - \frac{ {\rm i } \, \delta \;\!\! s }{n} M \Bigr)^n = \exp ( - {\rm i } \, \delta s M ) \, U \hspace{11mm} \\ = U^{-1} U = {\rm I } , \hspace{57mm} \label{314} \end{aligned} \end{eqnarray} then matrix $ U $ satisfies (\ref{310}). Now, operating $ \gamma ^{0 } $ from the left-hand side of (\ref{312}), we have \begin{equation} M^{\dagger } \gamma ^{0 } = \gamma ^{0 } M . 
\label{315} \end{equation} On the other hand, \begin{equation} M^{\dagger } \gamma ^{0 } = M^{\dagger } ( \gamma ^{0 } )^{\dagger } = ( \gamma ^{0 } M )^{\dagger } . \label{316} \end{equation} Hence, relation (\ref{312}) is equivalent to the condition that $ \gamma ^{0 } M $ is hermitian. Since $ \gamma ^{0 } \gamma ^{\mu } $ is hermitian, we adopt a linear combination of $ \gamma ^{\mu } $ as $ M $: \begin{equation} M = - {\rm e } \gamma ^{\mu } A_{\mu } , \label{317} \end{equation} where $ {\rm e } $ is the electron charge and $ {A }_{\mu } $ represents the four-vector potential of an electromagnetic field. Then, Eq.(\ref{307}) becomes \begin{equation} \exp ( \delta s \, \gamma ^{\mu } \:\! {\partial }_{\mu } ) \Psi = \exp ( - {\rm i } {\rm \, \delta } \;\!\! s \, {\rm e } \gamma ^{\mu } \! {A }_{\mu } ) \Psi . \label{318} \end{equation} We consider Eq.(\ref{318}) to be the fundamental equation for a spread electron in an electromagnetic field. \subsection{\label{subsec:level32}Derivation of the wave equation} In Eq.(\ref{318}), we substitute \begin{eqnarray} \begin{aligned} \left\{ \begin{array}{ll} X \; {\equiv } \;\; \delta s \, \gamma ^{\mu } \:\! {\partial }_{\mu } , \label{319} \\ \;\! Y \; {\equiv } \;\; {\rm i } \, \delta s \, {\rm e } \gamma ^{\mu } \! {A }_{\mu } . \end{array} \right. \end{aligned} \end{eqnarray} Then, \begin{equation} \exp ( X ) \Psi = \exp (- Y ) \Psi . \label{320} \end{equation} It follows that \begin{equation} \{ {\exp ( X ) \exp ( Y ) } \} \exp ( - Y ) \Psi = \exp (- Y ) \Psi . 
\label{321} \end{equation} According to the Campbell-Hausdorff formula: \begin{eqnarray} \begin{aligned} \exp ( X ) \exp ( Y ) = \exp ( Z ) , \ \ \mbox{where} \hspace{65mm} \\ Z = X + Y + \frac{1 }{2 } [ \, X , \, Y \, ] + \frac{1 }{12 } {\{} \, [ \, [ \, X , \, Y \, ], \, Y \, ] - [ \, [ \, X , \, Y \,] , \, X \, ] \, {\}} + \cdots , \label{322} \end{aligned} \end{eqnarray} equation (\ref{321}) becomes \begin{equation} \{ \exp ( Z ) - {\rm I } \} \exp (- Y ) \Psi= 0 . \label{323} \end{equation} We can expand $ \{ \exp ( Z ) - {\rm I } \} $ into the infinite product of a sine function as follows: \begin{eqnarray} \begin{aligned} \exp ( Z ) - {\rm I } \hspace{103.5mm} \\ = \exp \Big( \frac{ Z }{2 } \Big) \Big\{ \exp \Big( \frac{ Z }{2 } \Big) - \exp \Bigl( \! - \frac{ Z }{2 } \Big) \Bigr\} \hspace{61.5mm} \\ = - 2 \, {\rm i } \exp \Big( \frac{ Z }{2 } \Big) \: {\sin } \Big( \frac{ {\rm i } Z }{2 } \Big) \hspace{83mm} \\ = \exp \Big( \frac{ Z }{2 } \Big) Z {\prod_{ n = 1 }^{\infty } } \Bigl\{ \Big( {\rm I } - \frac{ {\rm i } Z }{ 2 \:\! n \pi } \Big) \exp \Big( \frac{ {\rm i } Z }{ 2 \:\! n \pi } \Big) \Bigr\} \Bigl\{ \Big( {\rm I } + \frac{ {\rm i } Z }{ 2 \:\! n \pi } \Big) \exp \Big( \! - \frac{ {\rm i } Z }{ 2 \:\! n \pi } \Big) \Bigr\} . \hspace{0mm} \label{324} \end{aligned} \end{eqnarray} Thus, the equation that $ \Psi $ should satisfy is \begin{equation} ( \:\! {\rm i } Z - 2 \:\! n \pi ) \exp ( - Y ) \Psi = 0 \;;\;\; n = 0, \pm 1, \pm 2, \cdots . \label{325} \end{equation} Using the expansion \begin{equation} \phi \, \exp ( {\rm i } \, \omega ) = \exp ( {\rm i } \, \omega ) \{ \, { \phi + {\rm i } \, [ \, \phi , \, \omega \, ] - \frac{1 }{2 } [ \, [ \, \phi , \, \omega \, ] , \, \omega \, ] + \cdots } \, \} , \label{326} \end{equation} and substituting \begin{eqnarray} \begin{aligned} \left\{ \begin{array}{ll} \phi \; {\equiv } \; {\rm i } Z - 2 \:\! n \pi , \label{327} \\ \omega \; {\equiv } \; {\rm i } Y , \end{array} \right. 
\end{aligned} \end{eqnarray} we can obtain the following wave equation: \begin{equation} \Bigl( {\rm i } \gamma ^{\mu } \:\! \partial _{\mu } - {\rm e } \gamma ^{\mu } A_{\mu } + V ( \delta s ) - \frac{2 n \pi }{\delta s } \Bigr) \Psi = 0 , \label{328} \end{equation} where $ \delta s $ is fixed at the Compton wavelength $ \lambda _{c } $($ = 2 \pi / m $) of the electron and $ n $ is set to $ 1 $, so that the mass term of the electron is reproduced correctly. In this case, wave equation (\ref{328}) agrees with the Dirac equation except for the existence of $ V (\delta s ) $. Here, $ V (\delta s ) $ is represented as a power series in $ \delta s $ as follows: \begin{equation} V ( \delta s ) = V _{1 } + V _{2 } + \cdots , \hspace{58mm} \label{329} \end{equation} \addtocounter{equation}{-1} \begin{subequations} \begin{eqnarray} V _{1 } \! \! & \equiv & \! \! + \frac{1 }{2 } \delta s \, {\rm e } \, [ \, \gamma ^{\mu } \:\! \partial _{\mu }, \, \gamma ^{\nu } A_{\nu } \, ] , \label{329a} \\ V _{2 } \! \! & \equiv & \! \! - \frac{1 }{12 } {\rm i } \, \delta s^{2 } \, {\rm e } \, [ \, [ \gamma ^{\mu } \:\! \partial _{\mu }, \, \gamma ^{\nu } A_{\nu } \, ], \, \gamma ^{\xi } ( {\rm i } \partial _{\xi } + {\rm e } A_{\xi }) \, ] . \hspace{6mm} \label{329b} \end{eqnarray} \end{subequations} In addition, if $ \delta s $ is infinitesimal, $ V ( \delta s ) \to 0 $ and the mass term corresponds to an infinite bare electron mass. Then, the usual Dirac equation is reproduced when the mass term is renormalized. Therefore, the Dirac equation is inherently relativistically invariant. However, we adopt a different standpoint because $ V ( \delta s ) $ has physical significance, as shown below. 
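As a numerical consistency check of the construction above, the choice $ M = - {\rm e } \gamma ^{\mu } A_{\mu } $ of Eq.(\ref{317}) can be verified to satisfy condition (\ref{312}), and hence $ U = \exp ( {\rm i } \, \delta s M ) $ to satisfy condition (\ref{310}). The sketch below (Python with NumPy, Dirac representation) uses arbitrary illustrative values of $ {\rm e } $, $ \delta s $, and $ A_{\mu } $, and a plain Taylor-series matrix exponential.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
gammas = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (s1, s2, s3)]

def expm(A, terms=40):
    # plain Taylor series; adequate for the small matrices used here
    out, term = np.eye(4, dtype=complex), np.eye(4, dtype=complex)
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

e, ds = 0.30282, 0.7            # illustrative values (assumptions)
A_mu = [1.3, -0.4, 0.25, 0.8]   # arbitrary real four-potential components
M = -e * sum(a * g for a, g in zip(A_mu, gammas))

# (312): gamma^0 M^dagger gamma^0 = M, i.e. gamma^0 M is hermitian
assert np.allclose(g0 @ M.conj().T @ g0, M)

# (310): gamma^0 U^dagger gamma^0 U = I for U = exp(i ds M)
U = expm(1j * ds * M)
Ud = expm(-1j * ds * M.conj().T)
assert np.allclose(g0 @ Ud @ g0 @ U, np.eye(4))
```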
\section{\label{sec:level4}Calculation of the Anomalous Magnetic Moment} \hspace*{0.6cm}In this section, treating $ V _n $ as the $n$th-order correction to the Dirac equation, we evaluate the corrections to the magnetic moment using the Foldy-Wouthuysen transformation \cite{foldy} (FW transformation). The result obtained is in good agreement with the QED calculation. \subsection{\label{subsec:level41}FW transformation of the Dirac equation} We begin with the FW transformation of the Dirac equation: \begin{equation} ( {\rm i } \gamma ^{\mu } \:\! \partial _{\mu } - {\rm e } \gamma ^{\mu } A_{\mu } - m ) \Psi = 0 . \label{401} \end{equation} Note that $ \beta \equiv \gamma ^{0 }, \ \beta ^{2 } = 1, \ \boldsymbol{\alpha } \equiv \beta \boldsymbol{\gamma } $, $ \partial_{\mu } \equiv ( \partial / \partial t, \boldsymbol{\nabla } ) $, and $ A_{\mu } \equiv (\phi , - \! \boldsymbol{A } ) $. Thus, multiplying Eq.(\ref{401}) from the left by the $ \beta $ matrix, we obtain the Dirac Hamiltonian: \begin{equation} {\mathcal{H } } = \beta m + \varepsilon + o \hspace{45mm} \label{402} \end{equation} \vspace*{-9mm} \addtocounter{equation}{-1} \begin{subequations} \begin{eqnarray} \mbox{with the even operator} \ \ \varepsilon \! & = & \! {\rm e } \phi \label{402a} \hspace{10mm} \\ \mbox{and the odd operator} \ \ o \! & = & \! \boldsymbol{\alpha } \, {\dotprod } \, \boldsymbol{\pi } ; \label{402b} \hspace{10mm} \\ \boldsymbol{\pi } \equiv \boldsymbol{p } - {\rm e } \boldsymbol{A } \! \! & \equiv & \! \! - {\rm i } \boldsymbol{\nabla } - {\rm e } \boldsymbol{A } . \nonumber \end{eqnarray} \end{subequations} Performing the FW transformation eliminates the odd operator from $ {\mathcal{H } } $: \begin{equation} {\mathcal{H^{\textsc{FW}} } } \simeq \beta m + \varepsilon + \frac{1 }{2 m } \beta o^{2 } - \frac{1 }{8 m^{2 } } [ \, o , \, [ \, o , \, \varepsilon \, ] \, ] . 
\label{403} \end{equation} Using the identity \begin{equation} ( \boldsymbol{\alpha } {\dotprod } \, \boldsymbol{a } ) ( \boldsymbol{\alpha } {\dotprod } \, \boldsymbol{b } ) = \boldsymbol{a } \, {\dotprod } \, \boldsymbol{b } + {\rm i } \boldsymbol{\sigma } {\dotprod } ( \boldsymbol{a } \! \boldsymbol{\times } \! \boldsymbol{b } ) , \label{404} \end{equation} where $ \boldsymbol{a } $ and $ \boldsymbol{b } $ are arbitrary vectors and $ \boldsymbol{\sigma } $ denotes the $ 4 \times 4 $ Dirac spin matrix, we can obtain the explicit form of Eq.(\ref{403}) as follows: \begin{eqnarray} \begin{aligned} {\mathcal{H^{\textsc{FW}} } } \simeq \frac{1 }{2 m } \beta \boldsymbol{\pi }^{2 } + {\rm e } \phi + \beta m \hspace{46mm} \\ - \frac{ {\rm e } }{2 m } \frac{ \textit{\textsf{g}} }{2} \beta \boldsymbol{\sigma } {\dotprod } \boldsymbol{B } - \frac{1 }{2 } \frac{ {\rm e } }{2 m } \frac{1 }{m } \boldsymbol{\sigma } {\dotprod } ( \boldsymbol{E \! \times \! \pi } ) + \frac{ {\rm e } }{8 m^2 } \boldsymbol{\nabla }^2 \phi , \hspace{0mm} \\ \mbox{where} \quad \boldsymbol{B } = \boldsymbol{\nabla \! \times \! A } , \ \boldsymbol{E } = - \boldsymbol{\nabla } \phi , \label{405} \end{aligned} \end{eqnarray} and the gyromagnetic ratio $ \textit{\textsf{g}} $ of an electron described by the Dirac equation is only $ 2 $. \subsection{\label{subsec:level42}Effect of $ V _{1 } $} We evaluate the alteration in $ {\mathcal{H^{\textsc{FW}} } } $ by adding $ V _{1 } $ to the Dirac equation. 
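Identity (\ref{404}) can also be checked numerically for arbitrary real vectors. The sketch below (Python with NumPy) assumes the Dirac-representation forms of $ \boldsymbol{\alpha } $ and of the block-diagonal $ 4 \times 4 $ spin matrices.

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
Z2 = np.zeros((2, 2), dtype=complex)
alpha = [np.block([[Z2, si], [si, Z2]]) for si in s]   # Dirac alpha matrices
Sigma = [np.block([[si, Z2], [Z2, si]]) for si in s]   # 4x4 spin matrices

rng = np.random.default_rng(0)
a, b = rng.standard_normal(3), rng.standard_normal(3)  # arbitrary real vectors

# (alpha.a)(alpha.b) = a.b I + i Sigma.(a x b)
lhs = sum(ai * Ai for ai, Ai in zip(a, alpha)) @ sum(bi * Ai for bi, Ai in zip(b, alpha))
rhs = np.dot(a, b) * np.eye(4) + 1j * sum(ci * Si for ci, Si in zip(np.cross(a, b), Sigma))
assert np.allclose(lhs, rhs)
```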
$ V _{1 } $ is rewritten as follows: \begin{eqnarray} \begin{aligned} V _{1 } = + \kappa \frac{ {\rm e } }{4 m } \left\{ \sigma ^{\mu \nu } F_{\mu \nu } - 2 \sigma ^{\mu \nu } ( A_{\mu } \partial _{\nu } - A_{\nu } \partial _{\mu } ) \right\} , \hspace{18mm} \\ \mbox{where} \quad \kappa \equiv - {\rm i } \, \delta s \, m = - 2 \pi {\rm i } , \hspace{36mm} \\ \sigma ^{\mu \nu } \equiv \frac{ {\rm i } }{2 } [ \, \gamma ^{\mu }, \, \gamma ^{\nu } \, ] , \ \mbox{and} \ \ F_{\mu \nu } \equiv \frac{\partial A_{\nu } } {\partial x^{\mu } } - \frac{\partial A_{\mu } }{\partial x^{\nu } } . \label{406} \end{aligned} \end{eqnarray} This expression contains the Pauli term $ \kappa \left( {\rm e } / 4 m \right) \sigma ^{\mu \nu } F_{\mu \nu } $; therefore, it appears to be related to the magnetic moment. Further, $ V _{1 } $ can also be expressed as follows: \begin{equation} V _{1 } = - \kappa \frac{ {\rm e } }{2 m } \Bigl\{ ( \boldsymbol{\sigma } {\dotprod } \boldsymbol{B } - {\rm i } \boldsymbol{\alpha } {\dotprod } \boldsymbol{E } ) - 2 \Bigl\{ \boldsymbol{\sigma } {\dotprod } ( \boldsymbol{ A \! \times \! \nabla } ) - {\rm i } \boldsymbol{\alpha } {\dotprod } \Bigl( \phi \boldsymbol{\nabla } + \boldsymbol{A } \frac{\partial}{\partial t} \Bigr) \! \Bigr\} \! \Bigr\} , \label{407} \end{equation} where alterations in (\ref{402a}) and (\ref{402b}) are \addtocounter{equation}{-1} \begin{subequations} \begin{eqnarray} \delta \varepsilon \! \! & = & \! \! + \kappa \frac{ {\rm e } }{2 m } \beta \boldsymbol{\sigma } {\dotprod } \left( \boldsymbol{B } - 2 \boldsymbol{ A \! \times \! \nabla } \right) \label{408a} \\ \mbox{and} \ \ \delta o \! \! & = & \! \! - \kappa \frac{ {\rm e } }{2 m } \beta \, {\rm i } \boldsymbol{\alpha } {\dotprod } \Bigl\{ \boldsymbol{E } - 2 \Bigl( \phi \boldsymbol{\nabla } + \boldsymbol{A } \frac{\partial}{\partial t} \Bigr) \! \Bigr\} . 
\label{408b} \end{eqnarray} \end{subequations} We then calculate the alterations in $ {\mathcal{H^{\textsc{FW}} } } $ by using identity (\ref{404}), \begin{eqnarray} \begin{aligned} \delta \Bigl\{ \frac{1 }{2 m } \beta o^{2 } \Bigr\} \simeq \frac{1 }{2 m } \beta ( \, o \, \delta o + \delta o \, o \, ) \hspace{46.5mm} \\ \quad = + \kappa \frac{ {\rm e } }{4 m^2 } {\rm i } \, \boldsymbol{\sigma } {\dotprod } \left( \boldsymbol{ \nabla \! \times \! E } \right) + \kappa \frac{ {\rm e } }{4 m^2 } \left( \boldsymbol{\nabla } {\dotprod } \boldsymbol{E } + \boldsymbol{E } {\dotprod } \boldsymbol{\nabla } \right) \label{409} \hspace{0mm} \\ \qquad - \kappa \frac{ {\rm e } }{2 m } \beta \, \boldsymbol{\sigma } {\dotprod } \left( \boldsymbol{B } - 2 \boldsymbol{ A \! \times \! \nabla } \right) \! \Bigl\{ \frac{\beta }{m } \Bigl( {\rm i } \frac{\partial}{\partial t} - {\rm e } \phi \Bigr) \! \Bigr\} \hspace{2.5mm} \\ - \kappa \frac{ {\rm e }^2 }{2 m^2 } \boldsymbol{\sigma } {\dotprod } \left( \boldsymbol{ A \! \times \! E } \right) \hspace{45.5mm} \end{aligned} \end{eqnarray} \vspace*{-4mm} and \begin{eqnarray} \begin{aligned} \delta \Bigl\{ - \frac{1 }{8 m^{2 } } [ \, o, \, [ \, o, \, \varepsilon \, ] \, ] \Bigr\} \hspace{67mm} \\ \simeq - \frac{1 }{8 m^{2 } } {\{} [ \, \delta o, \, [ \, o, \, \varepsilon \, ] \, ] \! + \! [ \, o , \, [ \, \delta o , \, \varepsilon \, ] \, ] \! + \! [ \, o , \, [ \, o , \, \delta \varepsilon \, ] \, ] {\}} \hspace{0mm} \\ \simeq - \kappa \frac{ {\rm e }^2 }{4 m^2 } {\rm i } \, \boldsymbol{A } {\dotprod } \boldsymbol{E } \Bigl\{ \frac{\beta }{m } \Bigl( {\rm i } \frac{\partial}{\partial t} - {\rm e } \phi \Bigr) \! \Bigr\} . 
\hspace{29mm} \label{410} \end{aligned} \end{eqnarray} Consequently, the alteration in $ {\mathcal{H^{\textsc{FW}} } } $ due to $ V _{1 } $ can be expressed as follows: \begin{eqnarray} \begin{aligned} \delta {\mathcal{H^{\textsc{FW}} } } \simeq \delta \varepsilon + \delta \Bigl\{ \frac{1 }{2 m } \beta o^{2 } \Bigr\} + \delta \Bigl\{ - \frac{1 }{8 m^{2 } } [ \, o , \, [ \, o , \, \varepsilon \, ] \, ] \Bigr\} \hspace{17.5mm} \\ \simeq + \kappa \frac{ {\rm e } }{4 m^2 } {\rm i } \, \boldsymbol{\sigma } {\dotprod } \left( \boldsymbol{ \nabla \! \times \! E } \right) + \kappa \frac{ {\rm e } }{4 m^2 } \left( \boldsymbol{\nabla } {\dotprod } \boldsymbol{E } + \boldsymbol{E } {\dotprod } \boldsymbol{\nabla } \right) \hspace{11mm} \\ - \kappa \frac{ {\rm e } }{2 m } \beta \boldsymbol{\sigma } {\dotprod } \left( \boldsymbol{B } - 2 \boldsymbol{ A \! \times \! \nabla } \right) \! \Bigl\{ \frac{\beta }{m } \Bigl( {\rm i } \frac{\partial}{\partial t} - {\rm e } \phi - \beta m \Bigr) \! \Bigr\} \hspace{3.5mm} \label{411} \\ - \kappa \frac{ {\rm e }^2 }{2 m^2 } \boldsymbol{\sigma } {\dotprod } \left( \boldsymbol{ A \! \times \! E } \right) - \kappa \frac{ {\rm e }^2 }{4 m^2 } {\rm i } \, \boldsymbol{A } {\dotprod } \boldsymbol{E } \Bigl\{ \frac{\beta }{m } \Bigl( {\rm i } \frac{\partial}{\partial t} - {\rm e } \phi \Bigr) \! \Bigr\} . \end{aligned} \end{eqnarray} We now assume the following conditions: \begin{itemize} \item The external electromagnetic field is sufficiently small and static. \item The kinetic energy is sufficiently smaller than the rest energy of the electron. \end{itemize} Then, in Eq.(\ref{411}), $ \boldsymbol{ \nabla \! \times \! E } = 0 $ and $ {\rm i } ( \partial / \partial t ) - {\rm e } \phi \simeq \beta m $. 
In addition, since the scalar potential $ \phi $ is time-independent, it commutes with $ {\mathcal{H^{\textsc{FW}} } } $, i.e., \begin{equation} 0 = \frac{d \phi }{dt } = {\rm i } \, [{\mathcal{H^{\textsc{FW}} } } , \phi \, ] \, \simeq \beta \frac{ {\rm i } }{2 m } \left( \, p^{2 } \phi - \phi \, p^{2 } \, \right) = \beta \frac{ {\rm i } }{2 m } \left( \, \boldsymbol{\nabla } {\dotprod } \boldsymbol{E } + \boldsymbol{E } {\dotprod } \boldsymbol{\nabla } \, \right) . \label{412} \end{equation} Therefore, Eq.(\ref{411}) becomes \begin{equation} \delta {\mathcal{H^{\textsc{FW}} } } \simeq - \kappa \frac{ {\rm e }^2 }{2 m^2 } \boldsymbol{\sigma } {\dotprod } \left( \boldsymbol{ A \! \times \! E } \right) - \kappa \frac{ {\rm e }^2 }{4 m^2 } {\rm i } \, \boldsymbol{A } {\dotprod } \boldsymbol{E } . \label{413} \end{equation} Since $ \boldsymbol{ A \! \times \! E } $ and $ \boldsymbol{A } {\dotprod } \boldsymbol{E } $ are sufficiently small, $ \delta {\mathcal{H^{\textsc{FW}} } } $ can be neglected as compared to $ {\mathcal{H^{\textsc{FW}} } } $. Hence, Eq.(\ref{328}) roughly agrees with the Dirac equation. \subsection{\label{subsec:level43}Self-energy influence} Here, we assume that there exist no external electric charges. According to classical electromagnetics, the electron obtains its self-energy in the form of electrostatic energy. In addition, the electrostatic energy $ {\rm e } \phi $ in the Dirac Hamiltonian differs from $ m $ by a factor of the $ \beta $ matrix: \begin{equation} {\mathcal{H } } \simeq \beta m + {\rm e } \phi = \beta \left( m + \beta {\rm e } \phi \right) . \label{414} \end{equation} Hence, the self-energy can be defined as $ \delta m \equiv \beta {\rm e } \phi $ such that $ \delta m $ may behave as a part of $ m $. We now evaluate the alteration $ \delta {\mathcal{H^{\textsc{FW}} } } $ by taking the self-energy into consideration. 
Here, the electric field generated by the rest electron is $ \boldsymbol{E } = - ( \phi / r^2 ) \boldsymbol{r } $, and the vector potential for a constant magnetic field is $ \boldsymbol{A }= (1/2) \boldsymbol{B \! \times \! r } $, where $ \boldsymbol{r } $ is the position vector from the charge. Then, the second term in (\ref{413}) becomes zero, since $ \boldsymbol{A } {\dotprod } \boldsymbol{E } = - (1/2) ( \phi / r^2 ) \left( \boldsymbol{B \! \times \! r } \right) {\dotprod } \, \boldsymbol{r } = 0 . $ On the other hand, $ \left( \boldsymbol{B \! \times \! r } \right) \! \boldsymbol{\times r } = - \boldsymbol{B } \left( \boldsymbol{r } \, {\dotprod } \, \boldsymbol{r } \right) + \boldsymbol{r } \left( \boldsymbol{B } \, {\dotprod } \, \boldsymbol{r } \right) = - \boldsymbol{B } \, r^2 $, since the mean value of $ \boldsymbol{B } \, {\dotprod } \, \boldsymbol{r } $ becomes zero due to the spherical symmetry of $ \boldsymbol{r } $. Therefore, the first term in (\ref{413}) becomes \begin{eqnarray} \begin{aligned} - \kappa \frac{ {\rm e }^2 }{2 m^2 } \boldsymbol{\sigma } {\dotprod } \left( \boldsymbol{A \! \times \! E } \right) = - \frac{1 }{2 } \Bigl( \kappa \frac{ \beta {\rm e } \phi }{m } \Bigr) \frac{ {\rm e } }{2 m } \, \frac{ \beta \, \boldsymbol{\sigma } {\dotprod } \{ - \! \left( \boldsymbol{B \! \times \! r } \right) \! \boldsymbol{\times r } \} }{r^2 } \\ = - \frac{1 }{2 } \Bigl( \kappa \frac{\delta m }{m } \Bigr) \frac{ {\rm e } }{2 m } \beta \, \boldsymbol{\sigma } {\dotprod } \boldsymbol{B } . \hspace{25mm} \label{415} \end{aligned} \end{eqnarray} It is observed that (\ref{415}) gives a correction in the magnetic moment that is proportional to $ \delta m $. \subsection{\label{subsec:level44}Self-energy estimation} In this study, we assumed that an electron has a time-like size $ \delta s $ as the world length. 
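The elementary vector identities invoked above, $ \boldsymbol{A } \, {\dotprod } \, \boldsymbol{E } = 0 $ for $ \boldsymbol{A } = (1/2) \boldsymbol{B \! \times \! r } $ with a radial $ \boldsymbol{E } $, and $ \left( \boldsymbol{B \! \times \! r } \right) \! \boldsymbol{\times r } = - \boldsymbol{B } \, r^2 + \boldsymbol{r } \left( \boldsymbol{B } \, {\dotprod } \, \boldsymbol{r } \right) $ before the angular average, can be checked numerically at arbitrary sample points. The sketch below (Python with NumPy) uses arbitrary illustrative values of $ \boldsymbol{B } $, $ \boldsymbol{r } $, and $ \phi $.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal(3)      # constant magnetic field (arbitrary)
r = rng.standard_normal(3)      # position vector (arbitrary)
r2 = np.dot(r, r)
phi = 0.9                       # illustrative scalar-potential value

A = 0.5 * np.cross(B, r)        # vector potential of a constant field
E = -(phi / r2) * r             # radial electric field used in the text

# A . E = 0, since (B x r) is orthogonal to r
assert np.isclose(np.dot(A, E), 0.0)

# (B x r) x r = -B r^2 + r (B . r)   (pointwise, before angular averaging)
lhs = np.cross(np.cross(B, r), r)
rhs = -B * r2 + r * np.dot(B, r)
assert np.allclose(lhs, rhs)
```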
We then interpret $ 0.5 \, \delta s \left( = \pi / m \right) $ as the four-dimensional radius $ r_{0 } $ of the electron, which is of the same order as the Zitterbewegung amplitude expected from the Dirac equation. The classical calculation provides a good approximation of the self-energy since quantum effects are insignificant when $ r_{0 } > 1/m $, even if such effects are present. In order to estimate the self-energy of an electron that is spread in four dimensions, we extend the definition of the electric field, as shown below, by applying Gauss's law to the surface of a four-dimensional sphere. The area of the four-dimensional sphere is $ 2 \pi^2 r^3 $ and the enclosed electric charge is $ \rm e $ multiplied by $ r_0 $; hence, the four-dimensional electric field $ \widetilde{ \boldsymbol{E} } $ can be defined as \begin{equation} \widetilde{ \boldsymbol{E} } = \frac{ r_0 {\rm e }}{ 2 \pi^2 \epsilon _0 r^3 } \frac{ \boldsymbol{r} }{r}, \ \ \ \label{416} \end{equation} where $ \epsilon _0 $ denotes the permittivity of vacuum. Thus, the four-dimensional self-energy $ \widetilde{\delta m} $ of an electron can be estimated by an analogy with the classical electrostatic energy as \begin{eqnarray} r_0 \delta m = \widetilde{\delta m} = \frac{\epsilon _0}{2} \int { \widetilde{ \boldsymbol{E} } }^2 {\rm d }^4 r. \hspace{0mm} \label{417} \end{eqnarray} Therefore, \begin{eqnarray} \begin{aligned} \delta m = \frac{\epsilon _0}{2 r_0 } \int \widetilde{ \boldsymbol{E} }^2 {\rm d }^4 r = \frac{\epsilon _0}{2 r_0 } \int _{r _0}^{\infty} \Bigl( \frac{ r_0 {\rm e }}{ 2 \pi^2 \epsilon _0 r^3 } \Bigr)^{ 2 } 2 \pi^2 r^3 {\rm i } {\rm d } r \\ = {\rm i } \frac{ {\rm e }^2} { 8 \pi^2 \epsilon _0 r _0 } , \hspace{63.5mm} \label{418} \end{aligned} \end{eqnarray} where spatial integration is performed with respect to the imaginary radius $ {\rm i } r $, since an electron has a time-like spread. 
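The integral in (\ref{417})-(\ref{418}), together with the combination $ \kappa \, \delta m / m $ that enters the magnetic-moment correction (\ref{415}), can be verified symbolically. The sketch below (Python with SymPy) uses the definitions $ r_0 = \pi / m $ and $ \kappa = - 2 \pi {\rm i } $ from the text, and the standard definition $ \alpha = {\rm e }^2 / 4 \pi \epsilon _0 $ of the fine structure constant.

```python
import sympy as sp

e, eps0, r0, m = sp.symbols('e epsilon_0 r_0 m', positive=True)
r = sp.symbols('r', positive=True)

# Four-dimensional field (416) and self-energy integral (417)-(418),
# with the measure i dr over the time-like (imaginary) radius
E4 = r0 * e / (2 * sp.pi**2 * eps0 * r**3)
dm = (eps0 / (2 * r0)) * sp.integrate(E4**2 * 2 * sp.pi**2 * r**3 * sp.I,
                                      (r, r0, sp.oo))
assert sp.simplify(dm - sp.I * e**2 / (8 * sp.pi**2 * eps0 * r0)) == 0

# With r0 = pi/m and kappa = -2*pi*i, kappa * dm / m reduces to alpha/pi
alpha = e**2 / (4 * sp.pi * eps0)
kappa = -2 * sp.pi * sp.I
ratio = sp.simplify((kappa * dm / m).subs(r0, sp.pi / m))
assert sp.simplify(ratio - alpha / sp.pi) == 0
```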
Thus, the self-energy of an electron with a time-like spread becomes an imaginary number and it is not observable. Nevertheless, by substituting (\ref{418}) into (\ref{415}), we can obtain the first-order correction of the magnetic moment: \begin{equation} - \frac{1 }{2 } \Bigl( \kappa \frac{\delta m }{m } \Bigr) \frac{ {\rm e } }{2 m } \beta \, \boldsymbol{\sigma } {\dotprod } \boldsymbol{B } = - \frac{1 }{2 } \Bigl( \frac{\alpha }{ \pi } \Bigr) \frac{ {\rm e } }{2 m } \beta \, \boldsymbol{\sigma } {\dotprod } \boldsymbol{B } , \label{419} \end{equation} where $ \alpha \left( \equiv {\rm e }^2 / 4 \pi \epsilon _0 \right) $ denotes the fine structure constant. Accordingly, the correction in the gyromagnetic ratio can be expressed as \begin{equation} \frac{\textit{\textsf{g}} - 2 }{2 } = \frac{1 }{2 } \left(\frac{\alpha }{\pi } \right) ; \label{420} \end{equation} this expression agrees with the calculation by J.~Schwinger (1948) \cite{schwinger}. \subsection{\label{subsec:level45}Higher-order correction in the magnetic moment} Finally, we calculate the $ \alpha^2 $-order correction in the magnetic moment. We use the symbol $ \delta^{(2 ) } $ to denote second-order variations and omit calculations that do not directly contribute to the magnetic moment. We now expand (\ref{329b}) as follows: \begin{eqnarray} V _{2 } = \kappa ^2 \frac{ {\rm e } }{12 m^2 } {\rm i } \, [ \, [ \gamma ^{\mu } \:\! \partial _{\mu }, \, \gamma ^{\nu } A_{\nu } \, ], \, \gamma ^{\xi } \left( {\rm i } \partial _{\xi } + {\rm e } A_{\xi } \right) \, ] , \ \ \ \label{421} \end{eqnarray} \vspace*{-7mm} where \begin{eqnarray} & & [ \, [ \gamma ^{\mu } \:\! 
\partial _{\mu }, \, \gamma ^{\nu } A_{\nu } \, ], \, \gamma ^{\xi } \left( {\rm i } \partial _{\xi } + {\rm e } A_{\xi } \right) \, ] \nonumber \\ & & = \nonumber \\ (a) \hspace{5mm} & & +2 \, \gamma ^{\xi } \gamma ^{\nu } \left( \partial _{\xi } A_{\nu } \right) \left( {\rm i } \gamma ^{\mu } \partial _{\mu } - {\rm e } \gamma ^{\mu } A_{\mu } \right) \nonumber \\ (b) \hspace{5mm} & & + 4 A_{\nu } \gamma ^{\mu } \left( {\rm i } \partial _{\mu } - {\rm e } A_{\mu } \right) \partial ^{\, \nu } \nonumber \\ (c) \hspace{5mm} & & - 4 {\rm i } \, \gamma ^{\nu } A_{\nu } \, \gamma ^{\xi } \partial _{\xi } \, \gamma ^{\mu } \partial _{\mu } \nonumber \\ (d) \hspace{5mm} & & + 4 {\rm e } \, \gamma ^{\nu } A_{\nu } \, \gamma ^{\xi } A_{\xi } \, \gamma ^{\mu } \partial _{\mu } \label{422} \\ (e) \hspace{5mm} & & - 2 {\rm i } \, \gamma ^{\mu } \left( \partial _{\nu } A_{\mu } \right) \partial ^{\, \nu } \nonumber \\ (f) \hspace{5mm} & & - 4 {\rm e } \, \gamma ^{\nu } \left( \partial _{\mu } A_{\nu } \right) A^{\mu } \nonumber \\ (g) \hspace{5mm} & & + 6 {\rm e } \, \gamma ^{\mu } \left( \partial _{\mu } A_{\nu } \right) A^{\nu } . \nonumber \end{eqnarray} In the following, we evaluate each term in (\ref{422}). Note that $ {\rm e } \gamma ^{0 } A_{0 } = \beta {\rm e } \phi = \delta m $. 
Thus, we have \begin{eqnarray} \begin{aligned} \mbox{(a):} \hspace{4mm} + 2 \, \gamma ^{\xi } \gamma ^{\nu } \left( \partial _{\xi } A_{\nu } \right) \left( {\rm i } \gamma ^{\mu } \partial _{\mu } - {\rm e } \gamma ^{\mu } A_{\mu } \right) \hspace{61mm} \\ \simeq + 2 {\rm i } \, \boldsymbol{\sigma } {\dotprod } \boldsymbol{B } \left( {\rm i } \gamma ^{\mu } \partial _{\mu } - {\rm e } \gamma ^{\mu } A_{\mu } \right) + 2 \, \boldsymbol{\alpha } {\dotprod } \boldsymbol{E } \left( {\rm i } \gamma ^{0 } \partial _{0 } - {\rm e } \gamma ^{0 } A_{0 } \right) \hspace{28.0mm} \label{423} \\ \simeq + 2 {\rm i } \, \boldsymbol{\sigma } {\dotprod } \boldsymbol{B } \, m - 2 {\rm i } \, \beta \, \boldsymbol{\alpha } {\dotprod } \boldsymbol{E } \, \partial _{0 } - 2 \, \boldsymbol{\alpha } {\dotprod } \boldsymbol{E } \, \delta m . \hspace{50.5mm} \end{aligned} \end{eqnarray} The first term in (\ref{423}) results in the following alteration in $ {\mathcal{H^{\textsc{FW}} } } $: \begin{equation} \delta^{(2 ) } \varepsilon _{a1 } = - \beta \, \kappa ^2 \frac{ {\rm e } }{12 m^2 } {\rm i } \left( + 2 {\rm i } \, \boldsymbol{\sigma } {\dotprod } \boldsymbol{B } \, m \right) = + \frac{1 }{3 } \kappa ^2 \frac{ {\rm e } }{2 m } \beta \, \boldsymbol{\sigma } {\dotprod } \boldsymbol{B } . \label{424} \end{equation} Although (\ref{424}) contributes to the magnetic moment, it will be counterbalanced by another correction term that will be calculated later in (\ref{431}). The second term in (\ref{423}) is unrelated to the magnetic moment since the $ \boldsymbol{\sigma } $ matrix does not appear in the result of the FW transformation. The third term in (\ref{423}) contributes to the magnetic moment; hence, it will be evaluated below together with the (g) term. The terms (b), (c), and (d) might contribute to the magnetic moment through the variations in $ \boldsymbol{p } $ and $ \boldsymbol{A } $ of (\ref{402b}). 
It should be noted that $ {\rm i } \gamma ^{0 } \partial _{0 } \simeq m + \delta m $ and $ {\rm e } \gamma ^{0 } A_{0 } = \delta m $. Thus, we have \begin{eqnarray} \begin{aligned} \mbox{(b):} \hspace{4mm} + 4 A_{\nu } \gamma ^{\mu } \left( {\rm i } \partial _{\mu } - {\rm e } A_{\mu } \right) \partial ^{\, \nu } \hspace{74.5mm} \\ \simeq + \frac{4 {\rm i } }{ {\rm e } } \sum _{k} {\rm e } \gamma ^{0 } A_{0 } \, \gamma ^{k } \left( {\rm i } \partial _{k } + {\rm e } A_{k } \right) {\rm i } \gamma ^{0 } \partial _{0 } - \frac{4 {\rm i } }{ {\rm e } } \, {\rm e } \gamma ^{0 } A_{0 } \, \gamma ^{0 } \left( {\rm i } \partial _{0 } - {\rm e } A_{0 } \right) {\rm i } \gamma ^{0 } \partial _{0 } \label{425} \hspace{0mm} \\ \simeq - \frac{4 {\rm i } }{ {\rm e } } \sum _{k} \gamma ^{k } \left( - {\rm i } \partial _{k } - {\rm e } A_{k } \right) \delta m \left( m + \delta m \right) - \frac{4 {\rm i } }{ {\rm e } } \, \delta m \, m \left( m + \delta m \right) , \hspace{9.0mm} \end{aligned} \end{eqnarray} \begin{eqnarray} \begin{aligned} \mbox{(c):} \hspace{4mm} - 4 {\rm i } \, \gamma ^{\nu } A_{\nu } \, \gamma ^{\xi } \partial _{\xi } \, \gamma ^{\mu } \partial _{\mu } \hspace{82.5mm} \\ \simeq - \frac{4 {\rm i } }{ {\rm e } } \sum _{k} {\rm e } \gamma ^{k } A_{k } \, {\rm i } \gamma ^{0 } \partial _{0 } \, {\rm i } \gamma ^{0 } \partial _{0 } + \frac{4 {\rm i } }{ {\rm e } } \, {\rm e } \gamma ^{0 } A_{0 } \, {\rm i } \gamma ^{0 } \partial _{0 } \, {\rm i } \gamma ^{0 } \partial _{0 } \hspace{32.5mm} \\ - \frac{4 {\rm i } }{ {\rm e } } \sum _{k} {\rm e } \gamma ^{0 } A_{0 } \, ( \gamma ^{0 } \gamma ^{k } + \gamma ^{k } \gamma ^{0 } ) \, \partial _{0 } \, \partial _{k } \hspace{58.0mm} \label{426} \\ \simeq + \frac{4 {\rm i } }{ {\rm e } } \sum _{k} \gamma ^{k } \left( - {\rm e } A_{k } \right) \left( m + \delta m \right) ^2 + \frac{4 {\rm i } }{ {\rm e } } \, \delta m \left( m + \delta m \right) ^2 , \hspace{28.0mm} \end{aligned} \end{eqnarray} \begin{eqnarray} 
\begin{aligned} \mbox{(d):} \hspace{4mm} + 4 {\rm e } \, \gamma ^{\nu } A_{\nu } \, \gamma ^{\xi } A_{\xi } \, \gamma ^{\mu } \partial _{\mu } \hspace{82.0mm} \\ \simeq - \frac{4 {\rm i } }{ {\rm e } } \sum _{k} {\rm e } \gamma ^{0 } A_{0 } \, {\rm e } \gamma ^{0 } A_{0 } \, {\rm i } \gamma ^{k } \partial _{k } - \frac{4 {\rm i } }{ {\rm e } } \, {\rm e } \gamma ^{0 } A_{0 } \, {\rm e } \gamma ^{0 } A_{0 } \, {\rm i } \gamma ^{0 } \partial _{0 } \hspace{30.0mm} \\ + \frac{4 {\rm i } }{ {\rm e } } \sum _{k} \, ( \gamma ^{0 } \gamma ^{k } + \gamma ^{k } \gamma ^{0 } ) \, {\rm e } ^2 A_{0 } A_{k } \, {\rm i } \gamma ^{0 } \partial _{0 } \label{427} \hspace{55.0mm} \\ \simeq + \frac{4 {\rm i } }{ {\rm e } } \sum _{k} \gamma ^{k } \left( - {\rm i } \partial _{k } \right) \delta m ^2 - \frac{4 {\rm i } }{ {\rm e } } \, \delta m ^2 \left( m + \delta m \right) . \hspace{42.0mm} \end{aligned} \end{eqnarray} After collecting the terms (b), (c), and (d), we get the following two terms: \begin{equation} + \frac{4 {\rm i } }{ {\rm e } } \sum _{k} \! \gamma ^{k } \! \left( - {\rm e } A_{k } \right) m^2 \! + \frac{4 {\rm i } }{ {\rm e } } \sum _{k} \! \gamma ^{k } \! \left( {\rm i } \partial _{k } - {\rm e } A_{k } \right) m \, \delta m . \label{428} \end{equation} These terms result in the following alterations in $ {\mathcal{H } } $: \begin{eqnarray} \begin{aligned} \delta^{(2 ) } o_{bcd} \simeq - \beta \, \kappa ^2 \frac{ {\rm e } }{12 m^2 } {\rm i } \, \frac{4 {\rm i } }{ {\rm e } } \sum _{k} \! \gamma ^{k } \! \left( - {\rm e } A_{k } \right) m^2 \hspace{16.0mm} \\ - \beta \, \kappa ^2 \frac{ {\rm e } }{12 m^2 } {\rm i } \, \frac{4 {\rm i } }{ {\rm e } } \sum _{k} \! \gamma ^{k } \! \left( {\rm i } \partial _{k } - {\rm e } A_{k } \right) m \, \delta m \label{429} \hspace{4.5mm} \\ = + \frac{1 }{3 } \kappa ^2 \! 
\left( - {\rm e } \, \boldsymbol{\alpha } {\dotprod } \boldsymbol{A } \right) + \frac{1 }{3 } \kappa ^2 \frac{\delta m }{m } \, \boldsymbol{\alpha } {\dotprod } \left( - \boldsymbol{p } - {\rm e } \boldsymbol{A } \right) . \end{aligned} \end{eqnarray} Then, vector potential $ \boldsymbol{A } $ in (\ref{402b}) is corrected by the first term in (\ref{429}) as \begin{equation} \boldsymbol{A } \to \Bigl( 1 + \frac{1 }{3 } \kappa ^2 \Bigr) \boldsymbol{A } . \label{430} \end{equation} Accordingly, the magnetic moment in (\ref{405}) is corrected as \begin{equation} - \frac{ {\rm e } }{2 m } \beta \, \boldsymbol{\sigma } {\dotprod } \boldsymbol{B } \to - \Bigl( 1 + \frac{1 }{3 } \kappa ^2 \Bigr) \frac{ {\rm e } }{2 m } \beta \, \boldsymbol{\sigma } {\dotprod } \boldsymbol{B } . \label{431} \end{equation} The variation in (\ref{431}) is counterbalanced by $ \delta^{(2 ) } \varepsilon_{a1 } $, which was previously calculated. In the second term of (\ref{429}), corrections in the magnetic moment due to variations in $ \boldsymbol{p } $ and $ \boldsymbol{A } $ cancel each other out in the result of the FW transformation. The (e) and (f) terms can be neglected under the conditions assumed in Subsection \ref{subsec:level42}. \begin{eqnarray} \begin{aligned} \mbox{(g):} \hspace{4mm} + 6 {\rm e } \, \gamma ^{\mu } \left( \partial _{\mu } A_{\nu } \right) A^{\nu } \hspace{85.0mm} \\ \simeq + 6 \sum _{k} \gamma ^{0 } \gamma ^{k } \! \left( - \partial _{k } A_{0 } \right) {\rm e } \gamma ^{0 } A_{0 } \simeq + 6 \, \boldsymbol{\alpha } {\dotprod } \boldsymbol{E } \, \delta m . \hspace{42.0mm} \label{432} \end{aligned} \end{eqnarray} This term contributes to the magnetic moment as well as the third term of (a). 
In any case, we obtain the alteration in $ {\mathcal{H } } $ related to the magnetic moment by adding (g) and the third term of (a) as follows: \begin{equation} \delta^{(2 ) } o_{ a3 + g } = - \beta \, \kappa ^2 \frac{ {\rm e } }{12 m^2 } {\rm i } \left( - 2 \, \boldsymbol{\alpha } {\dotprod } \boldsymbol{E } \, \delta m + 6 \, \boldsymbol{\alpha } {\dotprod } \boldsymbol{E } \, \delta m \right) = - \frac{1 }{3 } \kappa ^2 \frac{ \delta m }{m } \frac{ {\rm e } }{m } {\rm i } \, \beta \, \boldsymbol{\alpha } {\dotprod } \boldsymbol{E } . \label{433} \end{equation} This term results in the following alteration in $ {\mathcal{H^{\textsc{FW}} } } $: \begin{eqnarray} \begin{aligned} \delta^{(2 ) } \Bigl\{ \frac{1 }{2 m } \beta \, o \, ^2 \Bigr\}_{ a3 + g } \simeq \frac{1 }{2 m } \beta \{ \, o \, \delta^{(2 ) } o_{ a3 + g } \, + \delta^{(2 ) } o_{ a3 + g } \, o \, \} \hspace{28mm} \\ \simeq - \frac{1 }{3 } \kappa ^2 \frac{ \delta m }{m } \frac{ {\rm e } }{2 m^2 } {\rm i } \, \beta \{ \left( - {\rm e } \, \boldsymbol{\alpha } {\dotprod } \boldsymbol{A } \right) \left( \beta \, \boldsymbol{\alpha } {\dotprod } \boldsymbol{E } \right) + \left( \beta \, \boldsymbol{\alpha } {\dotprod } \boldsymbol{E } \right) \left( - {\rm e } \, \boldsymbol{\alpha } {\dotprod } \boldsymbol{A } \right) \} \label{434} \\ = + \frac{1 }{3 } \Bigl( \kappa \frac{ \delta m }{m } \Bigr) \Bigl\{ \kappa \frac{ {\rm e }^2 }{m^2 } \, \boldsymbol{\sigma } {\dotprod } \left( \boldsymbol{ A \! \times \! E } \right) \Bigr\} = + \frac{1 }{3 } \left( \frac{\alpha }{\pi } \right)^2 \! \! \frac{ {\rm e } }{2 m } \beta \, \boldsymbol{\sigma } {\dotprod } \boldsymbol{B } . 
\hspace{8mm} \end{aligned} \end{eqnarray} Consequently, the correction in the gyromagnetic ratio up to the order of $ \alpha^2 $ becomes \begin{equation} \frac{\textit{\textsf{g}} - 2 }{2 } = \frac{1 }{2 } \left( \frac{\alpha } {\pi } \right) - \frac{1 }{3 } \left( \frac{\alpha }{\pi } \right)^2 , \hspace{3mm} \label{435} \end{equation} whereas the corresponding correction calculated in QED is \ \cite{sommerfield, petermann1} \begin{equation} \frac{\textit{\textsf{g}}_{qed} - 2 }{2 } = \frac{1 }{2 } \left( \frac{\alpha }{\pi } \right) - 0.3285 \left( \frac{\alpha }{\pi } \right)^2 . \label{436} \end{equation} These two results agree to within an error of the order of $ \alpha^3 $. \subsection{\label{subsec:level46}Vacuum polarization effect} In the previous sections, the effect of vacuum polarization was not taken into account. Hence, we consider that the small discrepancy between (\ref{435}) and (\ref{436}) can be attributed to vacuum polarization due to electron pair creation. The discrepancy can be approximated by the following expression: \begin{equation} \frac{\textit{\textsf{g}}_{qed} }{2} - \frac{ \, \textit{\textsf{g}} \, }{2} = \Bigl\{ - 0.3285 + \frac{1}{3} \, \Bigr\} \! \left( \frac{\alpha }{\pi } \right)^2 \simeq + \frac{1 }{12 } \! \left( \frac{\alpha }{4 \pi } \right)^2 \! \! . \ \label{437} \end{equation} Accordingly, we assume that the alteration in $ {\mathcal{H^{\textsc{FW}} } } $ due to electron pair creation is given by the following formula: \begin{equation} \delta {\mathcal{H^{\textsc{FW}} }_{{\rm e }^+ {\rm e }^- } } \simeq - \frac{1 }{12 } \! \left( \frac{\alpha }{4 \pi } \right)^2 \! \! \frac{ {\rm e } }{2 m } \beta \, \boldsymbol{\sigma } {\dotprod } \boldsymbol{B } . \label{438} \end{equation} Then, the $ \alpha^2 $-order correction in the gyromagnetic ratio is recalculated as \begin{equation} - \Bigl\{ + \frac{1}{3} - \frac{1 }{12 } \Bigl( \frac{ \, 1 \, }{4} \Bigr)^2 \, \Bigr\} \! 
\left( \frac{\alpha }{\pi } \right)^2 \! \simeq - 0.3281 \left( \frac{\alpha }{\pi } \right)^2 . \label{439} \end{equation} In fact, this value is almost in agreement with that of the QED calculation and differs from the experimental value by only $ 1.5 \left( \alpha /\pi \right)^3 $ \cite{wesley}. The above assumption is found to be appropriate by estimating the muon magnetic moment in which the influence of vacuum polarization is more significant. The vacuum polarization due to the muon pair creation yields a correction similar to that observed in Eq.(\ref{438}): \begin{equation} \delta {\mathcal{H^{\textsc{FW}} }_{\mu^+ \mu^- } } \simeq - \frac{1 }{12 } \! \left( \frac{\alpha }{4 \pi } \right)^2 \! \! \frac{ {\rm e } }{2 m_\mu } \beta \, \boldsymbol{\sigma } {\dotprod } \boldsymbol{B } , \label{440} \end{equation} where $ m_\mu $ denotes the muon mass. In addition, the effect of electron pair creation exists. With regard to the muon magnetic moment, we simply assume that the effect of electron pair creation is the same as that given by Eq.(\ref{438}): \begin{equation} \delta {\mathcal{H^{\textsc{FW}} }_{{\rm e }^+ {\rm e }^- } } \simeq - \frac{1 }{12 } \! \left( \frac{\alpha }{4 \pi } \right)^2 \! \! \frac{ {\rm e } }{2 m } \beta \, \boldsymbol{\sigma } {\dotprod } \boldsymbol{B } = - \frac{1 }{12 } \! \left( \frac{\alpha }{4 \pi } \right)^2 \! \! \Bigl( \frac{ \, m_\mu }{m } \Bigr) \frac{ {\rm e } }{2 m_\mu } \beta \, \boldsymbol{\sigma } {\dotprod } \boldsymbol{B } , \label{441} \end{equation} where the mass ratio $ m_\mu / m $ is around $ 206.8 $. Then, the correction in the muon gyromagnetic ratio for the $ \alpha^2 $ order is obtained by adding (\ref{440}), (\ref{441}), and (\ref{434}) for the muon mass: \begin{eqnarray} - \Bigl\{ + \frac{1}{3} - \frac{1 }{12 } \Bigl( \frac{ \, 1 \, }{4} \Bigr)^2 \! \Bigl( 1 + \! \frac{ \, m_\mu }{m } \Bigr) \Bigr\} \! \left( \frac{\alpha }{\pi } \right)^2 \! \simeq 0.75 \left( \frac{\alpha } {\pi } \right)^2 . 
\label{442} \end{eqnarray} This value agrees with that obtained by the QED calculation \cite{suura, petermann2}. Therefore, the second-order correction (\ref{435}) is also considered to be an appropriate result when the effect of vacuum polarization is not taken into account. \section{\label{sec:level5}Many-Coordinate Systems Interpretation} \hspace*{0.6cm}We assumed that an electron has an inherent relativistic symmetry. In other words, all the inertial coordinate systems in Minkowski space are symmetric and superposed from the viewpoint of a free electron. In this context, the measurement process is explained as symmetry breaking caused by observation from a specific inertial coordinate system. This results in a nonlocal stochastic process because, for the electron, measurement implies an unpredictable selection of a specific coordinate system in which the observation is performed. For example, quantum entanglement, i.e., the EPR correlation \cite{epr} of a pair of particles with opposite helicity, is prepared by the superposition of a right- and a left-handed coordinate system, which correspond to the two eigenstates of helicity. The observation of helicity in one particle concurrently fixes the state of the other particle through the selection of one of the coordinate systems. The many-coordinate systems interpretation presented here is similar to the many-worlds interpretation of Everett \cite{everett} \textit{et al}. They propose the existence of many worlds corresponding to the superposed eigenstates. In our interpretation, however, it is not the worlds but the coordinate systems that branch as a result of observation. Special relativity guarantees that all the coordinate systems that may branch exist in a Minkowski space. In addition, we consider that a material particle in classical mechanics is a substance in which relativistic symmetry is almost lost due to the coupling of a large number of elementary particles.
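For completeness, the $\alpha^2$ coefficients obtained in Eqs.~(\ref{439}) and (\ref{442}) follow from elementary arithmetic; the short script below (an illustrative check of ours, not part of the original derivation) reproduces both values, using the mass ratio $m_\mu/m = 206.8$ quoted above.

```python
# Illustrative check of the alpha^2 coefficients in Eqs. (439) and (442).
# The mass ratio m_mu/m = 206.8 is the value quoted in the text.
vac = (1.0 / 12.0) * (1.0 / 4.0) ** 2            # vacuum-polarization term, 1/192

electron = -(1.0 / 3.0 - vac)                    # Eq. (439)
muon = -(1.0 / 3.0 - vac * (1.0 + 206.8))        # Eq. (442)

print(f"electron: {electron:.4f} (alpha/pi)^2")  # -0.3281
print(f"muon:     {muon:.2f} (alpha/pi)^2")      # 0.75
```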
\section{\label{sec:level6}Conclusion} \hspace*{0.6cm}In this study, we assumed that the quantum behavior of an electron lies in its relativistic symmetry. Based on this idea, we derived a Lorentz invariant equation for the spread electron and demonstrated the validity of the equation by calculating the anomalous magnetic moment without renormalization. In addition, based on the same idea, we consistently explained the measurement process in quantum theory. The calculation method in the present paper is not practical since the electromagnetic interaction is not the minimal one and is not gauge invariant. However, an inherent relativistic symmetry holds true also for the dimensionless electron described by the unrenormalized Dirac equation. We conclude that the foundations of quantum mechanics will be understood only in relation to relativistic symmetry; this is the only manner in which the foundations of both theories can be bridged within a conventional Minkowski space. \end{document}
\begin{document} \title{Bounded convergence theorems} \author[P. Niemiec]{Piotr Niemiec} \address{Instytut Matematyki\\{}Wydzia\l{} Matematyki i~Informatyki\\{} Uniwersytet Jagiello\'{n}ski\\{}ul. \L{}ojasiewicza 6\\{}30-348 Krak\'{o}w\\{}Poland} \email{[email protected]} \begin{abstract} We present results on extending continuous linear operators defined on spaces of $E$-valued continuous functions (defined on a compact Hausdorff space $X$) to linear operators defined on spaces of $E$-valued measurable functions, in such a way that uniformly bounded sequences of functions that converge pointwise in the weak (or norm) topology of $E$ are sent to sequences that converge in the weak, norm or weak* topology of the target space. As an application, a new description of uniform closures of convex subsets of $C(X,E)$ is given. New and strong results on integral representations of continuous linear operators defined on $C(X,E)$ are also presented. New classes of vector measures are introduced and various bounded convergence theorems for them are proved. \end{abstract} \subjclass[2010]{Primary 46G10; Secondary 46E40.} \keywords{Vector measure; dual Banach space; Riesz characterisation theorem; weakly sequentially complete Banach space; dominated convergence theorem; bounded convergence theorem; function space.} \maketitle \section{Introduction} Lebesgue's dominated convergence theorem (for nonnegative measures) is a fundamental and powerful tool which finds applications in many branches of mathematics. (In this paper all measures are meant to be countably additive.)
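As a simple numerical illustration (ours, not from the paper) of Lebesgue's theorem in the scalar case: the functions $f_n(x) = nx\mathrm{e}^{-nx}$ on $[0,1]$ are uniformly bounded by $1/\mathrm{e}$ and converge pointwise to $0$, so their integrals must converge to $0$ as well, even though the convergence is not uniform.

```python
import numpy as np

# f_n(x) = n*x*exp(-n*x) on [0, 1]: uniformly bounded by 1/e and pointwise
# convergent to 0, so the bounded convergence theorem forces the integrals
# (whose exact values are (1 - (1 + n) * exp(-n)) / n) to tend to 0.
x = (np.arange(200_000) + 0.5) / 200_000        # midpoint grid on [0, 1]
for n in (1, 10, 100, 1000):
    integral = (n * x * np.exp(-n * x)).mean()  # midpoint rule
    print(n, round(integral, 6))
```

Note that no domination by an integrable majorant beyond the constant bound $1/\mathrm{e}$ is needed here: the maximum of $f_n$ stays at $1/\mathrm{e}$ for every $n$.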
Although nonnegative measures were naturally generalised to vector-valued set functions (usually called \textit{vector measures}) many years ago (see, for example, \cite{din}, \cite{d-u} or Chapter~IV in \cite{d-s}) and the above result has seen many generalisations, one of the disadvantages of vector integrals (of vector-valued functions with respect to vector measures) is the difficulty in verifying that a specific function is integrable. For instance, if the total variation of a vector measure is infinite, not every bounded measurable function with separable image is integrable, in contrast to the scalar case (since every scalar-valued measure automatically has finite variation). This explains why the concepts of integrating vector-valued functions with respect to vector measures (proposed by Bartle \cite{bar}, Dinculeanu \cite{din}, Goodrich \cite{go1,go2}, Lewis \cite{lew}, Tucker and Wayment \cite{t-w}, Smith and Tucker \cite{s-t} and others) are not as popular as the classical theory of measure and integration (and the theory of integrating vector-valued functions with respect to nonnegative measures or scalar-valued functions with respect to vector measures; see, for example, \cite{d-u}). In this paper we introduce a new class of vector measures with respect to which all bounded measurable functions with separable images are integrable and for which a (strong) bounded convergence theorem holds (which may be seen as a counterpart of the Lebesgue dominated convergence theorem). Our approach is based on results on extending continuous linear operators (such as those stated in the abstract). To formulate the main one, let us first introduce the necessary definitions. Everywhere below $X$ and $\Omega$ are, respectively, a compact and a locally compact Hausdorff space and $E$ and $F$ are Banach spaces.
\begin{dfn}{M(A)} For a nonempty set $Z$, let $\ell_{\infty}(Z,E)$ stand for the Banach space of all $E$-valued bounded functions on $Z$ (equipped with the sup-norm induced by the norm of $E$). For every set $A \subset \ell_{\infty}(Z,E)$, the space \textit{$\mathscr{M}(A)$} is defined as the smallest set among all $B \subset \ell_{\infty}(Z,E)$ such that: \begin{enumerate}[(M1)]\addtocounter{enumi}{-1} \item $A \subset B$; \item whenever $f_n \in B$ are uniformly bounded and converge pointwise to $f \in \ell_{\infty}(Z,E)$ in the weak topology of $E$, then $f \in B$. \end{enumerate} It is an easy exercise to check that $\mathscr{M}(V)$ is a linear subspace of $\ell_{\infty}(Z,E)$ provided $V$ is so.\par By $C(X,E)$ ($C_0(\Omega,E)$) we denote the subspace of $\ell_{\infty}(X,E)$ (resp. of $\ell_{\infty}(\Omega,E)$) consisting of all continuous functions from $X$ into $E$ (resp. from $\Omega$ into $E$ that vanish at infinity). For simplicity, we put $\ell_{\infty}^{\mathbb{K}} \df \ell_{\infty}(\mathbb{N},\mathbb{K})$ (where $\mathbb{N} \df \{1,2,\ldots\}$). \end{dfn} Our main result on extending continuous linear operators reads as follows. \begin{thm}{1} Let $V$ be a linear subspace of $C(X,E)$. Every continuous linear operator $T\colon V \to F^*$ is uniquely extendable to a linear operator $\bar{T}\colon \mathscr{M}(V) \to F^*$ such that: \begin{itemize} \item[(BC*)] whenever $f_n \in \mathscr{M}(V)$ are uniformly bounded and converge pointwise to $f \in \mathscr{M}(V)$ in the weak topology of $E$, then $\bar{T} f_n$ converge to $\bar{T} f$ in the weak* topology of $F^*$. \end{itemize} Moreover, $\bar{T}$ is continuous and $\|\bar{T}\| = \|T\|$. \end{thm} In the above notation, ``BC'' is the abbreviation of \textit{bounded convergence} and ``*'' is to emphasize that the \textit{final} convergence is in the weak* topology.
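As an elementary illustration (ours, not from the paper) of how the operation $\mathscr{M}$ enlarges a space of continuous functions, consider $X = [0,1]$ and $E = \mathbb{R}$:

```latex
% Example: a discontinuous member of M(C([0,1],R)).
f_n(x) := x^n \in C([0,1],\mathbb{R}), \qquad \sup_n \|f_n\|_{\infty} = 1,
\qquad \lim_{n\to\infty} f_n(x) = \chi_{\{1\}}(x) \quad (x \in [0,1]),
```

so, by (M1), the characteristic function $\chi_{\{1\}}$ of the point $1$ belongs to $\mathscr{M}(C([0,1],\mathbb{R}))$, although it is not continuous.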
In the sequel, we shall follow this convention.\par It is a matter of taste to think of integrals as derived from measures (a typical approach in measure theory) or conversely (for example, starting from Riesz' characterisation theorem or from the Daniell theory of integrals; see \cite{dan} or Chapter~XIII in \cite{mau}). In this paper we follow the latter approach, generalising the classical Riesz characterisation theorem in a new way, which led us to the introduction of a new class of vector measures: \begin{dfn}{i-m} For $T_n \in \mathscr{L}(E,F)$ (where $\mathscr{L}(E,F)$ stands for the Banach space of all continuous linear operators from $E$ into $F$), the series $\sum_{n=1}^{\infty} T_n$ is said to be \textit{independently} convergent if the series \begin{equation}\label{eqn:i-m} \sum_{n=1}^{\infty} T_n x_n \end{equation} is convergent in the norm topology of $F$ for every bounded sequence of elements $x_n$ of $E$. (If this happens, the series \eqref{eqn:i-m} is unconditionally convergent.)\par A set function $\mu\colon \mathfrak{M} \to \mathscr{L}(E,F)$ (where $\mathfrak{M}$ is a $\sigma$-algebra of a set $Z$) is called an \textit{i-measure} if $\mu(\bigcup_{n=1}^{\infty} A_n)x = \sum_{n=1}^{\infty} \mu(A_n)x$ (for each $x \in E$) and the series $\sum_{n=1}^{\infty} \mu(A_n)$ is independently convergent for any sequence of pairwise disjoint sets $A_n \in \mathfrak{M}$. The \textit{total semivariation} $\|\mu\|_Z \in [0,\infty]$ of $\mu$ is given by \begin{multline}\label{eqn:semi} \|\mu\|_Z \df \sup\Bigl\{\bigl\|\sum_{n=1}^N \mu(A_n) x_n\bigr\|\colon\ N < \infty,\ A_n \in \mathfrak{M} \textup{ are pairwise disjoint},\\x_n \in E,\ \|x_n\| \leqslant 1\Bigr\} \end{multline} (compare \S4 of Chapter~I in \cite{din}).
\end{dfn} We shall prove in \LEM{op} that every independently convergent series of elements of $\mathscr{L}(E,F)$ is convergent in the norm topology of $\mathscr{L}(E,F)$ (and thus every i-measure is a vector measure with respect to the norm topology of $\mathscr{L}(E,F)$). What is more, it turns out that each i-measure has finite total semivariation (see \THM{fin}). This discovery enables us to define the vector integral $\int_Z f \dint{\mu}$ of any $E$-valued bounded measurable function $f$ with separable image with respect to a given $\mathscr{L}(E,F)$-valued i-measure $\mu$ on a set $Z$. We also show that the operator $\bar{T}$ given by $\bar{T} f \df \int_Z f \dint{\mu}$ satisfies condition (BC*) with the weak topology of $F$ inserted in place of the weak* topology of $F^*$, or with the norm topologies on $E$ and $F$ (and $\mathscr{M}(V)$ replaced by the space of all functions $f$ with the properties specified above). This is shown in \THM[s]{bwc} and \THM[]{bnc}. These remarks may justify a conclusion that i-measures are the best counterparts (in the operator-valued case) of finite nonnegative (or scalar-valued) measures.\par Taking into account the Riesz characterisation theorem, continuous linear operators from Banach spaces of the form $C(X,E)$ (into arbitrary Banach spaces) may be called abstract vector integrals. There are a number of results which justify such a terminology (see, for example, \cite{go1,go2}, \cite{s-t} or Theorem~9 in \S5 of Chapter~III in \cite{din}). However, in most of them the final vector measure is only finitely additive. In our characterisation (in a special case) the final measure is an i-measure (and thus it is countably additive): \begin{thm}{3} Let $F$ be a weakly sequentially complete Banach space or a dual Banach space containing no isomorphic copy of $\ell_{\infty}^{\mathbb{R}}$ and let $\Omega$ be a locally compact Hausdorff space. 
For every continuous linear operator $T\colon C_0(\Omega,E) \to F$ there exists a unique regular Borel i-measure $\mu\colon \mathfrak{B}(\Omega) \to \mathscr{L}(E,F)$ such that \begin{equation}\label{eqn:vint} T f = \int_{\Omega} f \dint{\mu} \qquad (f \in C_0(\Omega,E)). \end{equation} Conversely, if $\mu\colon \mathfrak{B}(\Omega) \to \mathscr{L}(E,F)$ is an arbitrary regular i-measure \textup{(}and $F$ is an arbitrary Banach space\textup{)}, then \eqref{eqn:vint} correctly defines a continuous linear operator $T\colon C_0(\Omega,E) \to F$ such that $\|T\| = \|\mu\|_{\Omega}$. \end{thm} We also give an integral representation of continuous linear operators from $C_0(\Omega,E)$ which take values in arbitrary Banach spaces. This is done with the help of so-called \textit{weak*} i-measures, introduced and discussed in Section~5.\par As a consequence of \THM{3} and bounded convergence theorems for i-measures, we obtain a new result on the description of the uniform closure of a convex subset of $C(X,E)$: \begin{thm}{2} In each of the three cases specified below, the norm closure of a convex subset $\mathscr{K}$ of $C_0(\Omega,E)$ coincides with the set of all functions $f \in C_0(\Omega,E)$ such that $f\bigr|_L \in \mathscr{M}\bigl(\mathscr{K}\bigr|_L\bigr)$ \textup{(}where $\mathscr{K}\bigr|_L \df \{g\bigr|_L \in C(L,E)\colon\ g \in \mathscr{K}\}$\textup{)} for any compact set $L \subset \Omega$: \begin{itemize} \item $\Omega$ is compact; or \item $\mathscr{K}$ is bounded; or \item $E$ is a $C^*$-algebra and $\mathscr{K}$ is a $*$-subalgebra of $C_0(\Omega,E)$. \end{itemize} \end{thm} The above result seems to be a convenient tool. Recently we have used some of its variations to describe models for subhomogeneous $C^*$-algebras (which may be seen as a solution of a long-standing problem); a paper on this is in preparation.\par The paper is organised as follows. Section~2 is devoted to the proof of \THM{1} and some of its generalisations.
In Section~3 we introduce \textit{variationally} sequentially complete Banach spaces (to which all weakly sequentially complete as well as all dual Banach spaces belong), give a new characterisation of weakly sequentially complete Banach spaces and formulate a variation of \THM{1} for operators taking values in variationally sequentially complete Banach spaces. The fourth part discusses i-measures in detail and contains preliminary material for the proof of \THM{3}. Section~5 is devoted to weak* i-measures. Section~6 discusses regular i-measures as well as regular weak* i-measures. It contains a proof of \THM{3} and its variations for operators taking values in variationally sequentially complete Banach spaces containing no isomorphic copy of $\ell_{\infty}^{\mathbb{R}}$ (see \THM{vsc}) and in dual Banach spaces (consult \THM{W*}) as well as totally arbitrary Banach spaces (see \COR{Riesz}). The last, seventh part is devoted to the proof of \THM{2} and some of its variations. There we also give an illustrative application and an example showing that the boundedness condition in the second case of \THM{2} cannot, in general, be dropped. \subsection*{Notation and terminology} Throughout the whole paper, all topological spaces are assumed to be Hausdorff. $X$, $\Omega$, $E$ and $F$ are reserved to denote, respectively, a compact space, a locally compact space and two Banach spaces over the field $\mathbb{K}$ of real or complex numbers. The dual of a locally convex topological vector space $(G,\tau)$ is denoted by $(G,\tau)^*$ (or simply $G^*$ if it is known from the context with respect to which topology on $G$ the dual is taken) and is understood as the vector space of all continuous linear functionals on $(G,\tau)$. A subset $A$ of a topological space $Y$ is \textit{sequentially closed} if $A$ contains the limits of all convergent (in $Y$) sequences whose entries belong to $A$.
$A$ is \textit{$\sigma$-compact} if it is a countable union of compact subsets of $Y$. Finally, $\mathfrak{B}(Y)$ stands for the $\sigma$-algebra of all \textit{Borel} sets in $Y$; that is, $\mathfrak{B}(Y)$ is the smallest $\sigma$-algebra of subsets of $Y$ that contains all open sets.\par All notations and terminologies introduced in \DEF[s]{M(A)} and \DEF[]{i-m} are obligatory. \section{Extending linear operators} \begin{dfn}{measurable} Let $\mathfrak{M}$ be a $\sigma$-algebra on a set $Z$. A function $f\colon Z \to E$ is said to be \textit{$\mathfrak{M}$-measurable} if \begin{itemize} \item $f(Z)$ is a separable subspace of $E$; and \item $f$ is weakly $\mathfrak{M}$-measurable; that is, for any $\psi \in E^*$, the function $\psi \circ f\colon Z \to \mathbb{K}$ is $\mathfrak{M}$-measurable. \end{itemize} Thanks to a theorem of Pettis \cite{pet}, $f$ is $\mathfrak{M}$-measurable iff $f(Z)$ is a separable subspace of $E$ and the inverse image of every Borel set in $E$ under $f$ belongs to $\mathfrak{M}$.\par $M_{\mathfrak{M}}(Z,E)$ is defined as the subspace of $\ell_{\infty}(Z,E)$ consisting of all bounded $\mathfrak{M}$-measurable functions $f\colon Z \to E$.\par For a compact space $X$, let $\mathfrak{M}(X)$ be the smallest $\sigma$-algebra on $X$ that contains all closed sets in $X$ of type $\mathscr{G}_{\delta}$. \textit{$M(X,E)$} stands for $M_{\mathfrak{M}(X)}(X,E)$. \end{dfn} It is worth noting here that, in general, not every open set in $X$ belongs to $\mathfrak{M}(X)$. But if $X$ is metrisable (or, more generally, perfectly normal), then $\mathfrak{M}(X) = \mathfrak{B}(X)$.\par The next result is certainly known. For the reader's convenience, we give its proof. \begin{lem}{1} $\mathscr{M}(C(X,E)) = M(X,E)$. \end{lem} \begin{proof} First of all, observe that $\mathfrak{M}(X)$ is the smallest $\sigma$-algebra on $X$ with respect to which all $\mathbb{K}$-valued continuous functions on $X$ are measurable.
It is therefore an elementary exercise to check that the set $B = M(X,E)$ satisfies conditions (M0)--(M1) for $A = C(X,E)$. Consequently, $\mathscr{M}(C(X,E)) \subset M(X,E)$. Instead of proving the reverse inclusion, we shall show a little bit more: that $M(X,E)$ coincides with the smallest set $N(E)$ among all $B \subset \ell_{\infty}(X,E)$ which include $C(X,E)$ and satisfy the condition: \begin{enumerate}[(M1')] \item whenever $f_n \in B$ are uniformly bounded and converge pointwise to $f \in \ell_{\infty}(X,E)$ in the norm topology of $E$, then $f \in B$. \end{enumerate} To this end, for any $A \subset X$, denote by $j_A\colon X \to \{0,1\}$ the characteristic function of $A$. First we assume $E = \mathbb{K}$. Observe that $N(\mathbb{K})$ is a unital subalgebra of $\ell_{\infty}(X,\mathbb{K})$. This implies that $\mathfrak{N} \df \{A \in \mathfrak{M}(X)\colon\ j_A \in N(\mathbb{K})\}$ is a $\sigma$-algebra on $X$. So, to conclude that $\mathfrak{N} = \mathfrak{M}(X)$, it suffices to show that each closed set of type $\mathscr{G}_{\delta}$ belongs to $\mathfrak{N}$. But this is immediate, since for any such set $K$ there are sequences $U_1 \supset U_2 \supset \ldots$ of open sets in $X$ and $f_1,f_2,\ldots\colon X \to [0,1]$ of continuous functions such that $j_K \leqslant f_n \leqslant j_{U_n}$ and $K = \bigcap_{n=1}^{\infty} U_n$. Consequently, $j_K$ is the pointwise limit of $f_n$'s and hence $K \in \mathfrak{N}$. This shows that $\mathfrak{N} = \mathfrak{M}(X)$. Now, since every scalar-valued bounded $\mathfrak{M}(X)$-measurable function is a uniform limit of linear combinations of characteristic functions of members of $\mathfrak{M}(X)$, we get that $M(X,\mathbb{K}) \subset N(\mathbb{K})$. We turn to the general case.\par For simplicity, we shall call any function $u\colon X \to E$ such that $u(X)$ is countable (finite or not) and the inverse image of every point of $E$ under $u$ is a member of $\mathfrak{M}(X)$ \textit{semisimple}. 
For any scalar-valued function $f\colon X \to \mathbb{K}$ and each vector $x \in E$, we use $f(\cdot)x$ to denote a function from $X$ into $E$, computed pointwise. Now fix $e \in E$ and consider families $F(e) \df \{u \in M(X,\mathbb{K})\colon\ u(\cdot)e \in N(E)\}$ and $\mathfrak{M}_e \df \{B \in \mathfrak{M}(X)\colon\ j_B \in F(e)\}$. Since $C(X,\mathbb{K}) \subset F(e)$, it follows from the previous part of the proof that $F(e) = M(X,\mathbb{K})$ and $\mathfrak{M}_e = \mathfrak{M}(X)$. One easily deduces from these connections and (M1') that \begin{itemize} \item[($\star$)] any semisimple function $u\colon X \to E$ belongs to $N(E)$. \end{itemize} Now take any $u \in M(X,E)$. Since the range of $u$ is a separable space and $u$ is weakly $\mathfrak{M}(X)$-measurable, one concludes that: \begin{itemize} \item the inverse image of any closed ball in $E$ under $u$ belongs to $\mathfrak{M}(X)$; \item for any $\varepsilon > 0$, there exists a countable (finite or not) collection of pairwise disjoint members of $\mathfrak{M}(X)$ whose union coincides with $X$ and images under $u$ are contained in closed $\varepsilon$-balls of $E$. \end{itemize} Now using the latter of the above properties, for each $n > 0$, construct a semisimple function $u_n\colon X \to E$ whose uniform distance from $u$ is less than $1/n$. So, $u$ is a uniform limit of semisimple functions and hence $u \in N(E)$, by ($\star$). \end{proof} Although the next lemma is very simple, it is crucial for our further purposes. \begin{lem}{2} Let $Y$ be a compact space and $U\colon E \to C(Y,\mathbb{K})$ be a linear isometric embedding. For each $v \in \ell_{\infty}(X,E)$ let $L v\colon X \times Y \to \mathbb{K}$ be given by $(L v)(x,y) \df U(v(x))(y)$.
Then the assignment $v \mapsto L v$ defines a linear isometric embedding $L$ of $\ell_{\infty}(X,E)$ into $\ell_{\infty}(X \times Y,\mathbb{K})$ such that: \begin{enumerate}[\upshape(L1)] \item $L(C(X,E)) \subset C(X \times Y,\mathbb{K})$; \item if $v_n \in \ell_{\infty}(X,E)$ are uniformly bounded and converge pointwise to $v \in \ell_{\infty}(X,E)$ in the weak topology of $E$, then $L v_n$ are uniformly bounded as well and converge pointwise to $L v$; \item for any set $A \subset \ell_{\infty}(X,E)$, $L(\mathscr{M}(A)) \subset \mathscr{M}(L(A))$ where the sets $\mathscr{M}(A)$ and $\mathscr{M}(L(A))$ are computed in, respectively, $\ell_{\infty}(X,E)$ and $\ell_{\infty}(X \times Y,\mathbb{K})$. \end{enumerate} \end{lem} \begin{proof} It is readily seen that $L\colon \ell_{\infty}(X,E) \to \ell_{\infty}(X \times Y,\mathbb{K})$ is linear and isometric. Point (L1) is a well-known topological result---consult, for example, Theorems~3.4.7, 3.4.8 and 3.4.9 in \cite{eng}. (L2) follows from the facts that $U$ is continuous in the weak topologies of $E$ and $C(Y,\mathbb{K})$, and the weak topology of $C(Y,\mathbb{K})$ is finer than the pointwise convergence topology. Finally, (L3) is implied by (L2). \end{proof} Let us call a locally convex topological vector space $G$ \textit{initial} if its topology coincides with the weak topology of $G$. Equivalently, $(G,\tau_0)$ is initial iff $\tau_0$ is the coarsest topology among all locally convex topologies $\tau$ on $G$ for which the sets $(G,\tau)^*$ and $(G,\tau_0)^*$ (considered here with no topology) coincide. Important examples of such spaces are Banach spaces equipped with the weak topologies as well as dual Banach spaces equipped with the weak* topologies. Recall that $G$ is \textit{sequentially complete} if every Cauchy sequence in $G$ is convergent.
The following result is a generalisation of \THM{1}: \begin{thm}{1'} Let $G$ be an initial sequentially complete locally convex topological vector space and $V$ be a linear subspace of $C(X,E)$. Every continuous linear operator $T\colon V \to G$ is uniquely extendable to a linear operator $\bar{T}\colon \mathscr{M}(V) \to G$ such that: \begin{itemize} \item[(BC')] whenever $f_n \in \mathscr{M}(V)$ are uniformly bounded and converge pointwise to $f \in \mathscr{M}(V)$ in the weak topology of $E$, then $\bar{T} f_n$ converge to $\bar{T} f$. \end{itemize} Moreover, $\bar{T}$ is continuous. \end{thm} \begin{proof} It follows from (BC') and the very definition of $\mathscr{M}(V)$ that $\bar{T}$ is unique. To establish the existence of $\bar{T}$, first note that the initiality and sequential completeness of $G$ imply that: \begin{itemize} \item[(CC)] if $z_n \in G$ are such that $\psi(z_n)$ converge (in $\mathbb{K}$) for any $\psi \in G^*$, then $z_n$ converge (in $G$). \end{itemize} Next, there is an isometric linear embedding $U\colon E \to C(Y,\mathbb{K})$ for a suitably chosen compact space $Y$. Let $L\colon \ell_{\infty}(X,E) \to \ell_{\infty}(X \times Y,\mathbb{K})$ be as specified in \LEM{2}. We put $W \df L(V)$ and define $S\colon W \to G$ by $S \df T \circ (L\bigr|_V)^{-1}$. It is enough to show that there is a linear extension $\bar{S}\colon \mathscr{M}(W) \to G$ of $S$ (where $\mathscr{M}(W)$ is computed in $\ell_{\infty}(X \times Y,\mathbb{K})$) such that: \begin{itemize} \item[(BC'')] whenever $f_n \in \mathscr{M}(W)$ are uniformly bounded and converge pointwise to $f \in \mathscr{M}(W)$, then $\bar{S}(f_n)$ converge to $\bar{S}(f)$, \end{itemize} because then $\bar{T} \df \bar{S} \circ L\bigr|_{\mathscr{M}(V)}$ is well defined (by condition (L3) of \LEM{2}), extends $T$ and satisfies (BC') (thanks to (L2)). For simplicity, everywhere below $\alpha$ denotes an arbitrary countable ordinal. 
To establish the existence of $\bar{S}$, for any $\alpha$, we define a space $W_{\alpha}$ by transfinite induction as follows: $W_0 = W$ and for $\alpha > 0$, $W_{\alpha}$ consists of all pointwise limits of uniformly bounded sequences from $\bigcup_{\xi<\alpha} W_{\xi}$ (convergent in the pointwise topology). It is easy to check that each $W_{\alpha}$ is a linear subspace of $\ell_{\infty}(X \times Y,\mathbb{K})$ and that $\mathscr{M}(W) = \bigcup_{\alpha} W_{\alpha}$. Since any sequence of members of $\mathscr{M}(W)$ is contained in $W_{\alpha}$ for some $\alpha$, it suffices to show that there exists a transfinite sequence $S_{\alpha}\colon W_{\alpha} \to G$ of linear operators such that: \begin{enumerate}[(E1)] \item $S_0 = S$; \item $S_{\alpha}$ extends $S_{\xi}$ provided $\xi < \alpha$; \item whenever $f_n \in W_{\alpha}$ are uniformly bounded and converge pointwise to $f \in W_{\alpha}$, then $S_{\alpha} f_n$ converge to $S_{\alpha} f$ \end{enumerate} (because then $\bar{S}$ may simply be defined by $\bar{S} f \df S_{\alpha} f$ where $\alpha$ is chosen so that $f \in W_{\alpha}$). It follows from the Hahn-Banach and the Riesz characterisation theorems that for any $\psi \in G^*$, there is a $\mathbb{K}$-valued regular Borel measure $\mu_{\psi}$ on $X \times Y$ such that: \begin{equation*} \psi(S f) = \int_{X \times Y} f \dint{\mu_{\psi}} \qquad (f \in W). \end{equation*} Define $S_0$ as specified in (E1) and assume that for some $\alpha > 0$, $S_{\xi}$ is defined for any $\xi < \alpha$ in a way such that for each $\psi \in G^*$, \begin{equation}\label{eqn:repr} \psi(S_{\xi} f) = \int_{X \times Y} f \dint{\mu_{\psi}} \qquad (f \in W_{\xi}). \end{equation} We shall define $S_{\alpha}$ so that \eqref{eqn:repr} holds for $\xi = \alpha$ and then we shall check that conditions (E2)--(E3) are satisfied. Let $u \in W_{\alpha}$. There is a uniformly bounded sequence $u_n \in W_{\xi_n}$ (with $\xi_n < \alpha$) which converges pointwise to $u$.
It then follows from Lebesgue's dominated convergence theorem and \eqref{eqn:repr} that \begin{equation}\label{eqn:seq} \lim_{n\to\infty} \psi(S_{\xi_n} u_n) = \int_{X \times Y} u \dint{\mu_{\psi}} \end{equation} for each $\psi \in G^*$. So, we conclude from (CC) that $S_{\xi_n} u_n$ converge. We define $S_{\alpha} u$ as the limit of the last mentioned sequence. It follows from \eqref{eqn:seq} that \eqref{eqn:repr} is satisfied for $\xi = \alpha$ and $f = u$ (and any $\psi \in G^*$). This implies that the definition of $S_{\alpha} u$ is independent of the choice of the functions $u_n$. Finally, \eqref{eqn:repr} applied for all $\xi \leqslant \alpha$ shows that (E2) holds, and combined with Lebesgue's dominated convergence theorem gives (E3) (because $G$ is initial).\par To complete the proof, it remains to observe that the continuity of $\bar{T}$ follows from (BC') (since $\mathscr{M}(V)$ is metrisable, it suffices to check the sequential continuity). \end{proof} \begin{proof}[Proof of \THM{1}] Taking into account \THM{1'}, it is enough to verify that $F^*$ is initial and sequentially complete in the weak* topology, and that the extension of $T$ does not increase the norm. Both the above properties of $F^*$ are immediate. And to convince oneself that $\|\bar{T}\| = \|T\|$, it suffices to repeat the proof of \THM{1'} and check that $\|S_{\alpha}\| = \|S\|$ for each countable ordinal $\alpha$, which may simply be provided by choosing the measures $\mu_{\psi}$ (for $\psi \in F = (F,\textup{weak*})^*$) appearing in \eqref{eqn:repr} so that the total variation $|\mu_{\psi}|(X \times Y)$ of $\mu_{\psi}$ does not exceed $\|S\| \cdot \|\psi\|$. \end{proof} As an immediate consequence of \THM{1} and \LEM{1}, we obtain the following result, announced in the abstract. 
\begin{cor}{C-M} Every continuous linear operator $T\colon C(X,E) \to F^*$ is uniquely extendable to a linear operator $\bar{T}\colon M(X,E) \to F^*$ satisfying condition \textup{(BC*)} of \THM{1} with $M(X,E)$ inserted in place of $\mathscr{M}(V)$. Moreover, $\bar{T}$ is continuous and $\|\bar{T}\| = \|T\|$. \end{cor} \section{Variational sequential completeness} Recall that a Banach space is \textit{weakly sequentially complete} (briefly, \textit{wsc}) if it is sequentially complete with respect to the weak topology. Each reflexive Banach space is wsc and $\ell_1$ is an example of a nonreflexive wsc Banach space. These two exclusive examples are, in a sense, exhaustive. Namely, by a celebrated result due to Rosenthal \cite{ros}, every wsc Banach space is either reflexive or contains an isomorphic copy of $\ell_1$. An interesting characterisation of wsc Banach spaces is given below. \begin{pro}{wsc} For a Banach space $F$ the following conditions are equivalent: \begin{enumerate}[\upshape(a)] \item Every continuous linear operator $T\colon V \to F$ from a linear subspace $V$ of \textup{(}some Banach space of the form\textup{)} $C(X,E)$ extends uniquely to a linear operator $\bar{T}\colon \mathscr{M}(V) \to F$ such that: \begin{itemize} \item[(BC)] whenever $f_n \in \mathscr{M}(V)$ are uniformly bounded and converge pointwise to $f \in \mathscr{M}(V)$ in the weak topology of $E$, then $\bar{T} f_n$ converge to $\bar{T} f$ in the weak topology of $F$. \end{itemize} \textup{(}Moreover, $\bar{T}$ is continuous and $\|\bar{T}\| = \|T\|$.\textup{)} \item $F$ is wsc. \end{enumerate} \end{pro} \begin{proof} One easily deduces from \THM{1'} that (a) is implied by (b). (The additional claim of (a) may be shown as explained in the proof of \THM{1}.) To see that the reverse implication also holds, take a sequence $z_1,z_2,\ldots \in F$ which is Cauchy in the weak topology. Define $X$ as the closed unit ball of $F^*$ equipped with the weak* topology and put $E \df \mathbb{K}$. 
Further, for each $x \in F$, we use $e_x\colon X \to E$ to denote the evaluation map at $x$; that is, $e_x(\psi) = \psi(x)$. Denote by $F_0$ the linear span of all $z_n$, put $V \df \{e_z\colon\ z \in F_0\} \subset C(X,E)$ and define $T\colon V \to F$ by $T e_z \df z$. It is readily seen that $T$ is continuous (even isometric) and linear. So, it follows from (a) that there is a linear extension $\bar{T}\colon \mathscr{M}(V) \to F$ of $T$ which satisfies (BC). Since the sequence of all $z_n$ is Cauchy in the weak topology of $F$, the formula $u(\psi) \df \lim_{n\to\infty} \psi(z_n)$ correctly defines a function $u\colon X \to E$. Notice that the functions $e_{z_n}$ are uniformly bounded and converge pointwise to $u$. Thus, $u \in \mathscr{M}(V)$ and, by (BC), $z_n = \bar{T} e_{z_n}$ converge to $\bar{T} u$ in the weak topology of $F$. In particular, the weakly Cauchy sequence $z_1,z_2,\ldots$ is weakly convergent, which shows that $F$ is wsc. \end{proof} \THM[s]{1} and \THM[]{1'} and \PRO{wsc} suggest distinguishing certain Banach spaces, which we do below. \begin{dfn}{vsc} A Banach space $F$ is said to be \textit{variationally sequentially complete} (briefly, \textit{vsc}) if there is a set $\mathscr{F} \subset F^*$ such that: \begin{enumerate}[(vsc1)] \item there is a positive constant $\lambda$ such that for any $x \in F$, \begin{equation*} \frac{1}{\lambda} \sup\{|\psi(x)|\colon\ \psi \in \mathscr{F}\} \leqslant \|x\| \leqslant \lambda \sup\{|\psi(x)|\colon\ \psi \in \mathscr{F}\}; \end{equation*} \item whenever $z_n \in F$ are uniformly bounded and $\psi(z_n)$ converge for each $\psi \in \mathscr{F}$, then there exists $z \in F$ such that $\lim_{n\to\infty} \psi(z_n) = \psi(z)$ for all $\psi \in \mathscr{F}$. \end{enumerate} It is worth noting that the point $z$ appearing in (vsc2) is unique. For simplicity, we shall denote it by $\mathscr{F}$-$\lim_{n\to\infty} z_n$.\par More specifically, $F$ is called \textit{$\alpha$-vsc} (where $\alpha \geqslant 1$) if there exists $\mathscr{F} \subset F^*$ such that (vsc1)--(vsc2) hold with $\lambda = \alpha$.
\end{dfn} Basic examples of vsc spaces are wsc spaces as well as dual Banach spaces. It is also clear that a Banach space is vsc provided it is isomorphic to a vsc Banach space.\par It is an easy exercise to show that a Banach space is wsc iff it is sequentially closed in the weak* topology of its second dual. A counterpart of this characterisation for vsc Banach spaces is given below. \begin{pro}{vsc-dual} A Banach space $F$ is vsc iff it is isomorphic to a linear subspace $W$ of some dual Banach space $Z^*$ such that $W$ is sequentially closed in the weak* topology of $Z^*$. \end{pro} \begin{proof} First assume $F$ is vsc and let $\mathscr{F} \subset F^*$ be such that (vsc1)--(vsc2) are fulfilled. We put $Z \df \ell_1(\mathscr{F},\mathbb{K})$; that is, $Z$ consists of all functions $u\colon \mathscr{F} \to \mathbb{K}$ such that $\|u\| \df \sum_{\psi \in \mathscr{F}} |u(\psi)| < \infty$. Then $Z^* = \ell_{\infty}(\mathscr{F},\mathbb{K})$. Define $\Phi\colon F \to \ell_{\infty}(\mathscr{F},\mathbb{K})$ by $(\Phi f)(\psi) = \psi(f)$. It follows from (vsc1) that $\Phi$ is a well-defined topological embedding. We claim that $W \df \Phi(F)$ is sequentially closed in the weak* topology of $\ell_{\infty}(\mathscr{F},\mathbb{K})$. To see this, let $z_n \in F$ be such that $\Phi(z_n)$ converge to $u \in \ell_{\infty}(\mathscr{F},\mathbb{K})$ in the weak* topology. Then $\Phi(z_n)$ are uniformly bounded and, consequently, so are $z_n$. Furthermore, $\psi(z_n)$ converge for any $\psi \in \mathscr{F}$. So, (vsc2) implies that $z \df \mathscr{F}\textup{-}\lim_{n\to\infty} z_n$ well defines a vector in $F$ and $u = \Phi(z)$.\par Conversely, assume $F$ is isomorphic to $W$ where $W \subset Z^*$ is as specified in the proposition. It suffices to check that $W$ is vsc. For any $x \in Z$, let $j_x \in W^*$ be given by $j_x(\psi) \df \psi(x)$. Put $\mathscr{F} \df \{j_x\colon\ x \in Z,\ \|x\| \leqslant 1\}$. We see that (vsc1) holds with $\lambda = 1$.
Now assume $\varphi_n \in W$ are such that $\psi(\varphi_n)$ converge for any $\psi \in \mathscr{F}$. Then $\varphi_n$ converge pointwise (on the whole $Z$) to some function $\varphi\colon Z \to \mathbb{K}$. It now follows from the Uniform Boundedness Principle that $\varphi \in Z^*$ and, consequently (since $W$ is sequentially closed), $\varphi \in W$. This shows that (vsc2) holds and we are done. \end{proof} As a consequence, we obtain \begin{pro}{vsc} Every continuous linear operator $T\colon V \to F$ from a linear subspace $V$ of \textup{(}some space of the form\textup{)} $C(X,E)$ into a vsc Banach space $F$ is extendable to a continuous linear operator $\bar{T}\colon \mathscr{M}(V) \to F$. \end{pro} \begin{proof} Let $\Phi\colon F \to W$ be an isomorphism where $W$ is a linear subspace of a dual Banach space $Z^*$ that is sequentially closed in the weak* topology (see \PRO{vsc-dual}). Put $L \df \Phi \circ T\colon V \to W \subset Z^*$. It follows from \THM{1} that there exists a linear extension $\bar{L}\colon \mathscr{M}(V) \to Z^*$ of $L$ such that $\|\bar{L}\| = \|L\|$. What is more, the proof of \THM{1'} shows that all values of $\bar{L}$ belong to $W$, since $W$ is sequentially closed in $Z^*$. Thus $\bar{T} \df \Phi^{-1} \circ \bar{L}$ well defines a continuous linear extension of $T$, as desired. \end{proof} For $V = C(X,E)$ (and under an additional assumption on $F$), \PRO{vsc} shall be strengthened in \COR{vsc}. \begin{rem}{vsc} The above proof shows that, under the notation of \PRO{vsc}: \begin{itemize} \item every continuous linear operator $T\colon V \to F$ extends to a continuous linear operator $\bar{T}\colon \mathscr{M}(V) \to F$ such that $\|\bar{T}\| \leqslant \lambda^2 \|T\|$ provided $F$ is $\lambda$-vsc; \item a linear subspace of a dual Banach space which is sequentially closed in the weak* topology is $1$-vsc. \end{itemize} We shall use these observations in the sequel.
\end{rem} \PRO{vsc} combined with \REM{vsc} yields \begin{cor}{w*sc} Let $F_{sc}$ be the smallest linear subspace of $F^{**}$ that contains $F$ and is sequentially closed in the weak* topology of $F^{**}$. Every continuous linear operator $T\colon V \to F$ from a linear subspace $V$ of \textup{(}some space of the form\textup{)} $C(X,E)$ is extendable to a continuous linear operator $\bar{T}\colon \mathscr{M}(V) \to F_{sc}$ such that $\|\bar{T}\| = \|T\|$. \end{cor} \begin{exm}{1vsc} Let $V$ be a linear subspace of $C(X,F)$ where $F$ is a reflexive Banach space. Then $\mathscr{M}(V)$ is $1$-vsc (in particular, $M(X,F)$ is $1$-vsc). Indeed, $\ell_{\infty}(X,F)$ is the dual Banach space of \begin{equation*} \ell_1(X,F^*) \df \Bigl\{u\colon X \to F^*|\quad (\|u\| \df) \sum_{x \in X} \|u(x)\| < \infty\Bigr\} \end{equation*} and a sequence of elements of $\ell_{\infty}(X,F)$ converges in the weak* topology iff it is uniformly bounded and converges (to the same limit) pointwise in the weak topology of $F$ (because $F$ is reflexive). We conclude that $\mathscr{M}(V)$ is sequentially closed in the weak* topology of $\ell_{\infty}(X,F)$. So, the assertion follows from \REM{vsc}.\par The same argument proves that $M_{\mathfrak{M}}(Z,E)$ is $1$-vsc provided $E$ is reflexive and $\mathfrak{M}$ is a $\sigma$-algebra on $Z$. \end{exm} In the last section we shall prove a counterpart of \THM{3} for vsc Banach spaces $F$ which contain no isomorphic copy of $\ell_{\infty}^{\mathbb{R}}$ (see \THM{vsc}). It seems interesting and helpful to know more about vsc Banach spaces. This will be the subject of our further studies. \section{Strong results on vector integrals} As we mentioned in the introductory part, taking into account the Riesz characterisation theorem, continuous linear operators from $C(X,E)$ into arbitrary Banach spaces may be called (abstract) \textit{vector integrals}. Such a terminology may be justified, for example, by a theorem formulated below.
\begin{thm}[Theorem~9 in \S5 of Chapter~III in \cite{din}]{din} For every continuous linear operator $T\colon C(X,E) \to F$ and a closed linear norming subspace $Z$ of $F^*$, there exists a finitely additive set function $\mu\colon \mathfrak{B}(X) \to \mathscr{L}(E,Z^*)$ such that \begin{equation}\label{eqn:int} Tf = \int_X f \dint{\mu} \qquad (f \in C(X,E)). \end{equation} \end{thm} For a proof and the definition of the integral appearing in \eqref{eqn:int}, consult \cite{din}. Other results in this spirit may be found, for example, in \cite{go1,go2} and \cite{lew}.\par The reader should notice that, under the notation of \THM{din}, $Z^*$ differs from $F$, unless $F$ is a dual Banach space. \THM{3} shows that in the case when $F$ is wsc, the set function $\mu$ may always be taken so that it takes values in $\mathscr{L}(E,F)$. (More generally, it suffices that $F$ is vsc and contains no isomorphic copy of $\ell_{\infty}^{\mathbb{R}}$; see \THM{vsc} in the last section.) Let us note that $L^1([0,1])$ is an example of a wsc Banach space which is isomorphic to no dual Banach space. To formulate our first result on vector measures, we recall \begin{dfn}{op-vec} Whenever $\mathfrak{M}$ is a $\sigma$-algebra of subsets of some set, a set function $\mu\colon \mathfrak{M} \to \mathscr{L}(E,F)$ is said to be an \textit{operator measure} if for any $x \in E$ and $\psi \in F^*$, the set function $\mathfrak{M} \ni A \mapsto \psi(\mu(A) x) \in \mathbb{K}$ is a scalar-valued measure.
According to the Orlicz-Pettis theorem (see, for example, Corollary~4 on page~22 in \cite{d-u}), if $\mu$ is an operator measure and $A_n \in \mathfrak{M}$ are pairwise disjoint, then $\mu(\bigcup_{n=1}^{\infty} A_n) x = \sum_{n=1}^{\infty} \mu(A_n) x$ (the convergence in the norm topology) for each $x \in E$.\par Similarly, a set function $\mu\colon \mathfrak{M} \to F$ is said to be a \textit{vector measure} if for any $\psi \in F^*$, the set function $\mathfrak{M} \ni A \mapsto \psi(\mu(A)) \in \mathbb{K}$ is a scalar-valued measure. Equivalently, $\mu$ is a vector measure iff $\mu(\bigcup_{n=1}^{\infty} A_n) = \sum_{n=1}^{\infty} \mu(A_n)$ (the convergence in the norm topology) for any sequence of pairwise disjoint sets $A_n \in \mathfrak{M}$.\par Finally, a set function $\mu\colon \mathfrak{M} \to F^*$ is said to be a \textit{weak* vector measure} if the set function $\mathfrak{M} \ni A \mapsto (\mu(A))(f) \in \mathbb{K}$ is a (scalar-valued countably additive) measure for any $f \in F$.\par It is worth emphasizing here that a set function $\mu\colon \mathfrak{M} \to \mathscr{L}(E,F)$ is an operator measure provided it is a vector measure, but the reverse implication may fail to hold. \end{dfn} The reader is referred to \DEF{i-m} (in the introductory section) to recall the notion of an i-measure. The next result shows that every such set function is a vector measure. \begin{lem}{op} A series $\sum_{n=1}^{\infty} T_n$ with summands in $\mathscr{L}(E,F)$ is convergent in the norm topology of $\mathscr{L}(E,F)$ provided it is independently convergent. In particular, every i-measure is a vector measure. \end{lem} \begin{proof} By the assumptions, for each $n > 0$, the formula $S_n x \df \sum_{k=n}^{\infty} T_k x$ correctly defines a linear operator $S_n\colon E \to F$. It follows from the Uniform Boundedness Principle that $S_n \in \mathscr{L}(E,F)$. It remains to check that $\lim_{n\to\infty} \|S_n\| = 0$.
We assume, on the contrary, that $\|S_n\| > \varepsilon$ for some $\varepsilon > 0$ and infinitely many $n$. We shall mimic the proof of Schur's lemma (on weakly convergent sequences in $\ell_1$). Let $\nu_1$ and $x_1 \in E$ be, respectively, a positive integer and a unit vector such that $\|S_{\nu_1} x_1\| > \varepsilon$. It follows from our hypothesis that there is $\nu_2 > \nu_1$ such that $\|\sum_{k=\nu_1}^{\nu_2-1} T_k x_1\| > \varepsilon$ and $\|S_{\nu_2}\| > \varepsilon$. We continue this procedure: if $\nu_1 < \ldots < \nu_m$ are integers (where $m > 1$) and $x_1,\ldots,x_{m-1}$ are unit vectors of $E$ such that $\|S_{\nu_m}\| > \varepsilon$ and \begin{equation}\label{eqn:x} \Bigl\|\sum_{k=\nu_j}^{\nu_{j+1}-1} T_k x_j\Bigr\| > \varepsilon \end{equation} for each $j \in \{1,\ldots,m-1\}$, we may find an integer $\nu_{m+1} > \nu_m$ and a unit vector $x_m \in E$ for which $\|S_{\nu_{m+1}}\| > \varepsilon$ and \eqref{eqn:x} holds for $j = m$. In this way we obtain a sequence of unit vectors $x_n$ and an increasing sequence of integers $\nu_n$ such that \eqref{eqn:x} holds for each $j$. But it follows from the assumptions of the lemma that the series $\sum_{n=1}^{\infty} (\sum_{k=\nu_n}^{\nu_{n+1}-1} T_k x_n)$ converges in the norm topology, which contradicts \eqref{eqn:x}.\par The additional claim of the lemma simply follows. \end{proof} Another strong property of i-measures is established below. \begin{thm}{fin} Every i-measure has finite total semivariation. \end{thm} \begin{proof} Let $\mu\colon \mathfrak{M} \to \mathscr{L}(E,F)$ be an i-measure defined on a $\sigma$-algebra $\mathfrak{M}$ of subsets of a set $Z$. Suppose, on the contrary, that $\|\mu\|_Z = \infty$.
For an arbitrary set $A \in \mathfrak{M}$ we may similarly define $\|\mu\|_A \in [0,\infty]$ as the supremum of all numbers of the form $\|\sum_{n=1}^N \mu(A_n) x_n\|$ where $N$ is finite, $A_n \in \mathfrak{M}$ are pairwise disjoint subsets of $A$ and $x_n \in E$ have norms not exceeding $1$ (compare \eqref{eqn:semi}). The set function $\mathfrak{M} \ni A \mapsto \|\mu\|_A \in [0,\infty]$ is called the \textit{semivariation} of $\mu$ (see \S4 of Chapter~I in \cite{din}) and is known to have the following (simple) properties: \begin{enumerate}[(SM1)] \item $\|\mu\|_A \leqslant \|\mu\|_B$ provided $A, B \in \mathfrak{M}$ are such that $A \subset B$; \item $\|\mu\|_{\bigcup_{n=1}^{\infty} A_n} \leqslant \sum_{n=1}^{\infty} \|\mu\|_{A_n}$ for any collection of sets $A_n \in \mathfrak{M}$. \end{enumerate} We divide the proof into a few separate cases.\par First assume that \begin{itemize} \item[(C1)] every set $B \in \mathfrak{M}$ with $\|\mu\|_B = \infty$ may be written in the form $B = B_1 \cup B_2$ where $B_1, B_2 \in \mathfrak{M}$ are pairwise disjoint and $\|\mu\|_{B_1} = \|\mu\|_{B_2} = \infty$. \end{itemize} Using (C1) and an induction argument, we easily find an infinite sequence of pairwise disjoint sets $B_n \in \mathfrak{M}$ for which $\|\mu\|_{B_n} = \infty$. So, it follows from the definition of the semivariation that for each $n$ we may find finite systems $z_1^{(n)},\ldots,z_{N_n}^{(n)} \in E$ of vectors whose norms are not greater than $1$ and pairwise disjoint sets $C_1^{(n)},\ldots, C_{N_n}^{(n)} \in \mathfrak{M}$ contained in $B_n$ such that \begin{equation}\label{eqn:c1} \Bigl\|\sum_{k=1}^{N_n} \mu(C_k^{(n)}) z_k^{(n)}\Bigr\| > 1. \end{equation} Now it suffices to arrange all sets $C_j^{(n)}$ in a sequence $A_1,A_2,\ldots$ and the vectors $z_j^{(n)}$ in a corresponding sequence $x_1,x_2,\ldots$.
Since the sets $A_n$ are pairwise disjoint, we conclude from the definition of an i-measure that the series \begin{equation}\label{eqn:series} \sum_{n=1}^{\infty} \mu(A_n) x_n \end{equation} is unconditionally convergent (in the norm topology), which contradicts \eqref{eqn:c1}, since the norms of the consecutive blocks of an unconditionally convergent series must tend to $0$. Thus, in that case the proof is complete.\par Now we assume that there is a set $W \in \mathfrak{M}$ with $\|\mu\|_W = \infty$ such that whenever $W = A \cup B$ and $A, B \in \mathfrak{M}$ are pairwise disjoint, then $\|\mu\|_A < \infty$ or $\|\mu\|_B < \infty$. We then conclude from (SM1) that \begin{itemize} \item[(C2)] if $A, B \in \mathfrak{M}$ are two disjoint subsets of $W$ and $\|\mu\|_A = \infty$, then $\|\mu\|_B < \infty$. \end{itemize} This case is divided into two subcases. First we additionally assume that there are a subset $V \in \mathfrak{M}$ of $W$ with $\|\mu\|_V = \infty$ and a number $\varepsilon > 0$ such that \begin{itemize} \item[(C3)] if $D \in \mathfrak{M}$ is a subset of $V$ with $\|\mu\|_D = \infty$, then there is a set $B \in \mathfrak{M}$ contained in $D$ for which $\varepsilon < \|\mu\|_B < \infty$. \end{itemize} Using (C3) for $D = V$, we may find a set $B_1 \in \mathfrak{M}$ contained in $V$ such that $\varepsilon < \|\mu\|_{B_1} < \infty$. We infer from (SM2) that $\|\mu\|_V \leqslant \|\mu\|_{V \setminus B_1} + \|\mu\|_{B_1}$ and hence $\|\mu\|_{V_1} = \infty$ for $V_1 \df V \setminus B_1$. Repeating this reasoning for $D = V_1$, we may find a set $B_2 \in \mathfrak{M}$ contained in $V_1$ for which $\varepsilon < \|\mu\|_{B_2} < \infty$. Then $\|\mu\|_{V_2} = \infty$ for $V_2 \df V_1 \setminus B_2$. Continuing this procedure, we obtain a sequence of pairwise disjoint sets $B_n \in \mathfrak{M}$ such that $\|\mu\|_{B_n} > \varepsilon$.
Now repeating the reasoning from the previous case, we see that for each $n$ there are finite systems $z_1^{(n)},\ldots,z_{N_n}^{(n)} \in E$ of vectors whose norms are not greater than $1$ and pairwise disjoint sets $C_1^{(n)},\ldots,C_{N_n}^{(n)} \in \mathfrak{M}$ contained in $B_n$ such that $\|\sum_{k=1}^{N_n} \mu(C_k^{(n)}) z_k^{(n)}\| > \varepsilon$. As shown before, this leads us to a contradiction with the fact that some series of the form \eqref{eqn:series} is unconditionally convergent.\par Finally, we add to (C2) the negation of (C3): \begin{itemize} \item[(C4)] whenever $V \in \mathfrak{M}$ is a subset of $W$ with $\|\mu\|_V = \infty$ and $\varepsilon$ is a positive real number, then there exists a set $D = D(V,\varepsilon) \in \mathfrak{M}$ contained in $V$ such that $\|\mu\|_D = \infty$ and every subset $B \in \mathfrak{M}$ of $D$ with $\|\mu\|_B < \infty$ satisfies $\|\mu\|_B \leqslant \varepsilon$. \end{itemize} We now define by a recursive formula sets $V_n \in \mathfrak{M}$: $V_0 \df D(W,1)$ and $V_n \df D(V_{n-1},2^{-n})$ for $n > 0$. Put $V \df \bigcap_{n=0}^{\infty} V_n$ and, for $n > 0$, $L_n \df V_{n-1} \setminus V_n$. Since the sets $V_n$ decrease, we see that \begin{equation}\label{eqn:V0} V_0 = V \cup \bigcup_{n=1}^{\infty} L_n. \end{equation} Further, it follows from (C2) that $\|\mu\|_{L_n} < \infty$ (because $\|\mu\|_{V_n} = \infty$ and $L_n \cap V_n = \varnothing$) and hence, thanks to the defining property of $V_{n-1}$ (see (C4)), $\|\mu\|_{L_n} \leqslant 2^{1-n}$. So, (SM2) applied to \eqref{eqn:V0} gives \begin{equation}\label{eqn:V} \|\mu\|_V = \infty. \end{equation} Moreover, since $V \subset V_n$ for each $n$, we deduce from the property of $V_n$ specified in (C4) that \begin{itemize} \item[($*$)] for every subset $B \in \mathfrak{M}$ of $V$, $\|\mu\|_B \in \{0,\infty\}$.
\end{itemize} We now fix a finite collection $A_1,\ldots,A_N \in \mathfrak{M}$ of pairwise disjoint subsets of $V$ and a corresponding system $x_1,\ldots,x_N \in E$ of vectors whose norms do not exceed $1$. We infer from (C2) and ($*$) that there is an index $k \in \{1,\ldots,N\}$ such that $\|\mu\|_{A_j} = 0$ for any $j \neq k$. Noticing that $\|\mu(A_j) x_j\| \leqslant \|\mu\|_{A_j}$, we get $\sum_{j=1}^N \mu(A_j) x_j = \mu(A_k) x_k$. Consequently, \begin{align*} \|\mu\|_V &= \sup\{\|\mu(A) x\|\colon\ A \in \mathfrak{M},\ A \subset V,\ x \in E,\ \|x\| \leqslant 1\}\\ &= \sup\{\|\mu(A)\|\colon\ A \in \mathfrak{M},\ A \subset V\}. \end{align*} Since $\mu$ is a vector measure (by \LEM{op}), its range is a bounded set in $\mathscr{L}(E,F)$ (see, for example, Corollary~19 on page~9 in \cite{din}; a stronger property of countably additive vector measures is the content of the Bartle-Dunford-Schwartz theorem, see Corollary~7 on page~14 in \cite{din}), and therefore the above formula contradicts \eqref{eqn:V}, which finishes the whole proof. \end{proof} As a consequence of \THM{fin}, we obtain a generalisation of the Bartle-Dunford-Schwartz theorem on the absolute continuity of vector measures with respect to some finite nonnegative measures (see, for example, Corollary~6 on page~14 in \cite{d-u}). Below we continue the notation introduced in the above proof. \begin{cor}{abs} If $\mu\colon \mathfrak{M} \to \mathscr{L}(E,F)$ is an i-measure, then there exists a measure $\lambda\colon \mathfrak{M} \to [0,\infty)$ such that the following condition is satisfied. \begin{itemize} \item[(ac)] For every $\varepsilon > 0$ there is $\delta(\varepsilon) > 0$ such that $\|\mu\|_A \leqslant \varepsilon$ whenever $A \in \mathfrak{M}$ satisfies $\lambda(A) \leqslant \delta(\varepsilon)$. \end{itemize} What is more, the measure $\lambda$ may be taken so that for each $A \in \mathfrak{M}$, \begin{equation}\label{eqn:mut-abs} 0 \leqslant \lambda(A) \leqslant \|\mu\|_A. 
\end{equation} \end{cor} Before giving a proof, we wish to emphasize that the above result is \textit{not} a special case of the Bartle-Dunford-Schwartz theorem mentioned above, because the semivariation of an i-measure is, in general, greater than the semivariation of a vector measure, defined in Definition~4 on page~2 in \cite{d-u}. \begin{proof} Let $\Gamma$ be the set of all finite systems $\gamma = (A_1,\ldots,A_N;x_1,\ldots,x_N)$ consisting of pairwise disjoint sets $A_n \in \mathfrak{M}$ and vectors $x_n \in E$ whose norms are not greater than $1$. For each such $\gamma$ we define a set function $\mu_{\gamma}\colon \mathfrak{M} \to F$ by $\mu_{\gamma}(B) \df \sum_{j=1}^N \mu(B \cap A_j) x_j$ (provided $\gamma = (A_1,\ldots,A_N;x_1,\ldots,x_N)$). It is easy to see that $\mu_{\gamma}$ is a vector measure. Observe also that \begin{equation}\label{eqn:svar} \sup_{\gamma\in\Gamma} \|\mu_{\gamma}(B)\| = \|\mu\|_B \qquad (B \in \mathfrak{M}). \end{equation} The above formula, combined with \THM{fin}, yields that the collection $\{\mu_{\gamma}\}_{\gamma\in\Gamma}$ is uniformly bounded. Further, let $A_n \in \mathfrak{M}$ be pairwise disjoint sets. We claim that \begin{equation}\label{eqn:usa} \lim_{n\to\infty} \|\mu\|_{A_n} = 0. \end{equation} Indeed, if not, we may and do assume (after passing to a subsequence, if necessary) that $\|\mu\|_{A_n} > \varepsilon$ for some positive real number $\varepsilon$ and all $n$. But this is impossible, as shown in the proof of \THM{fin} (in the part concerning (C3)). So, \eqref{eqn:usa} holds which, combined with \eqref{eqn:svar}, means that the collection $\{\mu_{\gamma}\}_{\gamma\in\Gamma}$ is \textit{uniformly strongly additive} (consult Proposition~17 on page~8 in \cite{d-u}).
We now deduce from Corollary~5 on page~13 in \cite{d-u} that there is a measure $\lambda\colon \mathfrak{M} \to [0,\infty)$ such that \eqref{eqn:mut-abs} holds and for any $\varepsilon > 0$ there is $\delta > 0$ for which $\sup_{\gamma\in\Gamma} \|\mu_{\gamma}(A)\| \leqslant \varepsilon$ provided $\lambda(A) \leqslant \delta$. So, a look at \eqref{eqn:svar} finishes the proof. \end{proof} Whenever $\mu$ is an i-measure and $\lambda$ is a probability measure, both defined on a common $\sigma$-algebra, we shall write $\mu \ll \lambda$ if (ac) is fulfilled.\par As a consequence of \COR{abs}, we obtain the following generalisation of a theorem of Pettis (see Theorem~1 on page~10 in \cite{d-u}). \begin{cor}{abs-abs} For an i-measure $\mu\colon \mathfrak{M} \to \mathscr{L}(E,F)$ and a measure $\nu\colon \mathfrak{M} \to [0,\infty)$, $\mu \ll \nu$ iff $\mu$ vanishes on all sets on which $\nu$ vanishes. \end{cor} \begin{proof} The `only if' part is immediate. To show the `if' part, assume $\mu$ vanishes on all sets on which $\nu$ vanishes. By \COR{abs}, there exists a measure $\lambda\colon \mathfrak{M} \to [0,\infty)$ such that \begin{equation}\label{eqn:ll} \mu \ll \lambda \end{equation} and \eqref{eqn:mut-abs} is fulfilled. We infer from these two conditions that $\lambda(A) = 0$ iff $\|\mu\|_A = 0$. But $\|\mu\|_A = 0$ iff $\mu$ vanishes on all measurable subsets of $A$. We conclude that if $\nu(A) = 0$, then $\lambda(A) = 0$. So, it follows from the Radon-Nikodym theorem that there exists an $\mathfrak{M}$-measurable function $g\colon Z \to [0,\infty)$ (where $Z$ is the set on which $\mathfrak{M}$ is a $\sigma$-algebra) such that $\lambda(A) = \int_A g \dint{\nu}$ for all $A \in \mathfrak{M}$. In particular, $g$ is $\nu$-integrable and therefore for any $\varepsilon > 0$ there exists $\delta > 0$ such that $\int_A g \dint{\nu} \leqslant \varepsilon$ provided $\nu(A) \leqslant \delta$. This property, combined with \eqref{eqn:ll}, finishes the proof.
\end{proof} \begin{rem}{fin} From \THM{fin} one may deduce the following result, which, to the best of the author's knowledge, is new: \begin{quote} \textit{The variation of a vector measure $\mu\colon \mathfrak{M} \to E$ is a finite measure iff $\sum_{n=1}^{\infty} \|\mu(A_n)\| < \infty$ for any countable collection of pairwise disjoint sets $A_n \in \mathfrak{M}$.} \end{quote} The necessity is immediate, while the sufficiency follows from the fact that a measure satisfying the condition formulated above may naturally be identified with an i-measure, as described below.\par Assume $\mu\colon \mathfrak{M} \to E$ is a vector measure which satisfies the above condition. Since every vector $x$ of $E$ naturally induces a (continuous) linear operator from $\mathbb{K}$ into $E$ (which sends $1$ to $x$), we may identify $\mu$ with a set function of $\mathfrak{M}$ into $\mathscr{L}(\mathbb{K},E)$. Under such an identification, $\mu$ turns out to be an i-measure whose total semivariation is equal to the total variation of $\mu$, regarded as an $E$-valued set function. We leave the details to the reader. \end{rem} The book \cite{din} is devoted to integration of vector-valued functions with respect to vector-valued set functions. Below we adapt this concept to define integration with respect to i-measures, which turns out to be much easier and more elegant. \begin{dfn}{vint} Let $\mu\colon \mathfrak{M} \to \mathscr{L}(E,F)$ be an i-measure defined on a $\sigma$-algebra $\mathfrak{M}$ of subsets of a set $Z$. Denote by $S_{\mathfrak{M}}(Z,E)$ the set of all functions $f \in \ell_{\infty}(Z,E)$ such that the set $f(Z)$ is countable and $f^{-1}(\{e\}) \in \mathfrak{M}$ for any $e \in E$. It is easy to see that $S_{\mathfrak{M}}(Z,E)$ is a linear subspace of $\ell_{\infty}(Z,E)$.
For any $f \in S_{\mathfrak{M}}(Z,E)$ we define \begin{equation}\label{eqn:i-int} \int_Z f \dint{\mu} = \int_Z f(z) \dint{\mu(z)} \df \sum_{e \in E} \mu(f^{-1}(\{e\})) e \end{equation} (the above series is unconditionally convergent; see \DEF{i-m}) and call $\int_Z f \dint{\mu}$ the \textit{integral} of $f$ with respect to $\mu$.\par The uniform closure of $S_{\mathfrak{M}}(Z,E)$ coincides with $M_{\mathfrak{M}}(Z,E)$ (see \DEF{measurable}). \end{dfn} Our aim is to extend the integral defined above from $S_{\mathfrak{M}}(Z,E)$ to $M_{\mathfrak{M}}(Z,E)$. This is possible thanks to \THM{fin} and the following \begin{lem}{cont} For every i-measure $\mu\colon \mathfrak{M} \to \mathscr{L}(E,F)$ \textup{(}where $\mathfrak{M}$ is a $\sigma$-algebra on a set $Z$\textup{)}, the operator $T\colon S_{\mathfrak{M}}(Z,E) \ni f \mapsto \int_Z f \dint{\mu} \in F$ is linear and continuous. Moreover, $\|T\| = \|\mu\|_Z$. \end{lem} A simple proof of \LEM{cont} is left to the reader. \begin{dfn}{VINT} Let $\mu\colon \mathfrak{M} \to \mathscr{L}(E,F)$ be an i-measure defined on a $\sigma$-algebra $\mathfrak{M}$ of subsets of a set $Z$. For any $f \in M_{\mathfrak{M}}(Z,E)$, the \textit{integral} $\int_Z f\dint{\mu} = \int_Z f(z) \dint{\mu(z)}$ of $f$ with respect to $\mu$ is defined as $\bar{T} f$ where $\bar{T}\colon M_{\mathfrak{M}}(Z,E) \to F$ is the unique continuous extension of $T\colon S_{\mathfrak{M}}(Z,E) \ni f \mapsto \int_Z f \dint{\mu} \in F$. Then $\|\int_Z f \dint{\mu}\| \leqslant \|f\| \cdot \|\mu\|_Z$ for any $f \in M_{\mathfrak{M}}(Z,E)$. \end{dfn} Our main result on i-measures is the following \begin{thm}[Bounded Weak Convergence Theorem]{bwc} Let $\mu\colon \mathfrak{M} \to \mathscr{L}(E,F)$ be an i-measure \textup{(}where $\mathfrak{M}$ is a $\sigma$-algebra on a set $Z$\textup{)}.
If $f_n \in M_{\mathfrak{M}}(Z,E)$ are uniformly bounded and converge pointwise to $f\colon Z \to E$ in the weak topology of $E$, then $\int_Z f_n \dint{\mu}$ converge to $\int_Z f \dint{\mu}$ in the weak topology of $F$. \end{thm} The main difficulty in the proof of the above result is that sequences which weakly converge to $0$ may consist of unit vectors. We precede the proof of \THM{bwc} by a few auxiliary results. From now until the end of the proof, $Z$, $\mathfrak{M}$ and $\mu$ are as specified in \THM{bwc}.\par We begin with a counterpart of the Bartle Bounded Convergence Theorem \cite{bar} (see also Theorem~1 on page~56 in \cite{d-u}) for i-measures. Other results on bounded and dominated convergence theorems may be found in \cite{t-w} and \cite{s-t}. \begin{thm}[Bounded Norm Convergence Theorem]{bnc} If $f_n \in M_{\mathfrak{M}}(Z,E)$ are uniformly bounded and converge pointwise to $f\colon Z \to E$ in the norm topology of $E$, then $\int_Z f_n \dint{\mu}$ converge to $\int_Z f \dint{\mu}$ in the norm topology of $F$. \end{thm} \begin{proof} We mimic the proof of the Bartle Convergence Theorem presented in \cite{d-u}. It follows from \COR{abs} that there is a probability measure $\lambda\colon \mathfrak{M} \to [0,1]$ such that (ac) holds. We need to show that $\|\int_Z g_n \dint{\mu}\|$ converge to $0$ for $g_n \df f_n - f$. For each $n$, there is $u_n \in S_{\mathfrak{M}}(Z,E)$ such that $\|g_n - u_n\| < 2^{-n}$ and $\|\int_Z g_n \dint{\mu} - \int_Z u_n \dint{\mu}\| < 2^{-n}$. We conclude that it suffices to show that $\|\int_Z u_n \dint{\mu}\|$ converge to $0$. Note that the functions $u_n$ are uniformly bounded and converge pointwise to $0$ in the norm topology of $E$. Suppose $\|u_n(z)\| \leqslant C$ for all $n$ and $z \in Z$ (and a positive constant $C$). Fix $\varepsilon > 0$ and put $\delta = \delta(\varepsilon/C)$ (see (ac)).
It follows from Egoroff's theorem that there exists a set $A \in \mathfrak{M}$ such that $\lambda(A) \leqslant \delta$ and the functions $u_n$ converge uniformly to $0$ on $Z \setminus A$. So, denoting (as usual) by $j_A$ and $j_{Z \setminus A}$ the characteristic functions of $A$ and $Z \setminus A$ (respectively), we see that the functions $j_{Z \setminus A} u_n$ converge uniformly to $0$. Consequently, $\lim_{n\to\infty} \|\int_Z j_{Z \setminus A} u_n \dint{\mu}\| = 0$. Further, it follows from the definition of the vector integral that $\|\int_Z j_A u_n \dint{\mu}\| \leqslant \|u_n\| \cdot \|\mu\|_A$. Finally, from the choice of $A$ and $\delta$ we infer that $\|\mu\|_A \leqslant \varepsilon/C$ and therefore \begin{equation*} \limsup_{n\to\infty} \Bigl\|\int_Z u_n \dint{\mu}\Bigr\| \leqslant \limsup_{n\to\infty} \Bigl\|\int_Z j_A u_n \dint{\mu}\Bigr\| + \limsup_{n\to\infty} \Bigl\|\int_Z j_{Z \setminus A} u_n \dint{\mu}\Bigr\| \leqslant \varepsilon \end{equation*} and we are done. \end{proof} \begin{lem}{A} Let $Y$ be a compact metrisable space and $u_n$ be members of $M(Y,E)$. Then the set $S$ of all $y \in Y$ for which $u_n(y)$ converge to $0$ in the weak topology of $E$ is coanalytic \textup{(}in the sense of Suslin\textup{)}. \end{lem} \begin{proof} Let us recall that $S$ is coanalytic provided $Y \setminus S$ coincides with the image of a Borel subset of $Y \times [0,1]$ under a continuous function, which we shall now show.\par Let $E_0$ be the closed linear span of the set $\bigcup_{n=1}^{\infty} u_n(Y)$. Since $E_0$ is separable, there exists an isometric linear operator $U\colon E_0 \to C([0,1],\mathbb{K})$. Since sequences of elements of $C([0,1],\mathbb{K})$ which converge to $0$ in the weak topology (of $C([0,1],\mathbb{K})$) are simply characterised, we infer that for an arbitrary sequence of elements $z_n$ of $E$, \begin{itemize} \item[(w)] $z_n$ converge to $0$ in the weak topology of $E$ iff $U z_n$ converge pointwise to $0$. 
\end{itemize} Now define $v_n\colon Y \times [0,1] \to \mathbb{K}$ by $v_n(y,t) \df U(u_n(y))(t)$. Then $v_n \in M(Y \times [0,1],\mathbb{K})$ (compare \LEM{2}), which means, by the metrisability of $Y$, that $v_n$ are Borel. We conclude that the set $B \df \{(y,t) \in Y \times [0,1]\colon\ \lim_{n\to\infty} v_n(y,t) = 0\}$ is Borel in $Y \times [0,1]$. Note that (w) implies that \begin{equation*} y \in S \iff \{y\} \times [0,1] \subset B. \end{equation*} So, denoting by $\pi\colon Y \times [0,1] \to Y$ the natural projection, we see that $S = Y \setminus \pi((Y \times [0,1]) \setminus B)$, which finishes the proof. \end{proof} \begin{lem}{B} Let $u_n \in S_{\mathfrak{M}}(Z,E)$ be uniformly bounded and converge pointwise to $0$ in the weak topology of $E$ and let $\psi \in F^*$. There exist a compact metrisable space $Y$, an i-measure $\nu\colon \mathfrak{M}(Y) \to \mathscr{L}(E,F)$, and uniformly bounded functions $v_n \in M(Y,E)$ which converge pointwise to $0$ in the weak topology of $E$ and satisfy \textup{(}for each $n$\textup{)} \begin{equation}\label{eqn:same} \psi\Bigl(\int_Z u_n \dint{\mu}\Bigr) = \psi\Bigl(\int_Y v_n \dint{\nu}\Bigr). \end{equation} \end{lem} \begin{proof} Denote by $\mathscr{B}$ the collection of all nonempty sets of the form $u_n^{-1}(\{e\})$ where $n$ and $e \in E$ are arbitrary. Observe that $\mathscr{B}$ is countable (and nonempty). So, we may arrange all members of $\mathscr{B}$ in an infinite sequence $A_1,A_2,\ldots$ (repeating, if necessary, some of them). For simplicity, let $j_n\colon Z \to \{0,1\}$ stand for the characteristic function of $A_n$. Put $Y \df \{0,1\}^{\omega}$ (that is, $Y$ is the countably infinite power of $\{0,1\}$) and equip $Y$ with the product topology. Define $\Phi\colon Z \to Y$ by $\Phi(z) \df (j_n(z))_{n=1}^{\infty}$. It is easy to see that $\Phi^{-1}(B) \in \mathfrak{M}$ for any $B \in \mathfrak{M}(Y)$ (since $Y$ is metrisable, $\mathfrak{M}(Y)$ consists of all Borel sets in $Y$).
Further, let $\nu\colon \mathfrak{M}(Y) \to \mathscr{L}(E,F)$ be given by $\nu(B) \df \mu(\Phi^{-1}(B))$. It is readily seen that $\nu$ is an i-measure such that $\|\nu\|_Y \leqslant \|\mu\|_Z$. Further, we put $Z' \df \Phi(Z)$ and $Y_m \df \{(y_n)_{n=1}^{\infty} \in Y\colon\ y_m = 1\} (\in \mathfrak{M}(Y))$. Observe that \begin{equation}\label{eqn:trace} \Phi(A_n) = Z' \cap Y_n \end{equation} for any $n > 0$. We claim that there exist uniformly bounded functions $w_n \in M(Y,E)$ such that for any superset $C \in \mathfrak{M}(Y)$ of $Z'$ and each $n$, \begin{equation}\label{eqn:u-w} \int_Z u_n \dint{\mu} = \int_Y j_C(y) w_n(y) \dint{\nu(y)} \end{equation} where $j_C\colon Y \to \{0,1\}$ is the characteristic function of $C$. We may define the functions $w_n$ as follows. Fix $n$ and for simplicity put (for a moment) $u = u_n$. Write $u(Z) = \{e_1,e_2, \ldots\}$ where the vectors $e_k$ are distinct (so, there can be finitely many such vectors) and denote by $m_k$ a natural number such that $A_{m_k} = u^{-1}(\{e_k\})$. Notice that the sets $A_{m_1},A_{m_2},\ldots$ are pairwise disjoint and cover $Z$. It follows from the definition of $\Phi$ that the sets $\Phi(A_{m_1}),\Phi(A_{m_2}),\ldots$ are pairwise disjoint as well (although $\Phi$ may not be one-to-one). Thus, we infer from \eqref{eqn:trace} that there are \textit{pairwise disjoint} sets $B_k \in \mathfrak{M}(Y)$ such that \begin{equation}\label{eqn:Bk} B_k \cap Z' = \Phi(A_{m_k}). \end{equation} We define $w_n$ by the rules: $w_n(y) = e_k$ for $y \in B_k$ and $w_n(y) = 0$ if $y \notin \bigcup_k B_k$. Since $w_n(Y) \subset u_n(Z) \cup \{0\}$, we see that the functions $w_n$ are uniformly bounded (it is also clear that they belong to $M(Y,E)$). Let us briefly check \eqref{eqn:u-w}. If $Z' \subset C \in \mathfrak{M}(Y)$ and $w \df j_C w_n$, then (under the above notation) $\Phi(A_{m_k}) = (B_k \cap C) \cap Z'$, thanks to \eqref{eqn:Bk}. So, $\Phi^{-1}(w^{-1}(\{e_k\})) = A_{m_k}$ provided $e_k \neq 0$.
Hence \begin{equation*} \int_Y w \dint{\nu} = \sum_{e \in E} \nu(w^{-1}(\{e\})) e = \sum_{e_k \neq 0} \mu(A_{m_k}) e_k = \int_Z u_n \dint{\mu}, \end{equation*} which finishes the proof of \eqref{eqn:u-w}.\par Now let $S$ consist of all $y \in Y$ for which $w_n(y)$ converge to $0$ in the weak topology of $E$. It follows from \LEM{A} that $S$ is coanalytic. Observe that $w_n \circ \Phi = u_n$ (thanks to \eqref{eqn:Bk}) and therefore $Z' \subset S$. Denote by $\nu_{\psi}\colon \mathfrak{M}(Y) \to \mathscr{L}(E,\mathbb{K}) = E^*$ the i-measure given by $\nu_{\psi}(A) = \psi \circ \nu(A)$. Now let $\lambda\colon \mathfrak{M}(Y) \to [0,\infty]$ be the so-called \textit{variation} of $\nu_{\psi}$; that is, \begin{equation*} \lambda(A) = \sup\Bigl\{\sum_{n=1}^{\infty} \|\nu_{\psi}(A_n)\|\colon\ A_n \in \mathfrak{M}(Y) \textup{ are pairwise disjoint subsets of } A\Bigr\}. \end{equation*} It follows from Proposition~4 (on page~54) in \S4 of Chapter~I in \cite{din} (and may easily be checked) that $\lambda(Y) \leqslant \|\nu_{\psi}\|_Y$. So, $\lambda$ is a finite measure. Since coanalytic sets are \textit{measurable} with respect to any finite Borel measure (consult, for example, Theorem~A.13 in \cite{tak}; see also Theorem~1 in \S4 of Chapter~XIII in \cite{k-m}), we deduce that there are two sets $A, B \in \mathfrak{M}(Y)$ such that $A \subset S \subset B$ and $\lambda(B \setminus A) = 0$. Consequently, $B \supset Z'$ and thus \eqref{eqn:u-w} holds for $C = B$. We put $v_n \df j_A w_n \in M(Y,E)$. We see that the functions $v_n$ are uniformly bounded and converge pointwise to $0$ in the weak topology of $E$, since $A \subset S$.
To show \eqref{eqn:same}, we note that $\int_Y (v_n - j_B w_n) \dint{\nu_{\psi}} = 0$ (since $\lambda(B \setminus A) = 0$) and hence \begin{multline*} \psi\Bigl(\int_Y v_n \dint{\nu}\Bigr) = \psi\Bigl(\sum_{e \in E} \nu(v_n^{-1}(\{e\})) e\Bigr) = \sum_{e \in E} \nu_{\psi}(v_n^{-1}(\{e\})) e = \int_Y v_n \dint{\nu_{\psi}}\\ = \int_Y j_B w_n \dint{\nu_{\psi}} = \psi\Bigl(\int_Y j_B w_n \dint{\nu}\Bigr) = \psi\Bigl(\int_Z u_n \dint{\mu}\Bigr). \end{multline*} \end{proof} \begin{lem}{C} Let $Y$ be a compact space and $\nu\colon \mathfrak{M}(Y) \to \mathscr{L}(E,F)$ be an i-measure. Let $T\colon C(Y,E) \to F$ be given by $T f \df \int_Y f \dint{\nu}$ and let $\bar{T}\colon M(Y,E) \to F^{**}$ be as specified in \COR{C-M}. Then $\|T\| = \|\nu\|_Y$ and \begin{equation}\label{eqn:Tint} \bar{T} f = \int_Y f \dint{\nu} \qquad (f \in M(Y,E)). \end{equation} In particular, $\bar{T}\colon M(Y,E) \to F$. \end{lem} \begin{proof} Denote by $S f$ the right-hand side expression of \eqref{eqn:Tint}. Then $S\colon M(Y,E) \to F$ is linear, continuous and $\|S\| = \|\nu\|_Y$. So, to conclude the whole assertion, it suffices to show that $S = \bar{T}$. Since the weak topology of $F$ coincides with the topology on $F$ inherited from the weak* topology of $F^{**}$, we infer from \THM{bnc} and \COR{C-M} that for $L \df S$ as well as $L \df \bar{T}$ one has \begin{itemize} \item[(bc*)] whenever $u_n \in M(Y,E)$ are uniformly bounded and converge pointwise to $u\colon Y \to E$ in the norm topology of $E$, then $L u_n$ converge to $L u$ in the weak* topology of $F^{**}$. \end{itemize} Further, the proof of \LEM{1} shows that $M(Y,E)$ coincides with the smallest set among all $B \subset \ell_{\infty}(Y,E)$ which include $C(Y,E)$ and satisfy (M1') with $Y$ inserted in place of $X$ (see the proof of \LEM{1}). So, we easily infer from this property and from (bc*) that $S = \bar{T}$. 
\end{proof} \begin{proof}[Proof of \THM{bwc}] We begin similarly as in the proof of \THM{bnc}: it is enough to show that $\int_Z g_n \dint{\mu}$ converge to $0$ in the weak topology of $F$ for $g_n \df f_n - f$. For each $n$, there is $u_n \in S_{\mathfrak{M}}(Z,E)$ such that $\|g_n - u_n\| < 2^{-n}$ and $\|\int_Z g_n \dint{\mu} - \int_Z u_n \dint{\mu}\| < 2^{-n}$. We conclude that it suffices to show that $\int_Z u_n \dint{\mu}$ converge to $0$ in the weak topology of $F$. Note that the functions $u_n$ are uniformly bounded and converge pointwise to $0$ in the weak topology of $E$. Let $\psi \in F^*$. We only need to show that \begin{equation}\label{eqn:lim} \lim_{n\to\infty}\psi\Bigl(\int_Z u_n \dint{\mu}\Bigr) = 0. \end{equation} It follows from \LEM{B} that we may and do assume $Z$ is a compact topological space and $\mathfrak{M} = \mathfrak{M}(Z)$. Let $T\colon C(Z,E) \to F$ be given by $T f \df \int_Z f \dint{\mu}$ and let $\bar{T}\colon M(Z,E) \to F^{**}$ be as specified in \COR{C-M}. We infer from \LEM{C} that $\int_Z u_n \dint{\mu} = \bar{T} u_n$, and from (BC*) that $\bar{T} u_n$ converge to $0$ in the weak* topology of $F^{**}$. Consequently, \eqref{eqn:lim} is fulfilled. \end{proof} \begin{rem}{integrable} \THM{bnc} enables us to define vector-valued integrable functions with respect to i-measures. Namely, if $\mu\colon \mathfrak{M} \to \mathscr{L}(E,F)$ is an i-measure on a set $Z$ and $g\colon Z \to E$ is an arbitrary $\mathfrak{M}$-measurable (in the sense of \DEF{measurable}) function, we put $\mathfrak{D}(g) \df \{A \in \mathfrak{M}\colon\ j_A g \in \ell_{\infty}(Z,E)\}$. Notice that $\mathfrak{D}(g)$ is an ideal in $\mathfrak{M}$ such that every set $A$ in $\mathfrak{M}$ is a countable union of members of $\mathfrak{D}(g)$.
We call the function $g$ \textit{integrable} if the set function $\nu\colon \mathfrak{D}(g) \ni A \mapsto \int_Z j_A g \dint{\mu} \in F$ extends to a (necessarily unique) vector measure $\bar{\nu}\colon \mathfrak{M} \to F$. If this happens, for each $A \in \mathfrak{M}$ we define the \textit{integral} $\int_A g \dint{\mu}$ (of $g$ on $A$ with respect to $\mu$) as $\bar{\nu}(A)$. Notice that in the above situation, the set function $\nu$ is always a \textit{conditional} vector measure; that is, if $A_n \in \mathfrak{D}(g)$ are pairwise disjoint and $\bigcup_{n=1}^{\infty} A_n \in \mathfrak{D}(g)$, then $\nu(\bigcup_{n=1}^{\infty} A_n) = \sum_{n=1}^{\infty} \nu(A_n)$, which follows from \THM{bnc}. In particular, every bounded $\mathfrak{M}$-measurable function is integrable. One may show that integrable functions form a vector space and the integral $\int_A$ (with respect to $\mu$) is a linear operator (for each $A \in \mathfrak{M}$). We will not develop this concept here---this remark has only an introductory character. \end{rem} \section{Weak* i-measures} This part is devoted to generalising the concept of i-measures to the context of weak* topologies of dual Banach spaces and to giving representations of continuous linear operators from $C(X,E)$ into dual Banach spaces. To make the presentation simple and transparent, for $T \in \mathscr{L}(E,F^*)$ and $f \in F$ we shall write $\scalar{f}{T(\cdot)}$ to denote the functional $E \ni x \mapsto (T x) f \in \mathbb{K}$.\par We begin with \begin{dfn}{w*} A series $\sum_{n=1}^{\infty} T_n$ with summands in $\mathscr{L}(E,F^*)$ is said to be \textit{independently w*-convergent} if the series $\sum_{n=1}^{\infty} \scalar{f}{T_n(\cdot)}$ (of elements of $\mathscr{L}(E,\mathbb{K})$) is independently convergent for every $f \in F$.
A \textit{weak* i-measure} is a set function $\mu\colon \mathfrak{M} \to \mathscr{L}(E,F^*)$ (where $\mathfrak{M}$ is a $\sigma$-algebra on a set $Z$) such that for any $f \in F$, the set function $\mathfrak{M} \ni A \mapsto \scalar{f}{\mu(A)(\cdot)} \in \mathscr{L}(E,\mathbb{K})$ is an i-measure. Equivalently, $\mu$ is a weak* i-measure iff \begin{equation*} \scalar{f}{\mu\Bigl(\bigcup_{n=1}^{\infty} A_n\Bigr)(\cdot)} = \sum_{n=1}^{\infty} \scalar{f}{\mu(A_n)(\cdot)} \end{equation*} (for any $f \in F$) and the series $\sum_{n=1}^{\infty} \mu(A_n)$ is independently w*-convergent for any collection of pairwise disjoint sets $A_n \in \mathfrak{M}$. The \textit{total semivariation} $\|\mu\|_Z \in [0,\infty]$ of a weak* i-measure is defined by the formula \eqref{eqn:semi}, as for i-measures. \end{dfn} As for i-measures, it turns out that \begin{pro}{fin} Every weak* i-measure has finite total semivariation. \end{pro} In the proof we shall need the following elementary result, whose proof is given for the sake of completeness. \begin{lem}{compl} For any $\sigma$-algebra $\mathfrak{M}$ on a set $Z$ and Banach spaces $E$ and $F$, the set $\EuScript{M}(\mathfrak{M},\mathscr{L}(E,F))$ is a Banach space when the algebraic operations are defined pointwise and the norm assigns to each i-measure its total semivariation. \end{lem} \begin{proof} It is readily seen that $\EuScript{M}(\mathfrak{M},\mathscr{L}(E,F))$ is a vector space and the function $\|\cdot\|_Z$ is a norm (thanks to \THM{fin}). Take a Cauchy sequence of i-measures $\mu_n\colon \mathfrak{M} \to \mathscr{L}(E,F)$. For any $A \in \mathfrak{M}$ we have $\|\mu_n(A) - \mu_m(A)\| \leqslant \|\mu_n - \mu_m\|_Z$ and therefore $\mu(A) \df \lim_{n\to\infty} \mu_n(A)$ is a well defined member of $\mathscr{L}(E,F)$. In this way we have obtained a set function $\mu\colon \mathfrak{M} \to \mathscr{L}(E,F)$. It is immediate that $\mu$ is finitely additive.
For any $\varepsilon > 0$, choose $\nu_{\varepsilon}$ such that \begin{equation*} \|\mu_n - \mu_m\|_Z \leqslant \frac12 \varepsilon \end{equation*} for all $n, m \geqslant \nu_{\varepsilon}$. Fix a countable collection of pairwise disjoint sets $A_k \in \mathfrak{M}$ and a sequence of vectors $x_k \in E$ whose norms do not exceed $1$. For $n, m \geqslant \nu_{\varepsilon}$ and arbitrary $N$ and $M$ we have $\|\sum_{k=N}^{N+M} (\mu_n(A_k) x_k - \mu_m(A_k) x_k)\| \leqslant \|\mu_n - \mu_m\|_Z \leqslant \frac12 \varepsilon$. So, letting $m \to \infty$, we get \begin{equation}\label{eqn:fund} \Bigl\|\sum_{k=N}^{N+M} \mu_n(A_k) x_k - \sum_{k=N}^{N+M} \mu(A_k) x_k\Bigr\| \leqslant \frac12 \varepsilon \qquad (n \geqslant \nu_{\varepsilon}). \end{equation} This, in particular, yields that $\|\mu_n - \mu\|_Z \leqslant \frac12 \varepsilon$ for $n \geqslant \nu_{\varepsilon}$ and consequently $\lim_{n\to\infty} \|\mu_n - \mu\|_Z = 0$, once we know that $\mu$ is an i-measure. Further, for $n = \nu_{\varepsilon}$ the series $\sum_{k=1}^{\infty} \mu_n(A_k) x_k$ is convergent, hence there is $N_0$ such that $\|\sum_{k=N}^{N+M} \mu_n(A_k) x_k\| \leqslant \frac12 \varepsilon$ whenever $N \geqslant N_0$ and $M > 0$. This inequality, combined with \eqref{eqn:fund}, gives $\|\sum_{k=N}^{N+M} \mu(A_k) x_k\| \leqslant \varepsilon$ for any $N \geqslant N_0$ and $M > 0$. We conclude that the series $\sum_{k=1}^{\infty} \mu(A_k) x_k$ is convergent. Finally, when $x_k = x \in E$ for each $k$ (where $\|x\| \leqslant 1$), $A_1 = B \in \mathfrak{M}$ and $A_k = \varnothing$ for all $k > 1$, \eqref{eqn:fund} gives \begin{equation}\label{eqn:single} \|\mu_n(B) x - \mu(B) x\| \leqslant \frac12 \varepsilon \qquad (n \geqslant \nu_{\varepsilon}). \end{equation} So, if (again) the sets $A_k$ are pairwise disjoint and $A \df \bigcup_{k=1}^{\infty} A_k$, then for $n = \nu_{\varepsilon}$ there is $M$ such that $\|\mu_n(A \setminus \bigcup_{k=1}^N A_k) x\| \leqslant \frac12 \varepsilon$ for any $N \geqslant M$.
Putting $B = A \setminus \bigcup_{k=1}^N A_k$ (and $n = \nu_{\varepsilon}$) in \eqref{eqn:single}, we deduce that $\|\mu(A \setminus \bigcup_{k=1}^N A_k) x\| \leqslant \varepsilon$ for any $N \geqslant M$. Thus, $\lim_{n\to\infty} \|\mu(A \setminus \bigcup_{k=1}^n A_k) x\| = 0$, which means that $\mu$ is countably additive and consequently $\mu \in \EuScript{M}(\mathfrak{M},\mathscr{L}(E,F))$. \end{proof} \begin{proof}[Proof of \PRO{fin}] Let $\mu\colon \mathfrak{M} \to \mathscr{L}(E,F^*)$ be a weak* i-measure defined on a $\sigma$-algebra $\mathfrak{M}$ of subsets of a set $Z$. For any $f \in F$ define $\mu_f\colon \mathfrak{M} \to \mathscr{L}(E,\mathbb{K})$ by $\mu_f(A) \df \scalar{f}{\mu(A)(\cdot)}$. We infer from the definition of a weak* i-measure that $\mu_f \in \EuScript{M}(\mathfrak{M},\mathscr{L}(E,\mathbb{K}))$ and from \LEM{compl} that $\EuScript{M}(\mathfrak{M},\mathscr{L}(E,\mathbb{K}))$ is a Banach space. So, we conclude from the Closed Graph Theorem that the linear operator $\Phi\colon F \ni f \mapsto \mu_f \in \EuScript{M}(\mathfrak{M},\mathscr{L}(E,\mathbb{K}))$ is continuous (it is obvious that the graph of $\Phi$ is closed). Hence, $M \df \sup\{\|\mu_f\|_Z\colon\ f \in F,\ \|f\| \leqslant 1\} < \infty$. Now take a collection of $N$ pairwise disjoint sets $A_n \in \mathfrak{M}$ and a corresponding system of vectors $x_n \in E$ whose norms do not exceed $1$. Then \begin{align*} \Bigl\|\sum_{n=1}^N \mu(A_n) x_n\Bigr\| &= \sup\Bigl\{\Bigl|\sum_{n=1}^N (\mu(A_n) x_n)(f)\Bigr|\colon\ f \in F,\ \|f\| \leqslant 1\Bigr\}\\ &= \sup\Bigl\{\Bigl|\sum_{n=1}^N \mu_f(A_n) x_n\Bigr|\colon\ f \in F,\ \|f\| \leqslant 1\Bigr\} \leqslant M \end{align*} and thus $\|\mu\|_Z \leqslant M$. \end{proof} \begin{exm}{nonabs} One may hope (being inspired by \COR{abs} and \PRO{fin}) that for every weak* i-measure $\mu$ there is a nonnegative real-valued measure $\lambda$ such that $\mu$ vanishes on all sets on which $\lambda$ vanishes.
As the following example shows, in some cases this is very far from the truth.\par Let $Z$ be an uncountable set and $\mathfrak{M}$ the $\sigma$-algebra of all subsets of $Z$. Let $E = \mathbb{K}$ and $F = \ell_1(Z,\mathbb{K})$. Then $F^* = \ell_{\infty}(Z,\mathbb{K})$. Further, for any set $A \in \mathfrak{M}$ let $\mu(A)\colon E \to F^*$ be given by $\mu(A)\lambda = \lambda j_A$ where, as usual, $j_A$ is the characteristic function of $A$. We see that $\mu\colon \mathfrak{M} \to \mathscr{L}(E,F^*)$. Observe that $\mu(A) = 0$ iff $A = \varnothing$ and thus there is no measure $\lambda\colon \mathfrak{M} \to [0,\infty)$ for which $\mu \ll \lambda$ (because $Z$ is uncountable). However, $\mu$ is a weak* i-measure, which may simply be verified. \end{exm} Let $\mu\colon \mathfrak{M} \to \mathscr{L}(E,F^*)$ be a weak* i-measure defined on a $\sigma$-algebra $\mathfrak{M}$ of subsets of a set $Z$. For $f \in S_{\mathfrak{M}}(Z,E)$, we define the \textit{weak* integral} $\int^{w*}_Z f \dint{\mu}$ of $f$ with respect to $\mu$ as the right-hand side expression of \eqref{eqn:i-int}, understood in the weak* topology of $F^*$; that is, \begin{equation*} \Bigl(\int^{w*}_Z f \dint{\mu}\Bigr)(v) = \sum_{e \in E} \Bigl(\mu(f^{-1}(\{e\}))e\Bigr)(v) \qquad (v \in F). \end{equation*} We see (as for i-measures) that the operator $L\colon S_{\mathfrak{M}}(Z,E) \ni f \mapsto \int^{w*}_Z f \dint{\mu} \in F^*$ is linear and continuous, and $\|L\| = \|\mu\|_Z$ (because the norm of $F^*$ is lower semicontinuous with respect to the weak* topology). We extend the operator $L$ to the whole $M_{\mathfrak{M}}(Z,E)$ and for $f \in M_{\mathfrak{M}}(Z,E)$ use $\int^{w*}_Z f \dint{\mu}$ to denote the value at $f$ of the unique continuous extension of $L$, which is called the \textit{weak* integral} of $f$ (with respect to $\mu$). We see that $\|\int^{w*}_Z f \dint{\mu}\| \leqslant \|f\| \cdot \|\mu\|_Z$.
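To illustrate the weak* integral just defined, let us return to the weak* i-measure $\mu$ of the preceding example (where $E = \mathbb{K}$, $F = \ell_1(Z,\mathbb{K})$ and $\mu(A)\lambda = \lambda j_A$); the following computation is a straightforward check. For a simple function $f = \sum_k \alpha_k j_{A_k} \in S_{\mathfrak{M}}(Z,\mathbb{K})$ (with pairwise disjoint sets $A_k$) one readily verifies that \begin{equation*} \int^{w*}_Z f \dint{\mu} = \sum_k \mu(A_k) \alpha_k = \sum_k \alpha_k j_{A_k} = f \qquad (\in \ell_{\infty}(Z,\mathbb{K}) = F^*), \end{equation*} the series being convergent in the weak* topology of $\ell_{\infty}(Z,\mathbb{K})$. Since $\mathfrak{M}$ consists of all subsets of $Z$, $M_{\mathfrak{M}}(Z,\mathbb{K})$ coincides with $\ell_{\infty}(Z,\mathbb{K})$ and, by the continuity of the extension of $L$, $\int^{w*}_Z f \dint{\mu} = f$ for every bounded function $f\colon Z \to \mathbb{K}$; that is, the weak* integral with respect to this $\mu$ is simply the identity embedding of $\ell_{\infty}(Z,\mathbb{K})$ into $F^*$.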
Note also that if the weak* i-measure is actually an i-measure, then $\int^{w*}_Z f \dint{\mu} = \int_Z f \dint{\mu}$ for any $f \in M_{\mathfrak{M}}(Z,E)$. We also have: \begin{thm}[Bounded Weak* Convergence Theorem]{bw*c} Let $\mu\colon \mathfrak{M} \to \mathscr{L}(E,F^*)$ be a weak* i-measure \textup{(}where $\mathfrak{M}$ is a $\sigma$-algebra on a set $Z$\textup{)}. If $f_n \in M_{\mathfrak{M}}(Z,E)$ are uniformly bounded and converge pointwise to $f\colon Z \to E$ in the weak topology of $E$, then $\int^{w*}_Z f_n \dint{\mu}$ converge to $\int^{w*}_Z f \dint{\mu}$ in the weak* topology of $F^*$. \end{thm} \begin{proof} Fix $v \in F$. We need to show that $(\int^{w*}_Z f_n \dint{\mu})(v)$ converge to $(\int^{w*}_Z f \dint{\mu})(v)$. Define $\nu\colon \mathfrak{M} \to \mathscr{L}(E,\mathbb{K}) = E^*$ by $\nu(A) \df \scalar{v}{\mu(A)(\cdot)}$. It follows from the definition of a weak* i-measure that $\nu$ is an i-measure. What is more, $\|\nu\|_Z \leqslant \|v\| \cdot \|\mu\|_Z$ and \begin{equation}\label{eqn:m-n} \Bigl(\int^{w*}_Z u \dint{\mu}\Bigr)(v) = \int_Z u \dint{\nu} \quad (\in \mathbb{K}) \end{equation} for any $u \in M_{\mathfrak{M}}(Z,E)$ (this is clear for $u \in S_{\mathfrak{M}}(Z,E)$ and for arbitrary $u$ follows from the continuity in $u$ of both sides of \eqref{eqn:m-n}). So, the assertion of the theorem follows from \eqref{eqn:m-n} and \THM{bwc} applied for $\nu$. \end{proof} In some cases weak* i-measures are automatically i-measures, as shown by \begin{pro}{w*hide} Let $W$ be a linear subspace of $F^*$ such that $W$ is sequentially closed in the weak* topology of $F^*$ and any weak* vector measure $\nu\colon \mathfrak{M} \to F^*$ whose range is contained in $W$ is a vector measure. Then any weak* i-measure $\mu\colon \mathfrak{M} \to \mathscr{L}(E,W) \subset \mathscr{L}(E,F^*)$ is an i-measure.
In particular, if $W$ is a linear subspace of $F^*$ that is sequentially closed in the weak* topology and contains no isomorphic copy of $\ell_{\infty}^{\mathbb{R}}$, then every $\mathscr{L}(E,W)$-valued weak* i-measure is an i-measure. \end{pro} \begin{proof} Fix an infinite collection of pairwise disjoint sets $A_n \in \mathfrak{M}$ and a bounded sequence of vectors $x_n \in E$. For each $f \in F$ define $\nu_f\colon \mathfrak{M} \to \mathbb{K}$ by $\nu_f(B) \df \sum_{n=1}^{\infty} (\mu(A_n \cap B) x_n)(f)$. Since the set functions $\mathfrak{M} \ni B \mapsto (\mu(A_n \cap B) x_n)(f) \in \mathbb{K}$ are measures, we see (e.g. by the Vitali-Hahn-Saks-Nikodym theorem; consult Theorem~8 on page~23 in \cite{d-u}) that $\nu_f$ is a measure as well. Consequently, the formula $(\nu(B))(f) \df \nu_f(B)\ (B \in \mathfrak{M},\ f \in F)$ correctly defines a weak* vector measure $\nu\colon \mathfrak{M} \to F^*$. What is more, it follows from the definition of $\nu$ and the property that $W$ is sequentially closed in the weak* topology of $F^*$ that $\nu(B) \in W$ for any $B \in \mathfrak{M}$. Thus, $\nu$ is a vector measure, which implies that the series $\sum_{n=1}^{\infty} \nu(A_n)$ is convergent in the norm topology. But $\nu(A_n) = \mu(A_n) x_n$ and consequently $\sum_{n=1}^{\infty} \mu(A_n)$ is independently convergent. Since, in addition, $\scalar{f}{\mu(\bigcup_{n=1}^{\infty} A_n)(\cdot)} = \sum_{n=1}^{\infty} \scalar{f}{\mu(A_n)(\cdot)}\ (f \in F)$, we see that $\mu(\bigcup_{n=1}^{\infty} A_n) x = \sum_{n=1}^{\infty} \mu(A_n) x\ (x \in E)$ and we are done.\par The additional claim follows from a celebrated result due to Diestel and Faires \cite{d-f} (see also \cite{die} or Theorem~2 on page~20 in \cite{d-u}) which implies that each $W$-valued weak* vector measure is a vector measure provided $W$ contains no isomorphic copy of $\ell_{\infty}^{\mathbb{R}}$. \end{proof} The proofs of the next two results are skipped.
The first of them immediately follows from the definition of the weak* integral for elements of $S_{\mathfrak{M}}(Z,E)$, while the second is a consequence of \THM{bw*c} and (BC*). \begin{pro}{sc} Let $W$ be a linear subspace of $F^*$ that is sequentially closed in the weak* topology of $F^*$. If $\mu\colon \mathfrak{M} \to \mathscr{L}(E,W) \subset \mathscr{L}(E,F^*)$ is a weak* i-measure \textup{(}where $\mathfrak{M}$ is a $\sigma$-algebra of subsets of a set $Z$\textup{)}, then $\int^{w*}_Z f \dint{\mu} \in W$ for any $f \in M_{\mathfrak{M}}(Z,E)$. \end{pro} \begin{pro}{w*T} Let $\mu\colon \mathfrak{M}(X) \to \mathscr{L}(E,W) \subset \mathscr{L}(E,F^*)$ be a weak* i-measure \textup{(}where $W$ is a linear subspace of $F^*$ that is sequentially closed in the weak* topology of $F^*$\textup{)}. Let $T\colon C(X,E) \to W$ be given by $T f \df \int^{w*}_X f \dint{\mu}$ and let $\bar{T}\colon M(X,E) \to F^*$ be as specified in \COR{C-M}. Then $\|T\| = \|\mu\|_X$ and \begin{equation*} \bar{T} f = \int^{w*}_X f \dint{\mu} \qquad (f \in M(X,E)). \end{equation*} \end{pro} \begin{thm}{w*} Let $W$ be a linear subspace of $F^*$ that is sequentially closed in the weak* topology of $F^*$. For every continuous linear operator $T\colon C(X,E) \to W$ there exists a unique weak* i-measure $\mu\colon \mathfrak{M}(X) \to \mathscr{L}(E,W) \subset \mathscr{L}(E,F^*)$ for which \begin{equation}\label{eqn:w*repr} T f = \int^{w*}_X f \dint{\mu} \qquad (f \in C(X,E)). \end{equation} Moreover, $\|T\| = \|\mu\|_X$. \end{thm} \begin{proof} Assume $T\colon C(X,E) \to W$ is a continuous linear operator. The uniqueness of $\mu$ as well as the additional claim of the theorem immediately follow from \PRO{w*T}. We shall now show the existence of $\mu$. We infer from the proof of \PRO{vsc} that $T$ extends to $\bar{T}\colon M(X,E) \to W$ which satisfies (BC*). 
We define $\mu\colon \mathfrak{M}(X) \to \mathscr{L}(E,W)$ by the rule $\mu(A) x \df \bar{T}(j_A(\cdot) x)$ where $j_A\colon X \to \{0,1\}$ is the characteristic function of $A$ (here we also continue the notational convention introduced in the proof of \LEM{1}). It is easily seen that $\mu(A) \in \mathscr{L}(E,W)$. Assume $A_n \in \mathfrak{M}(X)$ are pairwise disjoint and let $x_n \in E$ be uniformly bounded. Put $s_N \df \sum_{k=1}^N j_{A_k}(\cdot) x_k\ (N=1,2,\ldots,\infty)$. Notice that the functions $s_n$ are uniformly bounded and converge pointwise (in the norm topology of $E$) to $s_{\infty}$. So, it follows from (BC*) that the functionals $\sum_{k=1}^n \mu(A_k) x_k = \bar{T} s_n$ converge to $\bar{T} s_{\infty}$ in the weak* topology of $F^*$. This implies that the series $\sum_{k=1}^{\infty} \mu(A_k)$ is independently w*-convergent. What is more, if $x_k = x \in E$ for each $k$, then, under the above notation, $s_{\infty} = j_{\bigcup_{k=1}^{\infty} A_k}(\cdot) x$ and we see that the series $\sum_{k=1}^{\infty} \mu(A_k) x$ converges in the weak* topology of $F^*$ to $\bar{T} s_{\infty} = \mu(\bigcup_{k=1}^{\infty} A_k) x$. We conclude that $\mu$ is a weak* i-measure. Finally, putting $L f \df \int^{w*}_X f \dint{\mu}$ for $f \in M(X,E)$, we see that $L\colon M(X,E) \to F^*$ and $\bar{T}$ are two continuous functions which coincide on $S_{\mathfrak{M}(X)}(X,E)$. Since this last space is dense in $M(X,E)$, we conclude that $L = \bar{T}$ and thus \eqref{eqn:w*repr} holds. \end{proof} The proof of the next result is omitted. \begin{cor}{w*} Let $F_{sc}$ be the smallest linear subspace of $F^{**}$ that contains $F$ and is sequentially closed in the weak* topology of $F^{**}$. For every continuous linear operator $T\colon C(X,E) \to F$ there is a \textup{(}unique\textup{)} weak* i-measure $\mu\colon \mathfrak{M}(X) \to \mathscr{L}(E,F_{sc}) \subset \mathscr{L}(E,F^{**})$ for which \eqref{eqn:w*repr} holds.
\end{cor} \begin{pro}{VSC} Let $F$ be a vsc Banach space that contains no isomorphic copy of $\ell_{\infty}^{\mathbb{R}}$. For every continuous linear operator $T\colon C(X,E) \to F$ there exists a unique i-measure $\mu\colon \mathfrak{M}(X) \to \mathscr{L}(E,F)$ such that \eqref{eqn:vint} holds. \end{pro} \begin{proof} We start from the existence part. There exists a linear isomorphism $\Phi\colon F \to W \subset Z^*$ such that $W$ is a linear subspace of a dual Banach space $Z^*$ that is sequentially closed in the weak* topology (see \PRO{vsc-dual}). It follows from \THM{w*} that there is a weak* i-measure $\nu\colon \mathfrak{M}(X) \to \mathscr{L}(E,W)$ such that \begin{equation}\label{eqn:trm} (\Phi \circ T) f = \int^{w*}_X f \dint{\nu} \qquad (f \in C(X,E)). \end{equation} Since $F$ contains no isomorphic copy of $\ell_{\infty}^{\mathbb{R}}$, neither does $W$, and \PRO{w*hide} implies that $\nu$ is an i-measure. We define $\mu\colon \mathfrak{M}(X) \to \mathscr{L}(E,F)$ by $\mu(A) \df \Phi^{-1} \circ \nu(A)$. A straightforward calculation shows that $\mu$ is also an i-measure. Moreover, for $u \in S_{\mathfrak{M}(X)}(X,E)$ one simply has \begin{equation}\label{eqn:tri} \int_X u \dint{\mu} = \Phi^{-1}\Bigl(\int_X u \dint{\nu}\Bigr) \end{equation} and thus \eqref{eqn:tri} holds for all $u \in M(X,E)$. Consequently, \eqref{eqn:tri} and \eqref{eqn:trm} yield \eqref{eqn:vint}.\par To establish the uniqueness of $\mu$, it is enough to check that if $\lambda\colon \mathfrak{M}(X) \to \mathscr{L}(E,F)$ is an i-measure such that $\int_X f \dint{\lambda} = 0$ for each $f \in C(X,E)$, then $\lambda = 0$. But this simply follows from \THM{bwc} and the characterisation of $M(X,E)$ given in \LEM{1}. \end{proof} \begin{rem}{marg} \PRO{extend} and \LEM{norm} will show that, under the notation of \PRO{VSC}, $\|\mu\|_X = \|T\|$. \end{rem} Taking into account the characterisation of wsc Banach spaces formulated in \PRO{wsc}, the following result is a little surprising.
\begin{cor}{vsc} Let $F$ be a vsc Banach space that contains no isomorphic copy of $\ell_{\infty}^{\mathbb{R}}$. Every continuous linear operator $T\colon C(X,E) \to F$ admits a unique linear extension $\bar{T}\colon M(X,E) \to F$ such that \textup{(BC)} holds with $M(X,E)$ inserted in place of $\mathscr{M}(V)$. Moreover, $\bar{T}$ is continuous and $\|\bar{T}\| = \|T\|$. \end{cor} \begin{proof} Uniqueness, as usual, follows from \LEM{1} and (BC). To establish the existence, apply \PRO{VSC} to get an i-measure $\mu$ such that \eqref{eqn:vint} holds and $\|\mu\|_X = \|T\|$ (see \REM{marg}). Then define $\bar{T}\colon M(X,E) \to F$ by $\bar{T} f \df \int_X f \dint{\mu}$ and use \THM{bwc} to show (BC). \end{proof} \begin{exm}{linfty} Let us show that the assumption in \PRO{VSC} that $F$ contains no isomorphic copy of $\ell_{\infty}^{\mathbb{R}}$ is essential. Put $F \df \ell_{\infty}^{\mathbb{K}}$. Since $F$ is a dual Banach space, it is vsc. Now let $T\colon C([0,1],\mathbb{K}) \to F$ be given by $T f \df (f(\frac1n))_{n=1}^{\infty}$. It is an elementary exercise to find a uniformly bounded sequence of functions $f_n \in C([0,1],\mathbb{K})$ which converge pointwise to $0$ but for which the vectors $T f_n$ diverge in the norm topology. This contradicts \THM{bnc}, whose assertion would have to hold for $T$ if the conclusion of \PRO{VSC} were true for this $F$. \end{exm} \begin{rem}{w*integrable} As for i-measures, we wish to introduce the concept of integration of (possibly unbounded) functions with respect to weak* i-measures. Let $\mu\colon \mathfrak{M} \to \mathscr{L}(E,F^*)$ be a weak* i-measure on a set $Z$ and $g\colon Z \to E$ an $\mathfrak{M}$-measurable function (see \DEF{measurable}). Let $\mathfrak{D}(g)$ be as specified in \REM{integrable}. Define $\nu\colon \mathfrak{D}(g) \to F^*$ by $\nu(A) \df \int^{w*}_Z j_A g \dint{\mu}$.
The function $g$ is said to be \textit{weak* integrable} if the set function $\nu\colon \mathfrak{D}(g) \ni A \mapsto \int^{w*}_Z j_A g \dint{\mu} \in F^*$ extends to a (necessarily unique) weak* vector measure $\bar{\nu}\colon \mathfrak{M} \to F^*$. If this happens, for each $A \in \mathfrak{M}$ we define the \textit{weak* integral} $\int^{w*}_A g \dint{\mu}$ (of $g$ on $A$ with respect to $\mu$) as $\bar{\nu}(A)$. In the above situation, the set function $\nu$ is always a \textit{conditional} weak* vector measure, which follows from \THM{bw*c}. Thus, every bounded $\mathfrak{M}$-measurable function is weak* integrable. Weak* integrable functions form a vector space and the weak* integral $\int_A$ (with respect to $\mu$) is a linear operator (for each $A \in \mathfrak{M}$). \end{rem} \section{Regularisation of i-measures} In this section $Y = \Omega \sqcup \{\infty\}$ is the one-point compactification of $\Omega$. \begin{dfn}{regular} An i-measure $\mu$ defined on $\mathfrak{B}(\Omega)$ is said to be \textit{regular} if every set $A \in \mathfrak{B}(\Omega)$ includes a $\sigma$-compact set $K$ such that $\mu$ vanishes on every Borel set contained in $A \setminus K$. \end{dfn} It is an easy task to check that all regular i-measures form a linear subspace, to be denoted by $\EuScript{M}_r(\mathfrak{B}(\Omega),\mathscr{L}(E,F))$, of $\EuScript{M}(\mathfrak{B}(\Omega),\mathscr{L}(E,F))$.\par What we mean by a \textit{regularisation} of an i-measure is the property formulated below. \begin{pro}{extend} Every i-measure $\mu\colon \mathfrak{M}(X) \to \mathscr{L}(E,F)$ is uniquely extendable to a regular i-measure $\bar{\mu}\colon \mathfrak{B}(X) \to \mathscr{L}(E,F)$. What is more, $\|\bar{\mu}\|_X = \|\mu\|_X$ and there exists a regular measure $\bar{\lambda}\colon \mathfrak{B}(X) \to [0,\infty)$ such that $\bar{\mu} \ll \bar{\lambda}$. \end{pro} \begin{proof} Let $\lambda\colon \mathfrak{M}(X) \to [0,\infty)$ be a measure such that $\mu \ll \lambda$ (see \COR{abs}).
Then $\lambda$ extends uniquely to a regular measure $\bar{\lambda}\colon \mathfrak{B}(X) \to [0,\infty)$ (this property may simply be concluded from the Riesz characterisation theorem applied for the linear functional given by $C(X,\mathbb{K}) \ni f \mapsto \int_X f \dint{\lambda} \in \mathbb{K}$). The measure $\bar{\lambda}$ has the following property: \begin{itemize} \item[($**$)] for any set $A \in \mathfrak{B}(X)$ there exists a set $A^{\#} \in \mathfrak{M}(X)$ such that $\bar{\lambda}(A \setminus A^{\#}) = \bar{\lambda}(A^{\#} \setminus A) = 0$. \end{itemize} Notice also that if $A$ and $A^{\#}$ are as specified above and $A^{\#\#} \in \mathfrak{M}(X)$ is such that $\bar{\lambda}(A \setminus A^{\#\#}) = \bar{\lambda}(A^{\#\#} \setminus A) = 0$, then $\mu(A^{\#}) = \mu(A^{\#\#})$ (because $\lambda(A^{\#} \setminus A^{\#\#}) = \lambda(A^{\#\#} \setminus A^{\#}) = 0$ and $\mu \ll \lambda$). This observation means that the formula $\bar{\mu}(A) \df \mu(A^{\#})$ where, for $A \in \mathfrak{B}(X)$, $A^{\#}$ is as specified in ($**$) correctly defines a set function $\bar{\mu}\colon \mathfrak{B}(X) \to \mathscr{L}(E,F)$, which extends $\mu$. Now take a sequence of pairwise disjoint sets $A_n \in \mathfrak{B}(X)$. We can find a sequence of \textit{pairwise disjoint} sets $A_n^{\#} \in \mathfrak{M}(X)$ for which ($**$) is satisfied with $A_n$ and $A_n^{\#}$ inserted in place of $A$ and $A^{\#}$ (respectively). Consequently, the series $\sum_{n=1}^{\infty} \mu(A_n)$ is independently convergent, $\bar{\mu}(A_n) = \mu(A_n^{\#})$ for each $n$ and $\bar{\mu}(\bigcup_{n=1}^{\infty} A_n) = \mu(\bigcup_{n=1}^{\infty} A_n^{\#})$, which implies that $\bar{\mu}$ is an i-measure and $\|\bar{\mu}\|_X = \|\mu\|_X$.\par Further, if $\bar{\lambda}(A) = 0$ and $A^{\#}$ is as specified in ($**$), then $\lambda(A^{\#}) = 0$ and, consequently, $\bar{\mu}(A) = \mu(A^{\#}) = 0$. This shows that $\bar{\mu} \ll \bar{\lambda}$ (see \COR{abs-abs}). 
Finally, for any $A \in \mathfrak{B}(X)$ one can find a $\sigma$-compact set $K \subset A$ such that $\bar{\lambda}(A \setminus K) = 0$ and hence $\bar{\mu}$ vanishes on each Borel subset of $A \setminus K$. This proves that $\bar{\mu}$ is regular.\par To establish the uniqueness of $\bar{\mu}$, assume $\bar{\mu}'\colon \mathfrak{B}(X) \to \mathscr{L}(E,F)$ is another regular i-measure extending $\mu$. For each $e \in E$ and $\psi \in F^*$, we define $\bar{\mu}_{e,\psi}\colon \mathfrak{B}(X) \to \mathbb{K}$ (and similarly $\bar{\mu}'_{e,\psi}\colon \mathfrak{B}(X) \to \mathbb{K}$) by $\bar{\mu}_{e,\psi}(A) \df (\psi \circ \bar{\mu}(A))(e)$. It follows from the regularity of $\bar{\mu}$ and $\bar{\mu}'$ that $\bar{\mu}_{e,\psi}$ and $\bar{\mu}'_{e,\psi}$ are regular scalar-valued measures. But both these scalar-valued measures coincide on $\mathfrak{M}(X)$ and hence $\bar{\mu}'_{e,\psi} = \bar{\mu}_{e,\psi}$ (thanks to the Riesz characterisation theorem). Consequently, $\bar{\mu}' = \bar{\mu}$. \end{proof} As in \PRO{extend}, we denote by $\bar{\mu}$ the unique regular Borel i-measure which extends an i-measure $\mu$ defined on $\mathfrak{M}(X)$. \begin{cor}{reg} For an i-measure $\mu\colon \mathfrak{B}(X) \to \mathscr{L}(E,F)$, the following conditions are equivalent: \begin{enumerate}[\upshape(i)] \item $\mu$ is regular; \item for any $e \in E$ and $\psi \in F^*$, the scalar-valued measure $\mu_{e,\psi}\colon \mathfrak{B}(X) \ni A \mapsto (\psi \circ \mu(A))(e) \in \mathbb{K}$ is regular; \item there exists a regular measure $\rho\colon \mathfrak{B}(X) \to [0,\infty)$ such that $\mu \ll \rho$; \item there exists a regular measure $\lambda\colon \mathfrak{B}(X) \to [0,\infty)$ such that $\mu \ll \lambda$ and $\lambda(A) \leqslant \|\mu\|_A$ for each $A \in \mathfrak{B}(X)$. \end{enumerate} \end{cor} \begin{proof} Implications (iv)$\implies$(iii)$\implies$(i)$\implies$(ii) are clear. 
Further, it follows from \PRO{extend} applied for the i-measure $\mu\bigr|_{\mathfrak{M}(X)}$ that (iii) follows from (i), and that (i) is implied by (ii) (see the proof of the uniqueness part in \PRO{extend}). So, it remains to check that (iii) implies (iv). Let $\rho$ and $\lambda$ be as specified in (iii) and \COR{abs}, respectively. We may assume $\lambda$ satisfies \eqref{eqn:mut-abs}. It remains to check that $\lambda$ is regular, which simply follows from the fact that $\lambda \ll \rho$ (because $\lambda$ vanishes precisely on those sets on which $\mu$ vanishes---see the proof of \COR{abs-abs}). \end{proof} The proof of the next (very simple) result is left to the reader. \begin{lem}{one-point} An i-measure $\mu\colon \mathfrak{B}(Y) \to \mathscr{L}(E,F)$ is regular iff $\nu \df \mu\bigr|_{\mathfrak{B}(\Omega)}$, treated as an i-measure on $\Omega$, is regular. \end{lem} \begin{lem}{norm} Let $\nu\colon \mathfrak{B}(\Omega) \to \mathscr{L}(E,F)$ be a regular i-measure. Then $\|T_{\nu}\| = \|\nu\|_{\Omega}$ where \begin{equation}\label{eqn:Tnu} T_{\nu}\colon C_0(\Omega,E) \ni f \mapsto \int_{\Omega} f \dint{\nu} \in F. \end{equation} \end{lem} \begin{proof} It is clear that $\|T_{\nu}\| \leqslant \|\nu\|_{\Omega}$. To show the reverse inequality, take a finite collection of $N$ pairwise disjoint sets $A_k \in \mathfrak{B}(\Omega)$ and a corresponding system of $N$ vectors $x_k \in E$ whose norms do not exceed $1$. We only need to check that $\|\sum_{k=1}^N \nu(A_k) x_k\| \leqslant \|T_{\nu}\|$. It follows from the definition of a regular i-measure that for each $k$ there exists a sequence of compact subsets $K_n^{(k)}$ of $A_k$ such that $\lim_{n\to\infty} \|\nu(K_n^{(k)}) - \nu(A_k)\| = 0$. Then, when $n$ is fixed, the sets $K_n^{(k)}$ are pairwise disjoint; and $\lim_{n\to\infty} \|\sum_{k=1}^N \nu(K_n^{(k)}) x_k\| = \|\sum_{k=1}^N \nu(A_k) x_k\|$. This argument allows us to assume the sets $A_k$ are compact. 
Further, we conclude (again) from the regularity of $\nu$ that for each $k$ there is a decreasing sequence of open supersets $U_n^{(k)}$ of $A_k$ such that $\nu$ vanishes on every Borel subset of $\bigcap_{n=1}^{\infty} U_n^{(k)} \setminus A_k$. We may also assume that, in addition, the sets $U_1^{(k)}$ are pairwise disjoint. Now, using e.g. Urysohn's lemma, (for each $k$) we may find a decreasing sequence of compact $\mathscr{G}_{\delta}$-sets $F_n^{(k)}$ with $A_k \subset F_n^{(k)} \subset U_n^{(k)}$. Then, for each fixed $n$, the sets $F_n^{(k)}$ are pairwise disjoint; $\lim_{n\to\infty} \|\nu(F_n^{(k)}) - \nu(\bigcap_{n=1}^{\infty} F_n^{(k)})\| = 0$ and $\nu(\bigcap_{n=1}^{\infty} F_n^{(k)}) = \nu(A_k)$ (because $\bigcap_{n=1}^{\infty} F_n^{(k)} \setminus A_k \subset \bigcap_{n=1}^{\infty} U_n^{(k)} \setminus A_k$). Hence, arguing as before, we may and do assume the sets $A_k$ are (compact and) $\mathscr{G}_{\delta}$. Take pairwise disjoint open sets $V_k$ such that $A_k \subset V_k$. Since $A_k$ is $\mathscr{G}_{\delta}$ and compact, there exists a sequence of continuous functions $u_n^{(k)}\colon \Omega \to [0,1]$ which converge pointwise (as $n \to \infty$) to the characteristic function $j_{A_k}$ of $A_k$ and vanish off $V_k$. Put \begin{equation*} f_n \df \sum_{k=1}^N u_n^{(k)}(\cdot) x_k \end{equation*} and observe that $f_n \in C_0(\Omega,E)$, $\|f_n\| \leqslant 1$ (since $\|x_k\| \leqslant 1$ for all $k$ and the sets $V_k$ are pairwise disjoint) and the functions $f_n$ converge pointwise (in the norm topology of $E$) to $\sum_{k=1}^N j_{A_k}(\cdot) x_k$. So, $\|T_{\nu} f_n\| \leqslant \|T_{\nu}\|$ for each $n$; and an application of \THM{bnc} gives $\|\sum_{k=1}^N \nu(A_k) x_k\| = \|\int_{\Omega} \sum_{k=1}^N j_{A_k}(\omega) x_k \dint{\nu(\omega)}\| = \lim_{n\to\infty} \|T_{\nu} f_n\| \leqslant \|T_{\nu}\|$. 
\end{proof} \begin{thm}{vsc} Let $F$ be a vsc Banach space that contains no isomorphic copy of $\ell_{\infty}^{\mathbb{R}}$ and $\Omega$ be a locally compact Hausdorff space. For every continuous linear operator $T\colon C_0(\Omega,E) \to F$ there exists a unique regular Borel i-measure $\mu\colon \mathfrak{B}(\Omega) \to \mathscr{L}(E,F)$ such that \begin{equation*} T f = \int_{\Omega} f \dint{\mu} \qquad (f \in C_0(\Omega,E)). \end{equation*} Moreover, $\|T\| = \|\mu\|_{\Omega}$. \end{thm} \begin{proof} Below we shall continue to denote by $T_{\nu}$ the operator defined by \eqref{eqn:Tnu} (provided $\nu \in \EuScript{M}_r(\mathfrak{B}(\Omega),\mathscr{L}(E,F))$).\par For each $e \in E$, let $c_e\colon \Omega \to E$ stand for the constant function whose only value is $e$. Define $S\colon C(Y,E) \to F$ by $S u \df T(u\bigr|_{\Omega} - c_{u(\infty)})$. It is clear that $S$ is continuous and linear. So, it follows from \PRO{VSC} that there is an i-measure $\nu\colon \mathfrak{M}(Y) \to \mathscr{L}(E,F)$ for which $S u = \int_Y u \dint{\nu}\ (u \in C(Y,E))$. We define $\mu\colon \mathfrak{B}(\Omega) \to \mathscr{L}(E,F)$ as the restriction of $\bar{\nu}$ to $\mathfrak{B}(\Omega)$. We conclude from \LEM{one-point} that $\mu$ is regular. Since every function $g \in C_0(\Omega,E)$ extends to a continuous function $\bar{g}$ on $Y$ which vanishes at $\infty$, we see that $\int_{\Omega} g \dint{\mu} = \int_Y \bar{g} \dint{\bar{\nu}}$. But $\int_Y \bar{g} \dint{\bar{\nu}} = \int_Y \bar{g} \dint{\nu} = S \bar{g} = T(g)$ and hence $T = T_{\mu}$.\par Finally, since the operator $\Phi\colon \EuScript{M}_r(\mathfrak{B}(\Omega),\mathscr{L}(E,F)) \ni \nu \mapsto T_{\nu} \in \mathscr{L}(C_0(\Omega,E),F)$ is linear, \LEM{norm} yields that $\Phi$ is isometric and hence one-to-one, which finishes the proof. 
\end{proof} \begin{proof}[Proof of \THM{3}] Just notice that all wsc Banach spaces as well as all dual Banach spaces are vsc and all wsc Banach spaces contain no isomorphic copy of $\ell_{\infty}^{\mathbb{R}}$ (since they even contain no isomorphic copy of $c_0$), and then apply \THM{vsc} and \LEM{norm}. \end{proof} \begin{cor}{VSC} Assume $F$ is a vsc Banach space that contains no isomorphic copy of $\ell_{\infty}^{\mathbb{R}}$ and $T\colon C_0(\Omega,E) \to F$ is continuous and linear. If $f_n \in C_0(\Omega,E)$ are uniformly bounded and converge pointwise \textup{(}to a possibly discontinuous function\textup{)} in the norm topology of $E$, then $T f_n$ converge in the norm topology of $F$. In particular, if, in addition, $E = \mathbb{K}$, then $T$ sends weakly fundamental sequences into norm convergent sequences. \end{cor} \begin{proof} It follows from \THM{vsc} that $T f = \int_{\Omega} f \dint{\mu}$ for some regular Borel i-measure $\mu$. So, the first assertion follows from \THM{bnc}. The additional claim follows from the first and the characterisation of weakly fundamental sequences in $C_0(\Omega,\mathbb{K})$ (these are precisely those which are uniformly bounded and converge pointwise to a possibly discontinuous function). \end{proof} The reader interested in other results on continuous linear operators defined on the spaces of the form $C(X,\mathbb{K})$ (into arbitrary Banach spaces) is referred to Chapter~VI in \cite{d-u}. \begin{exm}{more} Taking into account all properties established above, a natural question arises whether the first assertion of \COR{VSC} holds for more general cases, such as: \begin{itemize} \item $T\colon C(X,E) \to F$ where $F$ is an arbitrary Banach space that contains no isomorphic copy of $\ell_{\infty}^{\mathbb{R}}$; \item $T\colon V \to F$ where $F$ is wsc and $V$ is a linear subspace of $C(X,E)$ \end{itemize} (above $T$ is assumed to be continuous and linear). 
Let us briefly explain that, in general, the answer is negative (in both the above cases). For a counterexample in the first setting, just put $F \df C([0,1],\mathbb{K})$ and take the identity operator on $F$. To disprove the assertion of \COR{VSC} in the second case, take an isometric copy $V$ of $F \df L^2([0,1])$ in $C([0,1],\mathbb{K})$ and define $T$ as a linear isometry of $V$ onto $L^2([0,1])$. \end{exm} \COR{VSC} enables us to give an example of classical Banach spaces which are not vsc. \begin{cor}{notvsc} For every infinite second countable locally compact topological space $\Omega$, the Banach space $C_0(\Omega,\mathbb{K})$ is not vsc. In particular, $c_0$ and $C([0,1],\mathbb{K})$ are not vsc. \end{cor} \begin{proof} Since $F \df C_0(\Omega,\mathbb{K})$ is separable, it contains no isomorphic copy of $\ell_{\infty}^{\mathbb{R}}$. So, if $F$ were vsc, the identity operator on $F$ would satisfy the assertion of \COR{VSC}, which is false. \end{proof} We now turn to regular weak* i-measures. \begin{dfn}{w*reg} A weak* i-measure $\mu\colon \mathfrak{B}(\Omega) \to \mathscr{L}(E,F^*)$ is \textit{regular} if for any $f \in F$, the i-measure $\mu_f\colon \mathfrak{B}(\Omega) \ni A \mapsto \scalar{f}{\mu(A)(\cdot)} \in \mathscr{L}(E,\mathbb{K})$ is regular. \end{dfn} The reader should notice that the set of all $\mathscr{L}(E,F^*)$-valued regular Borel weak* i-measures on $\Omega$ is a vector space. We also wish to emphasize that, in general, for a weak* i-measure $\mu$ and a Borel set $A$ there may be no $\sigma$-compact subset $K$ of $A$ such that $\mu$ vanishes on every Borel subset of $A \setminus K$. \begin{lem}{reg} Every weak* i-measure $\mu\colon \mathfrak{M}(X) \to \mathscr{L}(E,F^*)$ extends to a unique regular weak* i-measure $\bar{\mu}\colon \mathfrak{B}(X) \to \mathscr{L}(E,F^*)$. Moreover, $\|\bar{\mu}\|_X = \|\mu\|_X$. 
\end{lem} \begin{proof} For each $f \in F$, let $\nu_f\colon \mathfrak{B}(X) \to \mathscr{L}(E,\mathbb{K})$ be the unique regular i-measure which extends $\mu_f\colon \mathfrak{M}(X) \ni A \mapsto \scalar{f}{\mu(A)(\cdot)} \in \mathscr{L}(E,\mathbb{K})$ (see \PRO{extend}). It follows from the uniqueness of the extension that the operator $F \ni f \mapsto \nu_f \in \EuScript{M}_r(\mathfrak{B}(X),\mathscr{L}(E,\mathbb{K}))$ is linear. Moreover, $\|\nu_f(A)\| \leqslant \|\nu_f\|_X \cdot \|f\| = \|\mu_f\|_X \cdot \|f\| \leqslant \|\mu\|_X \cdot \|f\|$. One concludes that the rule $\scalar{f}{\bar{\mu}(A)(\cdot)} = \nu_f(A)\ (f \in F,\ A \in \mathfrak{B}(X))$ correctly defines a set function $\bar{\mu}\colon \mathfrak{B}(X) \to \mathscr{L}(E,F^*)$. It follows from the very definition of $\bar{\mu}$ that $\bar{\mu}$ is a regular weak* i-measure. What is more, if $A_k \in \mathfrak{B}(X)$ are pairwise disjoint and $x_k \in E$ have norms not exceeding $1$, then \begin{align*} \Bigl\|\sum_{k=1}^N \bar{\mu}(A_k) x_k\Bigr\| &= \sup \Bigl\{\Bigl|\sum_{k=1}^N (\bar{\mu}(A_k) x_k)(f)\Bigr|\colon\ f \in F,\ \|f\| \leqslant 1\Bigr\}\\ &= \sup \Bigl\{\Bigl|\sum_{k=1}^N \nu_f(A_k) x_k\Bigr|\colon\ f \in F,\ \|f\| \leqslant 1\Bigr\}\\ &\leqslant \sup \{\|\nu_f\|_X\colon\ f \in F,\ \|f\| \leqslant 1\} \leqslant \|\mu\|_X \end{align*} and therefore $\|\bar{\mu}\|_X = \|\mu\|_X$. The uniqueness of $\bar{\mu}$ follows from \PRO{extend}. \end{proof} As for i-measures, for any weak* i-measure $\mu\colon \mathfrak{M}(X) \to \mathscr{L}(E,F^*)$, we shall denote by $\bar{\mu}\colon \mathfrak{B}(X) \to \mathscr{L}(E,F^*)$ the unique extension of $\mu$ to a regular weak* i-measure. It is worth noting here that if $W$ is a linear subspace of $F^*$ that is sequentially closed in the weak* topology and $\mu(\mathfrak{M}(X)) \subset \mathscr{L}(E,W)$, then, in general, the range of $\bar{\mu}$ may contain operators which do not belong to $\mathscr{L}(E,W)$. 
This is why we deal here with dual Banach spaces instead of their weak* sequentially closed subspaces. As for i-measures, we have \begin{lem}{one-point*} A weak* i-measure $\mu\colon \mathfrak{B}(Y) \to \mathscr{L}(E,F^*)$ is regular iff $\nu \df \mu\bigr|_{\mathfrak{B}(\Omega)}$, treated as a weak* i-measure on $\Omega$, is regular. \end{lem} \begin{proof} The assertion immediately follows from \LEM{one-point}. \end{proof} \begin{lem}{norm*} For every regular weak* i-measure $\mu\colon \mathfrak{B}(\Omega) \to \mathscr{L}(E,F^*)$, $\|T_{\mu}\| = \|\mu\|_{\Omega}$ where \begin{equation}\label{eqn:Tmu*} T_{\mu}\colon C_0(\Omega,E) \ni u \mapsto \int^{w*}_{\Omega} u \dint{\mu} \in F^*. \end{equation} \end{lem} \begin{proof} As usual, for each $f \in F$, denote by $\mu_f\colon \mathfrak{B}(\Omega) \to \mathscr{L}(E,\mathbb{K})$ a regular i-measure given by $\mu_f(A) = \scalar{f}{\mu(A)(\cdot)}$. Observe that $(T_{\mu} u)(f) = \int_{\Omega} u \dint{\mu_f}$ for any $f \in F$ and $u \in C_0(\Omega,E)$. It follows from \LEM{norm} that $\|\scalar{f}{T_{\mu}(\cdot)}\| = \|\mu_f\|_{\Omega}$ and therefore $\|T_{\mu}\| = \sup\{\|\mu_f\|_{\Omega}\colon\ f \in F,\ \|f\| \leqslant 1\} = \|\mu\|_{\Omega}$. \end{proof} \begin{thm}{W*} For every continuous linear operator $T\colon C_0(\Omega,E) \to F^*$ there exists a unique regular weak* i-measure $\mu\colon \mathfrak{B}(\Omega) \to \mathscr{L}(E,F^*)$ such that $T = T_{\mu}$ where $T_{\mu}$ is given by \eqref{eqn:Tmu*}. Moreover, $\|T\| = \|\mu\|_{\Omega}$. \end{thm} \begin{proof} Thanks to \LEM{norm*}, it suffices to show the existence of $\mu$ (see the last paragraph in the proof of \THM{vsc}). We repeat some of the arguments used in the proof of \THM{vsc}. For each $e \in E$, let $c_e\colon \Omega \to E$ be the constant function whose only value is $e$. Define $S\colon C(Y,E) \to F^*$ by $S u \df T(u\bigr|_{\Omega} - c_{u(\infty)})$. 
It follows from \THM{w*} that there exists a weak* i-measure $\nu\colon \mathfrak{M}(Y) \to \mathscr{L}(E,F^*)$ such that $S u = \int^{w*}_Y u \dint{\nu}$ for all $u \in C(Y,E)$. We define $\mu$ as the restriction of $\bar{\nu}$ (see \LEM{reg}) to $\mathfrak{B}(\Omega)$. We infer from \LEM{one-point*} that $\mu$ is a regular weak* i-measure. Now it suffices to repeat the reasoning presented in the proof of \THM{vsc} in order to verify that $T = T_{\mu}$. \end{proof} We conclude the section with the following consequence of \THM{W*}, whose proof is left to the reader. \begin{cor}[General Riesz Characterisation Theorem]{Riesz} For any continuous linear operator $T\colon C_0(\Omega,E) \to F$ there exists a unique regular weak* i-measure $\mu\colon \mathfrak{B}(\Omega) \to \mathscr{L}(E,F^{**})$ such that $T f = \int^{w*}_{\Omega} f \dint{\mu}$ for any $f \in C_0(\Omega,E)$. Moreover, $\|T\| = \|\mu\|_{\Omega}$. \end{cor} \section{Closure of a convex set} As we shall see, \THM{2} is a consequence of the next result. For the purpose of its formulation, we introduce the following \begin{dfn}{bar(M)} Let $D$ be a Borel subset of $\Omega$. For any set $A \subset M_{\mathfrak{B}(D)}(D,E)$, the space $\bar{\mathscr{M}}(A)$ is defined as the smallest set among all $B \subset M_{\mathfrak{B}(D)}(D,E)$ such that: \begin{enumerate}[($\bar{\textup{M}}$1)]\addtocounter{enumi}{-1} \item $A \subset B$; \item a function $u \in M_{\mathfrak{B}(D)}(D,E)$ belongs to $B$ provided the following condition is fulfilled: \begin{itemize} \item[(aec)] for every finite regular Borel measure $\mu$ on $D$ there exist a uniformly bounded sequence of functions $u_n \in B$ and a set $Z \in \mathfrak{B}(D)$ with $\mu(Z) = 0$ such that the vectors $u_n(\omega)$ converge to $u(\omega)$ in the weak topology of $E$ for any $\omega \in D \setminus Z$. 
\end{itemize} \end{enumerate} It is an easy exercise to check that $\mathscr{M}(A) \subset \bar{\mathscr{M}}(A)$ for any $A \subset M_{\mathfrak{B}(D)}(D,E)$, and that $\bar{\mathscr{M}}(V)$ is a linear subspace of $M_{\mathfrak{B}(D)}(D,E)$ provided $V$ is so.\par Using \LEM{1}, one may check that $\bar{\mathscr{M}}(C(X,E)) = M_{\mathfrak{B}(X)}(X,E)$ for any compact space $X$. \end{dfn} \begin{thm}{closure} Let $\mathscr{K}$ be a convex subset of $C_0(\Omega,E)$ and $\mathscr{B}$ be a countable collection of pairwise disjoint Borel subsets of $\Omega$ that cover $\Omega$. For a function $f \in C_0(\Omega,E)$ the following conditions are equivalent: \begin{enumerate}[\upshape(i)] \item $f$ belongs to the norm closure \textup{(}in $C_0(\Omega,E)$\textup{)} of $\mathscr{K}$; \item $f\bigr|_S \in \bar{\mathscr{M}}\bigl(\mathscr{K}\bigr|_S\bigr)$ \textup{(}where $\mathscr{K}\bigr|_S \df \bigl\{g\bigr|_S \in C(S,E)\colon\ g \in \mathscr{K}\bigr\}$\textup{)} for every Borel set $S \subset \Omega$ such that $S \cap B$ is $\sigma$-compact for each $B \in \mathscr{B}$; \item there exists a real constant $R > 0$ such that $f\bigr|_L \in \bar{\mathscr{M}}\bigl((\mathscr{K} \cap B(R))\bigr|_L\bigr)$ \textup{(}where $B(R) \df \{g \in C_0(\Omega,E)\colon\ \|g\| \leqslant R\}$\textup{)} for every $L \in \mathfrak{B}(\Omega)$ such that the set $L \cap B$ is compact for each $B \in \mathscr{B}$ and nonempty only for a finite number of such $B$. \end{enumerate} \end{thm} \begin{proof} We may and do assume that $\mathscr{K}$ is nonempty. It is readily seen that both conditions (ii) and (iii) are implied by (i). First we shall show that (i) follows from (ii). Assume $f$ satisfies (ii) and suppose, on the contrary, that $f$ is not in the norm closure of $\mathscr{K}$. 
We infer from the separation theorem that there is a continuous linear functional $\psi\colon C_0(\Omega,E) \to \mathbb{K}$ such that $\gamma \df \sup\{\operatorname{Re}(\psi(u))\colon\ u \in \mathscr{K}\} < \operatorname{Re}(\psi(f))$. Since $\mathbb{K}$ is wsc, it follows from \THM{3} that $\psi$ is of the form \begin{equation*} \psi(g) = \int_{\Omega} g \dint{\mu} \qquad (g \in C_0(\Omega,E)) \end{equation*} for some $\mathscr{L}(E,\mathbb{K})$-valued regular Borel i-measure $\mu$. Further, we infer from the regularity of $\mu$ that for any $B \in \mathscr{B}$ there is a $\sigma$-compact set $S_B \subset B$ such that $\mu$ vanishes on every Borel subset of $B \setminus S_B$. We put $S \df \bigcup_{B\in\mathscr{B}} S_B$. Since $\mathscr{B}$ is countable, we see that $S \in \mathfrak{B}(\Omega)$. What is more, for each $B \in \mathscr{B}$, $S \cap B = S_B$ (because members of $\mathscr{B}$ are pairwise disjoint) and thus $S \cap B$ is $\sigma$-compact. For any function $u \in M_{\mathfrak{B}(S)}(S,E)$ we shall denote by $u^{\#}$ the (unique) extension of $u$ to a member of $M_{\mathfrak{B}(\Omega)}(\Omega,E)$ which vanishes off $S$. We shall now verify that $f\bigr|_S \notin \bar{\mathscr{M}}\bigl(\mathscr{K}\bigr|_S\bigr)$ (which contradicts (ii)). To this end, it is enough to show that \begin{equation}\label{eqn:gamma} \operatorname{Re}\Bigl(\int_{\Omega} u^{\#} \dint{\mu}\Bigr) \leqslant \gamma \end{equation} for any $u \in \bar{\mathscr{M}}\bigl(\mathscr{K}\bigr|_S\bigr)$. To do that, denote by $\mathscr{H}$ the set of all functions $u \in M_{\mathfrak{B}(S)}(S,E)$ for which \eqref{eqn:gamma} holds. Since $\mu$ vanishes on every Borel subset of $\Omega \setminus S$, we see that $\mathscr{K}\bigr|_S \subset \mathscr{H}$. Now assume a function $u \in M_{\mathfrak{B}(S)}(S,E)$ satisfies condition (aec) (with $D \df S$ and $B \df \mathscr{H}$). 
Taking into account \COR{reg}, we conclude that there are a uniformly bounded sequence of functions $u_n \in \mathscr{H}$ and a set $Z \in \mathfrak{B}(S)$ such that $\mu$ vanishes on every Borel subset of $S \setminus Z$ and the vectors $u_n(\omega)$ converge to $u(\omega)$ in the weak topology of $E$ for any $\omega \in S \setminus Z$. One easily infers from \THM{bwc} that then $\lim_{n\to\infty} \int_{\Omega} u_n^{\#} \dint{\mu} = \int_{\Omega} u^{\#} \dint{\mu}$ and therefore the set $B \df \mathscr{H}$ satisfies condition ($\bar{\textup{M}}$1). Consequently, $\bar{\mathscr{M}}\bigl(\mathscr{K}\bigr|_S\bigr) \subset \mathscr{H}$ and we are done.\par We now turn to the proof that (i) is implied by (iii). This part is more subtle. Let $R > 0$ be as specified in (iii). We shall show that $f$ belongs to the norm closure of $\mathscr{K} \cap B(R)$. To this end, replacing $\mathscr{K}$ by $\mathscr{K} \cap B(R)$, we may assume that $\mathscr{K} \subset B(R)$ is such that \begin{itemize} \item[(iii')] $f\bigr|_L \in \bar{\mathscr{M}}\bigl(\mathscr{K}\bigr|_L\bigr)$ for every $L \in \mathfrak{B}(\Omega)$ such that the set $L \cap B$ is compact for each $B \in \mathscr{B}$ and nonempty only for a finite number of such $B$. \end{itemize} Enlarging, if necessary, $R$, we may and do assume that $f \in B(R)$ as well. As before, we suppose, on the contrary, that $f$ is not in the norm closure of $\mathscr{K}$ and take an $\mathscr{L}(E,\mathbb{K})$-valued regular Borel i-measure $\mu$ such that \begin{equation}\label{eqn:gam} \operatorname{Re}\Bigl(\int_{\Omega} u \dint{\mu}\Bigr) \leqslant \gamma \end{equation} for all $u \in \mathscr{K}$ and some real constant $\gamma$, while \begin{equation}\label{eqn:f-gam} (\varepsilon \df\,)\ \frac13 \Bigl(\operatorname{Re}\Bigl(\int_{\Omega} f \dint{\mu}\Bigr) - \gamma\Bigr) > 0. \end{equation} Further, let $\lambda$ be a finite nonnegative regular Borel measure on $\Omega$ for which $\mu \ll \lambda$. 
Using the last property, take $\delta > 0$ such that $\|\mu\|_A \leqslant \frac{\varepsilon}{R}$ whenever $A \in \mathfrak{B}(\Omega)$ is such that $\lambda(A) \leqslant 2 \delta$. Write $\mathscr{B} = \{B_1,B_2,\ldots\}$ and for any $n > 0$ take a compact set $L_n \subset B_n$ for which $\lambda(B_n \setminus L_n) \leqslant \frac{\delta}{2^n}$. Further, let $N > 0$ be such that $\sum_{n=N+1}^{\infty} \lambda(B_n) \leqslant \delta$. We put $L \df \bigcup_{n=1}^N L_n\ (\in \mathfrak{B}(\Omega))$. We see that $L \cap B_n$ coincides with $L_n$ for $n \leqslant N$ and is empty otherwise. Our aim is to show that $f\bigr|_L \notin \bar{\mathscr{M}}\bigl(\mathscr{K}\bigr|_L\bigr)$. Observe that $\lambda(\Omega \setminus L) \leqslant 2 \delta$ and therefore $\|\mu\|_{\Omega \setminus L} \leqslant \varepsilon/R$. Consequently, $|\int_{\Omega} j_{\Omega \setminus L} u \dint{\mu}| \leqslant \varepsilon$ whenever $u \in B(R)$ (where, as usual, $j_{\Omega \setminus L}$ denotes the characteristic function of $\Omega \setminus L$). So, we conclude from \eqref{eqn:gam} and \eqref{eqn:f-gam} that \begin{equation}\label{eqn:gam2} \operatorname{Re}\Bigl(\int_{\Omega} u^{\#} \dint{\mu}\Bigr) \leqslant \gamma + \varepsilon \end{equation} for all $u \in \mathscr{K}\bigr|_L$ and $\operatorname{Re}(\int_{\Omega} j_L f \dint{\mu}) > \gamma + \varepsilon$. Now similarly as in the proof that (i) follows from (ii), one shows that \eqref{eqn:gam2} holds for all $u \in \bar{\mathscr{M}}\bigl(\mathscr{K}\bigr|_L\bigr)$ and hence $f\bigr|_L \notin \bar{\mathscr{M}}\bigl(\mathscr{K}\bigr|_L\bigr)$ (because $(f\bigr|_L)^{\#} = j_L f$). \end{proof} \begin{cor}{bd} Let $\mathscr{K}$ be a convex set in $C_0(\Omega,E)$. \begin{enumerate}[\upshape(a)] \item If $\mathscr{K}$ is bounded, its norm closure consists precisely of those functions $f \in C_0(\Omega,E)$ that $f\bigr|_L \in \bar{\mathscr{M}}\bigl(\mathscr{K}\bigr|_L\bigr)$ for any compact set $L \subset \Omega$. 
\item If $\Omega$ is compact, the norm closure of $\mathscr{K}$ coincides with $\bar{\mathscr{M}}(\mathscr{K})$. \end{enumerate} \end{cor} \begin{proof} In both cases put $\mathscr{B} \df \{\Omega\}$. In case (a), take $R > 0$ such that $\mathscr{K} \subset B(R)$ and apply item (iii) of \THM{closure}. In case (b) just apply point (ii) of that result. \end{proof} \begin{pro}{C*} Let $\mathfrak{A}$ be a $C^*$-algebra and $\mathscr{A}$ be a $*$-subalgebra of $C_0(\Omega,\mathfrak{A})$. Let $\mathscr{B}$ be a countable collection of pairwise disjoint Borel subsets of $\Omega$ that cover $\Omega$. The norm closure of $\mathscr{A}$ consists precisely of those functions $f \in C_0(\Omega,\mathfrak{A})$ such that \begin{itemize} \item[(cc)] $f\bigr|_K \in \bar{\mathscr{M}}\bigl(\mathscr{A}\bigr|_K\bigr)$ for every set $K \in \mathfrak{B}(\Omega)$ such that the set $K \cap B$ is compact for each $B \in \mathscr{B}$ and nonempty only for a finite number of such $B$. \end{itemize} In particular, if \textup{(}$f \in C_0(\Omega,\mathfrak{A})$ and\textup{)} $f\bigr|_L$ belongs to $\bar{\mathscr{M}}\bigl(\mathscr{A}\bigr|_L\bigr)$ for any compact set $L \subset \Omega$, then $f$ is in the uniform closure of $\mathscr{A}$. \end{pro} \begin{proof} First of all, we may and do assume that $\mathscr{A}$ is closed. It is enough to check that every function $f \in C_0(\Omega,\mathfrak{A})$ for which (cc) holds belongs to the norm closure of $\mathscr{A}$. To this end, take $R > \|f\|$. We shall show that condition (iii) of \THM{closure} (with $\mathscr{K} = \mathscr{A}$) holds for such $R$ (which will finish the proof). Let $L \subset \Omega$ be as specified in that condition (or, equivalently, as specified in (cc)). It follows from (cc) that $f\bigr|_L \in \bar{\mathscr{M}}\bigl(\mathscr{A}\bigr|_L\bigr)$. Now point (b) of \COR{bd} (applied for $\Omega \df L$ and $\mathscr{K} \df \mathscr{A}\bigr|_L$) yields that $f\bigr|_L$ belongs to the norm closure of $\mathscr{A}\bigr|_L$. 
Since the function $\mathscr{A} \ni g \mapsto g\bigr|_L \in C_0(L,E)$ is a $*$-homomorphism (with range $\mathscr{A}\bigr|_L$) between $C^*$-algebras, it sends the open unit ball of $\mathscr{A}$ onto the open unit ball of $\mathscr{A}\bigr|_L$. Consequently, $f\bigr|_L \in (\mathscr{A} \cap B(R))\bigr|_L$ and we are done. \end{proof} \begin{proof}[Proof of \THM{2}] Each of the three cases is a special case of one of \COR{bd} and \PRO{C*}. \end{proof} \THM{closure} is surprising, to say the least, and seems to be a convenient tool. Recently we used its consequence---\PRO{C*} (in its almost exact form)---to describe models for all so-called \textit{subhomogeneous} $C^*$-algebras (which may be seen as a solution of a long-standing problem). The paper on this is in preparation. Below we give an illustrative example of the usefulness of \THM{2}. (The result below is certainly known.) \begin{cor}{0,1} Let $d$ denote the natural metric on $X \df [0,1]$. The linear span $V$ of all functions $d(x,\cdot)$ is dense in $C(X,\mathbb{R})$. \end{cor} \begin{proof} Thanks to \THM{2}, it suffices to show that $\mathscr{M}(V)$ contains all continuous functions, which is quite easy: $d(0,\cdot) + d(1,\cdot) \equiv 1$ and for any $x \in X \setminus \{1\}$ and small enough $h > 0$ the functions $\frac1h (d(x+h,\cdot) - d(x,\cdot))$ are uniformly bounded, belong to $V$ and converge pointwise to the function given by \begin{equation*} t \mapsto \begin{cases}1, & t \leqslant x\\-1, & t > x.\end{cases} \end{equation*} We conclude that the characteristic function of $[0,x]$ is a member of $\mathscr{M}(V)$ for any $x \in X$. So, the characteristic functions of all intervals of the form $(a,b]$ (with $0 \leqslant a < b \leqslant 1$) also belong to $\mathscr{M}(V)$. Noticing that every continuous function on $X$ is a uniform limit of linear combinations of such functions, we finish the proof. 
\end{proof} \begin{rem}{specific} The assertion of \COR{0,1} (under the notations of that result) is equivalent to the following property: \begin{quote} If two complex-valued Borel measures $\mu$ and $\nu$ on $X$ satisfy \begin{equation}\label{eqn:id} \int_X d(x,t) \dint{\mu(t)} = \int_X d(y,t) \dint{\nu(t)} \quad \textup{for all } x \in X, \end{equation} then $\mu = \nu$. \end{quote} We leave it as an exercise to check that there exists a finite metric space $(X,d)$ such that \eqref{eqn:id} holds for two \textit{different} probability measures $\mu$ and $\nu$ on $X$. \end{rem} We conclude the paper with the following \begin{exm}{ubd} Taking into account \PRO{C*} and item (a) of \COR{bd}, it is natural to ask whether the assumption in this item that $\mathscr{K}$ is bounded is essential. Below we answer this question in the affirmative.\par Let $\Omega = \mathbb{R}$, $E = \mathbb{K}$ and let $\mathscr{K}$ consist of all functions $u \in C_0(\mathbb{R},\mathbb{K})$ for which \begin{equation*} \sum_{n=1}^{\infty} \frac{u(n)}{2^n} = 0. \end{equation*} Observe that $\mathscr{K}$ is a closed proper linear subspace of $C_0(\mathbb{R},\mathbb{K})$ (as the kernel of a continuous linear functional). However, invoking Tietze's extension theorem, it is an easy exercise to show that $\mathscr{K}\bigr|_D = C(D,\mathbb{K})$ for any compact set $D \subset \mathbb{R}$. \end{exm} \end{document}
# Bayesian inference and decision-making

Bayesian inference is a statistical method that allows us to make decisions based on incomplete or uncertain information. It is an alternative to the frequentist approach, which assumes that the data we have is a representative sample from a larger population.

In Bayesian inference, we treat our prior beliefs about the unknown parameters as probability distributions, and we update these distributions as we gather new data. This approach has several advantages, including the ability to quantify our uncertainty and to handle situations where there is a lack of data.

In this section, we will explore the principles of Bayesian inference and how it can be applied to decision-making. We will also discuss the role of probability distributions in Bayesian statistics and how they can be used to quantify our uncertainty.

## Exercise

Calculate the posterior probability distribution for a parameter given a prior distribution and some data. Use the Bayes theorem formula:

$$
P(\theta|D) = \frac{P(D|\theta)P(\theta)}{P(D)}
$$

where $P(\theta|D)$ is the posterior probability distribution, $P(D|\theta)$ is the likelihood function, $P(\theta)$ is the prior probability distribution, and $P(D)$ is the marginal likelihood.

Bayesian inference can be extended to decision-making by considering the expected loss associated with different actions. The decision rule that minimizes the posterior expected loss, also known as the Bayes rule, is the optimal decision-making strategy.

# Markov chain Monte Carlo method

Markov chain Monte Carlo (MCMC) is a class of algorithms that are used to estimate the posterior probability distribution in Bayesian inference. The key idea behind MCMC is to construct a Markov chain that has the desired posterior distribution as its equilibrium distribution.

In this section, we will discuss the Markov chain Monte Carlo method and its applications in Bayesian statistics. 
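Before moving on to sampling methods, the Bayes theorem update can be carried out directly when the parameter takes finitely many values: multiply the prior by the likelihood pointwise and normalise by the marginal likelihood. The sketch below does this for a coin's unknown bias; the grid of candidate values, the uniform prior, and the observed counts are all invented for this illustration.

```python
# Discrete Bayes update for a coin's bias theta, given 6 heads in 9 flips.
# The grid of candidate biases and the uniform prior are assumptions for this sketch.
from math import comb

thetas = [0.1 * k for k in range(1, 10)]           # candidate values of theta
prior = [1 / len(thetas)] * len(thetas)            # uniform prior P(theta)

heads, flips = 6, 9
likelihood = [comb(flips, heads) * t**heads * (1 - t)**(flips - heads)
              for t in thetas]                     # P(D|theta), binomial

evidence = sum(l * p for l, p in zip(likelihood, prior))            # P(D)
posterior = [l * p / evidence for l, p in zip(likelihood, prior)]   # P(theta|D)

best = max(zip(posterior, thetas))                 # most probable theta a posteriori
```

The posterior concentrates near $6/9$, so the grid point $0.7$ receives the highest mass; replacing the uniform prior with an informative one shifts the result accordingly.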
We will also explore the Metropolis-Hastings algorithm, which is a widely used MCMC algorithm.

## Exercise

Implement the Metropolis-Hastings algorithm to generate a Markov chain with a given target distribution. Use the following Python code as a starting point:

```python
import numpy as np

def metropolis_hastings(target_distribution, initial_state, proposal_distribution, num_samples):
    # TODO: Implement the Metropolis-Hastings algorithm
    pass
```

# Markov chains and transition probabilities

Markov chains are a mathematical model that describes a sequence of random events in which the probability of a future event depends only on the present event, and not on the sequence of previous events. In the context of Bayesian inference, Markov chains are used to represent the uncertainty about the unknown parameters.

In this section, we will discuss the properties of Markov chains and how to calculate the transition probabilities between states. We will also explore how to use these transition probabilities to construct a Markov chain that has the desired posterior distribution as its equilibrium distribution.

## Exercise

Calculate the transition probabilities for a simple Markov chain with three states. Use the following Python code as a starting point:

```python
def transition_probabilities(chain):
    # TODO: Calculate the transition probabilities
    pass
```

# Monte Carlo simulation and the Metropolis-Hastings algorithm

Monte Carlo simulation is a computational method that uses random sampling to estimate the value of an integral, solve differential equations, or perform other calculations. In the context of Bayesian inference, Monte Carlo simulation is used to generate samples from the posterior probability distribution.

In this section, we will discuss the Metropolis-Hastings algorithm, which is a widely used MCMC algorithm for generating samples from the posterior distribution. 
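As a rough sketch of the accept/reject loop at the heart of the algorithm: propose a move, compare the target density at the proposal with the density at the current state, and accept with probability min(1, ratio). The standard-normal target and the Gaussian random-walk proposal below are choices made purely for this illustration, not part of any particular library.

```python
import math
import random

def metropolis_hastings_sketch(log_target, initial_state, step, num_samples, rng):
    """Random-walk Metropolis-Hastings over a 1-D state space."""
    samples = []
    state = initial_state
    log_p = log_target(state)
    for _ in range(num_samples):
        proposal = state + rng.gauss(0.0, step)     # symmetric random-walk proposal
        log_p_prop = log_target(proposal)
        # Symmetric proposal => the acceptance ratio reduces to the target ratio.
        if math.log(rng.random()) < log_p_prop - log_p:
            state, log_p = proposal, log_p_prop
        samples.append(state)
    return samples

rng = random.Random(0)
# Target: standard normal, via its log-density up to a constant.
draws = metropolis_hastings_sketch(lambda x: -0.5 * x * x, 0.0, 1.0, 20000, rng)
```

With enough iterations the empirical mean and variance of `draws` approach 0 and 1, the moments of the standard normal target; the chain's early samples can be discarded as burn-in in more careful use.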
We will also explore how to use Monte Carlo simulation to estimate the expected value of a function of the unknown parameters.

## Exercise

Implement the Metropolis-Hastings algorithm to generate samples from the posterior distribution of a parameter. Use the following Python code as a starting point:

```python
def metropolis_hastings_samples(target_distribution, initial_state, proposal_distribution, num_samples):
    # TODO: Implement the Metropolis-Hastings algorithm
    pass
```

# Applications of Bayesian statistics in decision-making

## Exercise

Design a Bayesian network to model a decision-making problem involving a set of uncertain variables. Use the following Python code as a starting point:

```python
import networkx as nx

def build_bayesian_network(variables):
    # TODO: Build a Bayesian network
    pass
```

# The role of probability distributions in Bayesian statistics

## Exercise

Evaluate the difference between two probability distributions using a statistical test, such as the Kolmogorov-Smirnov test. Use the following Python code as a starting point:

```python
import scipy.stats as stats

def compare_distributions(distribution1, distribution2):
    # TODO: Perform a statistical test
    pass
```

# Examples and case studies

## Exercise

Analyze a case study that involves the application of Bayesian statistics in a decision-making problem. Use the following Python code as a starting point:

```python
def analyze_case_study(case_study):
    # TODO: Analyze the case study
    pass
```

# Extensions and advanced topics

## Exercise

Implement a Bayesian model selection method to compare different models based on their predictive performance.
Use the following Python code as a starting point:

```python
def bayesian_model_selection(models, data):
    # TODO: Implement a Bayesian model selection method
    pass
```

# Conclusion

In this textbook, we have explored the principles of Bayesian inference and decision-making, the Markov chain Monte Carlo method, the role of probability distributions in Bayesian statistics, and their applications in various decision-making problems. We have also discussed several advanced topics and extensions of the method.

In conclusion, Bayesian statistics provides a powerful framework for making decisions based on incomplete or uncertain information. By treating our prior beliefs as probability distributions and updating them as we gather new data, we can quantify our uncertainty and make more informed decisions. The Markov chain Monte Carlo method, in particular, allows us to estimate the posterior probability distribution and generate samples from it, making it a versatile tool for decision-making.

As a final exercise, consider the following question: how can Bayesian statistics be applied to your own decision-making problem? Think about the unknown parameters, the prior beliefs, and the data available to you, and consider how you could use Bayesian inference to make a more informed decision.
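As a closing sketch tying the chapters together, the transition-probability calculation from the Markov chains section can be done by counting observed jumps in a recorded chain. This is a simple counting estimator under the assumption that states are labelled 0 to k-1; the function name and example chain are illustrative:

```python
import numpy as np

def estimate_transition_probabilities(chain, num_states):
    """Estimate a Markov transition matrix from an observed state sequence:
    P[i, j] = (# transitions i -> j) / (# times state i had a successor)."""
    counts = np.zeros((num_states, num_states))
    for current, following in zip(chain[:-1], chain[1:]):
        counts[current, following] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Leave the row at zero for any state that is never left,
    # rather than dividing by zero.
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

# A short three-state chain: each row of the estimate sums to 1
# for every state that was visited and then left.
chain = [0, 1, 2, 1, 0, 0, 1, 2, 2, 1]
P = estimate_transition_probabilities(chain, 3)
print(P)
```

Applied to a chain produced by an MCMC sampler, the same counting argument is what connects the empirical behaviour of the chain to the equilibrium distribution it is designed to target.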
Textbooks
Only show content I have access to (50) Only show open access (12) Over 3 years (199) Physics and Astronomy (70) Materials Research (65) Life Sciences (35) Statistics and Probability (12) Earth and Environmental Sciences (10) Classical Studies (4) Area Studies (3) MRS Online Proceedings Library Archive (63) Disaster Medicine and Public Health Preparedness (16) Epidemiology & Infection (11) Psychological Medicine (9) Journal of Agricultural and Applied Economics (8) Canadian Journal of Neurological Sciences (7) The British Journal of Psychiatry (6) Radioprotection (5) European Psychiatry (4) Journal of Clinical and Translational Science (4) Proceedings of the Nutrition Society (4) International Psychogeriatrics (3) Journal of the Marine Biological Association of the United Kingdom (3) The Mathematical Gazette (3) Journal of Dairy Research (2) Journal of Law, Medicine & Ethics (2) Journal of the Staple Inn Actuarial Society (2) Proceedings of the Royal Society of Edinburgh, Section B: Biological Sciences (2) Quaternary Research (2) The Journal of Agricultural Science (2) Materials Research Society (65) Society for Disaster Medicine and Public Health, Inc. 
SDMPH (16) Arab Grid for Learning (8) Canadian Neurological Sciences Federation (7) Nestle Foundation - enLINK (6) EDPS Sciences - Radioprotection (5) The Royal College of Psychiatrists (5) European Psychiatric Association (4) MBA Online Only Members (3) Mathematical Association (3) Royal College of Speech and Language Therapists (3) Weed Science Society of America (3) American Society of Law, Medicine & Ethics (2) Institute and Faculty of Actuaries (2) American Society of Church History (1) Classical Association (1) International Glaciological Society (1) Society for Economic Measurement (SEM) (1) The British Institute for the Study of Iraq (Gertrude Bell Memorial) (BISI) (1) International Symposia in Economic Theory and Econometrics (10) Cambridge Handbooks in Psychology (3) Cambridge Studies in Biological and Evolutionary Anthropology (2) Case Studies in Neurology (1) Cambridge Handbooks (3) Cambridge Handbooks of Psychology (3) Accuracy of dopaminergic imaging as a biomarker for mild cognitive impairment with Lewy bodies Gemma Roberts, Paul C. Donaghy, Jim Lloyd, Rory Durcan, George Petrides, Sean J. Colloby, Sarah Lawley, Joanna Ciafone, Calum A. Hamilton, Michael Firbank, Louise Allan, Nicola Barnett, Sally Barker, Kirsty Olsen, Kim Howe, Tamir Ali, John-Paul Taylor, John O'Brien, Alan J. Thomas Journal: The British Journal of Psychiatry , FirstView Published online by Cambridge University Press: 23 December 2020, pp. 1-7 Dopaminergic imaging is an established biomarker for dementia with Lewy bodies, but its diagnostic accuracy at the mild cognitive impairment (MCI) stage remains uncertain. To provide robust prospective evidence of the diagnostic accuracy of dopaminergic imaging at the MCI stage to either support or refute its inclusion as a biomarker for the diagnosis of MCI with Lewy bodies. 
We conducted a prospective diagnostic accuracy study of baseline dopaminergic imaging with [123I]N-ω-fluoropropyl-2β-carbomethoxy-3β-(4-iodophenyl)nortropane single-photon emission computerised tomography (123I-FP-CIT SPECT) in 144 patients with MCI. Images were rated as normal or abnormal by a panel of experts with access to striatal binding ratio results. Follow-up consensus diagnosis based on the presence of core features of Lewy body disease was used as the reference standard. At latest assessment (mean 2 years) 61 patients had probable MCI with Lewy bodies, 26 possible MCI with Lewy bodies and 57 MCI due to Alzheimer's disease. The sensitivity of baseline FP-CIT visual rating for probable MCI with Lewy bodies was 66% (95% CI 52–77%), specificity 88% (76–95%) and accuracy 76% (68–84%), with positive likelihood ratio 5.3. It is over five times as likely for an abnormal scan to be found in probable MCI with Lewy bodies than MCI due to Alzheimer's disease. Dopaminergic imaging appears to be useful at the MCI stage in cases where Lewy body disease is suspected clinically. Reexamining Health Care Coalitions in Light of COVID-19 Daniel J. Barnett, Lauren Knieser, Nicole A. Errett, Andrew J. Rosenblum, Meena Seshamani, Thomas D. Kirsch Journal: Disaster Medicine and Public Health Preparedness / Accepted manuscript Published online by Cambridge University Press: 04 November 2020, pp. 1-18 The national response to the COVID-19 pandemic has highlighted critical weaknesses in domestic health care and public health emergency preparedness despite nearly two decades of federal funding for multiple programs designed to encourage cross-cutting collaboration in emergency response. 
Health care coalitions (HCCs), which are funded through the Hospital Preparedness Program, were first piloted in 2007 and have been continuously funded nationwide since 2012 to support broad collaborations across public health, emergency management, emergency medical services, and the emergency response arms of the health care system within a geographical area. This commentary provides a SWOT analysis to summarize the strengths, weaknesses, opportunities, and threats related to the current HCC model against the backdrop of COVID-19. We close with concrete recommendations for better leveraging the HCC model for improved health care system readiness. These include better evaluating the role of HCCs and their members (including the responsibility of the HCC to better communicate and align with other sectors), reconsidering the existing framework for HCC administration, increasing incentives for meaningful community participation in HCC preparedness, and supporting next-generation development of health care preparedness systems for future pandemics. Social Network Analysis of COVID-19 Public Discourse on Twitter: Implications for Risk Communication Paola Pascual-Ferrá, Neil Alperstein, Daniel J. Barnett Journal: Disaster Medicine and Public Health Preparedness , First View Published online by Cambridge University Press: 10 September 2020, pp. 1-9 The purpose of this study was to demonstrate the use of social network analysis to understand public discourse on Twitter around the novel coronavirus disease 2019 (COVID-19) pandemic. We examined different network properties that might affect the successful dissemination by and adoption of public health messages from public health officials and health agencies. We focused on conversations on Twitter during 3 key communication events from late January to early June of 2020. We used Netlytic, a Web-based software that collects publicly available data from social media sites such as Twitter. 
We found that the network of conversations around COVID-19 is highly decentralized, fragmented, and loosely connected; these characteristics can hinder the successful dissemination of public health messages in a network. Competing conversations and misinformation can hamper risk communication efforts in a way that imperil public health. Looking at basic metrics might create a misleading picture of the effectiveness of risk communication efforts on social media if not analyzed within the context of the larger network. Social network analysis of conversations on social media should be an integral part of how public health officials and agencies plan, monitor, and evaluate risk communication efforts. Prioritizing Communication About Radiation Risk Reduction in the United States: Results from a Multi-criteria Decision Analysis Rennie W. Ferguson, Daniel J. Barnett, Ryan David Kennedy, Tara Kirk Sell, Jessica S. Wieder, Ernst W. Spannhake Published online by Cambridge University Press: 23 June 2020, pp. 1-9 The lack of radiation knowledge among the general public continues to be a challenge for building communities prepared for radiological emergencies. This study applied a multi-criteria decision analysis (MCDA) to the results of an expert survey to identify priority risk reduction messages and challenges to increasing community radiological emergency preparedness. Professionals with expertise in radiological emergency preparedness, state/local health and emergency management officials, and journalists/journalism academics were surveyed following a purposive sampling methodology. An MCDA was used to weight criteria of importance in a radiological emergency, and the weighted criteria were applied to topics such as sheltering-in-place, decontamination, and use of potassium iodide. Results were reviewed by respondent group and in aggregate. Sheltering-in-place and evacuation plans were identified as the most important risk reduction measures to communicate to the public. 
Possible communication challenges during a radiological emergency included access to accurate information; low levels of public trust; public knowledge about radiation; and communications infrastructure failures. Future assessments for community readiness for a radiological emergency should include questions about sheltering-in-place and evacuation plans to inform risk communication. 4109 Acceptability of a Tenofovir Disoproxil Fumarate Intravaginal Ring for Human Immunodeficiency Virus Pre-Exposure Prophylaxis Among Sexually Active Women April Dobkin, Rebecca Barnett, Jessica McWalters, Laurie L. Ray, Lilia Espinoza, Aileen P. McGinn, Jessica M. Atrio, Marla J. Keller Journal: Journal of Clinical and Translational Science / Volume 4 / Issue s1 / June 2020 Published online by Cambridge University Press: 29 July 2020, pp. 21-22 Print publication: June 2020 OBJECTIVES/GOALS: Vaginal ring delivery of antiretroviral drugs may provide protection against acquisition of HIV-1 when used as pre-exposure prophylaxis. As part of a randomized placebo-controlled safety trial of a tenofovir disoproxil fumarate (TDF) intravaginal ring (IVR), we assessed product acceptability through surveys of 17 women after continuous ring use. METHODS/STUDY POPULATION: Sexually active, HIV negative women between the ages of 18 and 45 were enrolled to investigate the safety and pharmacokinetics of three months of continuous TDF IVR use. The study was designed to include 40 US participants randomly assigned (3:1) to a TDF or placebo IVR. Twelve were randomized to TDF and five were assigned to the placebo group before the study was electively discontinued due to development of vaginal ulcerations in eight women in the TDF group. Acceptability data regarding TDF and placebo ring use was gathered via self-administered, computer-based questionnaires at the one- and three-month study visits. 
Participants were asked about overall attitudes and feelings regarding the TDF and placebo IVR, vaginal changes associated with ring use, and their experiences with ring use during menses and with sex. RESULTS/ANTICIPATED RESULTS: The mean age of participants was 30 years (range 18 - 42). Sixteen of 17 (94%) participants completed all study questions at both visits. When asked about ring likeability at one-month, 12 of 16 (75%) women reported overall liking the ring, including 5 of 8 (63%) who developed ulcerations. Vaginal changes described during ring use included 8 participants who indicated that the "vagina was wetter" and 2 who reported that the "vagina was drier." Additionally, 10 of 12 (83%) who had their period during the first month of the study were not bothered by ring use during menses, and 11 of 16 (69%) stated that the ring was not bothersome with use during sex. When asked at the three-month visit, most reported that they would prefer to wear the ring rather than use a condom during sex, however, condom use was low at baseline in this population. DISCUSSION/SIGNIFICANCE OF IMPACT: Despite unanticipated ulcers, the IVRs were acceptable, especially when used with menses and during sex. Regardless of the group assigned or vaginal changes experienced, and even amongst those who developed ulcerations, the women had positive attitudes towards the ring, which is promising for future use of vaginal rings as a method for HIV prevention. INFLUENCES OF UPPER FLORIDAN AQUIFER WATERS ON RADIOCARBON IN THE OTOLITHS OF GRAY SNAPPER (Lutjanus griseus) IN THE GULF OF MEXICO Allen H Andrews, Beverly K Barnett, Jeffrey P Chanton, Laura A Thornton, Robert J Allman Journal: Radiocarbon / Volume 62 / Issue 5 / October 2020 Published online by Cambridge University Press: 06 May 2020, pp. 1127-1146 The otoliths (ear stones) of fishes are commonly used to describe the age and growth of marine and freshwater fishes. 
These non-skeletal structures are fortuitous in their utility by being composed of mostly inorganic carbonate that is inert through the life of the fish. This conserved record functions like an environmental chronometer and bomb-produced radiocarbon (14C)—a 14C signal created by atmospheric testing of thermonuclear devices—can be used as a time-specific marker in validating fish age. However, complications from the hydrogeology of nearshore marine environments can complicate 14C levels, as was the case with gray snapper (Lutjanus griseus) along the Gulf of Mexico coast of Florida. Radiocarbon of these nearshore waters is influenced by freshwater input from the karst topography of the Upper Floridan Aquifer—estuarine waters that are 14C-depleted from surface and groundwater inputs. Some gray snapper likely recruited to this kind of environment where 14C levels were depleted in the earliest otolith growth, although age was validated for individuals that were not exposed to 14C-depleted waters to an age of at least 25 years with support for a 30-year lifespan. Prospective predictors of decline v. stability in mild cognitive impairment with Lewy bodies or Alzheimer's disease Calum A. Hamilton, Fiona E. Matthews, Paul C. Donaghy, John-Paul Taylor, John T. O'Brien, Nicola Barnett, Kirsty Olsen, Ian G. McKeith, Alan J. Thomas Journal: Psychological Medicine , First View Published online by Cambridge University Press: 05 May 2020, pp. 1-9 Mild cognitive impairment (MCI) may gradually worsen to dementia, but often remains stable for extended periods of time. Little is known about the predictors of decline to help explain this variation. We aimed to explore whether this heterogeneous course of MCI may be predicted by the presence of Lewy body (LB) symptoms in a prospectively-recruited longitudinal cohort of MCI with Lewy bodies (MCI-LB) and Alzheimer's disease (MCI-AD). 
A prospective cohort (n = 76) aged ⩾60 years underwent detailed assessment after recent MCI diagnosis, and were followed up annually with repeated neuropsychological testing and clinical review of cognitive status and LB symptoms. Latent class mixture modelling identified data-driven sub-groups with distinct trajectories of global cognitive function. Three distinct trajectories were identified in the full cohort: slow/stable progression (46%), intermediate progressive decline (41%) and a small group with a much faster decline (13%). The presence of LB symptomology, and visual hallucinations in particular, predicted decline v. a stable cognitive trajectory. With time zeroed on study end (death, dementia or withdrawal) where available (n = 39), the same subgroups were identified. Adjustment for baseline functioning obscured the presence of any latent classes, suggesting that baseline function is an important parameter in prospective decline. These results highlight some potential signals for impending decline in MCI; poorer baseline function and the presence of probable LB symptoms – particularly visual hallucinations. Identifying people with a rapid decline is important but our findings are preliminary given the modest cohort size. Lifetime Antipsychotic Medication and Cognitive Performance in Schizophrenia at Age 43-years – the Northern Finland Birth Cohort 1966 A.P. Husa, J. Moilanen, G.K. Murray, R. Marttila, M. Haapea, I. Rannikko, J. Barnett, P.B. Jones, M. Isohanni, H. Koponen, J. Miettunen, E. Jääskeläinen Journal: European Psychiatry / Volume 30 / Issue S1 / March 2015 Published online by Cambridge University Press: 15 April 2020, p. 1 The effects of long-term antipsychotic medication on cognition in schizophrenia are unclear (Husa A.P. et al., Schizophr. Res. 2014). Understanding how long-term antipsychotic treatment affects cognition is crucial for the development of safe, evidence-based treatment of schizophrenia. 
To analyse the association between cumulative lifetime antipsychotic dose and cognition in schizophrenia at age 43 years in a general population sample. Sixty (33 males) schizophrenia spectrum subjects from the Northern Finland Birth Cohort 1966 were assessed at age 43 years by California Verbal Learning Test, Visual Object Learning Test, Abstraction Inhibition and Working Memory task, Verbal fluency, Visual series, Vocabulary, Digit Span and Matrix reasoning. Cumulative lifetime antipsychotic dose-years were collected from treatment records and interviews. A factor analysis based on the cognitive tests resulted in one cognitive factor. The association between this cognitive composite score and antipsychotic dose-years was analysed by linear regression. Higher lifetime antipsychotic dose-years were statistically significantly associated with poorer cognitive composite score at age 43 years (B=-0.32, p>0.001), also when adjusted for gender, onset age, remission and number of hospital treatment days (B=-0.42, p=0.008). To our knowledge, this is the first report of an association between cumulative lifetime antipsychotic dose and cognition in midlife in schizophrenia. Based on this data, the use of high antipsychotic doses may relate to poorer cognitive functioning in schizophrenia after twenty years of illness. These results do not support the view that antipsychotics prevent cognitive decline or promote cognitive recovery in schizophrenia. 2082 – Lifetime Use Of Antipsychotic Medication And Change Of Verbal Learning And Memory In Schizophrenia In 9-years Follow-up In General Population Sample A. Husa, J. Moilanen, I. Rannikko, M. Haapea, G. Murray, J. Barnett, M. Isohanni, J. Veijola, H. Koponen, J. Miettunen, E. Jääskeläinen Journal: European Psychiatry / Volume 28 / Issue S1 / 2013 Cognitive deficits, such as verbal memory dysfunction, are a core feature of schizophrenia. 
Yet the longitudinal course and associations of cognitive deficits with antipsychotic medication remain unclear. Our aim was to analyze how lifetime antipsychotic dosage associates with the change of verbal learning and memory in individuals with schizophrenia during a 9-year follow-up. Forty-two subjects with schizophrenic psychoses (22 males) from the Northern Finland 1966 Birth Cohort went through diagnostic interviews and cognitive assessment including California Verbal Learning Test (CVLT) at the ages of 34 and 43 years. Data of the subjects' lifetime antipsychotic doses in chlorpromazine equivalents were collected from patient history records, interviews and national registers. The association between verbal learning and memory (immediate free recall of trials 1-5 and free recall after long delay) and dose-years of antipsychotics was analyzed by logistic regression model. Higher dose-years of any and typical antipsychotics, but not atypical antipsychotics, associated statistically significantly to worse verbal learning and memory in cross-sectional analyses at age 34 years, even when onset age, sex, and severity of symptoms were controlled for. However, there was no statistically significant association between lifetime antipsychotic use and verbal learning and memory change between ages 34 and 43 years. High lifetime antipsychotic dose did not associate to decrease in verbal learning and memory in schizophrenia in 9 years of follow-up. To our knowledge, this is a first report on association between cumulative lifetime antipsychotic use and change in cognition in a long-term naturalistic follow-up. 1913 – Longitudinal Change In Verbal Learning And Memory In Schizophrenia And Controls: a Nine-year Study In The Northern Finland Birth Cohort 1966 I.A. Rannikko, M. Haapea, J. Miettunen, J. Veijola, G. Murray, J. Barnett, A. Husa, P. Jones, M. Isohanni, E. 
Jääskeläinen Patients with schizophrenia generally perform worse than control subjects on all cognitive domains, and particularly in memory functions. It is still unclear, how cognition changes during years of illness in schizophrenia. Our aim was to analyze the change in verbal learning and memory functions in subjects with schizophrenia and healthy controls during a 9-year follow-up. The sample was the general population based Northern Finland 1966 Birth Cohort. In 1999-2001 and in 2008-2010 field studies were performed, including repeated measures of clinical status and the California Verbal Learning Test (CVLT). CVLT was used for the estimation of the course of a possible change of verbal learning and memory during the follow-up. The sample included 41 individuals with schizophrenic psychoses and 74 non-psychotic controls. Both cases and controls had statistically significant decline in measures of CVLT. However, the change in verbal learning and memory in the 9 -year follow-up was not statistically significantly different between cases and controls. Among cases, age of illness onset and sex had no statistically significant effect on change of verbal learning and memory. According to our unselected, population based sample with long follow up, the impairments during the life span in verbal learning and memory in schizophrenia was not different compared to controls. These results imply that schizophrenia is not a progressing degenerative illness. Applying the Haddon Matrix to Hospital Earthquake Preparedness and Response Gai Cole, Andrew J. Rosenblum, Megan Boston, Daniel J. Barnett Published online by Cambridge University Press: 07 April 2020, pp. 1-8 Since its 1960s origins, the Haddon matrix has served as a tool to understand and prevent diverse mechanisms of injuries and promote safety. Potential remains for broadened application and innovation of the matrix for disaster preparedness. 
Hospital functionality and efficiency are particularly important components of community vulnerability in developed and developing nations alike. Given the Haddon matrix's user-friendly approach to integrating current engineering concepts, behavioral sciences, and policy dimensions, we seek to apply it in the context of hospital earthquake preparedness and response. The matrix's framework lends itself to interdisciplinary planning and collaboration between social and physical sciences, paving the way for a systems-oriented reduction in vulnerabilities. Here, using an associative approach to integrate seemingly disparate social and physical science disciplines yields innovative insights about hospital disaster preparedness for earthquakes. We illustrate detailed examples of pre-event, event, and post-event engineering, behavioral science, and policy factors that hospital planners should evaluate given the complex nature, rapid onset, and broad variation in impact and outcomes of earthquakes. This novel contextual examination of the Haddon matrix can enhance critical infrastructure disaster preparedness across the epidemiologic triad, by integrating essential principles of behavioral sciences, policy, law, and engineering to earthquake preparedness. Lifetime use of psychiatric medications and cognition at 43 years of age in schizophrenia in the Northern Finland Birth Cohort 1966 A.P. Hulkko, G.K. Murray, J. Moilanen, M. Haapea, I. Rannikko, P.B. Jones, J.H. Barnett, S. Huhtaniska, M.K. Isohanni, H. Koponen, E. Jääskeläinen, J. Miettunen Journal: European Psychiatry / Volume 45 / September 2017 Published online by Cambridge University Press: 23 March 2020, pp. 50-58 Higher lifetime antipsychotic exposure has been associated with poorer cognition in schizophrenia. The cognitive effects of adjunctive psychiatric medications and lifetime trends of antipsychotic use remain largely unclear. 
We aimed to study how lifetime and current benzodiazepine and antidepressant medications, lifetime trends of antipsychotic use and antipsychotic polypharmacy are associated with cognitive performance in midlife schizophrenia. Sixty participants with DSM-IV schizophrenia from the Northern Finland Birth Cohort 1966 were examined at 43 years of age with an extensive cognitive test battery. Cumulative lifetime and current use of psychiatric medications were collected from medical records and interviews. The associations between medication and principal component analysis-based cognitive composite score were analysed using linear regression. Lifetime cumulative DDD years of benzodiazepine and antidepressant medications were not significantly associated with global cognition. Being without antipsychotic medication (for minimum 11 months) before the cognitive examination was associated with better cognitive performance (P = 0.007) and higher lifetime cumulative DDD years of antipsychotics with poorer cognition (P = 0.020), when adjusted for gender, onset age and lifetime hospital treatment days. Other lifetime trends of antipsychotic use, such as a long antipsychotic-free period earlier in the treatment history, and antipsychotic polypharmacy, were not significantly associated with cognition. Based on these naturalistic data, low exposure to adjunctive benzodiazepine and antidepressant medications does not seem to affect cognition nor explain the possible negative effects of high dose long-term antipsychotic medication on cognition in schizophrenia. 40 - Society and Intelligence from Part VII - Intelligence and Its Role in Society By Susan M. Barnett, Heiner Rindermann, Wendy M. Williams, Stephen J. Ceci Edited by Robert J. 
Sternberg, Cornell University, New York Book: The Cambridge Handbook of Intelligence Published online: 13 December 2019 Print publication: 16 January 2020, pp 964-987 There are large between-country differences in measures of economic and noneconomic well-being. Many researchers view increasing the stock of human capital as the key to raising economic development, promoting democratization, and improving health, and hence improving overall societal well-being. The single most studied aspect of human capital concerns cognitive competence. Differences in population cognitive competence might explain these societal differences. Evidence suggests that education builds cognitive competence, and education and cognitive competence promote better social outcomes, in terms of both economic and noneconomic factors. However, measuring population cognitive competence for countries requires representative samples, culture-fair tests, equivalency in the relationship between test measures and other cognitive attributes, and comparability in testing situations. In most cases, none of this has been achieved. RANK GENERATING FUNCTIONS FOR ODD-BALANCED UNIMODAL SEQUENCES, QUANTUM JACOBI FORMS, AND MOCK JACOBI FORMS Basic hypergeometric functions Discontinuous groups and automorphic forms MICHAEL BARNETT, AMANDA FOLSOM, WILLIAM J. WESLEY Journal: Journal of the Australian Mathematical Society / Volume 109 / Issue 2 / October 2020 Let $\unicode[STIX]{x1D707}(m,n)$ (respectively, $\unicode[STIX]{x1D702}(m,n)$ ) denote the number of odd-balanced unimodal sequences of size $2n$ and rank $m$ with even parts congruent to $2\!\!\hspace{0.6em}{\rm mod}\hspace{0.2em}4$ (respectively, $0\!\!\hspace{0.6em}{\rm mod}\hspace{0.2em}4$ ) and odd parts at most half the peak. We prove that two-variable generating functions for $\unicode[STIX]{x1D707}(m,n)$ and $\unicode[STIX]{x1D702}(m,n)$ are simultaneously quantum Jacobi forms and mock Jacobi forms. 
These odd-balanced unimodal rank generating functions are also duals to partial theta functions originally studied by Ramanujan. Our results also show that there is a single $C^{\infty }$ function in $\mathbb{R}\times \mathbb{R}$ to which the errors to modularity of these two different functions extend. We also exploit the quantum Jacobi properties of these generating functions to show, when viewed as functions of the two variables $w$ and $q$ , how they can be expressed as the same simple Laurent polynomial when evaluated at pairs of roots of unity. Finally, we make a conjecture which fully characterizes the parity of the number of odd-balanced unimodal sequences of size $2n$ with even parts congruent to $0\!\!\hspace{0.6em}{\rm mod}\hspace{0.2em}4$ and odd parts at most half the peak. Double degenerate candidates in the open cluster NGC 6633 Joseph W. Barnett, Kurtis A. Williams Journal: Proceedings of the International Astronomical Union / Volume 15 / Issue S357 / October 2019 The study of white dwarfs, the end stage of stellar evolution for more than 95% of stars, is critical to bettering our understanding of the late stages of the lives of low mass stars. In particular, the post main sequence evolution of binary star systems is complex, and the identification and analysis of double degenerate systems is a crucial step in constraining models of binary star systems. Binary white dwarfs in open star clusters are particularly useful because cluster parameters such as distance, metal content, and total system age are more tightly constrained than for field double degenerates. Here we use the precision astrometry from the Gaia Data Release 2 catalog to study two other white dwarfs which were identified as candidate double degenerates in the field of the open star cluster NGC 6633. One of the two objects, LAWDS 4, is found to have astrometric properties fully consistent with that of the cluster. 
In such a case, the object is significantly overluminous for a single white dwarf, strongly indicating binarity. The second candidate binary, LAWDS 7, appears to be inconsistent with cluster membership, though a more thorough analysis is necessary to properly quantify the probability. At present we are proceeding to model the photometric and spectroscopic data for both objects as if they were cluster member double degenerates. Results of this latter analysis are forthcoming. Our results will add crucial data to the study of binary star evolution in open star clusters. Feature-based classification of networks Ian Barnett, Nishant Malik, Marieke L. Kuijjer, Peter J. Mucha, Jukka-Pekka Onnela Journal: Network Science / Volume 7 / Issue 3 / September 2019 Published online by Cambridge University Press: 23 September 2019, pp. 438-444 Print publication: September 2019 Network representations of systems from various scientific and societal domains are neither completely random nor fully regular, but instead appear to contain recurring structural features. These features tend to be shared by networks belonging to the same broad class, such as the class of social networks or the class of biological networks. Within each such class, networks describing similar systems tend to have similar features. This occurs presumably because networks representing similar systems would be expected to be generated by a shared set of domain-specific mechanisms, and it should therefore be possible to classify networks based on their features at various structural levels. Here we describe and demonstrate a new hybrid approach that combines manual selection of network features of potential interest with existing automated classification methods.
In particular, by selecting well-known network features that have been studied extensively in the social network analysis and network science literature, and then classifying networks on the basis of these features using methods such as random forest, which is known to handle the type of feature collinearity that arises in this setting, we find that our approach is able to achieve both higher accuracy and greater interpretability in shorter computation time than other methods. Peripheral inflammation in mild cognitive impairment with possible and probable Lewy body disease and Alzheimer's disease Eleanor King, John Tiernan O'Brien, Paul Donaghy, Christopher Morris, Nicola Barnett, Kirsty Olsen, Carmen Martin-Ruiz, John Paul Taylor, Alan J. Thomas Journal: International Psychogeriatrics / Volume 31 / Issue 4 / April 2019 Published online by Cambridge University Press: 11 March 2019, pp. 551-560 Objectives and design: To investigate the peripheral inflammatory profile in patients with mild cognitive impairment (MCI) from three subgroups – probable Lewy body disease (probable MCI-LB), possible Lewy body disease, and probable Alzheimer's disease (probable MCI-AD) – as well as associations with clinical features. Setting: Memory clinics and dementia services. Patients were classified based on clinical symptoms as probable MCI-LB (n = 38), possible MCI-LB (n = 18), and probable MCI-AD (n = 21). Healthy comparison subjects were recruited (n = 20). Ten cytokines were analyzed from plasma samples: interferon (IFN)-gamma, interleukin (IL)-1beta, IL-2, IL-4, IL-6, IL-8, IL-10, IL-12p70, IL-13, and tumor necrosis factor (TNF)-alpha. C-reactive protein levels were investigated. There was a higher level of IL-10, IL-1beta, IL-2, and IL-4 in the MCI groups compared to the healthy comparison group (p < 0.0085).
In exploratory analyses to understand these findings, in the probable MCI-AD group lower levels of IL-1beta (p = 0.04), IL-2 (p = 0.009), and IL-4 (p = 0.012) were associated with increasing duration of memory symptoms, and in the probable MCI-LB group, lower levels of IL-1beta were associated with worsening motor severity (p = 0.002). In the possible MCI-LB group, longer duration of memory symptoms was associated with lower levels of IL-1beta (p = 0.003) and IL-4 (p = 0.026). There is increased peripheral inflammation in patients with MCI compared to healthy comparison subjects regardless of the MCI subtype. These possible associations with clinical features are consistent with other work showing that inflammation is increased in early disease, but they require replication. Such findings have importance for the timing of putative therapeutic strategies aimed at lowering inflammation. Corrigendum: Research IT maturity models for academic health centers: Early development and initial evaluation Boyd M. Knosp, William K. Barnett, Nicholas R. Anderson, Peter J. Embi Journal: Journal of Clinical and Translational Science / Volume 3 / Issue 1 / February 2019 Published online by Cambridge University Press: 28 May 2019, p. 45 Print publication: February 2019 The Gopen–Yang Superior Semicircular Canal Dehiscence Questionnaire: development and validation of a clinical questionnaire to assess subjective symptoms in patients undergoing surgical repair of superior semicircular canal dehiscence B L Voth, J P Sheppard, N E Barnette, V Ong, T Nguyen, C H Jacky Chen, C Duong, J J Arsenault, C Lagman, Q Gopen, I Yang Journal: The Journal of Laryngology & Otology / Volume 132 / Issue 12 / December 2018 Published online by Cambridge University Press: 24 January 2019, pp. 1110-1118 To characterise subjective symptoms in patients undergoing surgical repair of superior semicircular canal dehiscence.
Questionnaires assessing symptom severity and impact on function and quality of life were administered to patients before superior semicircular canal dehiscence surgery, between June 2011 and March 2016. Questionnaire sections included general quality of life, internal amplified sounds, dizziness and tinnitus, with scores of 0–100 points. Twenty-three patients completed the questionnaire before surgery. Section scores (mean±standard deviation) were: 38.2 ± 25.2 for general quality of life, 52.5 ± 23.9 for internal amplified sounds, 35.1 ± 28.8 for dizziness, 33.3 ± 30.7 for tinnitus, and 39.8 ± 22.2 for the composite score. Cronbach's α statistic averaged 0.93 (range, 0.84–0.97) across section scores, and 0.83 for the composite score. The Gopen–Yang Superior Semicircular Canal Dehiscence Questionnaire provides a holistic, patient-centred characterisation of superior semicircular canal dehiscence symptoms. Internal consistency analysis validated the questionnaire and provided a quantitative framework for further optimisation in the clinical setting. Sustainability considerations for clinical and translational research informatics infrastructure Jihad S. Obeid, Peter Tarczy-Hornoch, Paul A. Harris, William K. Barnett, Nicholas R. Anderson, Peter J. Embi, William R. Hogan, Douglas S. Bell, Leslie D. McIntosh, Boyd Knosp, Umberto Tachinardi, James J. Cimino, Firas H. Wehbe Journal: Journal of Clinical and Translational Science / Volume 2 / Issue 5 / October 2018 Published online by Cambridge University Press: 05 December 2018, pp. 267-275 A robust biomedical informatics infrastructure is essential for academic health centers engaged in translational research. There are no templates for what such an infrastructure encompasses or how it is funded. An informatics workgroup within the Clinical and Translational Science Awards network conducted an analysis to identify the scope, governance, and funding of this infrastructure. 
After we identified the essential components of an informatics infrastructure, we surveyed informatics leaders at network institutions about the governance and sustainability of the different components. Results from 42 survey respondents showed significant variations in governance and sustainability; however, some trends also emerged. Core informatics components such as electronic data capture systems, electronic health records data repositories, and related tools had mixed models of funding including, fee-for-service, extramural grants, and institutional support. Several key components such as regulatory systems (e.g., electronic Institutional Review Board [IRB] systems, grants, and contracts), security systems, data warehouses, and clinical trials management systems were overwhelmingly supported as institutional infrastructure. The findings highlighted in this report are worth noting for academic health centers and funding agencies involved in planning current and future informatics infrastructure, which provides the foundation for a robust, data-driven clinical and translational research program.
A list of five positive integers has all of the following properties: $\bullet$ The only integer in the list that occurs more than once is $8,$ $\bullet$ its median is $9,$ and $\bullet$ its average (mean) is $10.$ What is the largest possible integer that could appear in the list? We write the list of five numbers in increasing order. We know that the number $8$ occurs at least twice in the list. Since the median of the list is $9,$ then the middle number (that is, the third number) in the list is $9.$ Thus, the list can be written as $a,$ $b,$ $9,$ $d,$ $e.$ Since $8$ occurs more than once and the middle number is $9,$ then $8$ must occur twice only with $a=b=8.$ Thus, the list can be written as $8,$ $8,$ $9,$ $d,$ $e.$ Since the average is $10$ and there are $5$ numbers in the list, then the sum of the numbers in the list is $5(10)=50.$ Therefore, $8+8+9+d+e=50$ or $25+d+e=50$ or $d+e=25.$ Since $8$ is the only integer that occurs more than once in the list, then $d>9.$ Thus, $10 \leq d < e$ and $d+e=25.$ To make $e$ as large as possible, we make $d$ as small as possible, so we make $d=10,$ and so $e=15.$ The list $8,$ $8,$ $9,$ $10,$ $15$ has the desired properties, so the largest possible integer that could appear in the list is $\boxed{15}.$
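The deduction above (the two $8$s must sit below the median, forcing $d+e=25$ with $d\geq 10$) can also be confirmed by exhaustive search. The following short script is our own illustration, not part of the original solution; it enumerates every nondecreasing list $a \leq b \leq 9 \leq d \leq e$ meeting the three stated properties:

```python
from collections import Counter

candidates = []
# Sorted list a <= b <= 9 <= d <= e: third entry 9 (median), sum 50 (mean 10).
for a in range(1, 10):
    for b in range(a, 10):
        for d in range(9, 50):
            e = 50 - a - b - 9 - d
            if e < d:
                continue
            lst = [a, b, 9, d, e]
            repeats = [v for v, c in Counter(lst).items() if c > 1]
            if repeats == [8]:  # 8 occurs more than once and is the only repeat
                candidates.append(lst)

print(max(max(lst) for lst in candidates))  # -> 15
```

The search finds exactly the lists $8, 8, 9, d, 25-d$ for $d \in \{10, 11, 12\}$, so the largest possible integer is indeed $15$.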
3.E: Review Exercises and Sample Exam 3: Graphing Lines Review Exercises Sample Exam Exercise \(\PageIndex{1}\) Rectangular Coordinate System Graph the given set of ordered pairs. \(\{(−3, 4), (−4, 0), (0, 3), (2, 4)\}\) \(\{(−5, 5), (−3, −1), (0, 0), (3, 2)\}\) Graph the points \((−3, 5), (−3, −3),\) and \((3, −3)\) on a rectangular coordinate plane. Connect the points and calculate the area of the shape. Graph the points \((−4, 1), (0, 1), (0, −2),\) and \((−4, −2)\) on a rectangular coordinate plane. Connect the points and calculate the area of the shape. Graph the points \((1, 0), (4, 0), (1, −5),\) and \((4, −5)\) on a rectangular coordinate plane. Connect the points and calculate the perimeter of the shape. Graph the points \((−5, 2), (−5, −3), (1, 2),\) and \((1, −3)\) on a rectangular coordinate plane. Connect the points and calculate the perimeter of the shape. Figure 3.E.1 3. Area: \(24\) square units 5. Perimeter: \(16\) units Calculate the distance between the given two points. \((−1, −2)\) and \((5, 6)\) \((2, −5)\) and \((−2, −2)\) \((−9, −3)\) and \((−8, 4)\) \((−1, 3)\) and \((1, −3)\) 1. \(10\) units 3. \(5\sqrt{2}\) units Calculate the midpoint between the given points. \((6, −3)\) and \((−8, −11)\) \((−6, 0)\) and \((0, 0)\) Show algebraically that the points \((−1, −1), (1, −3),\) and \((2, 0)\) form an isosceles triangle. Show algebraically that the points \((2, −1), (6, 1),\) and \((5, 3)\) form a right triangle. 1. \((2,-2)\) 3. \((\frac{1}{2},-\frac{3}{2})\) Exercise \(\PageIndex{4}\) Graph by Plotting Points Determine whether the given point is a solution.
\(−5x+2y=7\); \((1, −1)\) \(6x−5y=4\); \((−1, −2)\) \(y=\frac{3}{4}x+1\); \((−\frac{2}{3}, \frac{1}{2})\) \(y=−\frac{3}{5}x−2\); \((10, −8)\) Find at least five ordered pair solutions and graph. \(y=−x+2\) \(y=2x−3\) \(y=\frac{1}{2}x−2\) \(y=−\frac{2}{3}x\) \(y=3\) \(x=−3\) \(x−5y=15\) \(2x−3y=12\) Exercise \(\PageIndex{6}\) Graph Using Intercepts Given the graph, find the \(x\)- and \(y\)- intercepts. Figure 3.E.10 1. \(y\)-intercept: \((0, −2)\); \(x\)-intercept: \((−4, 0)\) 3. \(y\)-intercept: none; \(x\)-intercept: \((5, 0)\) Find the intercepts and graph them. \(2x−y=−4\) \(\frac{1}{2}x−\frac{1}{3}y=1\) \(−\frac{1}{2}x+\frac{2}{3}y=2\) \(y=−\frac{5}{3}x+5\) \(y=−3x+4\) Exercise \(\PageIndex{8}\) Graph Using the \(y\)-Intercept and Slope Given the graph, determine the slope and \(y\)-intercept. 1. \(y\)-intercept: \((0, 1)\); slope: \(−2\) Determine the slope, given two points. \((0, −5)\) and \((−6, 3)\) \((\frac{1}{2}, −\frac{2}{3})\) and \((\frac{1}{4}, −\frac{1}{3})\) \((5, −\frac{3}{4})\) and \((2, −\frac{3}{4})\) 1. \(-\frac{7}{4}\) Exercise \(\PageIndex{10}\) Graph Using the \(y\)-Intercept and Slope Express in slope-intercept form and identify the slope and \(y\)-intercept. \(12x−4y=8\) \(−5x+3y=0\) 1. \(y=3x−2\); slope: \(3\); \(y\)-intercept \((0, −2)\) 3. \(y=\frac{4}{9}x+\frac{4}{3}\); slope: \(\frac{4}{9}\); \(y\)-intercept \((0, \frac{4}{3})\) \(y=−2x\) \(2x−3y=9\) \(2x+\frac{3}{2}y=3\) \(x−4y=0\) Exercise \(\PageIndex{12}\) Finding Linear Equations Given the graph, determine the equation of the line. 1. \(y=−2x+1\) 3. \(y=−5\) Find the equation of a line, given the slope and a point on the line. \(m = \frac{1}{2}\); \((−4, 8)\) \(m = −\frac{1}{5}\); \((−5, −9)\) \(m = \frac{2}{3}\); \((1, −2)\) \(m = −\frac{3}{4}\); \((2, −3)\) 1. \(y=\frac{1}{2}x+10\) 3. \(y=\frac{2}{3}x−\frac{8}{3}\) Find the equation of the line given two points on the line. 
\((−5, −5)\) and \((10, 7)\) \((−6, 12)\) and \((3, −3)\) \((\frac{5}{2}, −2)\) and \((−5, \frac{5}{2})\) \((7, −6)\) and \((3, −6)\) \((10, 1)\) and \((10, −3)\) 1. \(y=\frac{4}{5}x−1\) 3. \(y=−\frac{3}{4}x+\frac{1}{2}\) Exercise \(\PageIndex{15}\) Parallel and Perpendicular Lines Determine if the lines are parallel, perpendicular, or neither. \(\left\{\begin{aligned}−3x+7y&=14\\6x−14y&=42\end{aligned}\right.\) \(\left\{\begin{aligned}2x+3y&=18\\2x−3y&=36\end{aligned}\right.\) \(\left\{\begin{aligned}x+4y&=2\\8x−2y=&−1\end{aligned}\right.\) \(\left\{\begin{aligned}y&=2\\x&=2\end{aligned}\right.\) 1. Parallel 3. Perpendicular Find the equation of the line in slope-intercept form. Parallel to \(5x−y=15\) and passing through \((−10, −1)\). Parallel to \(x−3y=1\) and passing through \((2, −2)\). Perpendicular to \(8x−6y=4\) and passing through \((8, −1)\). Perpendicular to \(7x+y=14\) and passing through \((5, 1)\). Parallel to \(y=1\) and passing through \((4, −1)\). Perpendicular to \(y=1\) and passing through \((4, −1)\). 1. \(y=5x+49\) 3. \(y=−\frac{3}{4}x+5\) Exercise \(\PageIndex{17}\) Introduction to Functions Determine the domain and range and state whether it is a function or not. 1. \(\{(−10, −1), (−5, 2), (5, 2)\}\) 2. \(\{(−12, 4), (−1, −3), (−1, −2)\}\) 1. Domain: \(\{−10, −5, 5\}\); range: \(\{−1, 2\}\); function: yes 3. Domain: \(R\); range: \(R\); function: yes 5. Domain: \([−3,∞)\); range: \(R\); function: no Given the following, \(f(x)=9x−4\), find \(f(−1)\). \(f(x)=−5x+1\), find \(f(−3)\). \(g(x)=\frac{1}{2}x−\frac{1}{3}\), find \(g(−\frac{1}{3})\). \(g(x)=−\frac{3}{4}x+\frac{1}{3}\), find \(g(\frac{2}{3})\). \(f(x)=9x−4\), find \(x\) when \(f(x)=0\). \(f(x)=−5x+1\), find \(x\) when \(f(x)=2\). \(g(x)=\frac{1}{2}x−\frac{1}{3}\), find \(x\) when \(g(x)=1\). \(g(x)=−\frac{3}{4}x+\frac{1}{3}\), find \(x\) when \(g(x)=−1\). 1. \(f(−1)=−13\) 3. \(g(−\frac{1}{3})=−\frac{1}{2}\) 5. 
\(x=\frac{4}{9}\) Given the graph of a function \(f(x)\), determine the following. \(f(3)\) \(x\) when \(f(x)=4\) 1. \(f(3)=−2\) Exercise \(\PageIndex{20}\) Linear Inequalities (Two Variables) Is the ordered pair a solution to the given inequality? \(6x−2y≤1\); \((−3, −7)\) \(−3x+y>2\); \((0, 2)\) \(6x−10y<-1\); \((5,-3)\) \(x-\frac{1}{3}y>0\); \((1, 4)\) \(y>0\); \((−3, −1)\) \(x≤−5\); \((−6, 4)\) Graph the solution set. \(y≥−2x+1\) \(y<3x−4\) \(−x+y≤3\) \(\frac{5}{2}x+\frac{1}{2}y≤2\) \(3x−5y>0\) \(y>0\) Exercise \(\PageIndex{22}\) Graph the points \((−4, −2), (−4, 1),\) and \((0, −2)\) on a rectangular coordinate plane. Connect the points and calculate the area of the shape. Is \((−2, 4)\) a solution to \(3x−4y=−10\)? Justify your answer. 1. Area: \(6\) square units Given the set of \(x\)-values \(\{−2, −1, 0, 1, 2\}\), find the corresponding \(y\)-values and graph the following. \(y=x−1\) On the same set of axes, graph \(y=4\) and \(x=−3\). Give the point where they intersect. 3. Intersection: \((-3,4)\) Find the \(x\)- and \(y\)-intercepts and use those points to graph the following. \(2x−y=8\) \(12x+5y=15\) Calculate the slope of the line passing through \((−4, −5)\) and \((−3, 1)\). Determine the slope and \(y\)-intercept. Use them to graph the following. Given \(m=−3\), determine \(m_{⊥}\). Are the given lines parallel, perpendicular, or neither? \(\left\{\begin{aligned} -2x+3y&=-12\\4x-6y&=30 \end{aligned}\right.\) Determine the slope of the given lines. \(y=−2\) \(x=\frac{1}{3}\) Are these lines parallel, perpendicular, or neither? Determine the equation of the line with slope \(m=−\frac{3}{4}\) passing through \((8, 1)\). Find the equation to the line passing through \((−2, 3)\) and \((4, 1)\). Find the equation of the line parallel to \(5x−y=6\) passing through \((−1, −2)\). Find the equation of the line perpendicular to \(−x+2y=4\) passing through \((\frac{1}{2}, 5)\). 1. Slope: \(−\frac{3}{2}\); \(y\)-intercept: \((0, 6)\) 3. \(m_{⊥}=\frac{1}{3}\) 5. 
a. \(0\); b. Undefined; c. Perpendicular 8. \(y=5x+3\) Given a linear function \(f(x)=−\frac{4}{5}x+2\), determine the following. \(f(10)\) Graph the solution set: \(3x−4y>4\). Graph the solution set: \(y−2x≥0\). A rental car company charges $\(32.00\) plus $\(0.52\) per mile driven. Write an equation that gives the cost of renting the car in terms of the number of miles driven. Use the formula to determine the cost of renting the car and driving it \(46\) miles. A car was purchased new for $\(12,000\) and was sold 5 years later for $\(7,000\). Write a linear equation that gives the value of the car in terms of its age in years. The area of a rectangle is \(72\) square meters. If the width measures \(4\) meters, then determine the length of the rectangle. 1. \(f(10)=−6\) 5. cost\(=0.52x+32\); $\(55.92\) 7. \(18\) meters 3.8: Linear Inequalities (Two Variables)
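The distance, midpoint, and slope formulas these exercises drill can be checked numerically. Below is a minimal Python sketch; the helper names are ours, not part of the LibreTexts page, and the two `assert` lines reproduce answers given in the distance exercise above:

```python
from math import hypot, isclose

def distance(p, q):
    """Distance formula: sqrt((x2 - x1)^2 + (y2 - y1)^2)."""
    return hypot(q[0] - p[0], q[1] - p[1])

def midpoint(p, q):
    """Midpoint formula: ((x1 + x2)/2, (y1 + y2)/2)."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def slope(p, q):
    """Slope formula: (y2 - y1)/(x2 - x1); undefined for vertical lines."""
    return (q[1] - p[1]) / (q[0] - p[0])

# Answers listed in the distance exercise:
assert distance((-1, -2), (5, 6)) == 10                 # "10 units"
assert isclose(distance((-9, -3), (-8, 4)), 50 ** 0.5)  # "5*sqrt(2) units"

# Sample-exam item: slope of the line through (-4, -5) and (-3, 1).
print(slope((-4, -5), (-3, 1)))  # -> 6.0
```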
Mean field games with common noise
René Carmona, François Delarue, and Daniel Lacker
Annals of Probability, Volume 44, Number 6 (2016), 3740-3803.
A theory of existence and uniqueness is developed for general stochastic differential mean field games with common noise. The concepts of strong and weak solutions are introduced in analogy with the theory of stochastic differential equations, and existence of weak solutions for mean field games is shown to hold under very general assumptions. Examples and counter-examples are provided to enlighten the underpinnings of the existence theory. Finally, an analog of the famous result of Yamada and Watanabe is derived, and it is used to prove existence and uniqueness of a strong solution under additional assumptions.
Received: July 2014. First available in Project Euclid: 14 November 2016. Permanent link to this document: https://projecteuclid.org/euclid.aop/1479114262 doi:10.1214/15-AOP1060 Mathematical Reviews number (MathSciNet): MR3572323
Subjects: Primary: 93E20: Optimal stochastic control. Secondary: 60H10: Stochastic ordinary differential equations; 91A13: Games with infinitely many players.
Keywords: mean field games; stochastic optimal control; McKean–Vlasov equations; weak solutions; relaxed controls.
Citation: Carmona, René; Delarue, François; Lacker, Daniel. Mean field games with common noise. Ann. Probab. 44 (2016), no. 6, 3740-3803. doi:10.1214/15-AOP1060. https://projecteuclid.org/euclid.aop/1479114262